
    Advancing Vision-Language Reward Models: Challenges, Benchmarks, and the Role of Process-Supervised Learning

    April 3, 2025

Process-supervised reward models (PRMs) offer fine-grained, step-wise feedback on model responses, aiding in the selection of effective reasoning paths for complex tasks. Unlike outcome reward models (ORMs), which evaluate responses based only on their final outputs, PRMs provide detailed assessments at each step, making them particularly valuable for reasoning-intensive applications. While PRMs have been extensively studied in language tasks, their application in multimodal settings remains largely unexplored. Most vision-language reward models still follow the ORM approach, highlighting the need for further research into how PRMs can enhance multimodal learning and reasoning.
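
As a concrete illustration of the distinction above, the sketch below scores one multi-step response both ways: once as an ORM would (a single judgment of the final answer) and once as a PRM would (a judgment per step, then an aggregate). The two scoring functions are toy placeholders standing in for learned models, not anything from the paper.

```python
# Minimal sketch, not the paper's code: contrasting ORM-style and PRM-style
# scoring of the same multi-step response. The scorers are toy placeholders.

from typing import Dict, List


def orm_score(question: str, final_answer: str) -> float:
    """Hypothetical ORM: a single scalar judging only the final answer."""
    return 1.0 if "42" in final_answer else 0.0  # toy stand-in for a learned model


def prm_score_step(question: str, steps_so_far: List[str]) -> float:
    """Hypothetical PRM: judges the latest reasoning step given the steps before it."""
    return 1.0 if "error" not in steps_so_far[-1].lower() else 0.0  # toy stand-in


def score_response(question: str, steps: List[str]) -> Dict[str, object]:
    # ORM view: collapse the chain to its final answer and judge once.
    orm = orm_score(question, steps[-1])
    # PRM view: judge every intermediate step, then aggregate (mean here).
    per_step = [prm_score_step(question, steps[: i + 1]) for i in range(len(steps))]
    return {"orm": orm, "prm_mean": sum(per_step) / len(per_step), "per_step": per_step}


if __name__ == "__main__":
    steps = [
        "Read the chart: 6 bars of height 7.",
        "Multiply: 6 * 7 = 42.",
        "So the total is 42.",
    ]
    print(score_response("What is the total height shown?", steps))
```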

Existing reward benchmarks primarily focus on text-based models, with some designed specifically for PRMs. In the vision-language domain, evaluation methods generally assess broad model capabilities, including knowledge, reasoning, fairness, and safety. VL-RewardBench is the first benchmark to incorporate reinforcement-learning preference data for refining knowledge-intensive vision-language tasks. Additionally, Multimodal RewardBench expands evaluation criteria beyond standard visual question answering (VQA), covering six key areas (correctness, preference, knowledge, reasoning, safety, and VQA) through expert annotations. Together, these benchmarks provide a foundation for developing more effective reward models for multimodal learning.

    Researchers from UC Santa Cruz, UT Dallas, and Amazon Research benchmarked VLLMs as ORMs and PRMs across multiple tasks, revealing that neither consistently outperforms the other. To address evaluation gaps, they introduced VILBENCH, a benchmark requiring step-wise reward feedback, where GPT-4o with Chain-of-Thought achieved only 27.3% accuracy. Additionally, they collected 73.6K vision-language reward samples using an enhanced tree-search algorithm, training a 3B PRM that improved evaluation accuracy by 3.3%. Their study provides insights into vision-language reward modeling and highlights challenges in multimodal step-wise evaluation.
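
The paper's enhanced tree-search collection procedure is only summarized here, so the sketch below shows a generic, commonly used alternative for producing step-wise reward labels without human annotation: estimate each step's reward as the fraction of Monte Carlo rollouts from that prefix that reach a correct final answer. The generator and answer checker are hypothetical placeholders, not the authors' pipeline.

```python
# Generic illustration of rollout-based step labeling; not the paper's
# "enhanced tree-search" algorithm. Generator and checker are placeholders.

import random
from typing import Callable, List, Tuple


def label_steps_by_rollout(
    question: str,
    steps: List[str],
    continue_solution: Callable[[str, List[str]], str],  # hypothetical generator
    is_correct: Callable[[str, str], bool],              # hypothetical answer checker
    rollouts_per_step: int = 4,
) -> List[Tuple[str, float]]:
    """Label each step with the fraction of rollouts from that prefix that succeed."""
    labeled = []
    for i in range(len(steps)):
        prefix = steps[: i + 1]
        hits = sum(
            int(is_correct(question, continue_solution(question, prefix)))
            for _ in range(rollouts_per_step)
        )
        labeled.append((steps[i], hits / rollouts_per_step))
    return labeled


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    toy_continue = lambda q, prefix: random.choice(["42", "41"])
    toy_check = lambda q, ans: ans == "42"
    print(label_steps_by_rollout("6*7?", ["6 groups of 7", "multiply them"], toy_continue, toy_check))
```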

VLLMs are increasingly effective across a range of tasks, particularly when evaluated for test-time scaling. Seven models were benchmarked with the LLM-as-a-judge approach to analyze their step-wise critique abilities on five vision-language datasets. A Best-of-N (BoN) setting was used, in which VLLMs scored responses generated by GPT-4o. Key findings show that ORMs outperform PRMs in most cases, with real-world tasks being the exception. In addition, stronger VLLMs do not always make better reward models, and a hybrid of the ORM and PRM approaches works best. Moreover, VLLMs serve as more reliable judges on text-heavy tasks than on visually dominant ones, underscoring the need for specialized vision-language reward models.
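
For reference, the Best-of-N setup mentioned above can be sketched in a few lines: a generator produces N candidate responses and a reward model keeps the one it scores highest. The `generate_candidates` and `reward_model` callables below are hypothetical stand-ins, not the study's actual models.

```python
# Minimal Best-of-N (BoN) selection sketch; generator and reward model are
# hypothetical placeholders for, e.g., GPT-4o and a vision-language judge.

import random
from typing import Callable, List, Tuple


def best_of_n(
    question: str,
    generate_candidates: Callable[[str, int], List[str]],
    reward_model: Callable[[str, str], float],
    n: int = 8,
) -> Tuple[str, float]:
    """Return the candidate response with the highest reward-model score."""
    candidates = generate_candidates(question, n)
    scored = [(c, reward_model(question, c)) for c in candidates]
    return max(scored, key=lambda pair: pair[1])


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    dummy_generate = lambda q, n: [f"candidate answer {i}" for i in range(n)]
    dummy_reward = lambda q, c: random.random()
    best, score = best_of_n("Describe the image.", dummy_generate, dummy_reward, n=4)
    print(best, round(score, 3))
```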

To assess ViLPRM’s effectiveness, experiments were conducted on VILBENCH using different reward models and solution samplers. The study compared performance across multiple VLLMs, including Qwen2.5-VL-3B, InternVL-2.5-8B, GPT-4o, and o1. Results show that PRMs generally outperform ORMs, improving accuracy by 1.4%, although gains on o1’s responses were minimal because they expose limited step-level detail. ViLPRM surpassed other PRMs, including URSA, by 0.9%, demonstrating more consistent response selection. The findings also suggest that existing VLLMs are not robust enough as reward models, highlighting the need for specialized vision-language PRMs that perform well beyond math reasoning tasks.

In conclusion, vision-language PRMs perform well when reasoning steps are clearly segmented, as in structured tasks like mathematics. In tasks with unclear step divisions, however, PRMs can reduce accuracy, particularly in visually dominant cases. Prioritizing key steps rather than weighting all steps equally improves performance. Current multimodal reward models also struggle to generalize: PRMs trained on specific domains often fail in others. Enhancing training by incorporating diverse data sources and adaptive reward mechanisms is therefore crucial. The introduction of ViLReward-73K improves PRM accuracy by 3.3%, but further advances in step segmentation and evaluation frameworks are needed for robust multimodal reward models.
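
To make the "prioritize key steps" point concrete, the illustrative sketch below replaces a uniform mean over per-step PRM rewards with a weighted aggregate that up-weights steps marked as pivotal. The weighting heuristic is an assumption for illustration, not the paper's method.

```python
# Illustrative only: aggregate per-step PRM rewards with extra weight on "key"
# steps instead of a uniform mean. The 2x weighting is an assumed heuristic.

from typing import List


def weighted_prm_score(step_rewards: List[float], key_steps: List[int]) -> float:
    """Aggregate step rewards, up-weighting indices marked as key steps."""
    weights = [2.0 if i in key_steps else 1.0 for i in range(len(step_rewards))]
    return sum(w * r for w, r in zip(weights, step_rewards)) / sum(weights)


if __name__ == "__main__":
    rewards = [0.9, 0.4, 0.8, 0.95]                        # per-step PRM scores
    print(weighted_prm_score(rewards, key_steps=[1, 3]))   # steps 1 and 3 treated as pivotal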


Check out the Paper. All credit for this research goes to the researchers of this project.


    The post Advancing Vision-Language Reward Models: Challenges, Benchmarks, and the Role of Process-Supervised Learning appeared first on MarkTechPost.
