
ReVisual-R1: An Open-Source 7B Multimodal Large Language Model (MLLM) that Achieves Long, Accurate and Thoughtful Reasoning

    June 19, 2025

    The Challenge of Multimodal Reasoning

Recent breakthroughs in text-based language models, such as DeepSeek-R1, have demonstrated that reinforcement learning (RL) can aid in developing strong reasoning skills. Motivated by this, researchers have attempted to apply the same RL techniques to multimodal large language models (MLLMs) to enhance their ability to reason across both visual and textual inputs. However, these attempts haven't been entirely successful; MLLMs still struggle with complex reasoning tasks. This suggests that simply reusing RL strategies from text-only models may not work well in multimodal settings, where the interaction between different data types introduces new challenges that require more tailored approaches.

    Evolution of Multimodal Language Models

Recent research in MLLMs builds on the progress of LLMs by combining visual inputs with language understanding. Early models, such as CLIP and MiniGPT-4, laid the groundwork, followed by instruction-tuned models like LLaVA. While closed-source models demonstrate strong reasoning through lengthy chain-of-thought (CoT) outputs, open-source models have primarily focused on fine-tuning and CoT adaptations. However, these often yield brief answers that limit in-depth rationale. RL, including techniques such as RL from human feedback (RLHF) and Group Relative Policy Optimization (GRPO), has shown promise for enhancing reasoning in LLMs. Inspired by this, recent work aims to apply RL to MLLMs to improve visual reasoning and support richer, longer outputs.

    Introduction of ReVisual-R1

    Researchers from Tsinghua University, Shanghai Jiao Tong University, and the Shanghai Artificial Intelligence Laboratory have introduced ReVisual-R1, a 7B-parameter open-source MLLM that sets a new standard in multimodal reasoning. Their study reveals three key insights: (1) Careful text-only pretraining provides a strong cold-start, outperforming many existing MLLMs even before RL; (2) The commonly used GRPO algorithm suffers from gradient stagnation, which they address with a novel method called Prioritized Advantage Distillation (PAD); and (3) Adding a final text-only RL phase after multimodal RL further enhances reasoning. Their three-stage approach, which includes text pretraining, multimodal RL, and final text RL, strikes an effective balance between visual grounding and deep cognitive reasoning. 
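To make the gradient-stagnation issue concrete: GRPO scores each rollout against the mean of its sampled group, so when every response in a group earns the same reward, all advantages collapse to zero and the policy update carries no learning signal. Below is a minimal sketch of that failure mode together with one plausible reading of PAD as advantage-weighted subsampling; the `pad_subsample` weighting scheme is an illustrative assumption, not the authors' exact formulation.

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantage: reward minus the group mean, scaled by the
    group std. If every rollout in the group gets the same reward, all
    advantages are zero and the policy gradient vanishes (stagnation)."""
    rewards = np.asarray(rewards, dtype=np.float64)
    std = rewards.std()
    if std < 1e-8:                       # degenerate group: no learning signal
        return np.zeros_like(rewards)
    return (rewards - rewards.mean()) / std

def pad_subsample(advantages, k, rng=None):
    """Illustrative take on Prioritized Advantage Distillation: sample k
    rollouts with probability proportional to |advantage|, so rollouts
    that carry no signal rarely reach the policy update."""
    rng = rng or np.random.default_rng(0)
    priority = np.abs(np.asarray(advantages))
    if priority.sum() < 1e-8:            # whole group stagnated: skip it
        return np.array([], dtype=int)
    probs = priority / priority.sum()
    k = min(k, int(np.count_nonzero(probs)))   # replace=False needs enough mass
    return rng.choice(len(probs), size=k, replace=False, p=probs)

# A group where three of four rollouts tie at reward 1.0:
adv = grpo_advantages([1.0, 1.0, 0.0, 1.0])
print(adv)                      # [ 0.577  0.577 -1.732  0.577]
print(pad_subsample(adv, k=2))  # indices biased toward informative rollouts
```

Note that sampling by |advantage| keeps strongly wrong rollouts as well as strongly right ones, so the update still sees contrastive signal rather than just the ties.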

    Developing the GRAMMAR Dataset

The GRAMMAR dataset was developed after the researchers observed that existing multimodal cold-start datasets lack the depth needed to train strong reasoning models. Text-only datasets, such as DeepMath, showed better gains on both text and multimodal tasks, suggesting that textual complexity better stimulates reasoning. To address this, GRAMMAR combines diverse textual and multimodal samples using a multi-stage curation process. This data fuels the Staged Reinforcement Optimization (SRO) framework, which first trains models with multimodal RL, enhanced by PAD to avoid stalled learning and by an efficient-length reward to curb verbosity, and then applies a text-only RL phase to boost reasoning and language fluency.
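The paper's exact reward terms aren't reproduced here, but a length-shaped reward of the kind the SRO framework describes can be sketched as follows; the token budget and decay rate are illustrative assumptions, not the paper's values.

```python
def efficient_length_reward(is_correct: bool, n_tokens: int,
                            budget: int = 2048, penalty: float = 0.5) -> float:
    """Length-shaped reward sketch: full credit for a correct answer within
    the token budget, linearly decaying credit beyond it. The budget and
    decay rate are illustrative, not the paper's values."""
    base = 1.0 if is_correct else 0.0
    if n_tokens <= budget:
        return base
    overflow = (n_tokens - budget) / budget       # fraction over budget
    return base - penalty * min(overflow, 1.0)    # capped verbosity penalty

print(efficient_length_reward(True, 1500))   # 1.0  -- concise and correct
print(efficient_length_reward(True, 3072))   # 0.75 -- correct but verbose
```

Shaping only activates past the budget, so concise correct answers keep full credit while increasingly verbose ones are progressively discounted.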

    Three-Stage Training Pipeline

The experiments for ReVisual-R1 followed the structured three-stage training process: starting with pure text data to build a language foundation, then applying multimodal reinforcement learning for visual-text reasoning, and finally fine-tuning with text-only RL to refine reasoning and fluency. Evaluated across a range of benchmarks, ReVisual-R1 outperformed both open-source and some commercial models on multimodal and math reasoning tasks, achieving top results on nine out of ten benchmarks. Ablation studies confirmed the importance of the training order and of PAD, which focused learning on high-quality responses and produced a significant improvement in overall performance.
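As a rough illustration of how such a staged curriculum can be expressed, the sketch below renders the three stages as data that a training driver iterates over; the stage names, dataset labels, and the elided `run_stage` hook are placeholders, not the authors' configuration.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str        # stage label
    data: str        # dataset mix used in this stage
    algorithm: str   # supervised cold start or RL
    use_pad: bool    # whether PAD filtering is active

# Placeholder rendering of the three-stage curriculum; the labels below
# are illustrative, not taken from the authors' configuration.
CURRICULUM = [
    Stage("cold_start",    data="text_only_reasoning", algorithm="SFT",  use_pad=False),
    Stage("multimodal_rl", data="GRAMMAR_multimodal",  algorithm="GRPO", use_pad=True),
    Stage("text_rl",       data="GRAMMAR_text",        algorithm="GRPO", use_pad=True),
]

def train(model, stages=CURRICULUM):
    """Run the stages in order; each stage resumes from the previous one."""
    for stage in stages:
        print(f"stage={stage.name} data={stage.data} algo={stage.algorithm}")
        # model = run_stage(model, stage)   # actual training loop elided
    return model
```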

    Summary and Contributions

    In conclusion, ReVisual-R1 is a 7B open-source MLLM built to tackle the challenges of complex multimodal reasoning. Instead of relying solely on scale, it uses a well-designed three-stage training process: starting with high-quality text data for foundational rationale, followed by a multimodal RL phase enhanced with a new PAD technique for stability, and ending with a final text-based RL refinement. This thoughtful curriculum significantly boosts performance. ReVisual-R1 sets a new benchmark among 7B models, excelling in tasks like MathVerse and AIME. The work highlights how structured training can unlock deeper reasoning in MLLMs. 


Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.

Source: MarkTechPost
