
    ReVisual-R1: An Open-Source 7B Multimodal Large Language Model (MLLM) that Achieves Long, Accurate, and Thoughtful Reasoning

    June 19, 2025

    The Challenge of Multimodal Reasoning

    Recent breakthroughs in text-based language models, such as DeepSeek-R1, have demonstrated that reinforcement learning (RL) can help develop strong reasoning skills. Motivated by this, researchers have tried to apply the same RL techniques to multimodal large language models (MLLMs) to enhance their ability to reason across both visual and textual inputs. However, these attempts have not been entirely successful: MLLMs still struggle with complex reasoning tasks. This suggests that simply reusing RL recipes from text-only models does not transfer cleanly to multimodal settings, where the interaction between different data types introduces new challenges that call for more tailored approaches.

    Evolution of Multimodal Language Models

    Recent research in MLLMs builds on the progress of LLMs by combining visual inputs with language understanding. Early models, such as CLIP and MiniGPT-4, laid the groundwork, followed by instruction-tuned models such as LLaVA. While closed-source models demonstrate strong reasoning through lengthy chain-of-thought (CoT) outputs, open-source models have focused mainly on fine-tuning and CoT adaptations, which often yield brief answers that limit in-depth rationale. RL, including techniques such as reinforcement learning from human feedback (RLHF) and Group Relative Policy Optimization (GRPO), has shown promise for enhancing reasoning in LLMs. Inspired by this, recent work aims to apply RL in MLLMs to improve visual reasoning and support richer, longer outputs.

    Introduction of ReVisual-R1

    Researchers from Tsinghua University, Shanghai Jiao Tong University, and the Shanghai Artificial Intelligence Laboratory have introduced ReVisual-R1, a 7B-parameter open-source MLLM that sets a new standard in multimodal reasoning. Their study reveals three key insights: (1) Careful text-only pretraining provides a strong cold-start, outperforming many existing MLLMs even before RL; (2) The commonly used GRPO algorithm suffers from gradient stagnation, which they address with a novel method called Prioritized Advantage Distillation (PAD); and (3) Adding a final text-only RL phase after multimodal RL further enhances reasoning. Their three-stage approach, which includes text pretraining, multimodal RL, and final text RL, strikes an effective balance between visual grounding and deep cognitive reasoning. 
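
    The paper's exact PAD formulation is not reproduced in this summary, so the following is only a minimal Python sketch of the idea: compute standard GRPO-style group-relative advantages, then prioritize the rollouts whose advantage is far from zero so that near-zero (stagnant-gradient) samples do not dominate the update. The pad_select helper and its keep_ratio are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def group_relative_advantages(rewards):
        """GRPO-style advantages: reward minus the group mean, scaled by the group std."""
        rewards = np.asarray(rewards, dtype=np.float64)
        std = rewards.std()
        if std < 1e-8:                      # all rollouts identical -> no learning signal
            return np.zeros_like(rewards)
        return (rewards - rewards.mean()) / std

    def pad_select(advantages, keep_ratio=0.5):
        """Keep the rollouts with the largest |advantage|; drop near-zero ones (illustrative)."""
        advantages = np.asarray(advantages, dtype=np.float64)
        k = max(1, int(len(advantages) * keep_ratio))
        keep = np.argsort(-np.abs(advantages))[:k]
        mask = np.zeros(len(advantages), dtype=bool)
        mask[keep] = True
        return mask

    # Example: a group of 8 rollouts for one prompt, rewarded 0/1 for correctness.
    rewards = [1, 0, 0, 1, 0, 0, 0, 1]
    adv = group_relative_advantages(rewards)
    mask = pad_select(adv, keep_ratio=0.5)
    print(adv.round(3))
    print(mask)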

    Developing the GRAMMAR Dataset

    The GRAMMAR dataset was developed after the researchers observed that existing multimodal cold-start datasets lack the depth needed to train strong reasoning models. Text-only datasets, such as DeepMath, showed better gains on both text and multimodal tasks, suggesting that textual complexity better stimulates reasoning. To address this, GRAMMAR combines diverse textual and multimodal samples through a multi-stage curation process. This data fuels the Staged Reinforcement Optimization (SRO) framework, which first trains the model with multimodal RL, enhanced by PAD to avoid stalled learning and by an efficient-length reward to curb verbosity, and then applies a text-only RL phase to boost reasoning and language fluency.
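
    As a rough illustration of how an efficient-length reward can curb verbosity without forbidding long chains of thought, here is a minimal sketch; the function name, target length, and penalty weight are assumptions, not values from the paper.

    def efficient_length_reward(is_correct: bool, n_tokens: int,
                                target_len: int = 2048, penalty_weight: float = 0.2) -> float:
        """Correctness reward minus a penalty on tokens beyond a target budget (illustrative)."""
        correctness = 1.0 if is_correct else 0.0
        # Only the overflow beyond the target length is penalized, and the penalty is capped,
        # so long reasoning is allowed but unbounded verbosity is discouraged.
        overflow = max(0, n_tokens - target_len) / target_len
        return correctness - penalty_weight * min(overflow, 1.0)

    print(efficient_length_reward(True, 1500))   # within budget: 1.0
    print(efficient_length_reward(True, 4096))   # correct but verbose: 0.8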

    Three-Stage Training Pipeline

    The experiments for ReVisual-R1 followed a structured three-stage training process: starting with pure text data to build a language foundation, then incorporating multimodal reinforcement learning for visual-text reasoning, and finally fine-tuning with text-only RL to refine reasoning and fluency. The model was evaluated across a range of benchmarks and outperformed both open-source and some commercial models on multimodal and math reasoning tasks, achieving top results on 9 out of 10 benchmarks. Ablation studies confirmed the importance of the training order and of Prioritized Advantage Distillation, which focused learning on high-quality responses and led to a significant improvement in overall performance.
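
    To make the stage ordering concrete, here is a minimal configuration-style sketch of the curriculum; the dataset names and the commented-out run_sft/run_rl trainer calls are hypothetical placeholders meant only to show the sequence, not the authors' training code.

    from dataclasses import dataclass

    @dataclass
    class Stage:
        name: str
        data: str      # which data the stage consumes (names are illustrative)
        method: str    # "sft" for the cold start, "rl" for the GRPO/PAD phases

    CURRICULUM = [
        Stage("text_cold_start", data="text_only_reasoning",  method="sft"),
        Stage("multimodal_rl",   data="image_text_reasoning", method="rl"),
        Stage("text_rl",         data="text_only_reasoning",  method="rl"),
    ]

    def train(model, curriculum=CURRICULUM):
        for stage in curriculum:
            print(f"[{stage.name}] method={stage.method} data={stage.data}")
            # model = run_sft(model, stage.data) if stage.method == "sft" else run_rl(model, stage.data)
        return model

    train(model=None)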

    Summary and Contributions

    In conclusion, ReVisual-R1 is a 7B open-source MLLM built to tackle the challenges of complex multimodal reasoning. Instead of relying solely on scale, it uses a well-designed three-stage training process: high-quality text data for a foundational reasoning ability, followed by a multimodal RL phase stabilized by the new PAD technique, and ending with a final text-based RL refinement. This thoughtful curriculum significantly boosts performance. ReVisual-R1 sets a new benchmark among 7B models, excelling in tasks like MathVerse and AIME. The work highlights how structured training can unlock deeper reasoning in MLLMs.


    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.
