
    This AI Paper Introduces GRIT: A Method for Teaching MLLMs to Reason with Images by Interleaving Text and Visual Grounding

    May 25, 2025

    Multimodal Large Language Models (MLLMs) aim to combine the richness of visual content with the logic of language. However, despite advances in the field, many models struggle to connect the two domains effectively, which limits their performance on complex reasoning tasks that involve visual components.

    A major challenge in building such models is their limited ability to combine visual understanding with logical thinking. Current systems often produce textual outputs that explain reasoning but fail to reference the specific parts of an image they rely on. This creates a gap where models may arrive at an answer without clearly showing how the visual evidence contributed to their decision. It’s also difficult to ensure that models generate visual reasoning steps directly connecting to their answers. The fundamental problem lies in how to naturally train models to interleave text and image reasoning without needing large datasets annotated with visual references, which are scarce and expensive to produce.

    Existing methods try to address this by using reinforcement learning or prompting strategies. Some systems generate bounding box coordinates as answers, while others produce step-by-step textual reasoning chains. However, these approaches have limitations. Models that only produce bounding boxes lack explanation, while those generating only text risk ignoring visual evidence. Previous methods often separate visual grounding and reasoning, making it hard for models to explain why a particular visual element leads to a certain conclusion. While some models use dense supervision data or additional tools, they generally require heavy annotation and do not scale well. This makes it difficult for developers to create models that can explain their reasoning transparently and handle various visual tasks with minimal data.

    Researchers from UC Santa Cruz and eBay introduced a new method called Grounded Reasoning with Images and Text (GRIT) that allows MLLMs like Qwen 2.5-VL and InternVL 3 to generate reasoning chains that mix natural language with explicit bounding box coordinates pointing to relevant image regions. This unified approach enables models to reason about and visually ground their answers without requiring dense annotations or labeled reasoning chains. GRIT also uses a lightweight reinforcement learning algorithm called GRPO-GR, which optimizes both the accuracy of the final answer and the structure of the reasoning, encouraging models to include specific tokens like <think> and <rethink>, as well as bounding box formats. This design eliminates the need for costly annotated data while ensuring that models learn to reference visual content meaningfully within their logical steps.
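
    To make that output format concrete, here is a minimal Python sketch of what a GRIT-style grounded reasoning chain could look like, together with a toy structure reward of the kind GRPO-GR is described as optimizing. The <think> and <rethink> tokens come from the article; the bounding-box syntax, the regular expression, and the reward values are illustrative assumptions, not the paper's implementation.

    import re

    # A GRIT-style grounded reasoning chain: natural language interleaved
    # with bounding boxes. The [x1, y1, x2, y2] pixel convention is an
    # assumed illustration, not necessarily the paper's exact format.
    sample_output = (
        "<think>The question asks how many mugs are on the shelf. "
        "One mug is visible at [34, 120, 98, 190] and another at "
        "[140, 118, 205, 192].</think>"
        "<rethink>Both boxes lie on the same shelf, so the count is 2.</rethink>"
        "Answer: 2"
    )

    BOX_RE = re.compile(r"\[\s*\d+\s*,\s*\d+\s*,\s*\d+\s*,\s*\d+\s*\]")

    def structure_reward(output: str) -> float:
        # Toy reward: 1.0 only when the reasoning tokens are present and
        # at least one well-formed bounding box appears in the chain.
        has_think = "<think>" in output and "</think>" in output
        has_box = BOX_RE.search(output) is not None
        return 1.0 if (has_think and has_box) else 0.0

    print(structure_reward(sample_output))  # prints 1.0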

    The methodology in GRIT focuses on generating outputs that combine textual reasoning and visual grounding seamlessly. Instead of requiring models to process cropped images or additional visual data after generating bounding boxes, GRIT teaches models to rely on their internal understanding of the image: bounding boxes are produced during the reasoning process itself, and models learn to reflect on those coordinates within their logical steps. The reinforcement learning framework rewards the correct use of bounding box formats and reasoning structure, guiding models toward coherent, grounded reasoning chains. GRIT is remarkably data-efficient, using only 20 image-question-answer triplets drawn from the Visual Spatial Reasoning and TallyQA datasets. Training was conducted on NVIDIA A100 GPUs with AdamW and a cosine learning-rate schedule applied over 200 training steps, underscoring how lightweight the recipe is given such limited data.
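
    The reported optimizer choices map directly onto standard PyTorch components. The sketch below wires up AdamW with a cosine learning-rate schedule over 200 steps, as the article describes; the learning rate, the stand-in model, and the dummy loss are assumptions added purely so the loop runs end to end.

    import torch
    from torch.optim import AdamW
    from torch.optim.lr_scheduler import CosineAnnealingLR

    # Stand-in for an MLLM policy such as Qwen 2.5-VL; the real model
    # would be loaded from its own library and fine-tuned with GRPO-GR.
    model = torch.nn.Linear(16, 16)

    optimizer = AdamW(model.parameters(), lr=1e-5)  # lr is an assumed value
    scheduler = CosineAnnealingLR(optimizer, T_max=200)  # cosine decay over 200 steps

    for step in range(200):
        # In GRIT the loss would come from the GRPO-GR objective, which
        # combines answer-accuracy and reasoning-structure rewards; a dummy
        # regression loss keeps this sketch self-contained and runnable.
        loss = model(torch.randn(4, 16)).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()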

    Performance evaluations showed that GRIT-trained models outperform several baselines in both reasoning and grounding accuracy. For example, Qwen 2.5-VL trained with GRIT achieved 72.9% answer accuracy on Visual Spatial Reasoning, 47.8% on TallyQA, and 62.8% on GQA. It also reached grounding IoU scores of 0.325 on VSR and 0.447 on TallyQA. In contrast, baselines such as Direct Query and Chain-of-Thought scored significantly lower, showing a limited ability to unify reasoning with visual grounding. GRIT models exhibited a strong correlation between the referenced visual regions and the accompanying textual reasoning, producing outputs that meaningfully connect image evidence to logical thought. GRIT also improved on out-of-domain benchmarks, though gains were more pronounced on in-domain data, highlighting the importance of training-data diversity for broader generalization.
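
    The grounding IoU figures quoted above measure the overlap between predicted and reference boxes using the standard intersection-over-union ratio. A minimal implementation is sketched below, assuming axis-aligned [x1, y1, x2, y2] boxes; the corner convention is my assumption, not a detail from the paper.

    def box_iou(a, b):
        # Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2].
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    # Two heavily overlapping boxes score close to 1.0.
    print(box_iou([0, 0, 100, 100], [10, 10, 100, 100]))  # 0.81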

    In conclusion, the research addressed the problem of disconnected reasoning and visual grounding in MLLMs by introducing GRIT. The method allows models to reason with images through a simple, efficient approach that requires minimal data. GRIT successfully teaches MLLMs to combine visual evidence with logical reasoning in a unified output, achieving strong performance across multiple benchmarks and demonstrating a promising step toward more interpretable AI systems.


    Check out the Paper, Project, and GitHub Page. All credit for this research goes to the researchers of this project.

    The post This AI Paper Introduces GRIT: A Method for Teaching MLLMs to Reason with Images by Interleaving Text and Visual Grounding appeared first on MarkTechPost.
