    This AI Paper Introduces GRIT: A Method for Teaching MLLMs to Reason with Images by Interleaving Text and Visual Grounding

    May 25, 2025

    Multimodal Large Language Models (MLLMs) aim to combine the richness of visual content with the logic of language. However, despite advances in the field, many models struggle to connect the two domains effectively, which limits their performance on complex reasoning tasks that involve visual components.

    A major challenge in building such models is their limited ability to combine visual understanding with logical thinking. Current systems often produce textual outputs that explain their reasoning but fail to reference the specific parts of an image they rely on. This creates a gap: a model may arrive at an answer without clearly showing how the visual evidence contributed to its decision. It is also difficult to ensure that models generate visual reasoning steps that connect directly to their answers. The fundamental problem is how to train models to naturally interleave text and image reasoning without large datasets annotated with visual references, which are scarce and expensive to produce.

    Existing methods try to address this by using reinforcement learning or prompting strategies. Some systems generate bounding box coordinates as answers, while others produce step-by-step textual reasoning chains. However, these approaches have limitations. Models that only produce bounding boxes lack explanation, while those generating only text risk ignoring visual evidence. Previous methods often separate visual grounding and reasoning, making it hard for models to explain why a particular visual element leads to a certain conclusion. While some models use dense supervision data or additional tools, they generally require heavy annotation and do not scale well. This makes it difficult for developers to create models that can explain their reasoning transparently and handle various visual tasks with minimal data.

    Researchers from UC Santa Cruz and eBay introduced a new method called Grounded Reasoning with Images and Text (GRIT) that allows MLLMs like Qwen 2.5-VL and InternVL 3 to generate reasoning chains that mix natural language with explicit bounding box coordinates pointing to relevant image regions. This unified approach enables models to reason about and visually ground their answers without requiring dense annotations or labeled reasoning chains. GRIT also uses a lightweight reinforcement learning algorithm called GRPO-GR, which optimizes both the accuracy of the final answer and the structure of the reasoning, encouraging models to include specific tokens like <think> and <rethink>, as well as bounding box formats. This design eliminates the need for costly annotated data while ensuring that models learn to reference visual content meaningfully within their logical steps.
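    To make the interleaved output format concrete, here is a minimal Python sketch of how such a chain could be parsed and format-checked. The <think> and <rethink> tokens come from the article's description of GRPO-GR; the example chain itself and the exact [x1, y1, x2, y2] box syntax are illustrative assumptions, not the paper's verbatim format.

```python
import re

# Hypothetical GRIT-style chain: natural-language reasoning interleaved with
# bounding boxes that point at image regions. The box syntax is an assumption.
example_chain = (
    "<think>The question asks how many mugs sit on the shelf. "
    "I see mugs at [112, 45, 198, 130] and [210, 48, 290, 135].</think>"
    "<rethink>Both boxes fall inside the shelf region [100, 30, 400, 160], "
    "so the count is 2.</rethink> Answer: 2"
)

BOX = re.compile(r"\[(\d+),\s*(\d+),\s*(\d+),\s*(\d+)\]")

def extract_boxes(chain: str) -> list[tuple[int, ...]]:
    """Collect every bounding box referenced in the reasoning chain."""
    return [tuple(int(v) for v in m.groups()) for m in BOX.finditer(chain)]

def well_formed(chain: str) -> bool:
    """A format check of the kind a structure reward could score:
    the special reasoning tokens are present and at least one box is cited."""
    return "<think>" in chain and "</think>" in chain and bool(extract_boxes(chain))

print(extract_boxes(example_chain))  # [(112, 45, 198, 130), (210, 48, 290, 135), (100, 30, 400, 160)]
print(well_formed(example_chain))    # True
```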

    The methodology in GRIT focuses on generating outputs that combine textual reasoning and visual grounding seamlessly. Instead of requiring models to process cropped images or additional visual data after generating bounding boxes, GRIT teaches models to use their internal understanding of the image. Bounding boxes are generated during the reasoning process, and models learn to reflect on these coordinates within their logical reasoning. The reinforcement learning framework rewards the correct use of bounding box formats and reasoning structure, and it guides models to produce coherent, grounded reasoning chains. GRIT demonstrates remarkable data efficiency by using only 20 image-question-answer triplets sourced from Visual Spatial Reasoning and TallyQA datasets. The model training was conducted on NVIDIA A100 GPUs, with optimization techniques like AdamW and a cosine scheduler applied over 200 training steps, which shows the method’s scalability despite limited data.
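    The optimizer choice (AdamW), the cosine learning-rate schedule, and the 200-step budget are stated in the article; everything else in the sketch below (the placeholder model, the learning rate, and the dummy loss standing in for the GRPO-GR reward objective) is an assumption to keep the example self-contained and runnable.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

# Placeholder standing in for an MLLM policy such as Qwen 2.5-VL or InternVL 3.
model = torch.nn.Linear(16, 16)
optimizer = AdamW(model.parameters(), lr=1e-6)        # learning rate is an assumption
scheduler = CosineAnnealingLR(optimizer, T_max=200)   # cosine decay over 200 steps

for step in range(200):
    # In GRPO-GR this would be a policy-gradient loss built from sampled rollouts,
    # combining an answer-accuracy reward with a reasoning/grounding format reward.
    loss = model(torch.randn(4, 16)).pow(2).mean()    # dummy loss for the sketch
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```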

    Performance evaluations revealed that GRIT-trained models outperform several baselines in reasoning and grounding accuracy. For example, Qwen 2.5-VL trained with GRIT achieved 72.9% answer accuracy on Visual Spatial Reasoning, 47.8% on TallyQA, and 62.8% on GQA datasets. It also reached a grounding IoU score of 0.325 on VSR and 0.447 on TallyQA. In contrast, baseline models like Direct Query or Chain-of-Thought often performed significantly lower, showing limited ability to unify reasoning with visual grounding. GRIT models demonstrated a strong correlation between visual regions and textual reasoning, producing outputs that reflected a meaningful connection between image evidence and logical thought. GRIT also showed improvements on out-of-domain benchmarks, though gains were more pronounced on in-domain data, highlighting the importance of training data diversity for broader generalization.
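    The grounding IoU scores above measure how much a box cited in the reasoning chain overlaps an annotated reference region. Below is a minimal sketch of the standard per-box intersection-over-union computation; how GRIT aggregates IoU across multiple boxes in a chain is not specified here, so only the single-pair score is shown.

```python
def box_iou(a: list[int], b: list[int]) -> float:
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction that mostly overlaps the reference box scores about 0.68.
print(round(box_iou([10, 10, 110, 110], [20, 20, 120, 120]), 3))  # 0.681
```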

    In conclusion, the research addressed the problem of disconnected reasoning and visual grounding in MLLMs by introducing GRIT. The method allows models to reason with images through a simple, efficient approach that requires minimal data. GRIT successfully teaches MLLMs to combine visual evidence with logical reasoning in a unified output, achieving strong performance across multiple benchmarks and demonstrating a promising step toward more interpretable AI systems.


    Check out the Paper, Project, and GitHub Page. All credit for this research goes to the researchers of this project.

    The post This AI Paper Introduces GRIT: A Method for Teaching MLLMs to Reason with Images by Interleaving Text and Visual Grounding appeared first on MarkTechPost.

