
    This AI Paper Introduces LLaDA-V: A Purely Diffusion-Based Multimodal Large Language Model for Visual Instruction Tuning and Multimodal Reasoning

    June 3, 2025

    Multimodal large language models (MLLMs) are designed to process and generate content across various modalities, including text, images, audio, and video. These models aim to understand and integrate information from different sources, enabling applications such as visual question answering, image captioning, and multimodal dialogue systems. The development of MLLMs represents a significant step toward creating AI systems that can interpret and interact with the world in a more human-like manner.

    A primary challenge in developing effective MLLMs lies in integrating diverse input types, particularly visual data, into language models while maintaining high performance across tasks. Existing models often struggle with balancing strong language understanding and effective visual reasoning, especially when scaling to complex data. Further, many models require large datasets to perform well, making it difficult to adapt to specific tasks or domains. These challenges highlight the need for more efficient and scalable approaches to multimodal learning.

    Current MLLMs predominantly utilize autoregressive methods, predicting one token at a time in a left-to-right manner. While effective, this approach has limitations in handling complex multimodal contexts. Alternative methods, such as diffusion models, have been explored; however, they often exhibit weaker language understanding due to their restricted architectures or inadequate training strategies. These limitations suggest a gap where a purely diffusion-based model could offer competitive multimodal reasoning capabilities if designed effectively.
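    For contrast, the left-to-right decoding that autoregressive MLLMs rely on can be summarized in a few lines. The snippet below is a generic greedy decoding loop, not the implementation of any particular model discussed here; the absence of a KV cache and the greedy argmax choice are simplifying assumptions.

    ```python
    import torch

    def autoregressive_generate(model, input_ids, max_new_tokens=32, eos_id=None):
        """Generic greedy left-to-right decoding: one token per forward pass."""
        for _ in range(max_new_tokens):
            logits = model(input_ids)                     # (1, seq_len, vocab_size)
            next_id = logits[0, -1].argmax().view(1, 1)   # predict only the next token
            input_ids = torch.cat([input_ids, next_id], dim=1)
            if eos_id is not None and next_id.item() == eos_id:
                break
        return input_ids
    ```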

    Researchers from Renmin University of China and Ant Group introduced LLaDA-V, a purely diffusion-based multimodal large language model (MLLM) that integrates visual instruction tuning with masked diffusion models. Built upon LLaDA, a large language diffusion model, LLaDA-V incorporates a vision encoder and an MLP connector that projects visual features into the language embedding space, enabling effective multimodal alignment. This design departs from the autoregressive paradigms dominant in current multimodal approaches, aiming to overcome their limitations while maintaining data efficiency and scalability.
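    A minimal sketch of such a connector is shown below, assuming a SigLIP2-style vision encoder that emits patch features and a two-layer GELU MLP as the projector; the dimensions, layer count, and class name are illustrative assumptions rather than the paper’s reported configuration.

    ```python
    import torch
    import torch.nn as nn

    class VisionLanguageConnector(nn.Module):
        """Projects vision-encoder patch features into the language embedding space."""
        def __init__(self, vision_dim: int = 1152, lm_dim: int = 4096):
            super().__init__()
            self.proj = nn.Sequential(
                nn.Linear(vision_dim, lm_dim),
                nn.GELU(),
                nn.Linear(lm_dim, lm_dim),
            )

        def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
            # patch_features: (batch, num_patches, vision_dim)
            return self.proj(patch_features)  # (batch, num_patches, lm_dim)

    # The projected image tokens are concatenated with the text token embeddings
    # before the combined sequence is fed to the diffusion language model.
    connector = VisionLanguageConnector()
    image_feats = torch.randn(1, 196, 1152)   # stand-in for SigLIP2 patch embeddings
    image_tokens = connector(image_feats)     # (1, 196, 4096)
    ```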

    LLaDA-V employs a masked diffusion process in which text responses are gradually refined through iterative prediction of masked tokens. Unlike autoregressive models that predict tokens sequentially, LLaDA-V generates outputs by reversing the masking process. The model is trained in three stages: the first aligns vision and language embeddings by mapping SigLIP2 visual features into LLaDA’s language space; the second fine-tunes the model on 10 million single-image samples and 2 million multimodal samples from MAmmoTH-VL; the third focuses on reasoning, using 900K QA pairs from VisualWebInstruct and a mixed-dataset strategy. Bidirectional attention improves context comprehension, enabling robust multimodal understanding.
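    A rough sketch of this reverse (unmasking) process is given below. It assumes a language model with bidirectional attention that returns per-position logits over the whole sequence; the fixed step budget and confidence-based unmasking order are simplifying assumptions, not the paper’s exact sampler.

    ```python
    import torch

    def masked_diffusion_generate(model, prompt_ids, response_len, mask_id, num_steps=8):
        """Start from a fully masked response and iteratively commit the most
        confident token predictions until no masked positions remain."""
        response = torch.full((1, response_len), mask_id, device=prompt_ids.device)
        per_step = max(1, response_len // num_steps)

        while (response == mask_id).any():
            still_masked = response[0] == mask_id
            logits = model(torch.cat([prompt_ids, response], dim=1))   # (1, T, vocab_size)
            resp_logits = logits[0, prompt_ids.size(1):]               # response positions only
            conf, pred = resp_logits.softmax(-1).max(-1)
            conf[~still_masked] = float("-inf")    # never overwrite already-filled tokens
            k = min(per_step, int(still_masked.sum()))
            fill = conf.topk(k).indices            # most confident masked positions this step
            response[0, fill] = pred[fill]
        return response
    ```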

    In evaluations across 18 multimodal tasks, LLaDA-V demonstrated superior performance compared to hybrid autoregressive-diffusion and purely diffusion-based models. It outperformed LLaMA3-V on most multidisciplinary knowledge and mathematical reasoning benchmarks such as MMMU, MMMU-Pro, and MMStar, scoring 60.1 on MMStar, close to Qwen2-VL’s 60.7, despite using the weaker LLaDA-8B language tower. LLaDA-V also excelled in data efficiency, outperforming LLaMA3-V on MMMU-Pro with 1M training samples against LLaMA3-V’s 9M. Although it lagged on chart and document understanding benchmarks such as AI2D and on real-world scene tasks such as RealWorldQA, LLaDA-V’s results highlight its promise for multimodal tasks.

    In summary, LLaDA-V addresses the challenges of building effective multimodal models by introducing a purely diffusion-based architecture that combines visual instruction tuning with masked diffusion. The approach offers strong multimodal reasoning capabilities while maintaining data efficiency. This work demonstrates the potential of diffusion models in multimodal AI, paving the way for further exploration of probabilistic approaches to complex AI tasks.


    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.

    Source: MarkTechPost