    This AI Paper from Salesforce Introduces VLM2VEC and MMEB: A Contrastive Framework and Benchmark for Universal Multimodal Embeddings

    April 11, 2025

    Multimodal embeddings combine visual and textual data into a single representational space, enabling systems to understand and relate images and language meaningfully. These embeddings support various tasks, including visual question answering, retrieval, classification, and grounding. The technology is especially important for AI systems that interpret real-world content through both visual and linguistic lenses, such as document analysis tools, digital assistants, and visual search engines.

    A pressing challenge has been the inability of current models to generalize across diverse tasks and modalities effectively. Most models are trained for highly specific tasks or underperform when applied to unfamiliar datasets. Furthermore, without a broad and unified benchmark, evaluating performance across multimodal tasks becomes inconsistent and fragmented. This limits the models’ capability to handle the variety of functions required in realistic, cross-domain applications, especially when new data distributions are introduced.

    Several models, such as CLIP, BLIP, and SigLIP, have been proposed for generating visual-textual embeddings. These models typically use separate encoders for images and text, merging their outputs through simple operations such as score-level fusion. While these approaches offer baseline utility, they suffer from limited cross-modal reasoning and generalization ability. Their performance in zero-shot conditions tends to decline due to shallow fusion strategies and the lack of task-specific instruction handling during training.
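
    As an illustration of this dual-encoder pattern, the sketch below is a minimal PyTorch toy (the module names and dimensions are invented here, not taken from the actual CLIP, BLIP, or SigLIP implementations): each modality is encoded independently, and the two embeddings interact only at the score level through cosine similarity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DualEncoder(nn.Module):
        """Toy CLIP-style model: separate encoders, score-level fusion only."""
        def __init__(self, img_dim=2048, txt_dim=768, embed_dim=512):
            super().__init__()
            # Stand-ins for a vision backbone and a text backbone.
            self.image_proj = nn.Linear(img_dim, embed_dim)
            self.text_proj = nn.Linear(txt_dim, embed_dim)

        def forward(self, image_feats, text_feats):
            # Each modality is embedded independently; no cross-modal attention.
            img = F.normalize(self.image_proj(image_feats), dim=-1)
            txt = F.normalize(self.text_proj(text_feats), dim=-1)
            # "Fusion" happens only here, as a matrix of cosine similarities.
            return img @ txt.T  # [num_images, num_texts]

    model = DualEncoder()
    scores = model(torch.randn(4, 2048), torch.randn(4, 768))  # score-level fusion

    Because nothing upstream of the similarity score mixes the two modalities, a model like this cannot condition its representations on a task instruction, which is the gap the work described next targets.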

    In a collaboration between researchers from Salesforce Research and the University of Waterloo, a new model called VLM2VEC was introduced alongside a comprehensive benchmark named MMEB. MMEB comprises 36 datasets across four major tasks: classification, visual question answering, retrieval, and visual grounding. It divides the datasets into 20 used for training and 16 for evaluation, including out-of-distribution tasks. The VLM2VEC framework is designed to convert any vision-language model into an embedding model through contrastive training, allowing it to handle any combination of text and image inputs while following task instructions.
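
    To make the instruction-following setup concrete, the hypothetical snippet below shows one way a task instruction, optional text, and optional image could be packaged into a single query before encoding. The field names and instruction wording are illustrative, not the paper's actual templates.

    # Illustrative only: the instruction text and dataset fields are invented,
    # not the exact prompt templates used in the VLM2VEC paper.
    def build_query(task, instruction, text=None, image=None):
        """Package a task instruction with optional text/image into one query."""
        return {
            "task": task,
            "prompt": f"{instruction}\n{text or ''}".strip(),
            "image": image,  # e.g. a PIL.Image or pixel tensor, or None
        }

    retrieval_query = build_query(
        task="retrieval",
        instruction="Find a caption that matches the given photo.",
        image="photo_of_a_dog.jpg",
    )
    vqa_query = build_query(
        task="vqa",
        instruction="Answer the question about the image.",
        text="What color is the car?",
        image="street_scene.jpg",
    )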

    To build VLM2VEC, the research team used backbone models such as Phi-3.5-V and LLaVA-1.6. The method begins by constructing task-specific instruction-based queries and targets, processed through a vision-language model to generate embeddings. Contrastive training is employed using the InfoNCE loss function with cosine similarity, aligning embeddings by maximizing the similarity between matching query-target pairs while minimizing it for mismatches. To support large batch sizes, critical for training with diverse negatives, the researchers used GradCache, which splits batches into memory-manageable sub-batches and accumulates gradients. This process ensures efficient training even with the high memory demands of multimodal inputs. Task-specific instructions are embedded within the training pipeline to help the model adapt its encoding to the nature of the task, such as grounding or retrieval, further boosting its generalization capabilities.
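
    A minimal sketch of that contrastive objective, assuming in-batch negatives and an illustrative temperature value (the paper's exact hyperparameters are not reproduced here):

    import torch
    import torch.nn.functional as F

    def info_nce_loss(query_emb, target_emb, temperature=0.05):
        """InfoNCE over in-batch negatives: each query's positive is its own
        target; every other target in the batch acts as a negative."""
        q = F.normalize(query_emb, dim=-1)
        t = F.normalize(target_emb, dim=-1)
        logits = q @ t.T / temperature          # [batch, batch] cosine similarities
        labels = torch.arange(q.size(0), device=q.device)
        return F.cross_entropy(logits, labels)  # pushes the diagonal pairs together

    # In GradCache-style training, embeddings are computed for memory-sized
    # sub-batches first, then the loss is taken over the full set so gradients
    # still see all in-batch negatives.
    loss = info_nce_loss(torch.randn(32, 512), torch.randn(32, 512))

    Larger batches matter here because every additional in-batch target is a free negative example, which is why the memory savings from sub-batching translate directly into a stronger contrastive signal.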

    Performance results demonstrate the advantage of the proposed method. The best-performing version of VLM2VEC used LLaVA-1.6 as its backbone, applied LoRA tuning, and processed images at 1344 × 1344 resolution. This configuration achieved a Precision@1 score of 62.9% across all 36 MMEB datasets. In zero-shot tests on the 16 out-of-distribution datasets, it maintained a strong 57.1% score. Compared to the best-performing baseline model without fine-tuning, which scored 44.7%, VLM2VEC showed an 18.2-point improvement. Compared to the top fine-tuned baseline at 47.2%, the improvement was 15.7 points. Across all task categories—classification, VQA, retrieval, and grounding—the model consistently scored above 50%, a level of performance not matched by any baseline. The results also indicate that LoRA-tuned variants outperformed those trained with full fine-tuning, showing that parameter-efficient training strategies can deliver higher accuracy.

    The research clearly outlines a solution to the problem of task-specific multimodal embedding tools that lack generalization. By combining a well-structured training framework and a robust benchmark, the study demonstrates a universal embedding model that handles varied tasks effectively using contrastive training and instruction-following. This development marks a meaningful step forward in scalable, adaptable multimodal AI.


    Check out the Paper and Project. All credit for this research goes to the researchers of this project.

    The post This AI Paper from Salesforce Introduces VLM2VEC and MMEB: A Contrastive Framework and Benchmark for Universal Multimodal Embeddings appeared first on MarkTechPost.
