
    Nomic Open Sources State-of-the-Art Multimodal Embedding Model

    April 2, 2025

    Nomic has announced the release of “Nomic Embed Multimodal,” a groundbreaking embedding model that achieves state-of-the-art performance on visual document retrieval tasks. The new model seamlessly processes interleaved text, images, and screenshots, establishing a new high score on the Vidore-v2 benchmark for visual document retrieval. This advancement is particularly significant for retrieval augmented generation (RAG) applications working with PDF documents, where capturing both visual and textual context is crucial.

    Breaking New Ground in Visual Document Retrieval

    The Nomic Embed Multimodal 7B model has achieved an impressive 62.7 NDCG@5 score on the Vidore-v2 benchmark, representing a 2.8-point improvement over previous best-performing models. This advancement marks a significant milestone in the evolution of multimodal embeddings for document processing.
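NDCG@5 (normalized discounted cumulative gain over the top five results) rewards rankings that place the most relevant documents earliest. A minimal sketch of the metric, with made-up relevance grades for illustration:

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain: relevance discounted by log2 of rank."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=5):
    """NDCG@k: DCG of this ranking divided by the DCG of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Toy example: graded relevance of the top 5 retrieved documents, in rank order.
score = ndcg_at_k([3, 2, 3, 0, 1], k=5)
```

A perfectly ordered ranking scores 1.0, so benchmark numbers like 62.7 are NDCG@5 expressed as a percentage.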

    Unlike traditional retrieval systems that primarily rely on extracted text and often miss crucial visual elements, Nomic’s new model captures the full richness of documents by embedding both text and visual components directly. This approach eliminates the need for complex, error-prone processing pipelines commonly used in document analysis.
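Visual document retrievers in this space often score a query against a page using multi-vector "late interaction" (the MaxSim scheme popularized by ColPali). Whether Nomic's model is dense or multi-vector isn't stated here, but as an illustration, MaxSim sums, for each query token vector, its best match over all document patch vectors:

```python
def maxsim_score(query_vecs, doc_vecs):
    """ColPali-style late interaction: for each query token vector, take the
    maximum dot product over all document patch vectors, then sum the maxima."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return sum(max(dot(q, d) for d in doc_vecs) for q in query_vecs)

# Toy example: 2 query token vectors and 3 document patch vectors (dim 3).
q = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
d = [[0.9, 0.1, 0.0], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]]
score = maxsim_score(q, d)
```

Because each query token matches its own best patch, text in one corner of a page and a figure in another can both contribute to the score.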

    Solving Real-World Document Challenges

    Documents are inherently multimodal, conveying information through text, figures, page layouts, tables, and even fonts. Traditional text-only systems struggle with this complexity, often requiring separate encoders for visual and text inputs or complex preprocessing pipelines.

    Nomic Embed Multimodal provides an elegant solution by supporting interleaved text and image inputs in a single model, making it ideal for:

    • PDF documents and research papers
    • Screenshots of applications and websites
    • Visually rich content where layout matters
    • Multilingual documents where visual context is important
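Once pages, screenshots, and text have been embedded into a shared vector space, RAG retrieval over them reduces to nearest-neighbor search. A minimal sketch with hard-coded toy vectors (in practice these would be the embedding model's outputs; the file names and numbers here are illustrative only):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_emb, doc_embs, k=2):
    """Return the ids of the k documents most similar to the query."""
    ranked = sorted(doc_embs.items(),
                    key=lambda kv: cosine(query_emb, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy index mixing document types, standing in for real model embeddings.
docs = {
    "paper.pdf":      [0.9, 0.1, 0.0],
    "screenshot.png": [0.2, 0.9, 0.1],
    "invoice.pdf":    [0.1, 0.2, 0.9],
}
results = top_k([0.8, 0.2, 0.1], docs, k=2)
```

The retrieved ids would then be passed, with their page images or text, to a generator model in the RAG loop.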

    A Complete Embedding Ecosystem

    With the release of Nomic Embed Multimodal, Nomic has finalized a comprehensive suite of embedding models that achieve state-of-the-art performance across multiple domains:

    • Nomic Embed Multimodal: The latest addition that achieves state-of-the-art performance on interleaved text, images, and screenshots. It is ideal for document retrieval workflows.
    • Nomic Embed Text v2: A powerful multilingual text embedding model that achieves state-of-the-art performance on the MIRACL benchmark. It is ideal for text retrieval workflows in any language.
    • Nomic Embed Code: An embedding model that is specialized for code search applications, achieving a state-of-the-art score on the CodeSearchNet benchmark. It is ideal for code agent applications.

    This complete ecosystem provides developers with cutting-edge tools for handling diverse data types, from pure text to complex multimodal documents and specialized code repositories. Each model in the ecosystem is designed to work seamlessly with modern RAG workflows while delivering best-in-class performance in its domain.

    Availability

    Nomic has made their multimodal embedding models available on Hugging Face, along with the corresponding dataset and GitHub repository, making this cutting-edge technology accessible to researchers and developers worldwide.

    This release represents a significant step forward in multimodal representation learning and document understanding, completing Nomic’s vision of providing state-of-the-art embedding solutions across the full spectrum of data modalities.

Availability in the Nomic Atlas Data and Embedding Platform is upcoming.


Thanks to the Nomic team for the thought leadership and resources for this article. The Nomic team has supported us financially and with content for this article.

    The post Nomic Open Sources State-of-the-Art Multimodal Embedding Model appeared first on MarkTechPost.

