
    SEED-Bench-2-Plus: An Extensive Benchmark Specifically Designed for Evaluating Multimodal Large Language Models (MLLMs) in Text-Rich Scenarios

    April 29, 2024

    Evaluating Multimodal Large Language Models (MLLMs) in text-rich scenarios is crucial, given their increasing versatility. However, current benchmarks mainly assess general visual comprehension, overlooking the nuanced challenges of text-rich content. MLLMs like GPT-4V, Gemini-Pro-Vision, and Claude-3-Opus showcase impressive capabilities but lack comprehensive evaluation in text-rich contexts. Understanding text within images requires interpreting textual and visual cues, a challenge yet to be rigorously addressed. 

SEED-Bench-2-Plus, developed by researchers from Tencent AI Lab, ARC Lab, Tencent PCG, and The Chinese University of Hong Kong, Shenzhen, is a specialized benchmark for evaluating MLLMs’ understanding of text-rich visual content. It consists of 2.3K meticulously crafted multiple-choice questions covering three broad categories — Charts, Maps, and Webs — that encompass diverse real-world scenarios. Human annotators verify the questions’ accuracy, and the evaluation covers 34 leading MLLMs, including GPT-4V, Gemini-Pro-Vision, and Claude-3-Opus.
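To make the benchmark's organization concrete, one item can be pictured as a record carrying its category, fine-grained data type, the text-rich image it refers to, and four answer options. The field names below are illustrative only, not the released schema:

```python
from dataclasses import dataclass

@dataclass
class SeedBenchQuestion:
    """Hypothetical layout for one benchmark item (field names are
    illustrative; the actual released format may differ)."""
    category: str      # "Charts", "Maps", or "Webs"
    data_type: str     # one of the 63 fine-grained data types
    image_path: str    # text-rich image the question refers to
    question: str
    choices: list      # four answer options
    answer: str        # ground-truth choice letter, e.g. "B"

# A toy example item in the Charts category.
q = SeedBenchQuestion(
    category="Charts",
    data_type="bar chart",
    image_path="charts/example_001.png",
    question="Which bar is tallest?",
    choices=["A. Q1", "B. Q2", "C. Q3", "D. Q4"],
    answer="B",
)
```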

    Recent research has seen a surge in MLLMs, aiming to enhance understanding across text and images. While some studies integrate video inputs, others focus on generating images from text. However, the proficiency of these models in text-rich contexts still needs to be explored. SEED-Bench-2-Plus addresses this gap by offering a comprehensive benchmark to evaluate MLLMs’ performance in understanding text-rich visual content. Unlike existing benchmarks, SEED-Bench-2-Plus encompasses a broad spectrum of real-world scenarios and avoids biases introduced by human annotators, providing a valuable tool for objective evaluation and advancement in this domain.

SEED-Bench-2-Plus comprises 2.3K multiple-choice questions across three main categories: Charts, Maps, and Webs. Each category encompasses a variety of data types, totaling 63 in all. The dataset is meticulously curated, including charts, maps, and website screenshots rich in textual information. Questions are generated with GPT-4V and further refined by human annotators. Evaluation employs an answer-ranking strategy: a model is scored by the likelihood it assigns to generating each candidate choice, and the most likely choice is taken as its prediction. Unlike previous methods, this approach avoids depending on a model’s instruction-following ability and mitigates the effect of multiple-choice option order on measured performance.
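The answer-ranking idea can be sketched in a few lines. The scoring function below is a stand-in: a real MLLM would sum (and typically length-normalize) the token log-probabilities of each choice given the image and question, whereas here a toy score table plays that role:

```python
def rank_answer(question, choices, log_likelihood):
    """Pick the choice the model is most likely to generate.

    `log_likelihood(question, choice)` stands in for the model's
    log-probability of producing `choice` as the answer; a real MLLM
    would compute this from per-token log-probs.
    """
    scores = {c: log_likelihood(question, c) for c in choices}
    return max(scores, key=scores.get)

# Toy scorer: a stand-in "model" that strongly prefers choice "B".
toy_scores = {"A": -4.2, "B": -0.7, "C": -3.1, "D": -5.0}
predicted = rank_answer(
    "Which region has the highest value?",
    list(toy_scores),
    lambda q, c: toy_scores[c],
)
```

Because the prediction depends only on per-choice likelihoods, shuffling the order of the options cannot change the result, which is the point of the ranking strategy.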

    The evaluation encompassed 31 open-source MLLMs and three closed-source ones across various categories of SEED-Bench-2-Plus. GPT-4V outperformed many MLLMs, showing superior performance across most evaluation types. However, most MLLMs struggled with text-rich data, achieving an average accuracy rate of less than 40%, indicating the complexity of comprehending such data. Maps posed significant challenges due to their multidimensional nature, while performance varied across different data types within categories. These observations underscore the need for further research to enhance MLLMs’ proficiency in text-rich scenarios, ensuring adaptability across diverse data types.
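The per-category accuracy comparison described above reduces to a simple aggregation over (category, correct?) records per model — a hypothetical sketch, not the benchmark's released evaluation code:

```python
from collections import defaultdict

def accuracy_by_category(results):
    """Aggregate accuracy per category from (category, correct) records."""
    totals = defaultdict(lambda: [0, 0])  # category -> [correct, total]
    for category, correct in results:
        totals[category][0] += int(correct)
        totals[category][1] += 1
    return {cat: c / n for cat, (c, n) in totals.items()}

# Hypothetical results for one model across the three categories.
records = [("Charts", True), ("Charts", False),
           ("Maps", False), ("Maps", False),
           ("Webs", True), ("Webs", True)]
acc = accuracy_by_category(records)
```

Breaking accuracy out this way is what surfaces the pattern reported above, e.g. Maps scoring lowest for most models.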

In conclusion, SEED-Bench-2-Plus is a comprehensive benchmark for assessing MLLMs in text-rich contexts. With 2.3K human-annotated multiple-choice questions covering 63 data types across three broad categories, it offers a thorough evaluation platform. By examining 31 open-source and three closed-source MLLMs, the study yields valuable insights to guide future research. Complementing SEED-Bench-2, both the dataset and evaluation code are publicly accessible, along with a leaderboard to foster advances in text-rich visual comprehension with MLLMs.

Check out the Paper and Project page. All credit for this research goes to the researchers of this project.


    The post SEED-Bench-2-Plus: An Extensive Benchmark Specifically Designed for Evaluating Multimodal Large Language Models (MLLMs) in Text-Rich Scenarios appeared first on MarkTechPost.
