
    OpenAI Releases HealthBench: An Open-Source Benchmark for Measuring the Performance and Safety of Large Language Models in Healthcare

    May 13, 2025

    OpenAI has released HealthBench, an open-source evaluation framework designed to measure the performance and safety of large language models (LLMs) in realistic healthcare scenarios. Developed in collaboration with 262 physicians across 60 countries and 26 medical specialties, HealthBench addresses the limitations of existing benchmarks by focusing on real-world applicability, expert validation, and diagnostic coverage.

    Addressing Benchmarking Gaps in Healthcare AI

    Existing benchmarks for healthcare AI typically rely on narrow, structured formats such as multiple-choice exams. While useful for initial assessments, these formats fail to capture the complexity and nuance of real-world clinical interactions. HealthBench shifts toward a more representative evaluation paradigm, incorporating 5,000 multi-turn conversations between models and either lay users or healthcare professionals. Each conversation ends with a user prompt, and model responses are assessed using example-specific rubrics written by physicians.

    Each rubric consists of clearly defined criteria—positive and negative—with associated point values. These criteria capture behavioral attributes such as clinical accuracy, communication clarity, completeness, and instruction adherence. HealthBench evaluates over 48,000 unique criteria, with scoring handled by a model-based grader validated against expert judgment.
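    To make the rubric mechanics concrete, here is a minimal sketch of how criterion-based scoring of this kind could work. The field names and the normalization rule are assumptions drawn from the description above, not HealthBench's actual implementation:

    ```python
    from dataclasses import dataclass

    @dataclass
    class RubricCriterion:
        description: str  # physician-written behavioral criterion
        points: int       # positive for desired behavior, negative for undesired

    def score_response(criteria_met: list[bool], rubric: list[RubricCriterion]) -> float:
        """Sum the points for criteria the grader judged as met, then normalize
        by the maximum achievable (positive) points, clipping to [0, 1]."""
        earned = sum(c.points for met, c in zip(criteria_met, rubric) if met)
        max_points = sum(c.points for c in rubric if c.points > 0)
        return max(0.0, min(1.0, earned / max_points)) if max_points else 0.0
    ```

    In this sketch, the `criteria_met` judgments would come from the model-based grader, and the normalized score keeps examples with different point totals comparable.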

    Benchmark Structure and Design

    HealthBench organizes its evaluation across seven key themes: emergency referrals, global health, health data tasks, context-seeking, expertise-tailored communication, response depth, and responding under uncertainty. Each theme represents a distinct real-world challenge in medical decision-making and user interaction.

    In addition to the standard benchmark, OpenAI introduces two variants:

    • HealthBench Consensus: A subset emphasizing 34 physician-validated criteria, designed to reflect critical aspects of model behavior such as advising emergency care or seeking additional context.
    • HealthBench Hard: A more difficult subset of 1,000 conversations selected for their ability to challenge current frontier models.

    These components allow for detailed stratification of model behavior by both conversation type and evaluation axis, offering more granular insights into model capabilities and shortcomings.
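    A sketch of what such stratification could look like in code, assuming per-example results arrive as `(theme, axis, score)` records (the record shape is hypothetical):

    ```python
    from collections import defaultdict

    def stratify(results: list[tuple[str, str, float]]) -> dict[tuple[str, str], float]:
        """Group per-example scores into (theme, axis) cells and average each
        cell, yielding the kind of granular breakdown described above."""
        cells: dict[tuple[str, str], list[float]] = defaultdict(list)
        for theme, axis, score in results:
            cells[(theme, axis)].append(score)
        return {cell: sum(v) / len(v) for cell, v in cells.items()}
    ```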

    Evaluation of Model Performance

    OpenAI evaluated several models on HealthBench, including GPT-3.5 Turbo, GPT-4o, GPT-4.1, and the newer o3 model. Results show marked progress: GPT-3.5 Turbo achieved 16%, GPT-4o reached 32%, and o3 attained 60% overall. Notably, GPT-4.1 nano, a smaller and cost-effective model, outperformed GPT-4o while reducing inference cost by a factor of 25.

    Performance varied by theme and evaluation axis. Emergency referrals and tailored communication were areas of relative strength, while context-seeking and completeness posed greater challenges. A detailed breakdown revealed that completeness correlated most strongly with overall score, underscoring its importance in health-related tasks.

    OpenAI also compared model outputs with physician-written responses. Unassisted physicians generally produced lower-scoring responses than models, although they could improve model-generated drafts, particularly when working with earlier model versions. These findings suggest a potential role for LLMs as collaborative tools in clinical documentation and decision support.

    Reliability and Meta-Evaluation

    HealthBench includes mechanisms to assess model consistency. The “worst-at-k” metric quantifies the degradation in performance across multiple runs. While newer models showed improved stability, variability remains an area for ongoing research.
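    One way to estimate a metric of this kind is by resampling repeated evaluation runs, shown here as a hedged sketch (the resampling approach is an illustration, not the paper's exact procedure):

    ```python
    import random

    def worst_at_k(run_scores: list[float], k: int, trials: int = 1000) -> float:
        """Estimate worst-at-k: the expected minimum score when k runs are
        drawn from repeated evaluations of the same model."""
        return sum(min(random.sample(run_scores, k)) for _ in range(trials)) / trials
    ```

    The gap between the mean score and `worst_at_k` then serves as a simple measure of run-to-run instability.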

    To assess the trustworthiness of its automated grader, OpenAI conducted a meta-evaluation using over 60,000 annotated examples. GPT-4.1, used as the default grader, matched or exceeded the average performance of individual physicians in most themes, suggesting its utility as a consistent evaluator.
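    A meta-evaluation of this kind can be pictured as comparing the grader's per-criterion judgments against physician annotations; the simple agreement rate below is an illustrative stand-in for whatever statistic OpenAI actually used:

    ```python
    def grader_agreement(grader_labels: list[bool], physician_labels: list[bool]) -> float:
        """Fraction of rubric criteria where the model grader's met/not-met
        judgment matches the physician's annotation."""
        matches = sum(g == p for g, p in zip(grader_labels, physician_labels))
        return matches / len(grader_labels)
    ```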

    Conclusion

    HealthBench represents a technically rigorous and scalable framework for assessing AI model performance in complex healthcare contexts. By combining realistic interactions, detailed rubrics, and expert validation, it offers a more nuanced picture of model behavior than existing alternatives. OpenAI has released HealthBench via the simple-evals GitHub repository, providing researchers with tools to benchmark, analyze, and improve models intended for health-related applications.


    Check out the Paper, GitHub Page and Official Release. All credit for this research goes to the researchers of this project.


