
    OpenAI Releases HealthBench: An Open-Source Benchmark for Measuring the Performance and Safety of Large Language Models in Healthcare

    May 13, 2025

    OpenAI has released HealthBench, an open-source evaluation framework designed to measure the performance and safety of large language models (LLMs) in realistic healthcare scenarios. Developed in collaboration with 262 physicians across 60 countries and 26 medical specialties, HealthBench addresses the limitations of existing benchmarks by focusing on real-world applicability, expert validation, and diagnostic coverage.

    Addressing Benchmarking Gaps in Healthcare AI

    Existing benchmarks for healthcare AI typically rely on narrow, structured formats such as multiple-choice exams. While useful for initial assessments, these formats fail to capture the complexity and nuance of real-world clinical interactions. HealthBench shifts toward a more representative evaluation paradigm, incorporating 5,000 multi-turn conversations between models and either lay users or healthcare professionals. Each conversation ends with a user prompt, and model responses are assessed using example-specific rubrics written by physicians.

    Each rubric consists of clearly defined criteria—positive and negative—with associated point values. These criteria capture behavioral attributes such as clinical accuracy, communication clarity, completeness, and instruction adherence. HealthBench evaluates over 48,000 unique criteria, with scoring handled by a model-based grader validated against expert judgment.
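To make the mechanism concrete, here is a minimal Python sketch of rubric-based scoring in this style. The field names, point values, and the clip-to-[0, 1] normalization are illustrative assumptions, not OpenAI's published implementation:

```python
from dataclasses import dataclass

@dataclass
class RubricCriterion:
    """One physician-written criterion with an associated point value.

    Positive points reward desired behavior (e.g., advising urgent care);
    negative points penalize undesired behavior (e.g., a fabricated dosage).
    """
    description: str
    points: int   # positive or negative
    met: bool     # judgment produced by the model-based grader

def score_response(criteria: list[RubricCriterion]) -> float:
    """Normalize earned points by the maximum achievable positive points,
    clipping to [0, 1] so heavy penalties cannot push the score negative."""
    earned = sum(c.points for c in criteria if c.met)
    max_points = sum(c.points for c in criteria if c.points > 0)
    if max_points == 0:
        return 0.0
    return min(max(earned / max_points, 0.0), 1.0)

# Illustrative rubric for a single conversation:
rubric = [
    RubricCriterion("Recommends emergency evaluation for chest pain", 10, met=True),
    RubricCriterion("Asks about symptom onset and duration", 5, met=False),
    RubricCriterion("States a specific drug dose without evidence", -8, met=False),
]
print(f"Example score: {score_response(rubric):.2f}")  # 10 / 15 ≈ 0.67
```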

    Benchmark Structure and Design

    HealthBench organizes its evaluation across seven key themes: emergency referrals, global health, health data tasks, context-seeking, expertise-tailored communication, response depth, and responding under uncertainty. Each theme represents a distinct real-world challenge in medical decision-making and user interaction.

    In addition to the standard benchmark, OpenAI introduces two variants:

    • HealthBench Consensus: A subset emphasizing 34 physician-validated criteria, designed to reflect critical aspects of model behavior such as advising emergency care or seeking additional context.
    • HealthBench Hard: A more difficult subset of 1,000 conversations selected for their ability to challenge current frontier models.

    These components allow for detailed stratification of model behavior by both conversation type and evaluation axis, offering more granular insights into model capabilities and shortcomings.
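As a rough illustration of what this stratification looks like in practice, the sketch below groups hypothetical per-example scores by theme and by evaluation axis; the score values are made up for demonstration and are not HealthBench results:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-example results: (theme, axis, score in [0, 1]).
results = [
    ("emergency_referrals", "accuracy", 0.82),
    ("emergency_referrals", "completeness", 0.64),
    ("context_seeking", "completeness", 0.41),
    ("global_health", "communication", 0.73),
]

def stratify(rows, key_index):
    """Average scores within each bucket defined by the chosen column."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[row[key_index]].append(row[2])
    return {k: mean(v) for k, v in buckets.items()}

print(stratify(results, 0))  # mean score per theme
print(stratify(results, 1))  # mean score per evaluation axis
```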

    Evaluation of Model Performance

    OpenAI evaluated several models on HealthBench, including GPT-3.5 Turbo, GPT-4o, GPT-4.1, and the newer o3 model. Results show marked progress: GPT-3.5 achieved 16%, GPT-4o reached 32%, and o3 attained 60% overall. Notably, GPT-4.1 nano, a smaller and cost-effective model, outperformed GPT-4o while reducing inference cost by a factor of 25.

Performance varied by theme and evaluation axis. Emergency referrals and expertise-tailored communication were areas of relative strength, while context-seeking and completeness posed greater challenges. A detailed breakdown revealed that completeness correlated most strongly with overall score, underscoring its weight in health-related tasks.

    OpenAI also compared model outputs with physician-written responses. Unassisted physicians generally produced lower-scoring responses than models, although they could improve model-generated drafts, particularly when working with earlier model versions. These findings suggest a potential role for LLMs as collaborative tools in clinical documentation and decision support.

    Reliability and Meta-Evaluation

HealthBench includes mechanisms to assess model consistency. The “worst-at-k” metric reports the expected worst score a model produces when k independent responses are sampled per example, exposing failure modes that averages can mask. While newer models showed improved stability, variability remains an area for ongoing research.
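Under that reading, worst-at-k can be estimated by repeatedly sampling k of a model's runs per example and averaging the minima. The following is a hypothetical sketch of that estimate, not code from the benchmark:

```python
import random
from statistics import mean

def worst_at_k(run_scores: list[list[float]], k: int, trials: int = 1000) -> float:
    """Estimate the expected worst score when k responses are drawn per example.

    run_scores[i] holds rubric scores from several independent runs on
    example i; each trial samples k of them, keeps the minimum, and the
    minima are averaged over examples and trials.
    """
    totals = []
    for _ in range(trials):
        per_example = [min(random.sample(scores, k)) for scores in run_scores]
        totals.append(mean(per_example))
    return mean(totals)

# Illustrative: 3 examples, 5 independent runs each.
scores = [
    [0.71, 0.66, 0.74, 0.69, 0.30],  # one bad run drags worst-at-k down
    [0.55, 0.58, 0.52, 0.60, 0.57],
    [0.90, 0.88, 0.91, 0.87, 0.89],
]
print(f"mean score: {mean(mean(s) for s in scores):.3f}")
print(f"worst-at-3: {worst_at_k(scores, k=3):.3f}")
```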

    To assess the trustworthiness of its automated grader, OpenAI conducted a meta-evaluation using over 60,000 annotated examples. GPT-4.1, used as the default grader, matched or exceeded the average performance of individual physicians in most themes, suggesting its utility as a consistent evaluator.
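One simple way to picture such a meta-evaluation is to compare the grader's per-criterion judgments against physician consensus with a standard agreement metric. The sketch below uses F1 over binary met/not-met labels; both the metric choice and the data are assumptions for illustration, not the paper's exact protocol:

```python
def f1(preds: list[bool], truth: list[bool]) -> float:
    """F1 of binary criterion judgments against physician consensus."""
    tp = sum(p and t for p, t in zip(preds, truth))
    fp = sum(p and not t for p, t in zip(preds, truth))
    fn = sum(not p and t for p, t in zip(preds, truth))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative: did each criterion hold, per the grader vs. physician majority?
grader    = [True, False, True, True, False, True]
physician = [True, False, True, False, False, True]
print(f"grader-physician F1: {f1(grader, physician):.2f}")
```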

    Conclusion

    HealthBench represents a technically rigorous and scalable framework for assessing AI model performance in complex healthcare contexts. By combining realistic interactions, detailed rubrics, and expert validation, it offers a more nuanced picture of model behavior than existing alternatives. OpenAI has released HealthBench via the simple-evals GitHub repository, providing researchers with tools to benchmark, analyze, and improve models intended for health-related applications.


Check out the Paper, GitHub Page and Official Release. All credit for this research goes to the researchers of this project.
