
    AI in health should be regulated, but don’t forget about the algorithms, researchers say

    December 20, 2024

    One might argue that one of the primary duties of a physician is to constantly evaluate and re-evaluate the odds: What are the chances of a medical procedure’s success? Is the patient at risk of developing severe symptoms? When should the patient return for more testing? Amidst these critical deliberations, the rise of artificial intelligence promises to reduce risk in clinical settings and help physicians prioritize the care of high-risk patients.

    Despite this potential, researchers from the MIT Department of Electrical Engineering and Computer Science (EECS), Equality AI, and Boston University are calling for more oversight of AI from regulatory bodies. Their commentary, published in the October issue of the New England Journal of Medicine AI (NEJM AI), follows a new rule issued under the Affordable Care Act (ACA) by the U.S. Office for Civil Rights (OCR) in the Department of Health and Human Services (HHS).

    In May, the OCR published a final rule under the ACA that prohibits discrimination on the basis of race, color, national origin, age, disability, or sex in “patient care decision support tools,” a newly established term that encompasses both AI and non-automated tools used in medicine.

    Developed in response to President Joe Biden’s Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence from 2023, the final rule builds upon the Biden-Harris administration’s commitment to advancing health equity by focusing on preventing discrimination. 

    According to senior author and associate professor of EECS Marzyeh Ghassemi, “the rule is an important step forward.” Ghassemi, who is affiliated with the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), the Computer Science and Artificial Intelligence Laboratory (CSAIL), and the Institute for Medical Engineering and Science (IMES), adds that the rule “should dictate equity-driven improvements to the non-AI algorithms and clinical decision-support tools already in use across clinical subspecialties.”

    The number of U.S. Food and Drug Administration-approved, AI-enabled devices has risen dramatically since the approval of the first AI-enabled device in 1995 (PAPNET Testing System, a tool for cervical screening). As of October, the FDA has approved nearly 1,000 AI-enabled devices, many of which are designed to support clinical decision-making.

    However, researchers point out that there is no regulatory body overseeing the clinical risk scores produced by clinical-decision support tools, despite the fact that the majority of U.S. physicians (65 percent) use these tools on a monthly basis to determine the next steps for patient care.

    To address this shortcoming, the Jameel Clinic will host another regulatory conference in March 2025. Last year’s conference ignited a series of discussions and debates amongst faculty, regulators from around the world, and industry experts focused on the regulation of AI in health.

    “Clinical risk scores are less opaque than ‘AI’ algorithms in that they typically involve only a handful of variables linked in a simple model,” comments Isaac Kohane, chair of the Department of Biomedical Informatics at Harvard Medical School and editor-in-chief of NEJM AI. “Nonetheless, even these scores are only as good as the datasets used to ‘train’ them and as the variables that experts have chosen to select or study in a particular cohort. If they affect clinical decision-making, they should be held to the same standards as their more recent and vastly more complex AI relatives.”
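    To make the contrast concrete, a simple points-based score of the kind Kohane describes can be sketched in a few lines. Everything below — the variables, thresholds, weights, and band cutoffs — is hypothetical and illustrative, not drawn from the article or from any real validated score:

```python
# Purely illustrative points-based clinical risk score: a handful of
# variables linked in a simple, transparent model. All variables,
# thresholds, and weights here are hypothetical.

def toy_risk_score(age: int, systolic_bp: int, diabetic: bool) -> int:
    """Sum integer points over a few patient variables."""
    points = 0
    if age >= 65:          # hypothetical age threshold
        points += 2
    if systolic_bp >= 140: # hypothetical blood-pressure threshold
        points += 1
    if diabetic:
        points += 1
    return points

def risk_band(points: int) -> str:
    """Map the point total to a coarse band a clinician might act on."""
    if points >= 3:
        return "high"
    if points >= 1:
        return "moderate"
    return "low"
```

Even in a toy like this, the researchers' point is visible: the model is fully inspectable, yet the thresholds and weights still encode choices made by whoever designed it — which is exactly why the commentary argues such scores deserve the same scrutiny as opaque AI models.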

    Moreover, while many decision-support tools do not use AI, researchers note that these tools are just as capable of perpetuating biases in health care, and likewise require oversight.

    “Regulating clinical risk scores poses significant challenges due to the proliferation of clinical decision support tools embedded in electronic medical records and their widespread use in clinical practice,” says co-author Maia Hightower, CEO of Equality AI. “Such regulation remains necessary to ensure transparency and nondiscrimination.”

    However, Hightower adds that under the incoming administration, the regulation of clinical risk scores may prove to be “particularly challenging, given its emphasis on deregulation and opposition to the Affordable Care Act and certain nondiscrimination policies.” 
