
    Making AI models more trustworthy for high-stakes settings

    May 1, 2025

The ambiguity in medical imaging can present major challenges for clinicians who are trying to identify disease. For instance, in a chest X-ray, pleural effusion, an abnormal buildup of fluid around the lungs, can look very much like pulmonary infiltrates, which are accumulations of pus or blood.

    An artificial intelligence model could assist the clinician in X-ray analysis by helping to identify subtle details and boosting the efficiency of the diagnosis process. But because so many possible conditions could be present in one image, the clinician would likely want to consider a set of possibilities, rather than only having one AI prediction to evaluate.

    One promising way to produce a set of possibilities, called conformal classification, is convenient because it can be readily implemented on top of an existing machine-learning model. However, it can produce sets that are impractically large. 

    MIT researchers have now developed a simple and effective improvement that can reduce the size of prediction sets by up to 30 percent while also making predictions more reliable.

    Having a smaller prediction set may help a clinician zero in on the right diagnosis more efficiently, which could improve and streamline treatment for patients. This method could be useful across a range of classification tasks — say, for identifying the species of an animal in an image from a wildlife park — as it provides a smaller but more accurate set of options.

    “With fewer classes to consider, the sets of predictions are naturally more informative in that you are choosing between fewer options. In a sense, you are not really sacrificing anything in terms of accuracy for something that is more informative,” says Divya Shanmugam PhD ’24, a postdoc at Cornell Tech who conducted this research while she was an MIT graduate student.

    Shanmugam is joined on the paper by Helen Lu ’24; Swami Sankaranarayanan, a former MIT postdoc who is now a research scientist at Lilia Biosciences; and senior author John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering at MIT and a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the Conference on Computer Vision and Pattern Recognition in June.

    Prediction guarantees

    AI assistants deployed for high-stakes tasks, like classifying diseases in medical images, are typically designed to produce a probability score along with each prediction so a user can gauge the model’s confidence. For instance, a model might predict that there is a 20 percent chance an image corresponds to a particular diagnosis, like pleurisy.

    But it is difficult to trust a model’s predicted confidence because much prior research has shown that these probabilities can be inaccurate. With conformal classification, the model’s prediction is replaced by a set of the most probable diagnoses along with a guarantee that the correct diagnosis is somewhere in the set.
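To make the idea concrete, here is a minimal sketch of split conformal classification in Python. It assumes nothing more than an array of predicted class probabilities from an existing model and a small labeled calibration set; the function names and the particular nonconformity score are illustrative choices, not details taken from the paper.

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Compute a conformal threshold from a labeled calibration set.

    cal_probs: (n, K) array of predicted class probabilities.
    cal_labels: (n,) array of true class indices.
    alpha: allowed error rate (0.1 targets roughly 90% coverage).
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q_level, method="higher")

def prediction_set(test_probs, threshold):
    """All classes whose nonconformity score falls at or below the threshold."""
    return np.where(1.0 - test_probs <= threshold)[0]
```

With alpha set to 0.1, sets built this way contain the true class at least 90 percent of the time on average, regardless of how well calibrated the underlying probabilities are.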

    But the inherent uncertainty in AI predictions often causes the model to output sets that are far too large to be useful.

    For instance, if a model is classifying an animal in an image as one of 10,000 potential species, it might output a set of 200 predictions so it can offer a strong guarantee.

    “That is quite a few classes for someone to sift through to figure out what the right class is,” Shanmugam says.

    The technique can also be unreliable because tiny changes to inputs, like slightly rotating an image, can yield entirely different sets of predictions.

To make conformal classification more useful, the researchers applied test-time augmentation (TTA), a technique originally developed to improve the accuracy of computer vision models.

    TTA creates multiple augmentations of a single image in a dataset, perhaps by cropping the image, flipping it, zooming in, etc. Then it applies a computer vision model to each version of the same image and aggregates its predictions.

    “In this way, you get multiple predictions from a single example. Aggregating predictions in this way improves predictions in terms of accuracy and robustness,” Shanmugam explains.
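As a rough, hypothetical illustration (the article does not specify the framework or the exact augmentations used), TTA for a PyTorch-style image classifier might look like this: each augmented view of the image is passed through the same model and the softmax outputs are averaged.

```python
import torch
import torchvision.transforms.functional as TF

def tta_probs(model, image):
    """Average softmax predictions over simple augmentations of one image.

    image: a (C, H, W) tensor already preprocessed for the model.
    """
    # A few cheap augmentations: the original view, a horizontal flip,
    # and small rotations. Real pipelines may also crop or rescale.
    views = [image, TF.hflip(image), TF.rotate(image, 5), TF.rotate(image, -5)]
    with torch.no_grad():
        probs = [torch.softmax(model(v.unsqueeze(0)), dim=1) for v in views]
    # A plain average; the researchers instead learn how to weight the
    # augmentations using held-out labeled data.
    return torch.stack(probs).mean(dim=0).squeeze(0)
```

The choice of augmentations and how their predictions are weighted matters; as described below, the researchers learn that aggregation from held-out labeled data rather than using a plain average.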

    Maximizing accuracy

    To apply TTA, the researchers hold out some labeled image data used for the conformal classification process. They learn to aggregate the augmentations on these held-out data, automatically augmenting the images in a way that maximizes the accuracy of the underlying model’s predictions.

    Then they run conformal classification on the model’s new, TTA-transformed predictions. The conformal classifier outputs a smaller set of probable predictions for the same confidence guarantee.

    “Combining test-time augmentation with conformal prediction is simple to implement, effective in practice, and requires no model retraining,” Shanmugam says.
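Under the same assumptions as the earlier sketches, combining the two steps is mostly plumbing: calibrate the conformal threshold on TTA-smoothed probabilities, then build prediction sets from TTA-smoothed test probabilities. The sketch below reuses the hypothetical helpers defined above and omits the step in which the aggregation weights are learned.

```python
import numpy as np
import torch

def conformal_with_tta(model, cal_images, cal_labels, test_image, alpha=0.1):
    """Calibrate on TTA-smoothed probabilities, then build a prediction set."""
    # Reuses tta_probs, calibrate_threshold, and prediction_set from above;
    # assumes the model and images live on the CPU.
    cal_probs = np.stack([tta_probs(model, img).cpu().numpy() for img in cal_images])
    threshold = calibrate_threshold(cal_probs, np.asarray(cal_labels), alpha)
    test_probs = tta_probs(model, test_image).cpu().numpy()
    return prediction_set(test_probs, threshold)
```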

Compared with prior work on conformal prediction, their TTA-augmented method reduced prediction set sizes by 10 to 30 percent across several standard image classification benchmarks.

    Importantly, the technique achieves this reduction in prediction set size while maintaining the probability guarantee.
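What "maintaining the probability guarantee" means in practice can be checked on a labeled test split by measuring empirical coverage, that is, how often the true label lands inside the predicted set, alongside the average set size; the helper below is again only illustrative.

```python
def evaluate_sets(pred_sets, true_labels):
    """Empirical coverage and mean set size over a labeled test split."""
    coverage = sum(int(y in s) for s, y in zip(pred_sets, true_labels)) / len(pred_sets)
    avg_size = sum(len(s) for s in pred_sets) / len(pred_sets)
    return coverage, avg_size
```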

    The researchers also found that, even though they are sacrificing some labeled data that would normally be used for the conformal classification procedure, TTA boosts accuracy enough to outweigh the cost of losing those data.

“It raises interesting questions about how we use labeled data after model training. The allocation of labeled data between different post-training steps is an important direction for future work,” Shanmugam says.

    In the future, the researchers want to validate the effectiveness of such an approach in the context of models that classify text instead of images. To further improve the work, the researchers are also considering ways to reduce the amount of computation required for TTA.

This research is funded, in part, by the Wistron Corporation.
