    Microsoft AI Introduces Claimify: A Novel LLM-based Claim-Extraction Method that Outperforms Prior Solutions to Produce More Accurate, Comprehensive, and Substantiated Claims from LLM Outputs

    March 20, 2025

    The widespread adoption of Large Language Models (LLMs) has significantly changed the landscape of content creation and consumption. However, it has also introduced critical challenges regarding accuracy and factual reliability. The content generated by LLMs often includes claims that lack proper verification, potentially leading to misinformation. Therefore, accurately extracting claims from these outputs for effective fact-checking has become essential, albeit challenging due to inherent ambiguities and context dependencies.

    Microsoft AI Research has recently developed Claimify, an advanced claim-extraction method based on LLMs, specifically designed to enhance accuracy, comprehensiveness, and context-awareness in extracting claims from LLM outputs. Claimify addresses the limitations of existing methods by explicitly dealing with ambiguity. Unlike other approaches, it identifies sentences with multiple possible interpretations and only proceeds with claim extraction when the intended meaning is clearly determined within the given context. This careful approach ensures higher accuracy and reliability, particularly benefiting subsequent fact-checking efforts.

    From a technical standpoint, Claimify employs a structured pipeline comprising three key stages: Selection, Disambiguation, and Decomposition. During the Selection stage, Claimify leverages LLMs to identify sentences that contain verifiable information, filtering out those without factual content. In the Disambiguation stage, it uniquely focuses on detecting and resolving ambiguities, such as unclear references or multiple plausible interpretations. Claims are extracted only if ambiguities can be confidently resolved. The final stage, Decomposition, involves converting each clarified sentence into precise, context-independent claims. This structured process enhances both the accuracy and completeness of the resulting claims.
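Claimify itself is not shipped as public code, so the sketch below only illustrates how a Selection → Disambiguation → Decomposition pipeline of this kind might be wired together. The `llm` helper, the prompts, and the `AMBIGUOUS` sentinel are hypothetical placeholders, not Claimify's actual prompts or implementation.

```python
# Hypothetical sketch of a Selection -> Disambiguation -> Decomposition pipeline.
# `llm(prompt)` is assumed to call any instruction-following model and return text;
# the prompts and the AMBIGUOUS sentinel are illustrative, not Claimify's own.
from typing import Callable, List

AMBIGUOUS = "CANNOT_RESOLVE"  # returned when the intended meaning cannot be determined


def select(sentences: List[str], llm: Callable[[str], str]) -> List[str]:
    """Stage 1: keep only sentences that contain verifiable factual content."""
    kept = []
    for s in sentences:
        verdict = llm(
            "Does this sentence contain verifiable factual content? "
            f"Answer YES or NO.\nSentence: {s}"
        )
        if verdict.strip().upper().startswith("YES"):
            kept.append(s)
    return kept


def disambiguate(sentence: str, context: str, llm: Callable[[str], str]) -> str:
    """Stage 2: resolve unclear references from context, or flag the sentence
    as ambiguous when no single interpretation is supported."""
    prompt = (
        "Rewrite the sentence so it has exactly one interpretation, resolving "
        "pronouns and vague references using the context. If the context does "
        f"not determine the meaning, answer {AMBIGUOUS}.\n"
        f"Context: {context}\nSentence: {sentence}"
    )
    return llm(prompt).strip()


def decompose(sentence: str, llm: Callable[[str], str]) -> List[str]:
    """Stage 3: split a clarified sentence into standalone, context-independent claims."""
    out = llm(
        "List each atomic factual claim in this sentence on its own line, "
        f"phrased so it is understandable without the original text:\n{sentence}"
    )
    return [line.strip("- ").strip() for line in out.splitlines() if line.strip()]


def extract_claims(answer: str, llm: Callable[[str], str]) -> List[str]:
    sentences = [s.strip() for s in answer.split(".") if s.strip()]  # naive splitter
    claims: List[str] = []
    for s in select(sentences, llm):
        resolved = disambiguate(s, context=answer, llm=llm)
        if resolved != AMBIGUOUS:  # only extract when the meaning is clearly resolved
            claims.extend(decompose(resolved, llm))
    return claims
```

The key design point mirrored here is the gate in `extract_claims`: sentences whose meaning cannot be pinned down from context are dropped rather than guessed at, which is what keeps downstream fact-checking from inheriting ambiguous claims.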

    In evaluations using the BingCheck dataset—which covers a broad range of topics and complex LLM-generated responses—Claimify demonstrated notable improvements over previous methods. It achieved a high entailment rate of 99%, indicating a strong consistency between the extracted claims and the original content. Regarding coverage, Claimify captured 87.6% of verifiable content while maintaining a high precision rate of 96.7%, outperforming comparable approaches. Its systematic approach to decontextualization also ensured that essential contextual details were retained, resulting in better-grounded claims compared to prior methods.
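To make those numbers concrete, here is a minimal sketch of how such rates are commonly tallied from per-claim judgments. The exact definitions used in the BingCheck evaluation may differ, and the sample counts below are illustrative only.

```python
# Illustrative tally of entailment rate, precision, and coverage from judgments;
# not the paper's exact evaluation protocol.
from dataclasses import dataclass
from typing import List


@dataclass
class JudgedClaim:
    entailed_by_source: bool       # claim follows from the original LLM output
    correct_and_verifiable: bool   # claim is precise enough to be checked


def entailment_rate(claims: List[JudgedClaim]) -> float:
    return sum(c.entailed_by_source for c in claims) / len(claims)


def precision(claims: List[JudgedClaim]) -> float:
    return sum(c.correct_and_verifiable for c in claims) / len(claims)


def coverage(captured_reference_units: int, total_reference_units: int) -> float:
    # share of verifiable content units in the source that were captured as claims
    return captured_reference_units / total_reference_units


claims = [JudgedClaim(True, True), JudgedClaim(True, False), JudgedClaim(True, True)]
print(entailment_rate(claims), precision(claims), coverage(87, 100))
```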

    Overall, Claimify represents a meaningful advancement in the automated extraction of reliable claims from LLM-generated content. By methodically addressing ambiguity and contextuality through a structured and careful evaluation framework, Claimify establishes a new standard for accuracy and reliability. As reliance on LLM-produced content continues to grow, tools like Claimify will play an increasingly crucial role in ensuring the trustworthiness and factual integrity of this content.


Check out the Paper and Technical details. All credit for this research goes to the researchers of this project.

    The post Microsoft AI Introduces Claimify: A Novel LLM-based Claim-Extraction Method that Outperforms Prior Solutions to Produce More Accurate, Comprehensive, and Substantiated Claims from LLM Outputs appeared first on MarkTechPost.
