
    Google AI Introduce AGREE: A Machine Learning Framework that Enables LLMs to Self-Ground the Claims in their Responses and to Provide Precise Citations

    May 29, 2024

    Maintaining the factual accuracy of Large Language Models (LLMs), such as GPT, is crucial, particularly in applications like news reporting or educational content creation. Despite their impressive capabilities, LLMs are prone to generating plausible but nonfactual information, known as “hallucinations,” especially when faced with open-ended queries that require broad world knowledge. Google AI researchers introduced AGREE to address this issue, in which an LLM generates a response that is factually incorrect, nonsensical, or disconnected from the input prompt.

    Existing approaches to preventing hallucinations in LLMs fall primarily into two categories: post-hoc citing and prompting-based grounding. Post-hoc citing adds citations after a response has been generated, often using natural language inference (NLI) models. However, this method relies heavily on the knowledge within the LLM’s embeddings and struggles with facts beyond its training data. Prompting-based grounding, by contrast, leverages the instruction-following and in-context learning capabilities of LLMs, but it is often ineffective, particularly in real-world scenarios that demand high factual accuracy.
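
    As a rough illustration of the post-hoc citing idea (not AGREE’s exact pipeline), the sketch below attaches a citation to each claim when an NLI-style check finds a supporting passage. The helpers split_into_claims and nli_entailment_score are simple stand-ins for a real claim segmenter and NLI model, added here only to keep the example runnable.

    from typing import List

    def split_into_claims(response: str) -> List[str]:
        # Naive sentence split; a real system would use a proper claim segmenter.
        return [s.strip() for s in response.split(".") if s.strip()]

    def nli_entailment_score(premise: str, hypothesis: str) -> float:
        # Placeholder for a real NLI model; a crude lexical-overlap proxy
        # keeps this sketch runnable end to end.
        p, h = set(premise.lower().split()), set(hypothesis.lower().split())
        return len(p & h) / max(len(h), 1)

    def post_hoc_cite(response: str, passages: List[str], threshold: float = 0.5) -> List[str]:
        cited = []
        for claim in split_into_claims(response):
            scores = [nli_entailment_score(p, claim) for p in passages]
            best = max(range(len(passages)), key=lambda i: scores[i])
            # Attach the best-supporting passage index only if it clears the threshold.
            cited.append(f"{claim} [{best + 1}]" if scores[best] >= threshold else claim)
        return cited

    print(post_hoc_cite(
        "AGREE was introduced by Google researchers. It adds citations to responses.",
        ["Google researchers introduced AGREE, a grounding framework.",
         "AGREE fine-tunes an LLM to add citations to its responses."],
    ))

    The weakness noted above is visible even in this toy version: the citation step only checks support after the fact, so nothing encourages the model itself to stay close to the retrieved evidence while generating.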

    The proposed solution, AGREE (Adaptation for GRounding EnhancEment), is a learning-based framework that enables LLMs to self-ground their responses and provide accurate citations. AGREE takes a holistic approach by combining learning-based adaptation with test-time adaptation (TTA). During training, AGREE fine-tunes LLMs on synthetic data built from unlabeled queries, teaching them to self-ground their claims by adding citations to their responses. At test time, AGREE uses an iterative inference strategy that lets the LLM actively seek additional information based on its self-generated citations, progressively refining its answers.
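
    A minimal sketch of what such a test-time adaptation loop could look like follows. The functions retrieve, generate_with_citations, and unsupported_claims are hypothetical stand-ins for the retriever, the AGREE-tuned LLM, and the NLI-based support check, not the paper’s actual interfaces; the stubs simply make the control flow runnable.

    from typing import List

    def retrieve(query: str, k: int = 2) -> List[str]:
        # Stand-in retriever; a real system would query a document index.
        return [f"passage {i} about: {query}" for i in range(k)]

    def generate_with_citations(query: str, passages: List[str]) -> str:
        # Stand-in for the AGREE-tuned LLM, which would answer and cite passages.
        return f"Answer to '{query}' citing {len(passages)} passages."

    def unsupported_claims(answer: str, passages: List[str]) -> List[str]:
        # Stand-in for the NLI-based check that flags claims no passage entails.
        return []

    def answer_with_tta(query: str, max_rounds: int = 3) -> str:
        passages = retrieve(query)
        answer = ""
        for _ in range(max_rounds):
            answer = generate_with_citations(query, passages)
            gaps = unsupported_claims(answer, passages)
            if not gaps:                       # every claim grounded: stop iterating
                break
            for claim in gaps:                 # otherwise retrieve evidence for the gaps
                passages.extend(retrieve(claim))
        return answer

    print(answer_with_tta("What does AGREE do?"))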

    At the training stage, AGREE collects synthetic data from unlabeled queries, retrieves relevant passages from reliable sources with a retriever model, and fine-tunes a base LLM to self-ground its claims. The fine-tuning process uses an NLI model to judge the support for each claim and add citations accordingly, as sketched below. Experiments across five datasets demonstrate AGREE’s effectiveness in improving grounding and citation precision over baseline methods: it outperforms prompting-based and post-hoc citing approaches, achieving relative improvements of over 30% in grounding quality. AGREE also generalizes to out-of-domain data, suggesting robustness across different question types, including those requiring out-of-domain knowledge. The inclusion of TTA further improves both grounding and answer correctness.
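
    The sketch below outlines how citation-annotated fine-tuning targets could be assembled from unlabeled queries, assuming a retriever, a base LLM, and an NLI judge are available as callables (the retrieve and NLI stubs from the earlier sketches could be plugged in). The details are illustrative rather than the paper’s exact recipe.

    def build_tuning_example(query, retrieve, base_llm, nli_supports):
        """Turn one unlabeled query into a citation-annotated fine-tuning pair."""
        passages = retrieve(query)
        draft = base_llm(query, passages)          # ungrounded draft from the base model
        grounded_sentences = []
        for sent in draft.split("."):
            sent = sent.strip()
            if not sent:
                continue
            # The NLI judge decides which retrieved passages support this sentence.
            supporting = [i for i, p in enumerate(passages) if nli_supports(p, sent)]
            tag = "".join(f"[{i + 1}]" for i in supporting)
            grounded_sentences.append(sent + tag)
        target = ". ".join(grounded_sentences) + "."
        # Fine-tuning pair: (query + retrieved passages) -> citation-annotated response.
        return {"query": query, "passages": passages, "target": target}

    The design point is that the NLI model supplies weak supervision: no human-labeled citations are needed, so training data can be produced from unlabeled queries at scale.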

    In conclusion, AGREE effectively mitigates hallucination in LLMs by improving their factuality and verifiability. By enabling LLMs to self-ground their responses and provide accurate citations, AGREE enhances their reliability, particularly in domains requiring high factual accuracy. Its combination of learning-based adaptation with test-time adaptation provides a strong solution that outperforms existing approaches and generalizes across a wide range of datasets. Overall, AGREE has the potential to enable reliable language models suitable for real-world applications that demand high factual accuracy.

    Check out the Paper and Blog. All credit for this research goes to the researchers of this project.


    Source: MarkTechPost
