
    Neurobiological Inspiration for AI: The HippoRAG Framework for Long-Term LLM Memory

    June 2, 2024

Despite advances in LLMs, current models still struggle to incorporate new knowledge without losing previously acquired information, a problem known as catastrophic forgetting. Existing methods such as retrieval-augmented generation (RAG) fall short on tasks that require integrating new knowledge across different passages, since they encode each passage in isolation, making it difficult to identify relevant information spread across multiple passages. HippoRAG, a retrieval framework inspired by neurobiological principles, particularly the hippocampal indexing theory of human long-term memory, has been designed to address these challenges by enabling deeper and more efficient knowledge integration.

Current RAG methods provide LLMs with a form of long-term memory, allowing the model to be updated with new knowledge. However, because they encode each passage in isolation, they fall short at integrating information spread across multiple passages. This limitation hinders their effectiveness in complex tasks such as scientific literature review, legal case briefing, and medical diagnosis, which demand the synthesis of information from many sources.

A team of researchers from Ohio State University and Stanford University introduces HippoRAG. The approach sets itself apart by modeling the associative memory functions of the human brain, particularly the hippocampus: a graph-based hippocampal index creates and exploits a network of associations, improving the model’s ability to navigate and integrate information from multiple passages.

HippoRAG’s methodology involves two main phases: offline indexing and online retrieval. During offline indexing, an instruction-tuned LLM extracts named entities and relational triples from each passage via Open Information Extraction (OpenIE), and a retrieval encoder embeds the extracted entities. From these triples, HippoRAG constructs a graph-based hippocampal index that captures the relationships between entities and the passages they appear in, giving the model a comprehensive web of associations to traverse when retrieving and integrating knowledge spread across passages.
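As a rough illustration, the offline indexing step can be sketched as building an entity graph from extracted triples. The triples and entity names below are toy assumptions for illustration only; in the actual system the triples come from an instruction-tuned LLM performing OpenIE over the corpus.

```python
from collections import defaultdict

# Toy triples standing in for LLM-extracted OpenIE output (hypothetical data).
triples = [
    ("Stanford", "employs", "Thomas Sudhof"),
    ("Thomas Sudhof", "researches", "Alzheimer's"),
    ("Ohio State University", "collaborates with", "Stanford"),
]

def build_index(triples):
    """Build a simplified graph-based 'hippocampal index': nodes are entity
    phrases, edges record the extracted relations between them."""
    graph = defaultdict(set)
    for subj, rel, obj in triples:
        graph[subj].add((rel, obj))
        graph[obj].add((rel + " (inv)", subj))  # keep edges traversable both ways
    return graph

index = build_index(triples)
```

A real index would additionally link each entity node to the passages it was extracted from, so that scores over entity nodes can be mapped back to retrievable passages.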

During online retrieval, HippoRAG uses a 1-shot prompt to extract named entities from the query and encodes them with the retrieval encoder. It selects the graph nodes with the highest cosine similarity to these query entities as seed nodes, then runs the Personalized PageRank (PPR) algorithm over the hippocampal index, biased toward those seeds. This walk performs a form of pattern completion, surfacing passages connected to the query entities even when no single passage matches the whole query.
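The retrieval step can be sketched as a Personalized PageRank walk seeded at the query entities. The graph, seed entities, and damping factor below are toy assumptions, not the paper's configuration; the point is only that the teleport distribution is concentrated on query nodes, so mass flows to entities associated with them.

```python
def personalized_pagerank(graph, seeds, alpha=0.85, iters=50):
    """Power iteration for PPR: restart probability mass is placed only on
    the seed (query-entity) nodes rather than uniformly over the graph."""
    nodes = list(graph)
    personalize = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(personalize)
    for _ in range(iters):
        new_rank = {n: (1 - alpha) * personalize[n] for n in nodes}
        for n in nodes:
            out = graph[n]
            if not out:
                continue  # dangling node: its mass simply dissipates here
            share = alpha * rank[n] / len(out)
            for m in out:
                new_rank[m] += share
        rank = new_rank
    return rank

# Toy entity graph: edges link entities extracted from the same passages.
graph = {
    "Stanford": ["Thomas Sudhof"],
    "Thomas Sudhof": ["Alzheimer's", "Stanford"],
    "Alzheimer's": ["Thomas Sudhof"],
    "PageRank": [],  # unrelated entity; receives no mass from these seeds
}
scores = personalized_pagerank(graph, seeds={"Stanford", "Alzheimer's"})
```

In the full system, these node scores would be aggregated over the passages each entity appears in, and the highest-scoring passages returned to the LLM.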

When tested on multi-hop question-answering benchmarks, including MuSiQue and 2WikiMultiHopQA, HippoRAG outperformed state-of-the-art methods by up to 20%. Notably, its single-step retrieval matched or exceeded iterative methods like IRCoT while being 10-30 times cheaper and 6-13 times faster. These results underscore HippoRAG's practical advantage over existing retrieval approaches.

In conclusion, HippoRAG marks a significant advance for large language models (LLMs): it is not just a theoretical contribution but a practical solution for deeper and more efficient integration of new knowledge. Inspired by the associative memory functions of the human brain, HippoRAG improves the model’s ability to retrieve and synthesize information from multiple sources. The paper’s findings demonstrate superior performance on knowledge-intensive NLP tasks, highlighting its potential for real-world applications that require continuous knowledge integration.

Check out the Paper. All credit for this research goes to the researchers of this project.

    The post Neurobiological Inspiration for AI: The HippoRAG Framework for Long-Term LLM Memory appeared first on MarkTechPost.
