    LightPROF: A Lightweight AI Framework that Enables Small-Scale Language Models to Perform Complex Reasoning Over Knowledge Graphs (KGs) Using Structured Prompts

    April 12, 2025

    Large Language Models (LLMs) have revolutionized natural language processing, handling complex zero-shot tasks thanks to extensive training data and vast parameter counts. However, LLMs often struggle with knowledge-intensive tasks because they lack task-specific prior knowledge and understanding capabilities. For effective reasoning, LLMs need access to reliable and continuously updated knowledge bases, and Knowledge Graphs (KGs) are ideal candidates due to their structured semantic framework. Current approaches to LLM reasoning on KGs face two obstacles: representing KG content as extensive text fails to convey the rich logical relationships within the graph structure, and the retrieval and reasoning processes demand numerous LLM calls and substantial reasoning power.

    Prompt engineering has emerged as a critical technique for expanding LLM capabilities across various applications without modifying model parameters. The field has evolved from simple zero-shot and few-shot prompts to more complex approaches like Chain-of-Thought (CoT), Tree-of-Thoughts (ToT), and Graph-of-Thoughts (GoT). KG-based LLM reasoning has gained traction because KGs provide explicit, structured knowledge with clear logical structure that enhances LLMs’ knowledge awareness. More flexible solutions like KAPING, KGGPT, StructGPT, ToG, and KnowledgeNavigator construct LLM prompts from KG factual information, using techniques such as semantic similarity retrieval, multi-step reasoning frameworks, and beam search over KGs to enhance reasoning capabilities.
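
    The KG-to-prompt idea behind methods such as KAPING can be illustrated with a short sketch: verbalize candidate triples, rank them by semantic similarity to the question, and prepend the top-ranked facts to the prompt. The triples, model name, and top-k value below are illustrative assumptions, not taken from any of the cited systems.

```python
# Minimal sketch of similarity-based KG prompt construction (KAPING-style).
# Triples, model name, and top_k are illustrative, not from the cited papers.
from sentence_transformers import SentenceTransformer, util

triples = [
    ("Joe Biden", "born_in", "Scranton"),
    ("Scranton", "located_in", "Pennsylvania"),
    ("Joe Biden", "party", "Democratic Party"),
]

def verbalize(triple):
    subj, rel, obj = triple
    return f"{subj} {rel.replace('_', ' ')} {obj}."

question = "Which state was Joe Biden born in?"
encoder = SentenceTransformer("all-MiniLM-L6-v2")

facts = [verbalize(t) for t in triples]
q_emb = encoder.encode(question, convert_to_tensor=True)
f_emb = encoder.encode(facts, convert_to_tensor=True)
scores = util.cos_sim(q_emb, f_emb)[0]          # similarity of each fact to the question

top_k = 2
best = scores.topk(top_k).indices.tolist()       # indices of the most relevant facts
prompt = "Facts:\n" + "\n".join(facts[i] for i in best)
prompt += f"\nQuestion: {question}\nAnswer:"
print(prompt)                                    # ready to send to an LLM
```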

    Researchers from Beijing University of Posts and Telecommunications, Hangzhou Dianzi University, Singapore Management University, National University of Singapore, Institute of Computing Technology at Chinese Academy of Sciences, and Xi’an Jiaotong University have proposed LightPROF, a Lightweight and efficient Prompt learning-ReasOning Framework. Its Retrieve-Embed-Reason pipeline enables small-scale LLMs to perform stable retrieval and efficient reasoning on KGs through three core components: a Retrieval module, an Embedding module, and a Reasoning module. The Retrieval module uses relations as the fundamental retrieval units and limits the scope based on question semantics, the Embedding module uses a compact Transformer-based Knowledge Adapter, and the Reasoning module combines the embedded representation vectors with carefully designed prompts. LightPROF supports various open-source LLMs and KGs while requiring only Knowledge Adapter tuning during training.
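
    As a rough illustration of the Embed step, the sketch below shows a toy Knowledge Adapter: a small Transformer encoder that fuses retrieved triple embeddings, followed by a linear Projector that maps them into a frozen LLM’s input-embedding space, so only the adapter is trained. All module names, dimensions, and shapes are illustrative assumptions; this is not the authors’ implementation.

```python
# Schematic sketch of LightPROF's Embed step, assuming a frozen LLM with
# hidden size `llm_dim`. Dimensions and module names are illustrative.
import torch
import torch.nn as nn

class KnowledgeAdapter(nn.Module):
    def __init__(self, triple_dim=256, llm_dim=4096, n_layers=2, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=triple_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Projector aligns the fused graph representation with the LLM's
        # input embedding space.
        self.projector = nn.Linear(triple_dim, llm_dim)

    def forward(self, triple_embs):              # (batch, n_triples, triple_dim)
        fused = self.encoder(triple_embs)        # structure-aware fusion of triples
        return self.projector(fused)             # (batch, n_triples, llm_dim)

adapter = KnowledgeAdapter()
retrieved = torch.randn(1, 8, 256)               # 8 retrieved triples (toy input)
soft_prompt = adapter(retrieved)                 # would be prepended to text embeddings
print(soft_prompt.shape)                         # torch.Size([1, 8, 4096])
```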

    LightPROF is evaluated on two Freebase-based public datasets: WebQuestionsSP (WebQSP) and ComplexWebQuestions (CWQ). WebQSP serves as a benchmark with fewer questions (4,737) but a larger KG, while CWQ is designed for complex KG question answering with 34,689 question-answer pairs built upon WebQSP. Performance is measured using match accuracy (Hits@1), which evaluates whether the model’s top answer is correct. LightPROF is compared against three categories of baseline methods: full fine-tuning approaches (including KV-Mem, EmbedKGQA, TransferNet, and NSM), vanilla LLM methods (LLaMA-series models), and LLM+KGs methods (such as StructGPT, ToG, KnowledgeNavigator, and AgentBench).
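
    Hits@1 here simply checks whether the model’s top-ranked answer matches one of the gold answers. A minimal sketch follows, with illustrative data and simple lower-case normalization as an assumption about how matches are counted.

```python
# Minimal sketch of the Hits@1 metric: a prediction counts as a hit if the
# top answer matches any gold answer (illustrative data and normalization).
def hits_at_1(predictions, gold_answers):
    """predictions: list of top-1 answer strings; gold_answers: list of sets of gold strings."""
    hits = 0
    for pred, gold in zip(predictions, gold_answers):
        if pred.strip().lower() in {g.strip().lower() for g in gold}:
            hits += 1
    return hits / len(predictions)

preds = ["pennsylvania", "1997", "Paris"]
golds = [{"Pennsylvania"}, {"1998"}, {"Paris", "Paris, France"}]
print(f"Hits@1 = {hits_at_1(preds, golds):.3f}")   # 0.667
```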

    LightPROF significantly outperforms state-of-the-art models, achieving 83.7% accuracy on the WebQSP dataset and 59.3% on the more challenging CWQ dataset. These results validate LightPROF’s effectiveness in handling multi-hop and complex reasoning challenges in KG question answering. When different LLMs are integrated within the framework, LightPROF consistently enhances performance regardless of the baseline capabilities of the original models; this plug-and-play integration strategy eliminates the need for costly LLM fine-tuning. Efficiency evaluations against StructGPT reveal LightPROF’s superior resource utilization, with a 30% reduction in processing time, a 98% reduction in input token usage, and significantly fewer tokens per request.


    In conclusion, the researchers introduced LightPROF, a novel framework that enhances LLM reasoning through accurate retrieval and efficient encoding of KGs. It narrows the retrieval scope by sampling KGs using stable relations as the retrieval units. The researchers also developed a Knowledge Adapter that effectively parses graph structures and integrates their information, enabling efficient reasoning with smaller LLMs; it condenses reasoning graphs into fewer tokens while achieving comprehensive alignment with the LLM input space through its Projector component. Future research directions include developing KG encoders with strong generalization capabilities that can be applied to unseen KG data without retraining, and designing unified cross-modal encoders capable of handling multimodal KGs.


    Check out Paper. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 85k+ ML SubReddit.

    The post LightPROF: A Lightweight AI Framework that Enables Small-Scale Language Models to Perform Complex Reasoning Over Knowledge Graphs (KGs) Using Structured Prompts appeared first on MarkTechPost.
