    Meet LEANN: The Tiniest Vector Database that Democratizes Personal AI with Storage-Efficient Approximate Nearest Neighbor (ANN) Search Index

    August 12, 2025

Embedding-based search outperforms traditional keyword-based methods across many domains by capturing semantic similarity with dense vector representations and approximate nearest neighbor (ANN) search. However, the ANN data structures introduce substantial storage overhead, often 1.5 to 7 times the size of the original raw data. This overhead is manageable in large-scale web applications but becomes impractical for personal devices or large private datasets. Reducing storage to under 5% of the original data size is critical for edge deployment, yet existing solutions fall short: techniques like product quantization (PQ) can shrink the index, but they either degrade accuracy or increase search latency.
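
To put that overhead in perspective, here is a rough back-of-the-envelope estimate of how a graph-based index compares to the raw text it serves. The corpus size, embedding dimension, chunk size, and graph degree below are illustrative assumptions, not figures from the LEANN paper:

```python
# Rough storage estimate for a graph-based ANN index over personal documents.
# All constants here are illustrative assumptions, not numbers from the paper.
num_chunks = 1_000_000            # text chunks to index
dim = 384                         # embedding dimension of a small sentence encoder
bytes_per_float = 4
avg_degree = 32                   # neighbor links stored per node in the proximity graph
bytes_per_id = 4
raw_bytes_per_chunk = 500         # average raw text size of one chunk

embedding_bytes = num_chunks * dim * bytes_per_float      # ~1.5 GB of stored vectors
graph_bytes = num_chunks * avg_degree * bytes_per_id      # ~128 MB of adjacency lists
raw_text_bytes = num_chunks * raw_bytes_per_chunk         # ~500 MB of original text

ratio = (embedding_bytes + graph_bytes) / raw_text_bytes
print(f"index is roughly {ratio:.1f}x the size of the raw data")   # ~3.3x here
```

Under these assumptions the index alone is several times larger than the data it indexes, which is exactly the regime the 5% storage target is meant to escape.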

Vector search methods typically rely on inverted file (IVF) indexes or proximity graphs. Graph-based approaches such as HNSW, NSG, and Vamana are considered state-of-the-art because of their balance of accuracy and efficiency. Efforts to shrink the graph, such as learned neighbor selection, are limited by high training costs and a dependence on labeled data. For resource-constrained environments, DiskANN and Starling store data on disk, while FusionANNS optimizes hardware usage. Methods like AiSAQ and EdgeRAG attempt to minimize memory usage but still suffer from high storage overhead or performance degradation at scale. Embedding compression techniques such as PQ and RaBitQ provide quantization with theoretical error bounds but struggle to maintain accuracy under tight storage budgets.
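
For contrast with LEANN's approach, a conventional graph index built with an off-the-shelf library keeps every embedding resident inside the index. A minimal sketch using hnswlib (the library choice and all parameters are assumptions for illustration, not part of the paper's setup):

```python
import numpy as np
import hnswlib

dim, n = 384, 10_000
data = np.random.rand(n, dim).astype(np.float32)   # stand-in for document embeddings

# Standard HNSW: both the proximity graph and all n*dim embedding values
# are stored inside the index, which is the storage overhead LEANN avoids.
index = hnswlib.Index(space="cosine", dim=dim)
index.init_index(max_elements=n, ef_construction=200, M=16)
index.add_items(data, np.arange(n))
index.set_ef(64)                                    # search-time candidate list size

query = np.random.rand(1, dim).astype(np.float32)
labels, distances = index.knn_query(query, k=3)
print(labels, distances)
```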

Researchers from UC Berkeley, CUHK, Amazon Web Services, and UC Davis have developed LEANN, a storage-efficient ANN search index optimized for resource-limited personal devices. It integrates a compact graph-based structure with an on-the-fly recomputation strategy, enabling fast and accurate retrieval while minimizing storage overhead. LEANN achieves storage up to 50 times smaller than standard indexes by reducing the index size to under 5% of the original raw data, while maintaining 90% top-3 recall in under 2 seconds on real-world question-answering benchmarks. To reduce latency, LEANN uses a two-level traversal algorithm and dynamic batching that combines embedding computations across search hops, improving GPU utilization.

LEANN’s architecture centers on graph-based recomputation, supported by two main techniques and a simple system workflow. Built on the HNSW framework, it observes that each query needs embeddings for only a small subset of nodes, so it computes them on demand instead of pre-storing all embeddings. To make this practical, LEANN introduces two techniques: (a) a two-level graph traversal with dynamic batching to lower recomputation latency, and (b) a high-degree-preserving graph pruning method to reduce metadata storage. In the system workflow, LEANN first computes embeddings for all dataset items and then constructs a vector index using an off-the-shelf graph-based indexing approach; the full embeddings themselves do not need to be kept afterward, which is where the storage savings come from.
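
The core idea can be illustrated with a simplified, hypothetical traversal loop: embeddings are computed only for nodes the search actually touches, and the uncached nodes at each hop are embedded in one batch so the encoder (or GPU) stays busy. This is a minimal sketch of graph-based recomputation in general, not LEANN's actual two-level algorithm or pruning scheme:

```python
import heapq
import numpy as np

def search_with_recompute(graph, texts, embed_fn, query_vec, entry, k=3, beam=8):
    """Best-first search over a proximity graph where node embeddings are
    recomputed on demand instead of being stored with the index."""
    cache = {}                                        # node id -> embedding for this query

    def distances(node_ids):
        # Batch-embed every uncached node in one call (dynamic batching idea).
        missing = [i for i in node_ids if i not in cache]
        if missing:
            vecs = embed_fn([texts[i] for i in missing])
            cache.update(dict(zip(missing, vecs)))
        return {i: float(np.linalg.norm(cache[i] - query_vec)) for i in node_ids}

    visited = {entry}
    frontier = [(distances([entry])[entry], entry)]   # min-heap of (distance, node)
    best = list(frontier)

    while frontier:
        d, node = heapq.heappop(frontier)
        if len(best) >= beam and d > max(b[0] for b in best):
            break                                     # no closer candidates remain
        neighbors = [n for n in graph[node] if n not in visited]
        visited.update(neighbors)
        for n2, d2 in distances(neighbors).items():
            heapq.heappush(frontier, (d2, n2))
            best.append((d2, n2))
        best = sorted(best)[:beam]                    # keep the current best candidates

    return [node for _, node in sorted(best)[:k]]
```

Only the adjacency lists (`graph`) and the raw texts need to be stored; the per-query embedding cost is what LEANN's two-level traversal and batching are designed to keep low.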

In terms of storage and latency, LEANN outperforms EdgeRAG, an IVF-based recomputation method, achieving latency reductions of 21.17 to 200.60 times across datasets and hardware platforms. This advantage stems from LEANN’s polylogarithmic recomputation complexity, which scales more efficiently than EdgeRAG’s √N growth. On downstream RAG tasks, LEANN achieves higher accuracy on most datasets, with the exception of GPQA, where a distributional mismatch limits its effectiveness, and HotpotQA, where the single-hop retrieval setup limits accuracy gains because the dataset demands multi-hop reasoning. Despite these limitations, LEANN shows strong performance across diverse benchmarks.
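
The scaling difference is easy to see numerically; a quick comparison of the two growth rates (the constant factors here are arbitrary, only the asymptotic trend is meaningful):

```python
import math

# Illustrative only: compare polylogarithmic growth against sqrt(N) growth.
for n in (10**5, 10**6, 10**7, 10**8):
    polylog = math.log2(n) ** 2      # stand-in for LEANN's polylog recomputation cost
    sqrt_n = math.sqrt(n)            # stand-in for EdgeRAG-style sqrt(N) growth
    print(f"N={n:>11,}  polylog ~ {polylog:8.1f}   sqrt(N) ~ {sqrt_n:10.1f}")
```

As N grows, the √N term quickly dominates, which matches the widening latency gap reported across datasets.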

In summary, the researchers introduced LEANN, a storage-efficient neural retrieval system that combines graph-based recomputation with targeted optimizations. By integrating a two-level search algorithm and dynamic batching, it eliminates the need to store full embeddings, achieving large reductions in storage overhead while maintaining high accuracy. LEANN still has limitations, such as high peak storage usage during index construction, which could be addressed through pre-clustering or similar techniques. Future work may focus on reducing latency and improving responsiveness, paving the way for broader adoption in resource-constrained environments.


Check out the Paper and GitHub Page for more details.

    The post Meet LEANN: The Tiniest Vector Database that Democratizes Personal AI with Storage-Efficient Approximate Nearest Neighbor (ANN) Search Index appeared first on MarkTechPost.
