
    From 100,000 to Under 500 Labels: How Google AI Cuts LLM Training Data by Orders of Magnitude

    August 10, 2025

    Google Research has unveiled a groundbreaking method for fine-tuning large language models (LLMs) that slashes the amount of required training data by up to 10,000x while maintaining or even improving model quality. The approach centers on active learning, focusing expert labeling effort on the most informative examples: the “boundary cases” where model uncertainty peaks.

    The Traditional Bottleneck

    Fine-tuning LLMs for tasks that demand deep contextual and cultural understanding, such as ad content safety or moderation, has typically required massive, high-quality labeled datasets. Because most content is benign, only a small fraction of examples actually matters for policy-violation detection, which drives up the cost and complexity of data curation. Standard methods also struggle to keep up when policies or problematic patterns shift, forcing expensive retraining.

    Google’s Active Learning Breakthrough

    How It Works:

    • LLM-as-Scout: The LLM is used to scan a vast corpus (hundreds of billions of examples) and identify cases it’s least certain about.
    • Targeted Expert Labeling: Instead of labeling thousands of random examples, human experts annotate only the borderline, confusing items.
    • Iterative Curation: This process repeats, with each batch of new “problematic” examples informed by the latest model’s confusion points.
    • Rapid Convergence: Models are fine-tuned over multiple rounds, and the iteration continues until the model’s output aligns closely with expert judgment, as measured by Cohen’s Kappa, which scores agreement between annotators beyond what chance alone would produce. A minimal code sketch of this loop appears below.
    Image source: https://research.google/blog/achieving-10000x-training-data-reduction-with-high-fidelity-labels/
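    The Google post describes this loop in prose only; the sketch below is an illustrative Python rendering of that description, not code from the paper. The helper callables (score_fn, expert_label_fn, finetune_fn) are hypothetical stand-ins for the LLM scorer, the human annotation step, and the fine-tuning job; only cohen_kappa_score comes from a real library (scikit-learn).

```python
from sklearn.metrics import cohen_kappa_score

def curate_and_finetune(model, corpus, score_fn, expert_label_fn, finetune_fn,
                        rounds=5, batch_size=50, target_kappa=0.8):
    """Illustrative active-learning curation loop (hypothetical helpers).

    score_fn(model, example)    -> predicted violation probability in [0, 1]
    expert_label_fn(examples)   -> list of (example, expert_label) pairs
    finetune_fn(model, labeled) -> fine-tuned model
    """
    labeled = []  # expert-labeled boundary examples, accumulated across rounds
    for _ in range(rounds):
        # 1. LLM-as-Scout: rank candidates by uncertainty, i.e. how close the
        #    predicted violation probability sits to 0.5.
        ranked = sorted(corpus, key=lambda ex: abs(score_fn(model, ex) - 0.5))
        boundary_batch = ranked[:batch_size]

        # 2. Targeted expert labeling: only the confusing items go to humans.
        labeled += expert_label_fn(boundary_batch)

        # 3. Iterative curation: fine-tune on the small, high-fidelity set.
        model = finetune_fn(model, labeled)

        # 4. Rapid convergence: stop once the model agrees closely with the
        #    experts, measured by chance-corrected agreement (Cohen's Kappa).
        preds = [int(score_fn(model, ex) >= 0.5) for ex, _ in labeled]
        truth = [int(y) for _, y in labeled]
        if cohen_kappa_score(preds, truth) >= target_kappa:
            break
    return model, labeled
```

    The design choice mirrored here is that the only examples ever sent to humans are those whose predicted probability sits near 0.5, so each expert label carries maximal information about the decision boundary; the real system applies this at the scale of a corpus with hundreds of billions of candidates.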

    Impact:

    • Data Needs Plummet: In experiments with Gemini Nano-1 and Nano-2 models, fine-tuning on 250–450 well-chosen examples matched or exceeded the expert alignment achieved with ~100,000 random crowdsourced labels, a reduction of three to four orders of magnitude.
    • Model Quality Rises: For more complex tasks and larger models, performance improvements reached 55–65% over baseline, demonstrating more reliable alignment with policy experts.
    • Label Efficiency: Reliable gains from such tiny datasets consistently required high label quality (Cohen’s Kappa > 0.8); a short worked example of the metric follows this list.
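    For readers unfamiliar with the metric, Cohen’s Kappa corrects raw annotator agreement for the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e). The toy computation below is not taken from the Google post; it simply shows a manual calculation alongside scikit-learn’s built-in, landing just under the 0.8 quality bar cited above.

```python
from sklearn.metrics import cohen_kappa_score

# Toy example: two annotators labeling ten items as violation (1) or benign (0).
annotator_a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
annotator_b = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]

# Manual computation: kappa = (p_o - p_e) / (1 - p_e)
n = len(annotator_a)
p_o = sum(a == b for a, b in zip(annotator_a, annotator_b)) / n  # observed agreement = 0.9
rate_a, rate_b = sum(annotator_a) / n, sum(annotator_b) / n
p_e = rate_a * rate_b + (1 - rate_a) * (1 - rate_b)              # chance agreement = 0.54
kappa = (p_o - p_e) / (1 - p_e)

print(round(kappa, 2))                                           # 0.78
print(round(cohen_kappa_score(annotator_a, annotator_b), 2))     # 0.78, same value
# Both fall just below the > 0.8 bar, so these labels would not count as high quality.
```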

    Why It Matters

    This approach flips the traditional paradigm. Rather than drowning models in vast pools of noisy, redundant data, it leverages both LLMs’ ability to identify ambiguous cases and the domain expertise of human annotators where their input is most valuable. The benefits are profound:

    • Cost Reduction: Vastly fewer examples to label, dramatically lowering labor and capital expenditure.
    • Faster Updates: The ability to retrain models on a handful of examples makes adaptation to new abuse patterns, policy changes, or domain shifts rapid and feasible.
    • Societal Impact: Enhanced capacity for contextual and cultural understanding increases the safety and reliability of automated systems handling sensitive content.

    In Summary

    Google’s new methodology enables LLM fine-tuning on complex, evolving tasks with just hundreds (not hundreds of thousands) of targeted, high-fidelity labels—ushering in far leaner, more agile, and cost-effective model development.


    Check out the technical article on the Google Research blog for full details.


    The post From 100,000 to Under 500 Labels: How Google AI Cuts LLM Training Data by Orders of Magnitude appeared first on MarkTechPost.

