
    Path: A Machine Learning Method for Training Small-Scale (Under 100M Parameter) Neural Information Retrieval Models with as few as 10 Gold Relevance Labels

    June 26, 2024

    Creative applications of pretrained language models have led to substantial improvements in the quality of information retrieval (IR). However, existing IR models, especially those that generalize well to new and uncommon topics, are usually trained on large datasets comprising hundreds of thousands or even millions of queries and relevance judgments.

    Whether such large-scale data is actually necessary for optimizing language models on retrieval tasks is an open question, with both scientific and engineering implications. Scientifically, it is not apparent that this volume of data is required at all; from an engineering standpoint, it is unclear how to train IR models for languages with little or no labeled IR data, or for niche domains where large annotated collections do not exist.

    In recent research, a team from the University of Waterloo, Stanford University, and IBM Research AI has presented a technique for training small-scale neural information retrieval models, that is, models with fewer than 100 million parameters, using as few as ten gold relevance labels. The approach is named PATH: Prompts as Auto-optimized Training Hyperparameters.

    The foundation of the method is the generation of synthetic document queries with a language model (LM). The key innovation is that the LM automatically optimizes the prompt it uses to generate these synthetic queries, so that the quality of the resulting training data steadily improves.
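
    As a concrete illustration, the query-generation step might look like the sketch below. The prompt template and the call_lm helper are hypothetical stand-ins; the paper does not prescribe a particular LM API.

        def call_lm(prompt: str) -> str:
            """Hypothetical stand-in for any text-generation API call."""
            raise NotImplementedError("plug in an LM client here")

        # An illustrative prompt template (not the authors' exact wording).
        QUERY_PROMPT = (
            "Write a search query for which the following passage "
            "would be a highly relevant answer.\n"
            "Passage: {passage}\n"
            "Query:"
        )

        def generate_synthetic_query(passage: str, template: str = QUERY_PROMPT) -> str:
            """Ask the LM to invent a query that the given passage answers."""
            return call_lm(template.format(passage=passage)).strip()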

    The team describes the procedure as follows. The starting point is a text corpus and a very small number of relevance labels. An LM then generates candidate search queries that could plausibly be answered by documents in the corpus, and these queries are paired with passages to form synthetic training data. The crucial step is optimizing the LM prompt that drives query generation, using feedback from the training procedure to raise the quality of the synthetic data.
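
    Conceptually, the prompt acts as a training hyperparameter: each candidate prompt yields a synthetic dataset, a small retriever is trained on it, and the handful of gold labels scores the result. The following is a minimal sketch of that loop; every helper (generate_query, train_retriever, evaluate_on_gold) and the candidate-prompt list are assumed placeholders, not the authors' implementation.

        from typing import Callable, List, Tuple

        Pair = Tuple[str, str]  # (query, passage)

        def path_train(
            corpus: List[str],             # passages to generate queries for
            gold_labels: List[Pair],       # ~10 gold (query, relevant_passage) pairs
            candidate_prompts: List[str],  # prompt variants proposed by the LM
            generate_query: Callable[[str, str], str],        # (prompt, passage) -> query
            train_retriever: Callable[[List[Pair]], object],
            evaluate_on_gold: Callable[[object, List[Pair]], float],
        ) -> object:
            """Return the retriever trained with the best-scoring prompt."""
            best_score, best_model = float("-inf"), None
            for prompt in candidate_prompts:
                # 1. Build synthetic (query, passage) pairs with this prompt.
                pairs = [(generate_query(prompt, p), p) for p in corpus]
                # 2. Fine-tune a small (<100M-parameter) retriever on them.
                model = train_retriever(pairs)
                # 3. Score on the few gold labels; this feedback signal is what
                #    makes the prompt an "auto-optimized training hyperparameter".
                score = evaluate_on_gold(model, gold_labels)
                if score > best_score:
                    best_score, best_model = score, model
            return best_model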

    In experiments on the BIRCO benchmark, which consists of difficult and unusual IR tasks, the team found that this approach substantially improves the performance of the trained models. In particular, the small-scale models, trained with minimal labeled data and optimized prompts, outperform RankZephyr and are competitive with RankLLaMA. These latter models are significantly larger, with 7 billion parameters, and were trained on datasets with more than 100,000 labels.

    These outcomes demonstrate how well automatic prompt optimization produces high-quality synthetic datasets. The approach not only shows that effective IR models can be trained with far fewer resources, but also that, with the right adjustments to the data creation process, smaller models can outperform much bigger ones.

    Check out the Paper. All credit for this research goes to the researchers of this project.