
    Model Performance Begins with Data: Researchers from Ai2 Release DataDecide—A Benchmark Suite to Understand Pretraining Data Impact Across 30K LLM Checkpoints

    April 17, 2025

    The Challenge of Data Selection in LLM Pretraining

    Developing large language models entails substantial computational investment, especially when experimenting with alternative pretraining corpora. Comparing datasets at full scale—on the order of billions of parameters and hundreds of billions of tokens—can consume hundreds of thousands of GPU hours per run. Consequently, practitioners resort to smaller-scale experiments as proxies for large-model behavior. Yet these “pilot” studies are rarely published, producing a fragmented landscape in which each laboratory repeats similar small-scale tests without shared benchmarks or methodologies. This opacity impedes reproducibility, underutilizes collective insights, and obscures the true trade-offs between development compute and final model performance.

    DataDecide

    To address these limitations, the Allen Institute for AI (AI2), in collaboration with the University of Washington and the University of Pennsylvania, today releases DataDecide—a comprehensive suite of controlled pretraining experiments spanning 25 distinct corpora and 14 model sizes from 4 million to 1 billion parameters. DataDecide’s datasets include well‑known sources such as Dolma, DCLM, RefinedWeb, C4, and FineWeb, alongside variations produced by domain ablation, deduplication, quality filtering, and source mixing. Each model is trained at a fixed token‑to‑parameter ratio of 100 (100 tokens per parameter), reflecting the “overtraining” regime that optimizes inference efficiency. In total, over 1,050 models and more than 30,000 checkpoints—each evaluated across ten downstream tasks—are released to the public.
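    The fixed token-to-parameter ratio above directly determines each model's training token budget. A minimal sketch (the three sizes shown are illustrative endpoints from the stated 4M–1B range, not the full list of 14 configurations):

```python
# Token budgets implied by DataDecide's fixed ratio of 100 tokens per
# parameter (the "overtraining" regime described in the article).
RATIO = 100  # tokens per parameter

def token_budget(n_params: int, ratio: int = RATIO) -> int:
    """Training tokens for a model trained at a fixed token/param ratio."""
    return n_params * ratio

for n_params in (4_000_000, 150_000_000, 1_000_000_000):
    print(f"{n_params:>13,} params -> {token_budget(n_params):>15,} tokens")
```

    So the 1B-parameter target scale is trained on roughly 100 billion tokens, while the smallest 4M models see only 400 million.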

    Technical Structure and Pragmatic Benefits

    DataDecide orchestrates experiments along three axes:

      • Data Recipes: Twenty-five well-documented pretraining corpora, each embodying different curation strategies (see Table 1 in the paper for full recipe specifications).
      • Model Scale: Fourteen parameter configurations (4M–1B), programmatically derived via the OLMo model ladder to ensure consistent training hyperparameters across scales. Each non-target scale includes two “early-stop” seed runs, while the 1B-parameter models feature three complete seed reruns to quantify variability.
      • Evaluation Suite: The OLMES benchmark of ten multiple-choice tasks (e.g., MMLU, ARC Easy/Challenge, HellaSwag, MBPP, HumanEval) provides a multifaceted view of language understanding, commonsense reasoning, and code generation performance.

      By releasing both pretraining datasets and corresponding models, DataDecide enables researchers to:

      • Reuse checkpoints for new evaluations without retraining.
      • Experiment with novel prediction methods (e.g., advanced scaling‑law fits, smoothing techniques).
      • Investigate benchmark sensitivity to training data and model scale.
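      One use of the released checkpoints mentioned above is fitting scaling-law extrapolations from small-scale measurements. The sketch below fits a simple power law, loss ~ a * C^(-b), to synthetic (compute, loss) points via linear regression in log-log space; the data values are invented for illustration and are not DataDecide results:

```python
import numpy as np

# Synthetic small-scale measurements: training compute (FLOPs) and loss.
# A noiseless power law is used here purely to keep the example checkable.
compute = np.array([1e17, 3e17, 1e18, 3e18, 1e19])
loss = 5.0 * compute ** -0.05

# Linear fit in log-log space recovers the exponent b and the scale a.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope
print(f"fitted: loss ~ {a:.2f} * C^(-{b:.3f})")

# Extrapolate the fitted curve to a larger, target-scale compute budget.
target_compute = 1e21
predicted_loss = a * target_compute ** -b
print(f"predicted loss at C = {target_compute:.0e}: {predicted_loss:.3f}")
```

      Real checkpoint data is noisy, which is exactly why the paper compares such extrapolations against simpler single-scale heuristics.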

      Key Findings and Quantitative Insights

      DataDecide’s systematic analysis yields four practical guidelines:

        • Single‑Scale Baseline Robustness: Ranking corpora by downstream accuracy at a single, small scale (e.g., 150 M parameters) achieves ~80 percent decision accuracy for predicting the best dataset at the 1 B‑parameter target scale. In contrast, eight baseline scaling‑law extrapolations do not surpass this simple heuristic, underscoring its cost‑effectiveness.
        • Task-Dependent Compute Sensitivity: The compute budget required for reliable decisions varies markedly by task. Benchmarks like MMLU and ARC Easy become predictable with less than 0.01 percent of the target compute, whereas HellaSwag and SocialIQA demand orders of magnitude more FLOPs to achieve similar decision accuracy.
        • Proxy Metric Selection: Continuous likelihood metrics—specifically the character-normalized average probability of correct continuations (CORRECT PROB) and total probability (TOTAL PROB)—outperform discrete accuracy measures at small scales. This is most pronounced on code tasks (MBPP, HumanEval), where decision accuracy jumps from near-random to over 80 percent with CORRECT PROB as the proxy.
        • Variance and Spread Considerations: High decision accuracy correlates with low run‑to‑run variance (noise) and ample performance spread across datasets. Proxy metrics that reduce noise or amplify spread thus directly enhance prediction reliability.
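        The “decision accuracy” notion running through these findings can be made concrete as a pairwise comparison: for every pair of data recipes, does the small-scale proxy pick the same winner as the target-scale result? A minimal sketch, with invented scores (the recipe names echo corpora mentioned above; none of the numbers are from the paper):

```python
from itertools import combinations

# Invented benchmark scores at a small proxy scale and at the 1B target
# scale, keyed by data recipe. Purely illustrative values.
small_scale = {"dolma": 0.42, "dclm": 0.47, "c4": 0.39, "fineweb": 0.48}
target_scale = {"dolma": 0.55, "dclm": 0.61, "c4": 0.50, "fineweb": 0.60}

def decision_accuracy(proxy, target):
    """Fraction of recipe pairs where proxy and target agree on the winner."""
    pairs = list(combinations(proxy, 2))
    agree = sum(
        (proxy[x] > proxy[y]) == (target[x] > target[y]) for x, y in pairs
    )
    return agree / len(pairs)

print(f"decision accuracy: {decision_accuracy(small_scale, target_scale):.2f}")
```

        In this toy setup the proxy misorders one of the six pairs (dclm vs. fineweb), giving a decision accuracy of about 0.83, in the same ballpark as the ~80 percent reported for the single-scale baseline.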

        Concluding Perspective

        DataDecide transforms pretraining data selection from an ad hoc art into a transparent, data‐driven science. By open‑sourcing all 25 corpora, 1,050 models, 30,000+ checkpoints, and evaluation scripts on Hugging Face and GitHub, AI2 invites the community to reproduce findings, extend evaluations to new benchmarks, and innovate on decision‑making methods. As LLM development continues to demand ever‑greater compute resources, DataDecide offers a principled framework for minimizing wasted experiments and maximizing insight—paving the way toward more efficient, reproducible, and collaborative AI research.


        Check out the Paper, the model on Hugging Face, and the technical details.

          The post Model Performance Begins with Data: Researchers from Ai2 Release DataDecide—A Benchmark Suite to Understand Pretraining Data Impact Across 30K LLM Checkpoints appeared first on MarkTechPost.
