    Small Models, Big Impact: ServiceNow AI Releases Apriel-5B to Outperform Larger LLMs with Fewer Resources

    April 14, 2025

    As language models continue to grow in size and complexity, so do the resource requirements needed to train and deploy them. While large-scale models can achieve remarkable performance across a variety of benchmarks, they are often inaccessible to many organizations due to infrastructure limitations and high operational costs. This gap between capability and deployability presents a practical challenge, particularly for enterprises seeking to embed language models into real-time systems or cost-sensitive environments.

    In recent years, small language models (SLMs) have emerged as a potential solution, offering reduced memory and compute requirements without entirely compromising on performance. Still, many SLMs struggle to provide consistent results across diverse tasks, and their design often involves trade-offs that limit generalization or usability.

    ServiceNow AI Releases Apriel-5B: A Step Toward Practical AI at Scale

    To address these concerns, ServiceNow AI has released Apriel-5B, a new family of small language models designed with a focus on inference throughput, training efficiency, and cross-domain versatility. With 4.8 billion parameters, Apriel-5B is small enough to be deployed on modest hardware but still performs competitively on a range of instruction-following and reasoning tasks.

    The Apriel family includes two versions:

    • Apriel-5B-Base, a pretrained model intended for further tuning or embedding in pipelines.
    • Apriel-5B-Instruct, an instruction-tuned version aligned for chat, reasoning, and task completion.

    Both models are released under the MIT license, supporting open experimentation and broader adoption across research and commercial use cases.
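
    For teams that want to try either checkpoint, a minimal loading sketch with the Hugging Face transformers library is shown below. The repository ids are the ones referenced at the end of this post; the dtype and device-placement arguments are assumptions for a typical single-GPU setup rather than settings published by ServiceNow AI.

```python
# Minimal sketch, assuming the checkpoints load through the standard
# transformers AutoModel API. The repo ids come from the article; dtype and
# device_map are assumptions for a typical single-GPU setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ServiceNow-AI/Apriel-5B-Instruct"  # or "ServiceNow-AI/Apriel-5B-Base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision noted below (inference-time assumption)
    device_map="auto",           # place weights on available GPU(s)/CPU automatically
)
```

    The base checkpoint is the natural starting point for further fine-tuning or embedding in pipelines, while the instruct checkpoint can be prompted directly.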

    Architectural Design and Technical Highlights

    Apriel-5B was trained on more than 4.5 trillion tokens drawn from a dataset carefully constructed to cover multiple task categories, including natural language understanding, reasoning, and multilingual capabilities. The model uses a dense architecture optimized for inference efficiency, with key technical features such as:

    • Rotary positional embeddings (RoPE) with a context window of 8,192 tokens, supporting long-sequence tasks.
    • FlashAttention-2, enabling faster attention computation and improved memory utilization.
    • Grouped-query attention (GQA), reducing memory overhead during autoregressive decoding.
    • Training in BFloat16, which ensures compatibility with modern accelerators while maintaining numerical stability.

    These architectural decisions allow Apriel-5B to maintain responsiveness and speed without relying on specialized hardware or extensive parallelization. The instruction-tuned version was fine-tuned using curated datasets and supervised techniques, enabling it to perform well on a range of instruction-following tasks with minimal prompting.
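
    To make the RoPE and grouped-query attention bullets above more concrete, here is a small, self-contained PyTorch sketch of the two mechanisms. It is purely illustrative: the head counts and dimensions are invented for the example, and it does not reproduce Apriel-5B's actual attention implementation or its FlashAttention-2 kernels.

```python
# Illustrative sketch of RoPE + grouped-query attention (GQA).
# Shapes and head counts are made-up examples, not Apriel-5B's real config.
import torch
import torch.nn.functional as F


def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary position embeddings to a (batch, heads, seq, head_dim) tensor."""
    _, _, seq, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = torch.arange(seq, dtype=torch.float32)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()                  # (seq, half)
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)


def grouped_query_attention(q, k, v, n_kv_heads: int) -> torch.Tensor:
    """q: (b, n_q_heads, s, d); k, v: (b, n_kv_heads, s, d).
    Each group of query heads shares one K/V head, shrinking the KV cache."""
    group = q.shape[1] // n_kv_heads
    k = k.repeat_interleave(group, dim=1)                  # expand KV heads to match Q
    v = v.repeat_interleave(group, dim=1)
    return F.scaled_dot_product_attention(q, k, v, is_causal=True)


if __name__ == "__main__":
    b, s, d = 1, 16, 64
    q = rope(torch.randn(b, 8, s, d))                      # 8 query heads
    k = rope(torch.randn(b, 2, s, d))                      # only 2 K/V heads need caching
    v = torch.randn(b, 2, s, d)
    print(grouped_query_attention(q, k, v, n_kv_heads=2).shape)  # torch.Size([1, 8, 16, 64])
```

    The saving comes from the K/V tensors: during autoregressive decoding only the two key/value heads need to be cached rather than one per query head, which is the memory-overhead reduction the GQA bullet refers to.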

    Evaluation Insights and Benchmark Comparisons

    Apriel-5B-Instruct has been evaluated against several widely used open models, including Meta’s Llama-3.1-8B, Allen AI’s OLMo-2-7B, and Mistral-Nemo-12B. Despite its smaller size, Apriel shows competitive results across multiple benchmarks:

    • Outperforms both OLMo-2-7B-Instruct and Mistral-Nemo-12B-Instruct on average across general-purpose tasks.
    • Shows stronger results than Llama-3.1-8B-Instruct on math-focused tasks and on IFEval, which measures instruction-following consistency.
    • Requires significantly less compute to train (2.3x fewer GPU hours) than OLMo-2-7B, underscoring its training efficiency.

    These outcomes suggest that Apriel-5B hits a productive midpoint between lightweight deployment and task versatility, particularly in domains where real-time performance and limited resources are key considerations.
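
    A quick, informal way to see the kind of instruction-following behavior IFEval measures is to give the instruct model a prompt with explicit formatting constraints and check whether it complies. The sketch below does this with Hugging Face transformers; it assumes the tokenizer ships a chat template, and the prompt and greedy decoding settings are illustrative choices, not the benchmark's actual protocol.

```python
# Informal instruction-following smoke test (not the published evaluation setup).
# Assumes the instruct tokenizer provides a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ServiceNow-AI/Apriel-5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{
    "role": "user",
    "content": "List three prime numbers greater than 50, one per line, with no other text.",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```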

    Conclusion: A Practical Addition to the Model Ecosystem

    Apriel-5B represents a thoughtful approach to small model design, one that emphasizes balance rather than scale. By focusing on inference throughput, training efficiency, and core instruction-following performance, ServiceNow AI has created a model family that is easy to deploy, adaptable to varied use cases, and openly available for integration.

    Its strong performance on math and reasoning benchmarks, combined with a permissive license and efficient compute profile, makes Apriel-5B a compelling choice for teams building AI capabilities into products, agents, or workflows. In a field increasingly defined by accessibility and real-world applicability, Apriel-5B is a practical step forward.


    Check out ServiceNow-AI/Apriel-5B-Base and ServiceNow-AI/Apriel-5B-Instruct. All credit for this research goes to the researchers of this project.

