
    Why Small Language Models (SLMs) Are Poised to Redefine Agentic AI: Efficiency, Cost, and Practical Deployment

    June 18, 2025

    The Shift in Agentic AI System Needs

    LLMs are widely admired for their human-like capabilities and conversational skills. However, with the rapid growth of agentic AI systems, LLMs are increasingly used for repetitive, specialized tasks. This shift is gaining momentum: over half of major IT companies now use AI agents, backed by significant funding and projected market growth. These agents rely on LLMs for decision-making, planning, and task execution, typically through centralized cloud APIs, and massive investments in LLM infrastructure reflect confidence that this model will remain foundational to AI’s future.
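
    As a rough illustration of this pattern, the sketch below shows an agent loop in which every planning and tool-selection decision is delegated to a hosted LLM endpoint. It is a minimal Python sketch: the endpoint URL, JSON schema, and helper names are assumptions made for illustration, not any particular provider's API.

        import json
        import urllib.request

        CLOUD_LLM_URL = "https://llm.example.com/v1/generate"  # hypothetical endpoint, not a real provider API

        def call_cloud_llm(prompt: str) -> str:
            # Send one prompt to the hosted model and return its text reply (assumed JSON schema).
            payload = json.dumps({"prompt": prompt}).encode()
            request = urllib.request.Request(
                CLOUD_LLM_URL, data=payload, headers={"Content-Type": "application/json"}
            )
            with urllib.request.urlopen(request) as response:
                return json.load(response)["text"]

        def run_agent(task: str, tools: dict) -> str:
            # Every step of the loop is a round trip to the centralized LLM.
            history = [f"Task: {task}"]
            while True:
                decision = call_cloud_llm("\n".join(history) + "\nNext step?")
                if decision.startswith("FINAL:"):
                    return decision.removeprefix("FINAL:").strip()
                tool_name, _, tool_arg = decision.partition(" ")
                result = tools.get(tool_name, lambda arg: "unknown tool")(tool_arg)
                history.append(f"{decision} -> {result}")

    Because each iteration round-trips to the cloud, latency and per-call cost grow with the number of agent steps, which is the cost structure the researchers go on to question.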

    SLMs: Efficiency, Suitability, and the Case Against Over-Reliance on LLMs

    Researchers from NVIDIA and Georgia Tech argue that small language models (SLMs) are not only powerful enough for many agent tasks but also more efficient and cost-effective than large models, and better suited to the repetitive, simple nature of most agentic operations. While large models remain essential for more general, conversational needs, the authors propose mixing models according to task complexity. They challenge the current reliance on LLMs in agentic systems, offer a framework for transitioning from LLMs to SLMs, and invite open discussion to encourage more resource-conscious AI deployment.

    Why SLMs are Sufficient for Agentic Operations

    The researchers argue that SLMs can handle most tasks within AI agents more practically and cost-effectively than LLMs. They define SLMs as models that run efficiently on consumer devices and highlight their strengths: lower latency, reduced energy consumption, and easier customization. Since many agent tasks are repetitive and narrowly focused, SLMs are often sufficient and even preferable. The paper therefore proposes modular agentic systems that use SLMs by default and call on LLMs only when necessary, promoting a more sustainable, flexible, and inclusive approach to building intelligent systems.
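
    A minimal sketch of that SLM-by-default routing idea follows, assuming two stand-in callables rather than specific models; the escalation heuristics are illustrative only, not taken from the paper.

        from typing import Callable

        # Illustrative markers of open-ended, conversational requests that justify the larger model.
        ESCALATE_MARKERS = ("explain", "brainstorm", "open-ended", "why")

        def route(task: str, slm: Callable[[str], str], llm: Callable[[str], str]) -> str:
            # Default to the small model; escalate only for broad conversational or very long inputs.
            if any(marker in task.lower() for marker in ESCALATE_MARKERS) or len(task.split()) > 200:
                return llm(task)   # general-purpose reasoning
            return slm(task)       # narrow, repetitive agent sub-task

        # Stand-in models for demonstration; a real system would wrap actual SLM and LLM endpoints.
        fake_slm = lambda t: f"[SLM] {t}"
        fake_llm = lambda t: f"[LLM] {t}"
        print(route("extract the invoice date from this JSON record", fake_slm, fake_llm))
        print(route("brainstorm ways to improve onboarding", fake_slm, fake_llm))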

    Arguments for LLM Dominance

    Some argue that LLMs will always outperform SLMs in general language tasks because of superior scaling and semantic abilities. Others claim that centralized LLM inference is more cost-efficient thanks to economies of scale, or that LLMs dominate simply because they had an early start and drew most of the industry’s attention. The study counters that SLMs are highly adaptable, cheaper to run, and effective on the well-defined subtasks that make up agent systems. Still, broader adoption of SLMs faces hurdles, including existing infrastructure investments, evaluation bias toward LLM benchmarks, and lower public awareness.

    Framework for Transitioning from LLMs to SLMs

    To shift smoothly from LLMs to smaller, specialized SLMs in agent-based systems, the process starts by securely collecting usage data while preserving privacy. The data is then cleaned and filtered to remove sensitive details, and clustering is used to group common tasks and identify where SLMs can take over. Suitable SLMs are chosen for each task family and fine-tuned on tailored datasets, often with efficient techniques such as LoRA; in some cases, LLM outputs guide SLM training. This is not a one-time process: models should be regularly updated and refined to stay aligned with evolving user interactions and tasks.
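
    A compact sketch of the collection, scrubbing, and clustering steps is shown below, under stated assumptions: prompts arrive as an in-memory list, the privacy filter is reduced to two regexes, and scikit-learn handles the clustering; the LoRA fine-tuning step itself is only noted in a comment.

        import re
        from collections import Counter
        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.text import TfidfVectorizer

        def scrub(prompt: str) -> str:
            # Stand-in for the privacy-preserving filtering step: redact emails and long numbers.
            prompt = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "<EMAIL>", prompt)
            return re.sub(r"\b\d{6,}\b", "<NUMBER>", prompt)

        def candidate_slm_tasks(prompts: list[str], n_clusters: int = 8, min_share: float = 0.10) -> list[int]:
            # Cluster logged agent prompts; clusters that account for a large share of traffic
            # become candidates for a dedicated SLM fine-tuned on that task family (e.g. with LoRA adapters).
            texts = [scrub(p) for p in prompts]
            features = TfidfVectorizer(stop_words="english").fit_transform(texts)
            labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
            counts = Counter(labels)
            return [cluster for cluster, count in counts.items() if count / len(texts) >= min_share]

    Each frequent cluster then supplies the tailored dataset described above, used to fine-tune the chosen SLM and refreshed as new usage data arrives.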

    Conclusion: Toward Sustainable and Resource-Efficient Agentic AI

    In conclusion, the researchers believe that shifting from LLMs to SLMs could significantly improve the efficiency and sustainability of agentic AI systems, especially for tasks that are repetitive and narrowly focused. They argue that SLMs are often powerful enough, more cost-effective, and better suited to such roles than general-purpose LLMs, while a mix of models is recommended where broader conversational abilities are required. To encourage progress and open dialogue, the authors invite feedback on their position and commit to sharing responses publicly, with the goal of inspiring more thoughtful and resource-efficient use of AI in the future.


    Check out the Paper. All credit for this research goes to the researchers of this project.

    The post Why Small Language Models (SLMs) Are Poised to Redefine Agentic AI: Efficiency, Cost, and Practical Deployment appeared first on MarkTechPost.
