
    Microsoft AI Researchers Introduce Advanced Low-Bit Quantization Techniques to Enable Efficient LLM Deployment on Edge Devices without High Computational Costs

    February 6, 2025

    Edge devices such as smartphones, IoT gadgets, and embedded systems process data locally, which improves privacy, reduces latency, and makes applications more responsive, and AI is rapidly being integrated into them. Deploying large language models (LLMs) on these devices, however, is difficult because of the models' high computational and memory demands.

    LLMs are massive in both size and power requirements. With billions of parameters, they demand memory and processing capacity beyond what most edge devices can provide. While quantization techniques reduce model size and power consumption, conventional hardware is optimized for symmetric computations and offers only limited support for mixed-precision arithmetic. This lack of native hardware support for low-bit computation restricts deployment across mobile and embedded platforms.

    Prior methods for running LLMs on edge devices rely on high-precision formats such as FP32 and FP16, which improve numerical stability but require significant memory and energy. Some approaches use lower-bit quantization (e.g., int8 or int4) to reduce resource demands, but they run into compatibility issues on existing hardware. Another technique, dequantization, re-expands compressed models before computation, but it introduces latency and negates the efficiency gains. Finally, traditional general matrix multiplication (GEMM) requires uniform precision across operands, which complicates performance optimization across different hardware architectures.
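
    To make the trade-off concrete, here is a minimal sketch (our illustration, not code from the paper) of symmetric int4 quantization and the conventional dequantize-then-multiply path whose re-expansion overhead is described above:

        # Minimal sketch, not Microsoft's code: symmetric per-tensor int4
        # quantization, plus the "dequantize, then GEMM" path whose
        # re-expansion cost low-bit kernels try to avoid.
        import numpy as np

        def quantize_int4(w: np.ndarray):
            """Map float weights to integers in [-8, 7] with one float scale."""
            scale = np.abs(w).max() / 7.0  # signed 4-bit range is [-8, 7]
            q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
            return q, scale

        def dequant_matmul(x: np.ndarray, q: np.ndarray, scale: float):
            """Conventional path: expand weights back to float, then multiply.
            The expansion step is the latency overhead noted above."""
            return x @ (q.astype(np.float32) * scale)

        w = np.random.randn(256, 256).astype(np.float32)
        x = np.random.randn(8, 256).astype(np.float32)
        q, s = quantize_int4(w)
        print(np.abs(x @ w - dequant_matmul(x, q, s)).max())  # quantization error

    Storage drops from 32 bits to 4 bits per weight, but every inference still pays for the float expansion; the three components below target exactly that cost.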

    Microsoft researchers introduced a series of advancements to enable efficient low-bit quantization for LLMs on edge devices. Their approach includes three major innovations: 

    1. Ladder data type compiler 
    2. T-MAC mpGEMM library
    3. LUT Tensor Core hardware architecture 

    These techniques aim to overcome hardware limitations by facilitating mixed-precision general matrix multiplication (mpGEMM) and reducing computational overhead. With these solutions, researchers propose a practical framework that supports efficient LLM inference without requiring specialized GPUs or high-power accelerators.

    The first component, the Ladder data type compiler, bridges the gap between low-bit model representations and hardware constraints. It converts unsupported data formats into hardware-compatible representations while maintaining efficiency, ensuring that modern deep learning architectures can use custom data types without sacrificing performance.
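
    Ladder itself is a full compiler, but its core idea can be shown with a toy sketch (our illustration, assuming signed int4 as the unsupported type): store a type the hardware lacks inside a natively supported uint8 container by packing two nibbles per byte and unpacking on demand:

        # Toy illustration of the idea behind Ladder, not the actual compiler:
        # represent unsupported signed int4 values inside hardware-native
        # uint8 storage, two nibbles per byte.
        import numpy as np

        def pack_int4(q: np.ndarray) -> np.ndarray:
            """Pack int8 values in [-8, 7] pairwise into uint8 bytes."""
            u = (q.astype(np.int16) & 0xF).astype(np.uint8)  # two's-complement nibbles
            return (u[0::2] | (u[1::2] << 4)).astype(np.uint8)

        def unpack_int4(packed: np.ndarray) -> np.ndarray:
            """Recover the signed int4 values from the packed byte stream."""
            lo = (packed & 0xF).astype(np.int8)
            hi = (packed >> 4).astype(np.int8)
            out = np.empty(packed.size * 2, dtype=np.int8)
            out[0::2], out[1::2] = lo, hi
            return np.where(out > 7, out - 16, out)  # sign-extend each nibble

        q = np.array([-8, -1, 0, 3, 7, -4], dtype=np.int8)
        assert np.array_equal(unpack_int4(pack_int4(q)), q)

    Ladder automates this kind of translation, generating the layout and conversion code for whatever custom data type the model uses.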


    The T-MAC mpGEMM library optimizes mixed-precision computations using a lookup table (LUT)–based method instead of traditional multiplication operations. This innovation eliminates the need for dequantization and significantly enhances CPU computational efficiency. 
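
    The lookup-table trick can be sketched in a few lines (a deliberate simplification, assuming 1-bit weights and groups of 4, not T-MAC's actual kernel): precompute the dot product of each activation group with all 16 possible sign patterns once, then answer every weight row with table lookups instead of multiplications:

        # Simplified LUT-based dot product in the spirit of T-MAC (assumes
        # 1-bit weights, group size 4; the real kernel is far more optimized).
        import numpy as np

        G = 4  # weights consumed per table lookup
        # All 2^G sign patterns a group of 1-bit weights can take.
        patterns = np.array([[1 if (p >> i) & 1 else -1 for i in range(G)]
                             for p in range(2 ** G)], dtype=np.float32)

        def lut_dot(x: np.ndarray, w_bits: np.ndarray) -> float:
            """x: activations (length divisible by G); w_bits: one 4-bit
            index per group encoding the signs of its binary weights."""
            xg = x.reshape(-1, G)       # split activations into groups
            table = xg @ patterns.T     # (n_groups, 16) partial sums, built once
            return float(table[np.arange(len(w_bits)), w_bits].sum())

        x = np.random.randn(16).astype(np.float32)
        signs = np.random.choice([-1, 1], size=16)
        w_bits = np.array([sum(int(signs[g * G + i] > 0) << i for i in range(G))
                           for g in range(16 // G)])
        print(np.isclose(lut_dot(x, w_bits), float(x @ signs)))  # True

    Because the table depends only on the activations, it is built once and reused across every row of the weight matrix, which is what lets the method skip both multiplication and dequantization.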


    Finally, the LUT Tensor Core is a specialized hardware accelerator designed for low-bit quantization. It uses an optimized instruction set to improve performance while reducing power consumption.
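
    The accelerator itself cannot be reproduced in software, but a back-of-envelope operation count (illustrative arithmetic with assumed sizes, not measured hardware figures) shows where the efficiency comes from: the lookup tables are built once per activation vector and amortized over every row of the weight matrix:

        # Illustrative operation count with assumed sizes, not hardware data.
        G, ROWS, COLS = 4, 4096, 4096   # group size and weight matrix shape
        groups = COLS // G

        classic_macs = ROWS * COLS      # one multiply-add per weight
        table_adds = groups * 2 ** G    # build a 16-entry LUT per group, once
        lut_adds = ROWS * groups        # one lookup-and-add per group per row
        print(classic_macs, table_adds + lut_adds)
        # 16777216 multiply-adds vs 4210688 plain adds, with no multipliers

    Replacing multipliers with lookups and adders is what drives the energy-efficiency and compute-density gains reported below.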


    In evaluations, the Ladder data type compiler outperforms conventional deep neural network (DNN) compilers by up to 14.6 times for specific low-bit computations. When tested on edge devices like the Surface Laptop 7 with the Qualcomm Snapdragon X Elite chipset, the T-MAC library achieved 48 tokens per second for the 3B BitNet-b1.58 model, outperforming existing inference libraries. On lower-end devices such as the Raspberry Pi 5, it achieved 11 tokens per second, demonstrating significant efficiency improvements. Meanwhile, the LUT Tensor Core hardware achieved an 11.2-fold increase in energy efficiency and a 20.9-fold boost in computational density.

    Key takeaways from Microsoft's research include:

    1. Low-bit quantization reduces model size, enabling efficient execution on edge devices.
    2. The T-MAC library enhances inference speed by eliminating traditional multiplication operations.
    3. The Ladder compiler ensures seamless integration of custom low-bit data formats with existing hardware.
    4. Optimized techniques reduce power usage, making LLMs feasible for low-energy devices.
    5. These methods allow LLMs to operate effectively on a wide range of hardware, from high-end laptops to low-power IoT devices.
      6. These innovations deliver 48 tokens per second for the 3B BitNet-b1.58 model on the Snapdragon X Elite, 30 tokens per second for a 2-bit 7B Llama model, and 20 tokens per second for a 4-bit 7B Llama model.
    7. They also enable AI-driven applications across mobile, robotic, and embedded AI systems by making LLMs more accessible.

    In conclusion, the study highlights the importance of hardware-aware quantization techniques for deploying LLMs on edge devices. The proposed solutions effectively address the long-standing challenges of memory consumption, computational efficiency, and hardware compatibility. By implementing Ladder, T-MAC, and LUT Tensor Core, researchers have paved the way for next-generation AI applications that are faster, more energy-efficient, and more scalable across various platforms.


    Check out the Details and Paper. All credit for this research goes to the researchers of this project.

    The post Microsoft AI Researchers Introduce Advanced Low-Bit Quantization Techniques to Enable Efficient LLM Deployment on Edge Devices without High Computational Costs appeared first on MarkTechPost.
