
    Huawei AI Introduces ‘Kangaroo’: A Novel Self-Speculative Decoding Framework Tailored for Accelerating the Inference of Large Language Models

    May 2, 2024

The development of natural language processing has been significantly propelled by advances in large language models (LLMs). These models have shown remarkable performance on tasks such as translation, question answering, and text summarization, demonstrating their ability to generate high-quality text. Despite this effectiveness, one major limitation remains: slow inference, which hinders their use in real-time applications. The challenge stems predominantly from a memory bandwidth bottleneck rather than a lack of computational power, leading researchers to seek innovative ways to speed up inference.

The key issue lies with conventional speculative decoding methods, which rely on training separate draft models for faster text generation. These methods generate multiple tokens in parallel to accelerate the overall process, but they come with significant training costs and high latency. That latency stems primarily from their dependence on external drafter models, which introduce additional computation that slows down the pipeline.
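The draft-then-verify loop underlying speculative decoding can be sketched with toy stand-in models. This is a minimal illustration of the general technique, not Kangaroo's implementation; `draft_model` and `full_model` are hypothetical placeholders, not real LLMs:

```python
def draft_model(context, k):
    # Hypothetical cheap drafter: guesses the next k tokens by counting up.
    return [(context[-1] + i + 1) % 100 for i in range(k)]

def full_model(context):
    # Hypothetical target model: the "true" next token is previous + 1.
    return (context[-1] + 1) % 100

def speculative_decode(context, steps, k=4):
    context = list(context)
    for _ in range(steps):
        proposal = draft_model(context, k)
        accepted = []
        for tok in proposal:
            # In practice all k positions are verified in ONE parallel
            # forward pass of the target model; we loop here for clarity.
            if tok == full_model(context + accepted):
                accepted.append(tok)
            else:
                break  # first mismatch invalidates the rest of the draft
        if not accepted:
            # On immediate rejection, take one token from the target model
            # so generation always makes progress.
            accepted = [full_model(context)]
        context.extend(accepted)
    return context
```

The speedup comes from the verification pass scoring all drafted positions at once, so each target-model call can commit several tokens instead of one.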

Methods like Medusa and Lookahead were designed to make speculative decoding more efficient by training smaller draft models that work alongside the main language model. However, they still face latency issues: the draft models require substantial computational resources and parameter updates, which slows the overall inference process and blunts the intended acceleration.

Researchers at Huawei Noah’s Ark Lab have developed an innovative framework named Kangaroo. It addresses the high latency of speculative decoding with a lossless self-speculative decoding framework: unlike traditional methods that rely on external drafter models, Kangaroo uses a fixed shallow sub-network of the LLM itself as the draft model. A lightweight adapter module, trained to bridge the gap between the sub-network and the full model, connects the two and enables efficient, accurate token generation.
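The self-speculative idea can be sketched under simplifying assumptions. The layer count, shapes, and adapter form below are illustrative, not the paper’s exact architecture; the point is that the draft path reuses the target model’s first few layers plus a small adapter and the shared LM head, so the only new parameters are the adapter’s:

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 8, 16
# Full target model: a stack of layers plus a language-model head.
layers = [rng.standard_normal((d, d)) * 0.1 for _ in range(6)]
lm_head = rng.standard_normal((d, vocab)) * 0.1
# The ONLY trained draft-side parameters: a lightweight adapter
# (identity here as a stand-in for the trained module).
adapter = np.eye(d)

def hidden_after(x, n_layers):
    # Run the first n_layers of the shared stack.
    h = x
    for W in layers[:n_layers]:
        h = np.tanh(h @ W)
    return h

def draft_logits(x, shallow=2):
    # Draft path: shallow sub-network -> adapter -> shared LM head.
    return hidden_after(x, shallow) @ adapter @ lm_head

def target_logits(x):
    # Verification path: the full network with the same LM head.
    return hidden_after(x, len(layers)) @ lm_head
```

Because the shallow layers and LM head are frozen and shared with the target model, drafting costs only a fraction of a full forward pass and adds almost no parameters.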

To enhance efficiency further, Kangaroo employs an early-exiting mechanism: drafting halts once the draft model’s confidence in the current token falls below a specified threshold, avoiding unnecessary computation. The adapter module comprises a multi-head attention mechanism and two normalization layers, providing sufficient capacity for high-quality token generation, while the early-exit layer balances the trade-off between token acceptance rate and drafting efficiency. This dynamic mechanism lets Kangaroo generate tokens more efficiently by exploiting parallel computing and skipping wasted work.
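The early-exit rule can be sketched as follows. The threshold value and the `draft_until_uncertain` helper are illustrative assumptions, not the paper’s exact formulation:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def draft_until_uncertain(step_logits, threshold=0.6, max_tokens=8):
    # step_logits: draft-model logits for each successive drafted position.
    drafted = []
    for logits in step_logits[:max_tokens]:
        probs = softmax(logits)
        conf = max(probs)
        if conf < threshold:
            break  # early exit: too uncertain to keep drafting
        drafted.append(probs.index(conf))
    return drafted
```

Stopping at the first low-confidence token caps the number of drafted positions that the full model would likely reject anyway, which is the acceptance-rate vs. drafting-cost trade-off described above.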

Extensive experiments on Spec-Bench demonstrate Kangaroo’s effectiveness. It achieved a speedup ratio of up to 1.7× over competing methods while using 88.7% fewer additional parameters than Medusa, which adds 591 million. These gains are attributed to Kangaroo’s double early-exit mechanism and the efficient design of its adapter network. By significantly reducing latency, the framework is well suited to real-time natural language processing applications.
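Working out the parameter comparison quoted above: 88.7% fewer than Medusa’s 591 million additional parameters leaves roughly 67 million for Kangaroo.

```python
# Arithmetic check of the quoted figures (the exact Kangaroo parameter
# count is implied by the percentages, not stated directly).
medusa_extra = 591_000_000
kangaroo_extra = medusa_extra * (1 - 0.887)
print(round(kangaroo_extra / 1e6, 1))  # roughly 66.8 million
```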

In conclusion, Kangaroo is a pioneering solution for accelerating LLM inference. By using a fixed shallow sub-network of the LLM as its draft model, Kangaroo eliminates the need for costly, time-consuming external drafter models, and its early-exit mechanism further improves speed and efficiency, allowing it to outperform other speculative decoding methods. With up to a 1.7× speedup and a drastic reduction in additional parameters, Kangaroo presents a promising approach to improving the efficiency of large language models, significantly reducing latency without compromising accuracy.

Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.


    The post Huawei AI Introduces ‘Kangaroo’: A Novel Self-Speculative Decoding Framework Tailored for Accelerating the Inference of Large Language Models appeared first on MarkTechPost.
