
    Linear Attention Sequence Parallel (LASP): An Efficient Machine Learning Method Tailored to Linear Attention-Based Language Models

    April 7, 2024

    Linear attention-based models are gaining traction for their faster processing speed and performance comparable to softmax transformers. However, large language models (LLMs), with their large parameter counts and ever longer sequence lengths, put significant strain on contemporary GPU hardware, because a single GPU’s memory caps the maximum sequence length a model can handle.

    Sequence Parallelism (SP) techniques are often used to divide a long sequence into several sub-sequences and process them on separate GPUs. However, current SP methods do not exploit the structure of linear attention, which leaves parallelism inefficient and the methods hard to use.

    Researchers from Shanghai AI Laboratory and TapTap present the Linear Attention Sequence Parallel (LASP) technique, which optimizes sequence parallelism for linear transformers. It uses point-to-point (P2P) communication to exchange states efficiently among GPUs within or across nodes, and it makes full use of the right-product kernel trick of linear attention. Importantly, it does not rely on attention-head partitioning, so it adapts readily to multi-head, multi-query, and grouped-query attention.
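
    To make the right-product trick concrete, here is a minimal single-head sketch, not the authors’ code: softmax attention must materialize an N×N score matrix, while linear attention reorders the product so that the expensive intermediate is only d×d. The elu-plus-one feature map is an illustrative assumption.

    ```python
    # Minimal sketch of the right-product kernel trick (illustrative, not LASP's kernels).
    import torch
    import torch.nn.functional as F

    def softmax_attention(q, k, v):
        # q, k, v: (N, d). The (N, N) score matrix makes this quadratic in N.
        scores = torch.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)
        return scores @ v

    def linear_attention(q, k, v):
        # Replace softmax with a kernel feature map phi (here elu(x) + 1), then use
        # associativity: (phi(Q) phi(K)^T) V == phi(Q) (phi(K)^T V).
        phi_q, phi_k = F.elu(q) + 1, F.elu(k) + 1
        kv = phi_k.T @ v                                      # (d, d): cost linear in N
        z = phi_q @ phi_k.sum(dim=0, keepdim=True).T + 1e-6   # per-token normalizer, (N, 1)
        return (phi_q @ kv) / z
    ```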

    LASP employs a tiling approach that partitions input sequences into sub-sequence chunks distributed across GPUs. It splits the attention computation into intra-chunk and inter-chunk parts in order to exploit linear attention’s right-product advantage: intra-chunk attention uses conventional computation, while inter-chunk attention uses the kernel trick. The method also specifies data distribution, forward pass, and backward pass mechanisms to improve parallel-processing efficiency.
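
    The intra-chunk/inter-chunk split can be pictured roughly as follows. This is a simplified causal sketch with the feature map and normalizer omitted, not the paper’s fused GPU implementation.

    ```python
    # Rough sketch of chunked causal linear attention: attention within a chunk is
    # computed conventionally (left product), while attention to all earlier chunks
    # collapses into a running (d x d) state updated once per chunk.
    import torch

    def chunked_causal_linear_attention(q, k, v, chunk_size=128):
        n, d = q.shape
        out = torch.empty_like(v)
        state = torch.zeros(d, d, dtype=v.dtype, device=v.device)  # accumulated K^T V of past chunks
        for start in range(0, n, chunk_size):
            qc, kc, vc = (t[start:start + chunk_size] for t in (q, k, v))
            mask = torch.tril(torch.ones(qc.shape[0], qc.shape[0], device=q.device))
            intra = ((qc @ kc.T) * mask) @ vc    # intra-chunk: conventional masked attention
            inter = qc @ state                   # inter-chunk: right product against the state
            out[start:start + chunk_size] = intra + inter
            state = state + kc.T @ vc            # fold this chunk into the running state
        return out
    ```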

    Thanks to its efficient communication design, LASP delivers a significant throughput improvement for linear attention, surpassing DeepSpeed-Ulysses by 38% and Megatron by 136% at a 256K sequence length on a 1B-parameter model. With system optimizations such as kernel fusion and KV state caching, LASP also supports longer sequence lengths on the same cluster, reaching 2048K tokens for the 1B model and 512K for the 7B model.

    Key contributions of this research are as follows: 

    A new SP strategy tailored to linear attention: Linear attention-based models can scale to long sequences without being limited by the memory of a single GPU.

    Sequence length-independent communication overhead: The communication mechanism harnesses the right-product kernel trick of linear attention so that the size of the exchanged intermediate states is independent of sequence length (see the sketch after this list).

    GPU-friendly implementation: LASP’s execution on GPUs is optimized through careful system engineering, including kernel fusion and KV state caching.

    Data-parallel compatibility: LASP is compatible with all batch-level DDP methods, such as PyTorch/Legacy DDP, FSDP, and ZeRO-series optimizers.
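
    The sequence length-independence of the communication can be illustrated with a hypothetical sequence-parallel forward pass in which each GPU rank owns one chunk and only a (d x d) state travels between neighboring ranks over P2P send/recv. The ring-style schedule and function names below are assumptions for illustration, not the authors’ implementation.

    ```python
    # Hypothetical sketch: per-rank forward pass in a sequence-parallel layout.
    # Only a (d x d) state is exchanged between ranks, so the communication volume
    # does not grow with sequence length.
    import torch
    import torch.distributed as dist

    def sp_linear_attention_forward(qc, kc, vc):
        """qc, kc, vc: this rank's sub-sequence chunk, shape (chunk_len, d)."""
        rank, world = dist.get_rank(), dist.get_world_size()
        d = qc.shape[-1]
        state = torch.zeros(d, d, dtype=vc.dtype, device=vc.device)
        if rank > 0:
            dist.recv(state, src=rank - 1)       # receive accumulated state of earlier chunks
        mask = torch.tril(torch.ones(qc.shape[0], qc.shape[0], device=qc.device))
        out = ((qc @ kc.T) * mask) @ vc + qc @ state
        if rank < world - 1:
            dist.send(state + kc.T @ vc, dst=rank + 1)  # pass the updated (d x d) state onward
        return out
    ```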

    In conclusion, LASP overcomes the limitations of existing SP methods on linear transformers by exploiting linear attention’s features to improve parallelism efficiency and usability. P2P communication, kernel fusion, and KV state caching reduce communication traffic and improve GPU cluster utilization, and compatibility with batch-level DDP methods makes LASP practical for large-scale distributed training. Experiments highlight LASP’s advantages in scalability, speed, memory usage, and convergence compared with existing SP methods.

    Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.

    Source: MarkTechPost