
    UB-Mesh: A Cost-Efficient, Scalable Network Architecture for Large-Scale LLM Training

    April 3, 2025

As LLMs scale, their computational and bandwidth demands grow sharply, straining AI training infrastructure. Following scaling laws, LLMs improve comprehension, reasoning, and generation by expanding parameters and datasets, which requires increasingly powerful computing systems. Large-scale AI clusters now comprise tens of thousands of GPUs or NPUs; Llama 3, for example, was trained on a 16K-GPU setup over 54 days. With AI data centers deploying over 100K GPUs, scalable infrastructure is essential. Interconnect bandwidth requirements now surpass 3.2 Tbps per node, far beyond what traditional CPU-centric systems provide. The rising cost of symmetrical Clos network architectures makes cost-effective alternatives critical, alongside controlling operational expenses such as energy and maintenance. High availability is a further concern: massive training clusters suffer frequent hardware failures and therefore demand fault-tolerant network designs.

    Addressing these challenges requires rethinking AI data center architecture. First, network topologies should align with LLM training’s structured traffic patterns, which differ from traditional workloads. Tensor parallelism, responsible for most data transfers, operates within small clusters, while data parallelism involves minimal but long-range communication. Second, computing and networking systems must be co-optimized, ensuring effective parallelism strategies and resource distribution to avoid congestion and underutilization. Lastly, AI clusters must feature self-healing mechanisms for fault tolerance, automatically rerouting traffic or activating backup NPUs when failures occur. These principles—localized network architectures, topology-aware computation, and self-healing systems—are essential for building efficient, resilient AI training infrastructures.
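The locality argument above can be made concrete with a back-of-the-envelope traffic estimate: tensor parallelism all-reduces activations many times per step inside a small group, while data parallelism all-reduces gradients once per step across groups. The sketch below uses purely illustrative model dimensions (not figures from the paper) and a standard ring-all-reduce volume factor:

```python
# Toy per-step communication volumes for tensor parallelism (TP) vs.
# data parallelism (DP). All model dimensions are illustrative
# assumptions, not numbers from the UB-Mesh paper.

def tp_bytes_per_step(layers, batch_tokens, hidden, tp_degree, bytes_per_elem=2):
    # Megatron-style TP all-reduces activations of size (tokens x hidden)
    # a few times per layer; a ring all-reduce moves ~2*(p-1)/p of the data.
    allreduces_per_layer = 4  # typically 2 in forward, 2 in backward
    volume = batch_tokens * hidden * bytes_per_elem
    ring_factor = 2 * (tp_degree - 1) / tp_degree
    return layers * allreduces_per_layer * volume * ring_factor

def dp_bytes_per_step(params, dp_degree, bytes_per_elem=2):
    # DP all-reduces the gradient of every parameter once per step.
    ring_factor = 2 * (dp_degree - 1) / dp_degree
    return params * bytes_per_elem * ring_factor

tp = tp_bytes_per_step(layers=32, batch_tokens=8192, hidden=4096, tp_degree=8)
dp = dp_bytes_per_step(params=7_000_000_000, dp_degree=64)
print(f"TP traffic/step: {tp / 1e9:.1f} GB, repeated inside an 8-NPU group")
print(f"DP traffic/step: {dp / 1e9:.1f} GB, exchanged once across groups")
```

The key asymmetry is not the raw volume but where it flows: the TP gigabytes recur on every layer's critical path and must stay on short, high-bandwidth local links, whereas the DP exchange is a single long-range transfer per step that tolerates lower bisection bandwidth.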

    Huawei researchers introduced UB-Mesh, an AI data center network architecture designed for scalability, efficiency, and reliability. Unlike traditional symmetrical networks, UB-Mesh employs a hierarchically localized nD-FullMesh topology, optimizing short-range interconnects to minimize switch dependency. Based on a 4D-FullMesh design, its UB-Mesh-Pod integrates specialized hardware and a Unified Bus (UB) technique for flexible bandwidth allocation. The All-Path Routing (APR) mechanism enhances data traffic management, while a 64+1 backup system ensures fault tolerance. Compared to Clos networks, UB-Mesh reduces switch usage by 98% and optical module reliance by 93%, achieving 2.04× cost efficiency with minimal performance trade-offs in LLM training.

    UB-Mesh is a high-dimensional full-mesh interconnect architecture designed to enhance efficiency in large-scale AI training. It employs an nD-FullMesh topology, minimizing reliance on costly switches and optical modules by maximizing direct electrical connections. The system is built on modular hardware components linked through a UB interconnect, streamlining communication across CPUs, NPUs, and switches. A 2D full-mesh structure connects 64 NPUs within a rack, extending to a 4D full-mesh at the Pod level. For scalability, a SuperPod structure integrates multiple Pods using a hybrid Clos topology, balancing performance, flexibility, and cost-efficiency in AI data centers.
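To see why a hierarchical mesh cuts cabling, one can simply count links. Assuming the 64 NPUs in a rack are arranged as an 8×8 grid (a plausible reading of the 2D full-mesh, not a layout the article specifies), where each row and each column forms an 8-way full mesh:

```python
def full_mesh_links(n):
    # A full mesh over n endpoints needs n*(n-1)/2 bidirectional links.
    return n * (n - 1) // 2

# Flat full mesh over all 64 NPUs: every pair directly wired.
flat = full_mesh_links(64)

# Assumed 2D full mesh: 64 NPUs as an 8x8 grid, with an 8-way full
# mesh along each of the 8 rows and each of the 8 columns. Links stay
# short enough for direct electrical connections.
rows = 8 * full_mesh_links(8)
cols = 8 * full_mesh_links(8)
two_d = rows + cols

print(f"flat full mesh: {flat} links")   # grows as O(n^2)
print(f"2D full mesh:  {two_d} links")   # far fewer, all short-range
```

The 2D arrangement trades single-hop reachability for a drastic reduction in link count, and higher dimensions (the 4D mesh at Pod level) repeat the same trade at each level of the hierarchy.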

To enhance the efficiency of UB-Mesh in large-scale AI training, the researchers employ topology-aware strategies for collective communication and parallelization. For AllReduce, a Multi-Ring algorithm minimizes congestion by mapping rings onto the mesh efficiently and exploiting otherwise idle links to raise effective bandwidth. For all-to-all communication, a multi-path approach boosts data transmission rates, while hierarchical methods optimize bandwidth for broadcast and reduce operations. The study also refines parallelization through a systematic search that prioritizes high-bandwidth configurations. Comparisons with a Clos architecture show that UB-Mesh maintains competitive performance while significantly reducing hardware costs, making it a cost-effective alternative for large-scale model training.
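The building block of the Multi-Ring scheme is the classic ring AllReduce (reduce-scatter followed by all-gather); the paper's contribution is mapping several such rings onto disjoint mesh links. A minimal single-ring simulation, written from the standard algorithm rather than the paper's implementation, shows the mechanics:

```python
import numpy as np

def ring_allreduce(data):
    """Simulate ring AllReduce over a list of equal-length 1-D arrays,
    one per node. Returns the array each node ends up holding (the
    elementwise sum of all inputs)."""
    n = len(data)
    # Each node splits its array into n segments.
    segs = [list(np.array_split(d.astype(float), n)) for d in data]

    # Reduce-scatter: after n-1 steps, node i holds the fully reduced
    # segment (i + 1) % n. Snapshot payloads so steps are simultaneous.
    for step in range(n - 1):
        sends = [(i, (i - step) % n, segs[i][(i - step) % n].copy())
                 for i in range(n)]
        for i, k, payload in sends:
            segs[(i + 1) % n][k] += payload

    # All-gather: circulate the reduced segments for n-1 more steps.
    for step in range(n - 1):
        sends = [(i, (i + 1 - step) % n, segs[i][(i + 1 - step) % n].copy())
                 for i in range(n)]
        for i, k, payload in sends:
            segs[(i + 1) % n][k] = payload

    return [np.concatenate(s) for s in segs]

# Demo: 4 nodes, each holding a distinct gradient shard.
grads = [np.full(8, float(i)) for i in range(4)]
result = ring_allreduce(grads)
```

Each node transmits roughly 2(n-1)/n of its data in total, which is bandwidth-optimal; running multiple rings concurrently over different link sets, as Multi-Ring does, multiplies the usable bandwidth without changing this per-ring bound.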

At the hardware level, the UB IO controller incorporates a specialized co-processor, the Collective Communication Unit (CCU), to offload collective communication tasks. The CCU manages data transfers, inter-NPU transmissions, and in-line data reduction using an on-chip SRAM buffer, minimizing redundant memory copies, reducing HBM bandwidth consumption, and improving compute-communication overlap. UB-Mesh also efficiently supports massive-expert MoE models through hierarchical all-to-all optimization and load/store-based data transfer. In conclusion, the study introduces UB-Mesh, an nD-FullMesh network architecture for LLM training that offers cost-efficient, high-performance networking with 95%+ linearity, 7.2% improved availability, and 2.04× better cost efficiency than Clos networks.
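The benefit of hierarchical all-to-all for MoE expert routing can be illustrated by counting messages. In a generic two-phase scheme (a common pattern; the paper's exact scheme may differ), nodes first exchange within their group over short links, then each group sends one aggregated transfer per remote group:

```python
def flat_all_to_all(n):
    # Every node sends a distinct message to every other node.
    return n * (n - 1)

def hierarchical_all_to_all(groups, per_group):
    # Phase 1: full all-to-all inside each group, over cheap local links.
    intra = groups * per_group * (per_group - 1)
    # Phase 2: payloads bound for each remote group are bundled into one
    # bulk transfer per ordered group pair, crossing long links.
    inter = groups * (groups - 1)
    return intra, inter

print(flat_all_to_all(64))            # long-range messages, flat exchange
print(hierarchical_all_to_all(8, 8))  # (intra, inter) with 8 groups of 8
```

For 64 nodes, a flat exchange needs 4032 point-to-point messages, many crossing expensive long-range links; the hierarchical version confines most traffic to 448 intra-group messages and reduces the long-range count to 56 bulk transfers, which is exactly the kind of traffic shaping the nD-FullMesh topology rewards.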


Check out the Paper. All credit for this research goes to the researchers of this project.

    The post UB-Mesh: A Cost-Efficient, Scalable Network Architecture for Large-Scale LLM Training appeared first on MarkTechPost.

