
    MiniMax AI Releases MiniMax-M1: A 456B Parameter Hybrid Model for Long-Context and Reinforcement Learning (RL) Tasks

    June 19, 2025

    The Challenge of Long-Context Reasoning in AI Models

    Large reasoning models are designed not only to understand language but also to work through multi-step processes that demand sustained attention and contextual comprehension over long spans. As expectations for AI grow, especially in real-world software development settings, researchers have sought architectures that can handle longer inputs and sustain deep, coherent reasoning chains without prohibitive computational cost.

    Computational Constraints with Traditional Transformers

    The primary difficulty in expanding these reasoning capabilities is the computational load that comes with longer generation lengths. Traditional transformer-based models use softmax attention, whose cost scales quadratically with sequence length, which limits how efficiently they can handle long inputs or extended chains of thought. The problem is most acute in real-time or cost-sensitive applications, where inference expense dominates. The sketch below makes the scaling gap concrete.
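    As a back-of-the-envelope illustration (the constants are assumptions, not measured kernel costs for any model named here), compare the floating-point work of a softmax attention layer, which grows with the square of the sequence length, against a linear-attention-style layer, which grows linearly:

    # Rough FLOP counts per attention layer; illustrative only.
    def softmax_attention_flops(seq_len: int, head_dim: int = 128) -> int:
        # QK^T and the attention-weighted sum over V each cost about
        # seq_len^2 * head_dim multiply-accumulates: quadratic in seq_len.
        return 2 * seq_len * seq_len * head_dim

    def linear_attention_flops(seq_len: int, head_dim: int = 128) -> int:
        # Linear/lightning-style attention carries a head_dim x head_dim
        # running state, so the cost grows linearly in seq_len.
        return 2 * seq_len * head_dim * head_dim

    for n in (8_000, 100_000, 1_000_000):
        ratio = softmax_attention_flops(n) / linear_attention_flops(n)
        print(f"{n:>9,} tokens: softmax ~{ratio:,.0f}x the cost of linear attention")

    At a million tokens the quadratic term is several thousand times the linear one, which is the gap that hybrid designs such as MiniMax-M1's try to close.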

    Existing Alternatives and Their Limitations

    Efforts to address this issue have yielded a range of methods, including sparse attention and linear attention variants. Some teams have experimented with state-space models and recurrent networks as alternatives to traditional attention structures. However, these innovations have seen limited adoption in the most competitive reasoning models due to either architectural complexity or a lack of scalability in real-world deployments. Even large-scale systems, such as Tencent’s Hunyuan-T1, which utilizes a novel Mamba architecture, remain closed-source, thereby restricting wider research engagement and validation.

    Introduction of MiniMax-M1: A Scalable Open-Weight Model

    Researchers at MiniMax AI introduced MiniMax-M1, a new open-weight, large-scale reasoning model that combines a mixture-of-experts architecture with lightning attention. Built as an evolution of the MiniMax-Text-01 model, MiniMax-M1 contains 456 billion parameters, with 45.9 billion activated per token. It supports context lengths of up to 1 million tokens, eight times the capacity of DeepSeek R1, and it addresses compute scalability at inference time, consuming only 25% of the FLOPs required by DeepSeek R1 at a generation length of 100,000 tokens. It was trained with large-scale reinforcement learning on a broad range of tasks, from mathematics and coding to software engineering, marking a shift toward practical, long-context AI models.
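    To put those figures in perspective, 45.9 billion of 456 billion parameters means only about 10% of the network's weights are exercised for any single token; that sparsity is what keeps the per-token inference cost of a mixture-of-experts design far below that of a dense model of the same total size.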

    Hybrid Attention with Lightning Attention and Softmax Blocks

    To make this architecture efficient, MiniMax-M1 employs a hybrid attention scheme in which one transformer block with traditional softmax attention follows every seven blocks of lightning attention, significantly reducing computational complexity while preserving performance. Lightning attention itself is an I/O-aware adaptation of linear attention and is particularly effective at scaling reasoning to hundreds of thousands of tokens. For reinforcement learning efficiency, the researchers introduced a novel algorithm called CISPO: instead of clipping token updates as traditional methods do, CISPO clips the importance sampling weights, enabling stable training and consistent token contributions even in off-policy updates. The interleaving pattern is sketched below.
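    A minimal sketch of that interleaving (the function name and fixed cycle length are illustrative assumptions, not the model's actual module names):

    # Illustrative layer schedule for a hybrid stack: every eighth block
    # uses softmax attention; the seven before it use lightning attention.
    def attention_kind(layer_idx: int, cycle: int = 8) -> str:
        return "softmax" if (layer_idx + 1) % cycle == 0 else "lightning"

    layout = [attention_kind(i) for i in range(16)]
    print(layout)  # seven 'lightning' entries, one 'softmax', repeated

    The occasional softmax block restores full pairwise token interaction, while the lightning blocks keep the bulk of the stack linear in sequence length.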

    The CISPO Algorithm and RL Training Efficiency

    The CISPO algorithm proved essential in overcoming the training instability seen in hybrid architectures. In comparative studies using the Qwen2.5-32B baseline, CISPO achieved a 2x speedup over DAPO. Leveraging this, the full reinforcement learning run for MiniMax-M1 was completed in just three weeks on 512 H800 GPUs, at a rental cost of approximately $534,700 (about 258,000 GPU-hours, or roughly $2 per GPU-hour). The model was trained on a diverse dataset comprising 41 logic tasks generated via the SynLogic framework, plus real-world software engineering environments derived from SWE-bench. These environments used execution-based rewards to guide training, yielding stronger results on practical coding tasks.
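    The published objective has more detail, but the core mechanism described above, clipping the importance sampling weight and holding it fixed rather than clipping the update itself, can be sketched as follows (the epsilon values and PyTorch framing are assumptions for illustration, not the paper's exact formulation):

    import torch

    def cispo_loss(logp_new: torch.Tensor,
                   logp_old: torch.Tensor,
                   advantages: torch.Tensor,
                   eps_low: float = 0.2,
                   eps_high: float = 0.2) -> torch.Tensor:
        # Per-token importance-sampling weight between the current policy
        # and the behavior policy that generated the rollout.
        ratio = torch.exp(logp_new - logp_old)
        # Clip the weight itself and detach it, so it acts as a fixed
        # per-token coefficient; every token keeps a gradient through
        # logp_new, unlike PPO-style clipping, which zeroes some tokens out.
        coeff = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high).detach()
        return -(coeff * advantages * logp_new).mean()

    Because the clipped weight is treated as a constant, off-policy tokens are down-weighted rather than silenced, which is the stability property credited above for making hybrid-architecture RL training tractable.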

    Benchmark Results and Comparative Performance

    MiniMax-M1 delivered compelling benchmark results. Compared to DeepSeek-R1 and Qwen3-235B, it excelled in software engineering, long-context processing, and agentic tool use. Although it trailed the latest DeepSeek-R1-0528 in math and coding contests, it surpassed both OpenAI o3 and Claude 4 Opus in long-context understanding benchmarks. Furthermore, it outperformed Gemini 2.5 Pro in the TAU-Bench agent tool use evaluation.

    Conclusion: A Scalable and Transparent Model for Long-Context AI

    MiniMax-M1 presents a significant step forward by offering both transparency and scalability. By addressing the dual challenge of inference efficiency and training complexity, the research team at MiniMax AI has set a precedent for open-weight reasoning models. This work not only brings a solution to compute constraints but also introduces practical methods for scaling language model intelligence into real-world applications.


    Check out the Paper, Model and GitHub Page. All credit for this research goes to the researchers of this project.

    The post MiniMax AI Releases MiniMax-M1: A 456B Parameter Hybrid Model for Long-Context and Reinforcement Learning (RL) Tasks appeared first on MarkTechPost.

