    Do Reasoning Models Really Need Transformers?: Researchers from TogetherAI, Cornell, Geneva, and Princeton Introduce M1—A Hybrid Mamba-Based AI that Matches SOTA Performance at 3x Inference Speed

    April 18, 2025

    Effective reasoning is crucial for solving complex problems in fields such as mathematics and programming, and LLMs have demonstrated significant improvements through long chain-of-thought reasoning. However, transformer-based models face limitations: attention has quadratic computational complexity in sequence length, and the memory footprint (the KV cache) grows linearly as generation proceeds, making long sequences expensive to process. While techniques such as chain-of-thought (CoT) reasoning and adaptive compute allocation boost model performance, they also increase computational cost. Generating multiple outputs and selecting the best one has likewise been explored as a way to improve reasoning accuracy, but such methods still depend on transformer-based architectures, which struggle to scale in large-batch, long-context settings.
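    To make the cost of long sequences concrete, the sketch below (a toy NumPy illustration, not code from M1 or any of the cited systems) shows how the self-attention score matrix grows quadratically with sequence length: doubling the number of tokens roughly quadruples the memory needed just for the attention scores.

    import numpy as np

    def attention_scores_bytes(n_tokens: int, d_model: int = 64) -> int:
        # Single-head self-attention materializes an (n x n) score matrix,
        # so compute and memory for the scores grow quadratically with n.
        q = np.random.randn(n_tokens, d_model)
        k = np.random.randn(n_tokens, d_model)
        scores = q @ k.T / np.sqrt(d_model)   # shape (n_tokens, n_tokens)
        return scores.nbytes

    for n in (1000, 2000, 4000):
        print(f"n={n}: score matrix ~{attention_scores_bytes(n) / 1e6:.0f} MB")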

    To address these challenges, alternatives to the transformer architecture have been explored, including RNN-based models, state space models (SSMs), and linear attention mechanisms, which offer more efficient memory usage and faster inference. Hybrid models that combine self-attention with subquadratic layers have also been developed to improve inference-time scaling. Moreover, knowledge distillation, which transfers capabilities from large models to smaller ones, has shown promise in preserving reasoning performance while reducing model size. Cross-architecture distillation, such as transferring knowledge from transformers to RNNs or SSMs, is an active research direction aimed at achieving strong reasoning capabilities in smaller, more efficient models.
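    By contrast, a linear RNN or SSM carries a fixed-size state from token to token, so per-token compute and memory do not grow with context length. The toy diagonal recurrence below illustrates only this property; it is not the Mamba layer used in M1.

    import numpy as np

    def linear_rnn(x, A, B, C):
        # Toy diagonal linear recurrence (SSM-flavoured):
        #   h_t = A * h_{t-1} + B * x_t,   y_t = C . h_t
        # The carried state h has a fixed size regardless of how many tokens
        # have been processed, so per-token cost stays constant.
        h = np.zeros_like(A)
        outputs = []
        for x_t in x:               # one scalar "token" at a time
            h = A * h + B * x_t     # O(d_state) per step, no growing cache
            outputs.append(C @ h)
        return np.array(outputs)

    d_state = 16                    # same order as the SSM state size cited later
    A = np.full(d_state, 0.9)       # stable diagonal transition
    B = np.ones(d_state)
    C = np.random.randn(d_state) / d_state
    print(linear_rnn(np.random.randn(8), A, B, C).shape)  # (8,)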

    Researchers from TogetherAI, Cornell University, the University of Geneva, and Princeton University present M1, a hybrid linear RNN reasoning model built on the Mamba architecture that enables memory-efficient inference. M1 is trained through a combination of distillation, supervised fine-tuning, and reinforcement learning. Experimental results on the AIME and MATH benchmarks show that M1 outperforms previous linear RNN models and matches the performance of DeepSeek R1-distilled transformers. Additionally, M1 achieves a 3x inference speedup over transformers of the same size, which makes accuracy-boosting techniques such as self-consistency and verification practical for large-scale inference.

    The M1 model is built through a three-stage process: distillation, SFT, and RL. First, a pretrained Transformer model is distilled into the Mamba architecture, with a modified approach to the linear projections and additional parameters for better performance. In the SFT stage, the model is fine-tuned on math problem datasets, first with general datasets and then with reasoning-focused datasets from the R1 model series. Finally, RL is applied using GRPO (Group Relative Policy Optimization), which strengthens the model’s reasoning ability by training with advantage estimates and encouraging diversity in its responses, further boosting performance.
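    The defining ingredient of GRPO is a group-relative advantage: several responses are sampled for the same prompt, each is scored, and each response’s advantage is its reward standardized against the rest of the group. The snippet below is a minimal sketch of that idea under the usual GRPO formulation, not the authors’ training code.

    import numpy as np

    def group_relative_advantages(rewards, eps=1e-6):
        # For a group of responses to one prompt, standardize each reward
        # against the group mean and std; no separate value network is needed.
        r = np.asarray(rewards, dtype=float)
        return (r - r.mean()) / (r.std() + eps)

    # e.g. four sampled solutions to one math problem, scored 1 if correct else 0
    print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))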

    The experiments use the Llama3.2-3B-Instruct model as the target for distillation, with the Mamba layers using an SSM state size of 16. The evaluation spans a range of math benchmarks, including MATH500, AIME25, and OlympiadBench, assessing model performance in terms of coverage and accuracy. The pass@k metric is used for coverage, indicating the likelihood that at least one correct solution appears among k generated samples. The model’s performance is compared with that of various state-of-the-art models, yielding competitive results, particularly on reasoning tasks. Inference speed and test-time scaling are also evaluated, demonstrating M1’s efficiency in large-batch generation and longer sequence contexts.
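    For reference, coverage is typically reported with the standard unbiased pass@k estimator (the formula below is the widely used estimator, assumed here rather than taken from the paper’s code): given n generated samples of which c are correct, pass@k = 1 - C(n-c, k) / C(n, k).

    from math import comb

    def pass_at_k(n: int, c: int, k: int) -> float:
        # Probability that at least one of k samples drawn without replacement
        # from n generated samples (c of them correct) solves the problem.
        if n - c < k:
            return 1.0
        return 1.0 - comb(n - c, k) / comb(n, k)

    # e.g. 3 correct solutions out of 16 samples, evaluated at k=4
    print(round(pass_at_k(n=16, c=3, k=4), 3))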

    In conclusion, M1 is a hybrid reasoning model based on the Mamba architecture, designed to overcome the scalability issues of Transformer models. By employing distillation and fine-tuning techniques, M1 achieves performance comparable to state-of-the-art reasoning models. It offers more than 3x faster inference than similar-sized Transformer models, especially with large batch sizes, making resource-intensive strategies like self-consistency more feasible. M1 outperforms linear RNN models and matches DeepSeek R1’s performance on benchmarks such as AIME and MATH. Additionally, it demonstrates superior accuracy under fixed time budgets, making it a strong, efficient alternative to Transformer-based architectures for mathematical reasoning tasks.
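    As a rough illustration of why faster inference matters here, self-consistency in its simplest form samples several reasoning chains and takes a majority vote over their final answers, so a model with cheaper generation can afford more votes within the same time budget. The sketch below uses a hypothetical generate_answer stub in place of a real model call.

    from collections import Counter
    import random

    def generate_answer(prompt: str) -> str:
        # Hypothetical stand-in for sampling one reasoning chain and extracting
        # its final answer; a real system would call the model here.
        return random.choice(["42", "42", "41"])

    def self_consistency(prompt: str, n_samples: int = 8) -> str:
        # Sample several chains and return the most common final answer.
        votes = Counter(generate_answer(prompt) for _ in range(n_samples))
        return votes.most_common(1)[0][0]

    print(self_consistency("What is 6 * 7?"))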


    Here is the Paper.

    Source: MarkTechPost
