
    From Exploration Collapse to Predictable Limits: Shanghai AI Lab Proposes Entropy-Based Scaling Laws for Reinforcement Learning in LLMs

    June 3, 2025

    Recent advances in reasoning-centric large language models (LLMs) have expanded the scope of reinforcement learning (RL) beyond narrow, task-specific applications, enabling broader generalization and reasoning capabilities. However, this shift introduces significant challenges, particularly in scaling the training compute required for learning from experience. Unlike imitation learning through pre-training and fine-tuning, RL demands a more computationally intensive approach. A central issue is the decline in policy entropy, which affects the balance between exploiting known strategies and exploring new ones. This exploitation-exploration trade-off is fundamental in RL, and controlling policy entropy has become critical to maintaining effective exploration during training.
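
    Policy entropy in this setting is the average uncertainty of the model's next-token distribution over the tokens it generates; when it collapses toward zero, the policy becomes nearly deterministic and stops exploring. A minimal PyTorch sketch of how it is typically monitored during RL training (tensor names and shapes are illustrative, not taken from the paper):

    ```python
    import torch
    import torch.nn.functional as F

    def policy_entropy(logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        """Average per-token entropy of the policy's next-token distribution.

        logits: [batch, seq_len, vocab] raw model outputs
        mask:   [batch, seq_len] 1.0 for generated (response) tokens, 0.0 elsewhere
        """
        log_probs = F.log_softmax(logits, dim=-1)
        probs = log_probs.exp()
        token_entropy = -(probs * log_probs).sum(dim=-1)   # [batch, seq_len]
        return (token_entropy * mask).sum() / mask.sum().clamp(min=1)
    ```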

    Existing efforts address the exploration-exploitation trade-off in RL by utilizing policy entropy. Maximum entropy RL adds an entropy regularization term to the reward function, promoting uncertainty in action selection and encouraging broader exploration. While this technique has been widely adopted in conventional RL algorithms, its usefulness for LLMs remains debated. Moreover, the predictability of RL training for LLMs is largely unexplored: neural scaling laws guide LLM pre-training, but comparable predictive frameworks for RL training remain limited. Existing RL methods for LLMs with verifiable rewards show promise for improving reasoning, yet their core mechanisms are not well understood.
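
    For reference, the maximum-entropy recipe mentioned above amounts to adding a weighted entropy bonus to the policy-gradient objective. A hedged sketch, reusing per-token entropies of the kind computed in the previous snippet (the coefficient and tensor names are illustrative):

    ```python
    import torch

    def entropy_regularized_pg_loss(log_probs_taken: torch.Tensor,
                                    advantages: torch.Tensor,
                                    token_entropy: torch.Tensor,
                                    mask: torch.Tensor,
                                    beta: float = 0.01) -> torch.Tensor:
        """Vanilla policy-gradient loss plus an entropy bonus (maximum-entropy-style).

        log_probs_taken: [batch, seq_len] log-prob of each sampled response token
        advantages:      [batch, seq_len] advantage estimates
        token_entropy:   [batch, seq_len] per-token entropy of the current policy
        beta:            entropy-bonus coefficient (illustrative value)
        """
        denom = mask.sum().clamp(min=1)
        pg_loss = -(log_probs_taken * advantages * mask).sum() / denom
        entropy_bonus = (token_entropy * mask).sum() / denom
        return pg_loss - beta * entropy_bonus  # maximizing entropy lowers the loss
    ```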

    Researchers from Shanghai AI Laboratory, Tsinghua University, UIUC, Peking University, Nanjing University, and CUHK propose an approach to address the collapse of policy entropy in RL for reasoning-centric LLMs. They establish an empirical transformation equation, R = −a·exp(H) + b, where H is policy entropy, R is downstream performance, and a and b are fitting coefficients. This empirical law strongly suggests that policy performance is traded for policy entropy and is therefore bottlenecked by its exhaustion. Investigating entropy dynamics, their derivation shows that the change in policy entropy is driven by the covariance between action probability and the change in logits. They also propose two techniques, Clip-Cov and KL-Cov, which respectively clip, and apply a KL penalty to, tokens with high covariance.
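
    Because the fitted law is linear in exp(H), its coefficients can be recovered by ordinary least squares from (entropy, performance) pairs logged during training, and setting H = 0 gives the predicted performance ceiling R = −a + b once entropy is exhausted. A minimal sketch with made-up numbers (the values below are illustrative, not from the paper):

    ```python
    import numpy as np

    # Hypothetical (entropy, accuracy) pairs logged over the course of RL training.
    H = np.array([1.20, 0.85, 0.55, 0.30, 0.12])
    R = np.array([0.35, 0.46, 0.53, 0.58, 0.61])

    # R = -a * exp(H) + b is linear in x = exp(H), so ordinary least squares suffices.
    x = np.exp(H)
    slope, intercept = np.polyfit(x, R, deg=1)   # R ~ slope * x + intercept
    a, b = -slope, intercept

    # With entropy fully exhausted (H -> 0, exp(H) -> 1), the predicted ceiling is:
    r_ceiling = -a + b
    print(f"a = {a:.3f}, b = {b:.3f}, predicted performance at H = 0: {r_ceiling:.3f}")
    ```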

    To investigate and validate the entropy collapse phenomenon in RL-tuned LLMs, the researchers applied RL to LLMs on verifiable tasks such as math and coding, using an autoregressive generation setup in which models produce token sequences from input prompts. The study covers 11 widely adopted open-source models spanning four families: Qwen2.5, Mistral, LLaMA, and DeepSeek, with parameters ranging from 0.5B to 32B. Evaluations are performed on eight public benchmarks, including MATH500, AIME 2024, AMC, and Eurus-2-RL-Code. RL training follows the veRL framework in a zero-shot setting, using algorithms such as GRPO, REINFORCE++, and PRIME to optimize policy performance while entropy dynamics are monitored.

    The proposed Clip-Cov and KL-Cov techniques were evaluated on Qwen2.5 models using the DAPOMATH dataset for math tasks, and they achieve non-trivial performance gains across all benchmarks. Compared with the GRPO baseline, they improve performance by 2.0% on average for the 7B model and by 6.4% for the 32B model. Both methods also maintain a higher level of entropy throughout training; for example, when the baseline's entropy reaches a plateau, KL-Cov still sustains an entropy level over 10 times higher. The gains are more substantial on the larger Qwen2.5-32B model, with improvements of 15.0% and 14.6% over GRPO on the most challenging benchmarks, AIME24 and AIME25, respectively.
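
    To make the token-level intervention concrete, below is a rough sketch of the Clip-Cov idea as described above: identify the small fraction of response tokens with the highest covariance and exclude them from the policy-gradient update. The per-token covariance proxy (centered log-probability times centered advantage), the clip fraction, and all names are assumptions made for illustration; the authors' exact implementation is available in their GitHub repository.

    ```python
    import torch

    def clip_cov_keep_mask(log_probs: torch.Tensor, advantages: torch.Tensor,
                           mask: torch.Tensor, clip_frac: float = 2e-4) -> torch.Tensor:
        """Return a mask that drops the highest-covariance response tokens
        from the policy-gradient update (Clip-Cov-style intervention, illustrative).
        """
        valid = mask.bool()
        lp = log_probs[valid]
        adv = advantages[valid]
        # Per-token covariance proxy: centered log-prob times centered advantage.
        cov = (lp - lp.mean()) * (adv - adv.mean())

        k = max(1, int(clip_frac * cov.numel()))     # number of tokens to clip
        _, top_idx = torch.topk(cov, k)              # highest-covariance tokens

        keep = torch.ones_like(cov, dtype=torch.bool)
        keep[top_idx] = False                        # exclude them from the update

        out = torch.zeros_like(mask, dtype=torch.bool)
        out[valid] = keep
        return out  # multiply the per-token PG loss by this mask before reducing
    ```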

    In conclusion, the researchers address the challenge of policy entropy collapse in RL for reasoning-centric LLMs. The findings highlight a trade-off in which performance improvements come at the cost of diminished exploration, which ultimately limits further gains. Through theoretical analysis and empirical validation, they identify entropy dynamics as a key bottleneck and propose two effective regularization strategies, Clip-Cov and KL-Cov, to manage high-covariance tokens and sustain exploration. As RL emerges as a crucial axis for scaling beyond pre-training, addressing entropy collapse becomes essential. This work provides foundational insights into the role of entropy, guiding future efforts to scale RL toward more intelligent and capable language models.


    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.

    The post From Exploration Collapse to Predictable Limits: Shanghai AI Lab Proposes Entropy-Based Scaling Laws for Reinforcement Learning in LLMs appeared first on MarkTechPost.
