
    This AI Paper Introduces GRPO-based Open-RS: A Low-Cost Reinforcement Learning Framework to Enhance Reasoning in Small Language Models

    March 26, 2025

A particular focus in large language model research has been improving logical thinking and problem-solving skills. Reinforcement learning (RL) is increasingly used in this space, both for massive models and for compact versions that must perform well in restricted computing environments. One major challenge in this field is improving a model's reasoning capability without relying on extremely large infrastructure or excessive training time. Leading models require expensive hardware and proprietary data pipelines, putting them out of reach for smaller labs or companies. This raises the question of whether smaller models can be enhanced using cost-efficient approaches and achieve performance comparable to their larger counterparts on challenging tasks like math reasoning.

    Several methods have been explored to address this. Chain-of-thought prompting helps guide models through problem steps. Search algorithms such as Beam Search and Monte Carlo Tree Search are also used to improve the logical flow of answers. Reinforcement learning itself has been tested in multiple settings. However, many of these approaches are still bound by the same issues: they depend on massive datasets or lead to unstable performance in small-scale setups. Furthermore, the results often fail to match those of proprietary models like OpenAI’s o1-preview.
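Chain-of-thought prompting, mentioned above, amounts to instructing the model to reason stepwise before answering. A minimal sketch of such a prompt builder is below; the exact template and the `\boxed{}` answer convention are assumptions for illustration, not the paper's prompt.

```python
# Sketch: a minimal chain-of-thought prompt builder. The template text
# is illustrative; the paper's actual prompt may differ.
def build_cot_prompt(question: str) -> str:
    """Wrap a math question in a chain-of-thought style instruction."""
    return (
        "Solve the following problem. Think step by step, "
        "then give the final answer in \\boxed{} notation.\n\n"
        f"Problem: {question}\n\nSolution:"
    )

prompt = build_cot_prompt("What is 12 * 13?")
```

The prompt is then fed to the model as-is; the stepwise instruction tends to elicit intermediate reasoning before the final boxed answer.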

A team from Knovel Engineering Lab in Singapore and VNU University of Science in Vietnam set out to overcome these problems. The researchers used a 1.5-billion-parameter model named DeepSeek-R1-Distill-Qwen-1.5B. They adopted the Group Relative Policy Optimization (GRPO) algorithm for their setup, training the model on four NVIDIA A40 GPUs with 48 GB VRAM each, all within a strict 24-hour limit. Their key objective was to enhance the model's reasoning without large financial or computational investment. Their training consumed only $42 in computing costs, a drastic reduction compared to baselines that require thousands of dollars.
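The core idea behind GRPO is to score a group of sampled responses against each other rather than against a learned value function. A minimal sketch of that group-relative normalization, with illustrative reward values:

```python
# Sketch of GRPO's group-relative advantage: rewards for a group of
# responses sampled from the same prompt are normalized against the
# group's mean and standard deviation, so no separate critic model is
# needed. The reward values here are made up for illustration.
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Advantage_i = (r_i - mean(r)) / (std(r) + eps) over one group."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

adv = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Responses scoring above the group mean receive positive advantages and are reinforced; below-average ones are penalized, which is what lets GRPO drop the critic that PPO-style methods require.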

To achieve this, the team assembled a dataset of 39,659 mathematics-specific questions by refining two existing datasets, open-s1 and open-deepscaler. The filtering process involved removing trivial or noisy questions using models such as Qwen2.5-7B-Instruct and DeepSeek-R1-Distill-Qwen-1.5B. The reward system was rule-based and focused on three components: correctness of answers (using boxed notation), structural formatting (enforced with tags), and output length (shaped with a cosine function to promote concise reasoning). The GRPO algorithm was used to sample groups of responses and apply score-based optimization, avoiding the need for a critic model and thus reducing computational demands further.
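The three reward components above can be sketched as a single rule-based function. The component weights, the `<think>` tag names, and the 1000–3500 token range for the cosine term are assumptions made for illustration; the paper's exact coefficients may differ.

```python
# Sketch of a rule-based reward: correctness via \boxed{} answers,
# a format check for reasoning tags, and a cosine length term.
# Weights, tag names, and the token range are assumptions.
import math
import re

def cosine_length_reward(n_tokens, lo=1000, hi=3500):
    """Decay smoothly from 1 at `lo` tokens down to 0 at `hi` tokens."""
    if n_tokens <= lo:
        return 1.0
    if n_tokens >= hi:
        return 0.0
    frac = (n_tokens - lo) / (hi - lo)
    return 0.5 * (1 + math.cos(math.pi * frac))

def rule_based_reward(output: str, gold: str, n_tokens: int) -> float:
    # Correctness: extract the \boxed{...} answer and compare to gold.
    m = re.search(r"\\boxed\{([^}]*)\}", output)
    correct = 1.0 if m and m.group(1).strip() == gold else 0.0
    # Format: reward the presence of structural reasoning tags.
    formatted = 0.5 if "<think>" in output and "</think>" in output else 0.0
    # Length: cosine shaping to discourage overly long outputs.
    return correct + formatted + 0.25 * cosine_length_reward(n_tokens)
```

Because the reward is purely rule-based, it can be computed cheaply for every sampled response in a GRPO group without any learned reward model.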

    The performance of this approach was tested across five benchmark datasets: AMC23, AIME24, MATH-500, OlympiadBench, and Minerva. In one experiment, using just the open-s1 dataset, the model’s AMC23 accuracy improved from 63% to 70% within the first 100 global steps but later declined. In another trial that combined 7,000 samples of mixed difficulty, the accuracy on AMC23 rose to 80%, and AIME24 reached 46.7%. The model named Open-RS2, trained in that setup, also showed competitive scores on OlympiadBench (52.4%) and MATH-500 (85%). In the final experiment, the cosine reward helped regulate output length to a range of 1000–3500 tokens, and the model maintained 72.5% accuracy on AMC23 and 84.4% on MATH-500.

    This research showed that effective reasoning in small language models is achievable even with limited resources. The problem of training small models without significant hardware investment was addressed with a low-cost and efficient training strategy. The proposed method used reinforcement learning and curated data to deliver surprisingly strong results. With continued improvements in reward design and optimization stability, small models may soon rival their larger counterparts in practical reasoning tasks.


Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.

    The post This AI Paper Introduces GRPO-based Open-RS: A Low-Cost Reinforcement Learning Framework to Enhance Reasoning in Small Language Models appeared first on MarkTechPost.
