
    Recursive IntroSpEction (RISE): A Machine Learning Approach for Fine-Tuning LLMs to Improve Their Own Responses Over Multiple Turns Sequentially

    July 29, 2024

    Large language models (LLMs) have gained significant attention as powerful tools for various tasks, but their potential as general-purpose decision-making agents presents unique challenges. To function effectively as agents, LLMs must go beyond simply generating plausible text completions. They need to exhibit interactive, goal-directed behavior to accomplish specific tasks. This requires two critical abilities: actively seeking information about the task and making decisions that can be improved through “thinking” and verification at inference time. Current methodologies struggle to achieve these capabilities, particularly in complex tasks requiring logical reasoning. While LLMs often possess the necessary knowledge, they frequently fail to apply it effectively when asked to correct their own mistakes sequentially. This limitation highlights the need for a more robust approach to enable test-time self-improvement in LLM agents.

    Researchers have attempted various approaches to enhance the reasoning and thinking capabilities of foundation models for downstream applications. These methods primarily focus on developing prompting techniques for effective multi-turn interaction with external tools, sequential refinement of predictions through reflection, thought verbalization, self-critique and revision, or using other models for response criticism. While some of these approaches show promise in improving responses, they often rely on detailed error traces or external feedback to succeed.

    Prompting techniques, although useful, have limitations. Studies indicate that intrinsic self-correction guided solely by the LLM itself is often infeasible for off-the-shelf models, even when they possess the required knowledge to tackle the prompt. Fine-tuning LLMs to obtain self-improvement capabilities has also been explored, using strategies such as training on self-generated responses, learned verifiers, search algorithms, contrastive prompting on negative data, and iterated supervised or reinforcement learning.

However, these existing methods primarily focus on improving single-turn performance rather than instilling the capability to improve over sequential turns of interaction. While some work has explored fine-tuning LLMs for multi-turn interaction directly via reinforcement learning, that line of work addresses a different set of challenges from the one targeted here: solving what is fundamentally a single-turn problem through multiple turns of self-correction.

    Researchers from Carnegie Mellon University, UC Berkeley, and MultiOn present RISE (Recursive IntroSpEction), a unique approach to enhance LLMs’ self-improvement capabilities. This method employs an iterative fine-tuning procedure that frames single-turn prompts as multi-turn Markov decision processes. By incorporating principles from online imitation learning and reinforcement learning, RISE develops strategies for multi-turn data collection and training. This approach enables LLMs to recursively detect and correct mistakes in subsequent iterations, a capability previously thought challenging to attain. Unlike traditional methods focusing on single-turn performance, RISE aims to instill dynamic self-improvement in LLMs, potentially revolutionizing their problem-solving abilities in complex scenarios.

    RISE presents an innovative approach to fine-tune foundation models for self-improvement over multiple turns. The method begins by converting single-turn problems into a multi-turn Markov Decision Process (MDP). This MDP construction transforms prompts into initial states, with model responses serving as actions. The next state is created by concatenating the current state, the model’s action, and a fixed introspection prompt. Rewards are based on answer correctness. RISE then employs strategies for data collection and learning within this MDP framework. The approach uses either distillation from a more capable model or self-distillation to generate improved responses. Finally, RISE applies reward-weighted supervised learning to train the model, enabling it to enhance its predictions over sequential attempts.
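The MDP construction and data-collection loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: `generate`, `is_correct`, and the wording of the introspection prompt are hypothetical stand-ins for the actual model, reward check, and prompt used by RISE.

```python
# Sketch of RISE's multi-turn MDP rollout and reward-weighted data collection.
# All names below (generate, is_correct, INTROSPECTION_PROMPT) are illustrative
# placeholders, not the paper's actual components.

INTROSPECTION_PROMPT = (
    "Your previous answer may contain mistakes. "
    "Review it and provide an improved answer."
)

def generate(state: str) -> str:
    """Placeholder for a response from the model (or a more capable teacher)."""
    return "candidate answer for: " + state[:20]

def is_correct(action: str, gold: str) -> bool:
    """Placeholder reward check against the ground-truth answer."""
    return gold in action

def rollout(prompt: str, gold: str, num_turns: int = 3):
    """Unroll a single-turn problem as one episode of the multi-turn MDP."""
    state = prompt                       # initial state s_0 is the prompt itself
    episode = []
    for _ in range(num_turns):
        action = generate(state)         # the model's response is the action
        reward = 1.0 if is_correct(action, gold) else 0.0
        episode.append((state, action, reward))
        # next state: concatenate state, action, and the fixed introspection prompt
        state = "\n".join([state, action, INTROSPECTION_PROMPT])
    return episode

def reward_weighted_sft_batch(episodes):
    """Keep (state, action) pairs weighted by reward for supervised fine-tuning;
    with 0/1 rewards this reduces to filtering for correct responses."""
    return [(s, a, r) for ep in episodes for (s, a, r) in ep if r > 0]
```

In the real method, the improved responses in later turns come from distillation (a stronger model) or self-distillation (sampling the model itself), and the reward-weighted batch is used to fine-tune the model so that later attempts improve on earlier ones.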

RISE demonstrates significant performance improvements across multiple benchmarks. On GSM8K, RISE boosted the Llama2 base model’s five-turn performance by 15.1% and 17.7% after one and two iterations respectively, without using an oracle. On MATH, improvements of 3.4% and 4.6% were observed. These gains surpass those achieved by other methods, including prompting-only self-refinement and standard fine-tuning on oracle data. Notably, RISE outperforms sampling multiple responses in parallel, indicating its ability to genuinely correct mistakes over sequential turns. The method’s effectiveness persists across different base models, with Mistral-7B + RISE outperforming Eurus-7B-SFT, a model specifically fine-tuned for math reasoning. Additionally, a self-distillation version of RISE shows promise, improving five-turn performance even with entirely self-generated data and supervision.
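The distinction between parallel sampling and sequential correction can be made concrete with a toy simulation. This is purely illustrative, under an assumed model where each round of introspective feedback raises the per-attempt success probability; it is not drawn from the paper's experiments.

```python
# Toy comparison of best-of-k parallel sampling vs. k sequential attempts.
# The success probabilities here are invented for illustration only.
import random

random.seed(0)

def solve(history_len: int) -> bool:
    """Hypothetical solver: success probability grows with each round of
    feedback (sequential), but stays at the base rate for independent draws."""
    base = 0.3
    return random.random() < min(1.0, base + 0.15 * history_len)

def parallel_at_k(k: int, trials: int = 10000) -> float:
    """Best-of-k with independent samples: the history never grows."""
    hits = sum(1 for _ in range(trials) if any(solve(0) for _ in range(k)))
    return hits / trials

def sequential_at_k(k: int, trials: int = 10000) -> float:
    """k sequential attempts, each conditioned on all previous attempts."""
    hits = sum(1 for _ in range(trials) if any(solve(turn) for turn in range(k)))
    return hits / trials
```

Under this assumption, sequential attempts beat parallel sampling only when each turn genuinely improves on the last; observing that gap empirically is what supports the claim that RISE corrects its mistakes rather than merely resampling.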

RISE introduces a novel approach for fine-tuning large language models to improve their responses over multiple turns. By converting single-turn problems into multi-turn Markov decision processes, RISE employs iterative reinforcement learning on on-policy rollout data, using expert or self-generated supervision. The method significantly enhances the self-improvement abilities of 7B models on reasoning tasks, outperforming previous approaches. Results show consistent performance gains across different base models and tasks, demonstrating genuine sequential error correction. While computational constraints currently limit the number of training iterations, especially with self-generated supervision, RISE presents a promising direction for advancing LLM self-improvement capabilities.

Check out the Paper. All credit for this research goes to the researchers of this project.

    The post Recursive IntroSpEction (RISE): A Machine Learning Approach for Fine-Tuning LLMs to Improve Their Own Responses Over Multiple Turns Sequentially appeared first on MarkTechPost.
