
Implementing the Self-Refine Technique Using Large Language Models (LLMs)

    July 29, 2025

    This tutorial demonstrates how to implement the Self-Refine technique using Large Language Models (LLMs) with Mirascope, a powerful framework for building structured prompt workflows. Self-Refine is a prompt engineering strategy where the model evaluates its own output, generates feedback, and iteratively improves its response based on that feedback. This refinement loop can be repeated multiple times to progressively enhance the quality and accuracy of the final answer.

    The Self-Refine approach is particularly effective for tasks involving reasoning, code generation, and content creation, where incremental improvements lead to significantly better results. Check out the Full Codes here

    Installing the dependencies

    !pip install "mirascope[openai]"

    OpenAI API Key

To get an OpenAI API key, visit https://platform.openai.com/settings/organization/api-keys and generate a new key. If you're a new user, you may need to add billing details and make a minimum payment of $5 to activate API access.

    import os
    from getpass import getpass
    os.environ["OPENAI_API_KEY"] = getpass('Enter OpenAI API Key: ')

    Basic Self-Refine Implementation

We begin by implementing the Self-Refine technique using Mirascope's @openai.call and @prompt_template decorators. The process starts with generating an initial response to a user query. This response is then evaluated by the model itself, which provides constructive feedback. Finally, the model uses this feedback to generate an improved response. The self_refine function repeats this refinement process for a specified number of iterations, enhancing the quality of the output with each cycle.

    from mirascope.core import openai, prompt_template
    from mirascope.core.openai import OpenAICallResponse
    
    
    @openai.call(model="gpt-4o-mini")
    def call(query: str) -> str:
        return query
    
    
    @openai.call(model="gpt-4o-mini")
    @prompt_template(
        """
        Here is a query and a response to the query. Give feedback about the answer,
        noting what was correct and incorrect.
        Query:
        {query}
        Response:
        {response}
        """
    )
    def evaluate_response(query: str, response: OpenAICallResponse): ...
    
    
    @openai.call(model="gpt-4o-mini")
    @prompt_template(
        """
        For this query:
        {query}
        The following response was given:
        {response}
        Here is some feedback about the response:
        {feedback}
    
        Consider the feedback to generate a new response to the query.
        """
    )
    def generate_new_response(
        query: str, response: OpenAICallResponse
    ) -> openai.OpenAIDynamicConfig:
        feedback = evaluate_response(query, response)
        return {"computed_fields": {"feedback": feedback}}
    
    
    def self_refine(query: str, depth: int) -> str:
        response = call(query)
        for _ in range(depth):
            response = generate_new_response(query, response)
        return response.content
    
    
    query = "A train travels 120 km at a certain speed. If the speed had been 20 km/h faster, it would have taken 30 minutes less to cover the same distance. What was the original speed of the train?"
    
    print(self_refine(query, 1))

    Enhanced Self-Refine with Response Model

In this enhanced version, we define a structured response model MathSolution using Pydantic to capture both the solution steps and the final numerical answer. The enhanced_generate_new_response function refines the output by incorporating model-generated feedback and formatting the improved response into a well-defined schema. This approach ensures clarity, consistency, and better downstream usability of the refined answer, especially for tasks like mathematical problem-solving.

    from pydantic import BaseModel, Field
    
    
    class MathSolution(BaseModel):
        steps: list[str] = Field(..., description="The steps taken to solve the problem")
        final_answer: float = Field(..., description="The final numerical answer")
    
    
    @openai.call(model="gpt-4o-mini", response_model=MathSolution)
    @prompt_template(
        """
        For this query:
        {query}
        The following response was given:
        {response}
        Here is some feedback about the response:
        {feedback}
    
        Consider the feedback to generate a new response to the query.
        Provide the solution steps and the final numerical answer.
        """
    )
    def enhanced_generate_new_response(
        query: str, response: OpenAICallResponse
    ) -> openai.OpenAIDynamicConfig:
        feedback = evaluate_response(query, response)
        return {"computed_fields": {"feedback": feedback}}
    
    
    def enhanced_self_refine(query: str, depth: int) -> MathSolution:
        response = call(query)
        for _ in range(depth):
            solution = enhanced_generate_new_response(query, response)
        response = f"Steps: {solution.steps}\nFinal Answer: {solution.final_answer}"
        return solution
    
    
    # Example usage
    result = enhanced_self_refine(query, 1)
    print(result)

    The Enhanced Self-Refine technique proved effective in accurately solving the given mathematical problem:

    “A train travels 120 km at a certain speed. If the speed had been 20 km/h faster, it would have taken 30 minutes less to cover the same distance. What was the original speed of the train?”

    Through a single iteration of refinement, the model delivered a logically sound and step-by-step derivation leading to the correct answer of 60 km/h. This illustrates several key benefits of the Self-Refine approach:

    • Improved accuracy through iterative feedback-driven enhancement.
    • Clearer reasoning steps, including variable setup, equation formulation, and quadratic solution application.
    • Greater transparency, making it easier for users to understand and trust the solution.
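
The 60 km/h answer is also easy to verify by hand: at 60 km/h the 120 km trip takes 2 hours, while at 80 km/h it takes 1.5 hours, exactly 30 minutes less. A quick arithmetic check:

```python
# Sanity check of the refined answer: does v = 60 km/h satisfy the
# condition that going 20 km/h faster saves 30 minutes (0.5 h)?
distance = 120.0  # km
v = 60.0          # candidate original speed, km/h

time_saved = distance / v - distance / (v + 20)  # hours
print(time_saved)  # 0.5
```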

    In broader applications, this technique holds strong promise for tasks that demand accuracy, structure, and iterative improvement—ranging from technical problem solving to creative and professional writing. However, implementers should remain mindful of the trade-offs in computational cost and fine-tune the depth and feedback prompts to match their specific use case.
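
One way to reason about that cost (an observation about the loop structure above, not a Mirascope feature): each refinement iteration issues two additional model calls, one to generate feedback and one to rewrite the answer, so a run at depth d makes 1 + 2d calls in total. Latency and token spend therefore grow linearly with depth:

```python
def num_model_calls(depth: int) -> int:
    # 1 initial answer + (feedback + rewrite) per refinement iteration
    return 1 + 2 * depth


for d in (1, 2, 3):
    print(d, num_model_calls(d))
```

In practice a depth of 1 or 2 often captures most of the quality gain before the extra calls stop paying for themselves.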


    The post Implementing Self-Refine Technique Using Large Language Models LLMs appeared first on MarkTechPost.
