
    Implementing the Self-Refine Technique Using Large Language Models (LLMs)

    July 29, 2025

    This tutorial demonstrates how to implement the Self-Refine technique using Large Language Models (LLMs) with Mirascope, a powerful framework for building structured prompt workflows. Self-Refine is a prompt engineering strategy where the model evaluates its own output, generates feedback, and iteratively improves its response based on that feedback. This refinement loop can be repeated multiple times to progressively enhance the quality and accuracy of the final answer.

    The Self-Refine approach is particularly effective for tasks involving reasoning, code generation, and content creation, where incremental improvements lead to significantly better results.
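
    Before the Mirascope-specific implementation below, the loop itself can be summarized in a few lines of generic Python. The generate, critique, and revise arguments are placeholders for any model-call wrappers, not part of a specific library:

    # Generic Self-Refine loop: `generate`, `critique`, and `revise` are any
    # callables that wrap model calls (placeholders, not a specific API).
    def self_refine_loop(query, depth, generate, critique, revise):
        answer = generate(query)                      # initial draft
        for _ in range(depth):
            feedback = critique(query, answer)        # model reviews its own answer
            answer = revise(query, answer, feedback)  # model rewrites using the feedback
        return answer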

    Installing the Dependencies

    !pip install "mirascope[openai]"
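
    If the install succeeds, the package version can be confirmed with the standard library (an optional sanity check, not part of the original tutorial):

    import importlib.metadata
    print(importlib.metadata.version("mirascope"))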

    OpenAI API Key

    To get an OpenAI API key, visit https://platform.openai.com/settings/organization/api-keys and generate a new key. If you’re a new user, you may need to add billing details and make a minimum payment of $5 to activate API access.

    import os
    from getpass import getpass
    os.environ["OPENAI_API_KEY"] = getpass('Enter OpenAI API Key: ')

    Basic Self-Refine Implementation

    We begin by implementing the Self-Refine technique using Mirascope’s @openai.call and @prompt_template decorators. The process starts with generating an initial response to a user query. The model then evaluates that response and provides constructive feedback, which it uses to generate an improved response. The self_refine function repeats this refinement process for a specified number of iterations, improving the quality of the output with each cycle.

    from mirascope.core import openai, prompt_template
    from mirascope.core.openai import OpenAICallResponse
    
    
    # Step 1: generate an initial answer to the query.
    @openai.call(model="gpt-4o-mini")
    def call(query: str) -> str:
        return query
    
    
    # Step 2: have the model critique its own answer.
    @openai.call(model="gpt-4o-mini")
    @prompt_template(
        """
        Here is a query and a response to the query. Give feedback about the answer,
        noting what was correct and incorrect.
        Query:
        {query}
        Response:
        {response}
        """
    )
    def evaluate_response(query: str, response: OpenAICallResponse): ...
    
    
    # Step 3: rewrite the answer, injecting the critique as a computed field.
    @openai.call(model="gpt-4o-mini")
    @prompt_template(
        """
        For this query:
        {query}
        The following response was given:
        {response}
        Here is some feedback about the response:
        {feedback}
    
        Consider the feedback to generate a new response to the query.
        """
    )
    def generate_new_response(
        query: str, response: OpenAICallResponse
    ) -> openai.OpenAIDynamicConfig:
        feedback = evaluate_response(query, response)
        return {"computed_fields": {"feedback": feedback}}
    
    
    def self_refine(query: str, depth: int) -> str:
        # Run the critique-and-rewrite loop `depth` times and return the final text.
        response = call(query)
        for _ in range(depth):
            response = generate_new_response(query, response)
        return response.content
    
    
    query = "A train travels 120 km at a certain speed. If the speed had been 20 km/h faster, it would have taken 30 minutes less to cover the same distance. What was the original speed of the train?"
    
    print(self_refine(query, 1))
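
    To see how the answer changes between rounds, a small variant of self_refine can print each intermediate response. This is a sketch built on the functions defined above; the prints are purely for inspection:

    def self_refine_verbose(query: str, depth: int) -> str:
        # Same loop as self_refine, but prints every intermediate answer so the
        # effect of each refinement round is visible.
        response = call(query)
        print(f"--- Initial answer ---\n{response.content}\n")
        for i in range(depth):
            response = generate_new_response(query, response)
            print(f"--- After refinement {i + 1} ---\n{response.content}\n")
        return response.content


    print(self_refine_verbose(query, 2))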

    Enhanced Self-Refine with Response Model

    In this enhanced version, we define a structured response model, MathSolution, using Pydantic to capture both the solution steps and the final numerical answer. The enhanced_generate_new_response function refines the output by incorporating model-generated feedback and formatting the improved response into a well-defined schema. This ensures clarity, consistency, and better downstream usability of the refined answer, especially for tasks like mathematical problem solving.

    from pydantic import BaseModel, Field
    
    
    class MathSolution(BaseModel):
        steps: list[str] = Field(..., description="The steps taken to solve the problem")
        final_answer: float = Field(..., description="The final numerical answer")
    
    
    @openai.call(model="gpt-4o-mini", response_model=MathSolution)
    @prompt_template(
        """
        For this query:
        {query}
        The following response was given:
        {response}
        Here is some feedback about the response:
        {feedback}
    
        Consider the feedback to generate a new response to the query.
        Provide the solution steps and the final numerical answer.
        """
    )
    def enhanced_generate_new_response(
        query: str, response: OpenAICallResponse
    ) -> openai.OpenAIDynamicConfig:
        feedback = evaluate_response(query, response)
        return {"computed_fields": {"feedback": feedback}}
    
    
    def enhanced_self_refine(query: str, depth: int) -> MathSolution:
        # Each round yields a structured MathSolution, which is flattened back to
        # text so the next critique pass can read it.
        response = call(query)
        for _ in range(depth):
            solution = enhanced_generate_new_response(query, response)
            response = f"Steps: {solution.steps}nFinal Answer: {solution.final_answer}"
        return solution
    
    
    # Example usage
    result = enhanced_self_refine(query, 1)
    print(result)
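
    Because the refined answer comes back as a MathSolution instance rather than raw text, its fields can be consumed directly. The formatting below is just an illustrative usage sketch:

    # Typed access to the structured result returned by enhanced_self_refine.
    for i, step in enumerate(result.steps, start=1):
        print(f"Step {i}: {step}")
    print(f"Final answer: {result.final_answer}")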

    The Enhanced Self-Refine technique proved effective in accurately solving the given mathematical problem:

    “A train travels 120 km at a certain speed. If the speed had been 20 km/h faster, it would have taken 30 minutes less to cover the same distance. What was the original speed of the train?”

    Through a single iteration of refinement, the model delivered a logically sound, step-by-step derivation leading to the correct answer of 60 km/h (a worked check follows the list below). This illustrates several key benefits of the Self-Refine approach:

    • Improved accuracy through iterative feedback-driven enhancement.
    • Clearer reasoning steps, including variable setup, equation formulation, and quadratic solution application.
    • Greater transparency, making it easier for users to understand and trust the solution.
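
    The algebra behind that answer is easy to verify independently: with original speed v, the condition 120/v - 120/(v + 20) = 0.5 hours reduces to the quadratic v^2 + 20v - 4800 = 0, whose positive root is 60. A few lines of standalone Python confirm it:

    import math

    # Solve v**2 + 20*v - 4800 = 0, derived from 120/v - 120/(v + 20) = 0.5 hours.
    a, b, c = 1, 20, -4800
    v = (-b + math.sqrt(b**2 - 4 * a * c)) / (2 * a)
    print(v)                         # 60.0 km/h
    print(120 / v - 120 / (v + 20))  # 0.5 hours, i.e. 30 minutes saved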

    In broader applications, this technique holds strong promise for tasks that demand accuracy, structure, and iterative improvement, ranging from technical problem solving to creative and professional writing. Each refinement round adds extra model calls, however, so implementers should weigh the computational cost and tune the refinement depth and feedback prompts to match their use case; one way to cap that cost is sketched below.
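
    One option is to bound the number of rounds and stop early once the model judges that no further revision is needed. The sketch below reuses the functions defined earlier and follows the same Mirascope patterns shown above; the Verdict model and needs_revision prompt are illustrative additions, not part of the original tutorial:

    class Verdict(BaseModel):
        needs_revision: bool = Field(..., description="Whether the response still needs revision")


    @openai.call(model="gpt-4o-mini", response_model=Verdict)
    @prompt_template(
        """
        Here is a query and a response to the query.
        Decide whether the response still needs revision to be correct and complete.
        Query:
        {query}
        Response:
        {response}
        """
    )
    def needs_revision(query: str, response: str): ...


    def self_refine_until_stable(query: str, max_depth: int = 3) -> str:
        # Refine up to max_depth times, but stop as soon as the model reports
        # that the current answer no longer needs revision.
        response = call(query)
        for _ in range(max_depth):
            if not needs_revision(query, response.content).needs_revision:
                break
            response = generate_new_response(query, response)
        return response.content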


    The post Implementing the Self-Refine Technique Using Large Language Models (LLMs) appeared first on MarkTechPost.
