
    Implementing the Self-Refine Technique Using Large Language Models (LLMs)

    July 29, 2025

    This tutorial demonstrates how to implement the Self-Refine technique using Large Language Models (LLMs) with Mirascope, a powerful framework for building structured prompt workflows. Self-Refine is a prompt engineering strategy where the model evaluates its own output, generates feedback, and iteratively improves its response based on that feedback. This refinement loop can be repeated multiple times to progressively enhance the quality and accuracy of the final answer.

    The Self-Refine approach is particularly effective for tasks involving reasoning, code generation, and content creation, where incremental improvements lead to significantly better results.
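Before diving into the Mirascope implementation, the loop itself can be sketched in a framework-agnostic way. In this sketch, `generate`, `critique`, and `revise` are hypothetical stand-ins for LLM calls; they are stubbed with plain string formatting so the control flow runs without an API key.

```python
# Framework-agnostic sketch of the Self-Refine loop.
# generate, critique, and revise are hypothetical stand-ins for LLM calls,
# stubbed here so the control flow itself is runnable.

def generate(query: str) -> str:
    # Initial draft (in practice: a first LLM completion).
    return f"draft answer to: {query}"

def critique(query: str, answer: str) -> str:
    # Self-evaluation step (in practice: the model reviews its own output).
    return f"feedback on: {answer}"

def revise(query: str, answer: str, feedback: str) -> str:
    # Improvement step (in practice: the model rewrites using the feedback).
    return f"{answer} [revised using {feedback}]"

def self_refine(query: str, depth: int = 1) -> str:
    answer = generate(query)
    for _ in range(depth):
        feedback = critique(query, answer)
        answer = revise(query, answer, feedback)
    return answer

print(self_refine("What is 2 + 2?"))
```

The structure mirrors what follows: an initial call, a feedback call, and a revision call, repeated `depth` times.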

    Installing the Dependencies

    !pip install "mirascope[openai]"

    OpenAI API Key

    To get an OpenAI API key, visit https://platform.openai.com/settings/organization/api-keys and generate a new key. If you’re a new user, you may need to add billing details and make a minimum payment of $5 to activate API access.

    import os
    from getpass import getpass
    os.environ["OPENAI_API_KEY"] = getpass('Enter OpenAI API Key: ')

    Basic Self-Refine Implementation

    We begin by implementing the Self-Refine technique using Mirascope’s @openai.call and @prompt_template decorators. The process starts with generating an initial response to a user query. This response is then evaluated by the model itself, which provides constructive feedback. Finally, the model uses this feedback to generate an improved response. The self_refine function allows us to repeat this refinement process for a specified number of iterations, enhancing the quality of the output with each cycle.

    from mirascope.core import openai, prompt_template
    from mirascope.core.openai import OpenAICallResponse
    
    
    @openai.call(model="gpt-4o-mini")
    def call(query: str) -> str:
        return query
    
    
    @openai.call(model="gpt-4o-mini")
    @prompt_template(
        """
        Here is a query and a response to the query. Give feedback about the answer,
        noting what was correct and incorrect.
        Query:
        {query}
        Response:
        {response}
        """
    )
    def evaluate_response(query: str, response: OpenAICallResponse): ...
    
    
    @openai.call(model="gpt-4o-mini")
    @prompt_template(
        """
        For this query:
        {query}
        The following response was given:
        {response}
        Here is some feedback about the response:
        {feedback}
    
        Consider the feedback to generate a new response to the query.
        """
    )
    def generate_new_response(
        query: str, response: OpenAICallResponse
    ) -> openai.OpenAIDynamicConfig:
        feedback = evaluate_response(query, response)
        return {"computed_fields": {"feedback": feedback}}
    
    
    def self_refine(query: str, depth: int) -> str:
        response = call(query)
        for _ in range(depth):
            response = generate_new_response(query, response)
        return response.content
    
    
    query = "A train travels 120 km at a certain speed. If the speed had been 20 km/h faster, it would have taken 30 minutes less to cover the same distance. What was the original speed of the train?"
    
    print(self_refine(query, 1))

    Enhanced Self-Refine with Response Model

    In this enhanced version, we define a structured response model, MathSolution, using Pydantic to capture both the solution steps and the final numerical answer. The enhanced_generate_new_response function refines the output by incorporating model-generated feedback and formatting the improved response into a well-defined schema. This approach ensures clarity, consistency, and better downstream usability of the refined answer, especially for tasks like mathematical problem solving.

    from pydantic import BaseModel, Field
    
    
    class MathSolution(BaseModel):
        steps: list[str] = Field(..., description="The steps taken to solve the problem")
        final_answer: float = Field(..., description="The final numerical answer")
    
    
    @openai.call(model="gpt-4o-mini", response_model=MathSolution)
    @prompt_template(
        """
        For this query:
        {query}
        The following response was given:
        {response}
        Here is some feedback about the response:
        {feedback}
    
        Consider the feedback to generate a new response to the query.
        Provide the solution steps and the final numerical answer.
        """
    )
    def enhanced_generate_new_response(
        query: str, response: OpenAICallResponse
    ) -> openai.OpenAIDynamicConfig:
        feedback = evaluate_response(query, response)
        return {"computed_fields": {"feedback": feedback}}
    
    
    def enhanced_self_refine(query: str, depth: int) -> MathSolution:
        response = call(query)
        for _ in range(depth):
            solution = enhanced_generate_new_response(query, response)
        response = f"Steps: {solution.steps}\nFinal Answer: {solution.final_answer}"
        return solution
    
    
    # Example usage
    result = enhanced_self_refine(query, 1)
    print(result)
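Because the refined result is parsed into the MathSolution schema, downstream code can rely on typed, validated fields rather than free text. A minimal sketch of consuming such a result (the model class is repeated so the snippet is self-contained, and the sample data is illustrative, not actual model output):

```python
from pydantic import BaseModel, Field

class MathSolution(BaseModel):
    steps: list[str] = Field(..., description="The steps taken to solve the problem")
    final_answer: float = Field(..., description="The final numerical answer")

# Illustrative data standing in for a refined model output.
solution = MathSolution(
    steps=[
        "Let v be the original speed in km/h.",
        "Set up 120/v - 120/(v + 20) = 0.5 and solve v^2 + 20v - 4800 = 0.",
    ],
    final_answer=60.0,
)

# Typed access: no string parsing needed downstream.
for i, step in enumerate(solution.steps, start=1):
    print(f"{i}. {step}")
print(f"Answer: {solution.final_answer} km/h")
```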

    The Enhanced Self-Refine technique proved effective in accurately solving the given mathematical problem:

    “A train travels 120 km at a certain speed. If the speed had been 20 km/h faster, it would have taken 30 minutes less to cover the same distance. What was the original speed of the train?”

    Through a single iteration of refinement, the model delivered a logically sound, step-by-step derivation leading to the correct answer of 60 km/h. This illustrates several key benefits of the Self-Refine approach:

    • Improved accuracy through iterative feedback-driven enhancement.
    • Clearer reasoning steps, including variable setup, equation formulation, and quadratic solution application.
    • Greater transparency, making it easier for users to understand and trust the solution.
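The answer of 60 km/h can also be verified by hand: with original speed v, the half-hour time difference gives 120/v - 120/(v + 20) = 0.5, which clears to the quadratic v² + 20v - 4800 = 0 with positive root 60. A quick standard-library check:

```python
import math

# Train problem: 120/v - 120/(v + 20) = 0.5 hours.
# Clearing denominators gives v^2 + 20v - 4800 = 0.
a, b, c = 1.0, 20.0, -4800.0
v = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # positive root
print(v)  # 60.0

# The root satisfies the original time-difference equation.
assert abs(120 / v - 120 / (v + 20) - 0.5) < 1e-9
```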

    In broader applications, this technique holds strong promise for tasks that demand accuracy, structure, and iterative improvement—ranging from technical problem solving to creative and professional writing. However, implementers should remain mindful of the trade-offs in computational cost and fine-tune the depth and feedback prompts to match their specific use case.



    The post Implementing the Self-Refine Technique Using Large Language Models (LLMs) appeared first on MarkTechPost.
