
    Salesforce AI Research Proposes PerfCodeGen: A Training-Free Framework that Enhances the Performance of LLM-Generated Code with Execution Feedback

    January 18, 2025

    Large Language Models (LLMs) have become essential tools in software development, offering capabilities such as generating code snippets, automating unit tests, and debugging. However, these models often fall short of producing code that is not only functionally correct but also efficient at runtime. Overlooking runtime efficiency can lead to software that performs poorly, increases operational costs, and degrades user experience. The issue is particularly pronounced for less experienced developers, who may rely on AI-suggested code without fully understanding its performance implications. Salesforce AI Research addresses these challenges with PerfCodeGen, a framework that aims to improve both the correctness and runtime performance of LLM-generated code.

    Salesforce AI’s PerfCodeGen is a training-free framework designed to enhance the runtime efficiency of LLM-generated code. It achieves this by using execution feedback in an iterative self-refinement process. Unlike approaches requiring fine-tuning with extensive training data, PerfCodeGen employs a feedback loop that evaluates and refines code based on runtime metrics during test execution. The framework operates in two key phases: refining correctness and optimizing performance. Initially, it ensures the generated code meets functional requirements by addressing issues identified in unit tests. Once correctness is established, the framework focuses on runtime efficiency, optimizing the code by targeting and refining the most resource-intensive test cases. This iterative process results in solutions that are both correct and efficient.
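
    The article does not reproduce the framework's exact prompts or control flow, but the two-phase loop it describes can be sketched roughly as follows. Everything in the sketch is illustrative: `generate_candidates`, `run_unit_tests`, `profile_tests`, and `refine_with_feedback` are hypothetical stand-ins (passed in as callables) rather than PerfCodeGen's actual interface.

```python
from typing import Callable, Dict, List, Optional

# Rough sketch of a correctness-then-performance refinement loop.
# The four helper callables are hypothetical placeholders, not PerfCodeGen's API:
#   generate_candidates(task) -> List[str]           sampled candidate programs
#   run_unit_tests(code, tests) -> List[str]         failing tests (empty == correct)
#   profile_tests(code, tests) -> Dict[str, float]   per-test wall-clock time
#   refine_with_feedback(code, feedback) -> str      LLM rewrite given feedback

def perf_refine(task: str,
                generate_candidates: Callable[[str], List[str]],
                run_unit_tests: Callable[[str, List[str]], List[str]],
                profile_tests: Callable[[str, List[str]], Dict[str, float]],
                refine_with_feedback: Callable[[str, str], str],
                tests: List[str],
                rounds: int = 3) -> Optional[str]:
    # Phase 1: establish functional correctness using failing-test feedback.
    candidates = generate_candidates(task)          # e.g. sampled via nucleus sampling
    solution = None
    for _ in range(rounds):
        solution = next((c for c in candidates if not run_unit_tests(c, tests)), None)
        if solution is not None:
            break
        # Ask the model to repair each candidate, given its failing tests.
        candidates = [refine_with_feedback(c, "failing tests:\n" + "\n".join(run_unit_tests(c, tests)))
                      for c in candidates]
    if solution is None:
        return None  # correctness could not be established

    # Phase 2: optimize runtime, targeting the most expensive test case.
    for _ in range(rounds):
        timings = profile_tests(solution, tests)    # per-test wall-clock time
        slowest = max(timings, key=timings.get)
        faster = refine_with_feedback(solution, f"optimize runtime on this test:\n{slowest}")
        # Keep a refinement only if it stays correct and actually speeds up the slow test.
        if not run_unit_tests(faster, tests) and profile_tests(faster, tests)[slowest] < timings[slowest]:
            solution = faster
    return solution
```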

    Technical Insights and Benefits

    PerfCodeGen integrates with existing LLM workflows and begins by generating multiple candidate solutions using nucleus sampling. In the first phase, these candidates are assessed for correctness through unit tests. Feedback from failed tests is used to refine the solutions. Once functional correctness is ensured, the framework moves to the second phase, analyzing runtime metrics to identify bottlenecks. This information is then used to optimize the code further, focusing on the most time-consuming test cases.
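
    To make the execution-feedback side concrete, the two evaluation helpers assumed in the sketch above could be as simple as the following. The test format (self-contained Python assert statements executed against the candidate program) is an assumption for illustration, not a detail from the paper.

```python
import time
from typing import Dict, List

def run_unit_tests(code: str, tests: List[str]) -> List[str]:
    """Return the tests that raise (i.e. fail) when run against a candidate program."""
    failures = []
    for test in tests:
        scope: Dict = {}
        try:
            exec(code, scope)   # define the candidate's functions
            exec(test, scope)   # run one assert-style test
        except Exception:
            failures.append(test)
    return failures

def profile_tests(code: str, tests: List[str]) -> Dict[str, float]:
    """Measure wall-clock time per test for a candidate already known to be correct."""
    timings = {}
    for test in tests:
        scope: Dict = {}
        exec(code, scope)
        start = time.perf_counter()
        exec(test, scope)
        timings[test] = time.perf_counter() - start
    return timings
```

    Candidate generation itself would typically just be a sampled decode from the chosen LLM (several return sequences under top-p, i.e. nucleus, sampling), with the failing-test or timing feedback folded back into the prompt on each refinement round.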

    This two-phase process increases the likelihood of producing optimally efficient programs. PerfCodeGen’s methodology mirrors human debugging and optimization practices, making it both effective and intuitive. Additionally, the framework’s reliance on feedback rather than retraining allows it to scale across various LLMs and application domains. It has shown consistent improvements in runtime efficiency and correctness across models such as Phi-3-mini, Llama 3, and GPT-4.

    PerfCodeGen has been tested on benchmarks such as HumanEval, MBPP, and APPS, demonstrating its effectiveness:

    1. Runtime Efficiency: On HumanEval, GPT-4’s optimization rate (%Opt) increased from 24.54% to 28.83% with PerfCodeGen, with similar improvements observed across other models.
    2. Correctness Improvement: On MBPP, GPT-3.5’s correctness rate (%Correct) rose from 66.38% to 73.36% with a single sample (Best@1).
    3. Outperforming Ground Truth: PerfCodeGen enabled LLMs to generate more efficient solutions than ground truth in approximately 55% of HumanEval tasks and 67% of MBPP tasks.
    4. Scalability: Open models such as Phi-3-mini and Mixtral achieved performance comparable to closed models like GPT-3.5 and GPT-4.

    These results highlight PerfCodeGen’s ability to balance correctness and runtime efficiency effectively, making it a valuable addition to LLM-driven code generation workflows.
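
    The article does not spell out the exact metric definitions, but the bookkeeping behind benchmark-level rates such as %Correct and %Opt reduces to simple counting over per-task outcomes. A minimal sketch follows, under the assumption that each task yields a "passed the unit tests" flag and a "ran faster than the reference solution" flag; the choice of denominator for %Opt is likewise an assumption.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TaskResult:
    passed_tests: bool     # candidate passed all unit tests (single-sample, Best@1 setting)
    beat_reference: bool   # candidate ran faster than the benchmark's reference solution

def correctness_rate(results: List[TaskResult]) -> float:
    """%Correct: share of tasks solved correctly."""
    return 100 * sum(r.passed_tests for r in results) / len(results)

def optimization_rate(results: List[TaskResult]) -> float:
    """%Opt (assumed definition): share of solved tasks that also beat the reference runtime."""
    solved = [r for r in results if r.passed_tests]
    return 100 * sum(r.beat_reference for r in solved) / max(len(solved), 1)

# Example: 3 of 4 tasks correct; 2 of those 3 beat the reference runtime.
results = [TaskResult(True, True), TaskResult(True, False),
           TaskResult(True, True), TaskResult(False, False)]
print(f"%Correct = {correctness_rate(results):.2f}%, %Opt = {optimization_rate(results):.2f}%")
```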

    Conclusion

    PerfCodeGen offers a practical solution to a key limitation of current LLMs: their focus on correctness at the expense of runtime efficiency. By incorporating execution feedback into an iterative refinement process, PerfCodeGen enables the generation of code that is both correct and efficient. This approach enhances the usability of LLMs in software development, providing developers with tools to produce higher-quality code without extensive retraining. The framework’s success across diverse benchmarks demonstrates its potential as a step forward in creating efficient, reliable, and accessible AI-driven programming solutions.


    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.


