
    TikTok Researchers Introduce SWE-Perf: The First Benchmark for Repository-Level Code Performance Optimization

    July 22, 2025

    Introduction

    As large language models (LLMs) advance in software engineering tasks—ranging from code generation to bug fixing—performance optimization remains an elusive frontier, especially at the repository level. To bridge this gap, researchers from TikTok and collaborating institutions have introduced SWE-Perf—the first benchmark specifically designed to evaluate the ability of LLMs to optimize code performance in real-world repositories.

    Unlike prior benchmarks focused on correctness or function-level efficiency (e.g., SWE-Bench, Mercury, EFFIBench), SWE-Perf captures the complexity and contextual depth of repository-scale performance tuning. It provides a reproducible, quantitative foundation to study and improve the performance optimization capabilities of modern LLMs.

    Image source: https://arxiv.org/abs/2507.12415

    Why SWE-Perf Is Needed

    Real-world codebases are often large, modular, and intricately interdependent. Optimizing them for performance requires understanding of cross-file interactions, execution paths, and computational bottlenecks—challenges beyond the scope of isolated function-level datasets.

    LLMs today are largely evaluated on tasks like syntax correction or small function transformations. But in production environments, performance tuning across repositories can yield more substantial system-wide benefits. SWE-Perf is explicitly built to measure LLM capabilities in such settings.


    Dataset Construction

    SWE-Perf is constructed from over 100,000 pull requests across high-profile GitHub repositories. The final dataset covers 9 repositories and includes:

    • 140 curated instances demonstrating measurable and stable performance improvements.
    • Complete codebases pre- and post-optimization.
    • Target functions categorized as oracle (file-level) or realistic (repo-level).
    • Unit tests and Docker environments for reproducible execution and performance measurement.
    • Expert-authored patches used as gold standards.
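    The components above can be pictured as a single record per benchmark instance. A minimal sketch follows; the field names are assumptions for illustration, not SWE-Perf's actual schema.

    ```python
    from dataclasses import dataclass

    # Hypothetical shape of one SWE-Perf instance, mirroring the
    # components listed above (field names are illustrative).
    @dataclass
    class SWEPerfInstance:
        repo: str                    # GitHub repository, e.g. "org/name"
        base_commit: str             # codebase state before optimization
        expert_patch: str            # expert-authored gold-standard diff
        target_functions: list[str]  # oracle (file-level) or realistic (repo-level) targets
        unit_tests: list[str]        # tests that must pass pre- and post-patch
        docker_image: str            # environment for reproducible measurement

    inst = SWEPerfInstance(
        repo="example/repo",
        base_commit="abc123",
        expert_patch="--- a/mod.py\n+++ b/mod.py\n...",
        target_functions=["mod.slow_fn"],
        unit_tests=["tests/test_mod.py::test_slow_fn"],
        docker_image="sweperf/example:latest",
    )
    ```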

    To ensure validity, each unit test must:

    1. Pass before and after the patch.
    2. Show statistically significant runtime gains over 20 repetitions (Mann-Whitney U test, p < 0.1).

    Performance is measured using minimum performance gain (δ), isolating statistical improvements attributable to the patch while filtering noise.
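    The validity check and the gain metric can be sketched in pure Python. The one-sided Mann-Whitney U test below uses the standard normal approximation, and `min_performance_gain` takes one conservative reading of the minimum-gain idea (best pre-patch runtime vs. worst post-patch runtime); the paper's exact definition of δ may differ.

    ```python
    import math
    from statistics import NormalDist

    def _ranks(values):
        # Average ranks (1-based), handling ties.
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0.0] * len(values)
        i = 0
        while i < len(values):
            j = i
            while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        return ranks

    def mann_whitney_one_sided(before, after):
        """P-value (normal approximation) for the hypothesis that
        post-patch runtimes are stochastically smaller."""
        n1, n2 = len(before), len(after)
        ranks = _ranks(list(before) + list(after))
        r1 = sum(ranks[:n1])               # rank sum of pre-patch runtimes
        u1 = r1 - n1 * (n1 + 1) / 2
        mu = n1 * n2 / 2
        sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
        z = (u1 - mu) / sigma
        return 1 - NormalDist().cdf(z)     # small p => patch reliably faster

    def min_performance_gain(before, after):
        # Conservative: worst post-patch run vs. best pre-patch run.
        return (min(before) - max(after)) / min(before)
    ```

    With 20 repetitions per side, an instance would pass the check when the p-value falls below 0.1 and the gain is positive.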

    Benchmark Settings: Oracle vs. Realistic

    • Oracle Setting: The model receives only the target functions and corresponding files. This setting tests localized optimization skills.
    • Realistic Setting: The model is given an entire repository and must identify and optimize performance-critical paths autonomously. This is a closer analog to how human engineers work.
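    The contrast between the two settings amounts to how much context the model receives. A rough sketch, with made-up dict keys and paths (the benchmark's real input format may differ):

    ```python
    # Illustrative contrast between the oracle and realistic settings.
    def build_model_input(instance, setting):
        if setting == "oracle":
            # Only the target functions and the files that contain them.
            context = {path: instance["files"][path]
                       for path in instance["target_files"]}
            task = "Optimize the target functions: " + ", ".join(
                instance["target_functions"])
        elif setting == "realistic":
            # Entire repository; the model must find hotspots itself.
            context = dict(instance["files"])
            task = "Identify and optimize performance-critical code in this repository."
        else:
            raise ValueError(f"unknown setting: {setting}")
        return context, task

    instance = {
        "files": {"pkg/a.py": "def slow(): ...", "pkg/b.py": "def other(): ..."},
        "target_files": ["pkg/a.py"],
        "target_functions": ["pkg.a.slow"],
    }
    oracle_ctx, _ = build_model_input(instance, "oracle")
    realistic_ctx, _ = build_model_input(instance, "realistic")
    ```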

    Evaluation Metrics

    SWE-Perf defines a three-tier evaluation framework, reporting each metric independently:

    1. Apply: Can the model-generated patch be applied cleanly?
    2. Correctness: Does the patch preserve functional integrity (all unit tests pass)?
    3. Performance: Does the patch yield measurable runtime improvement?

    The metrics are not aggregated into a single score, allowing more nuanced evaluation of tradeoffs between syntactic correctness and performance gains.
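    The three tiers nest: a patch can only be correct if it applies, and only counts toward performance if it is correct. A toy scorer along those lines (the paper reports Performance as a gain percentage; it is simplified to a success rate here, and the field names are assumptions):

    ```python
    # Sketch of the three-tier scoring described above, reported as
    # independent rates rather than one aggregate score.
    def score(results):
        n = len(results)
        applied = sum(r["patch_applied"] for r in results)
        correct = sum(r["patch_applied"] and r["tests_pass"] for r in results)
        faster = sum(r["patch_applied"] and r["tests_pass"] and r["gain"] > 0
                     for r in results)
        return {
            "apply": applied / n,        # patch applies cleanly
            "correctness": correct / n,  # all unit tests still pass
            "performance": faster / n,   # measurable runtime improvement
        }

    results = [
        {"patch_applied": True,  "tests_pass": True,  "gain": 0.12},
        {"patch_applied": True,  "tests_pass": False, "gain": 0.0},
        {"patch_applied": False, "tests_pass": False, "gain": 0.0},
        {"patch_applied": True,  "tests_pass": True,  "gain": 0.0},
    ]
    scores = score(results)  # apply 0.75, correctness 0.5, performance 0.25
    ```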

    Experimental Results

    The benchmark evaluates several top-tier LLMs under both oracle and realistic settings:

    Model                     Setting     Performance (%)
    Claude-4-opus             Oracle      1.28
    GPT-4o                    Oracle      0.60
    Gemini-2.5-Pro            Oracle      1.48
    Claude-3.7 (Agentless)    Realistic   0.41
    Claude-3.7 (OpenHands)    Realistic   2.26
    Expert (Human Patch)      –           10.85

    Notably, even the best-performing LLM configurations fall significantly short of human-level performance. The agent-based method OpenHands, built on Claude-3.7-sonnet, outperforms other configurations in the realistic setting but still lags behind expert-crafted optimizations.

    Key Observations

    • Agent-based frameworks like OpenHands are better suited for complex, multi-step optimization, outperforming direct model prompts and pipeline-based approaches like Agentless.
    • Performance degrades as the number of target functions increases—LLMs struggle with broader optimization scopes.
    • LLMs exhibit limited scalability in long-runtime scenarios, where expert systems continue to show performance gains.
    • Patch analysis shows LLMs focus more on low-level code structures (e.g., imports, environment setup), while experts target high-level semantic abstractions for performance tuning.

    Conclusion

    SWE-Perf represents a pivotal step toward measuring and improving the performance optimization capabilities of LLMs in realistic software engineering workflows. It uncovers a significant capability gap between existing models and human experts, offering a strong foundation for future research in repository-scale performance tuning. As LLMs evolve, SWE-Perf can serve as a north star guiding them toward practical, production-ready software enhancement at scale.


    Check out the Paper, GitHub Page and Project. All credit for this research goes to the researchers of this project.


    The post TikTok Researchers Introduce SWE-Perf: The First Benchmark for Repository-Level Code Performance Optimization appeared first on MarkTechPost.
