
    Georgia Tech and Stanford Researchers Introduce MLE-Dojo: A Gym-Style Framework Designed for Training, Evaluating, and Benchmarking Autonomous Machine Learning Engineering (MLE) Agents

    May 15, 2025

    Machine learning engineering (MLE) involves developing, tuning, and deploying machine learning systems that require iterative experimentation, model optimization, and robust handling of data pipelines. As model complexity increases, so do the challenges associated with orchestrating end-to-end workflows efficiently. Researchers have explored the automation of MLE tasks using AI agents to handle these demands. Large Language Models (LLMs), particularly those with strong coding and problem-solving abilities, have shown potential to enhance this process significantly. Their role in automating structured workflows is now being tested through rigorous benchmarks and environments tailored to emulate real-world MLE scenarios.

    A primary hurdle in automating machine learning engineering lies in the work’s inherently iterative and feedback-driven nature. Tasks such as hyperparameter tuning, model debugging, and data preprocessing cannot be resolved in one step; they require repeated modifications and evaluations. Traditional evaluation tools for AI models often rely on static datasets and do not allow for real-time error feedback or interactive problem-solving. This limitation prevents LLM agents from learning through trial and error, an essential component for mastering engineering tasks that evolve or require multiple attempts for success.
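This trial-and-error loop can be made concrete with a toy example. The sketch below is not from the paper; it is a minimal random search over a hypothetical learning rate, illustrating why one-shot evaluation is insufficient: each trial's score feeds back into the choice of what to keep.

```python
import random

def objective(lr: float) -> float:
    """Toy validation score: a made-up function peaking near lr = 0.1."""
    return -(lr - 0.1) ** 2

random.seed(0)
best_lr, best_score = None, float("-inf")
for trial in range(20):
    lr = 10 ** random.uniform(-4, 0)   # sample on a log scale, as is common for learning rates
    score = objective(lr)
    if score > best_score:             # keep only the best trial so far
        best_lr, best_score = lr, score
```

A static benchmark would grade a single candidate; an interactive environment lets the agent run this loop itself, observing each score before deciding on the next trial.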

Earlier tools for evaluating LLMs on engineering or coding tasks have mostly focused on individual subtasks or isolated challenges. These include MLAgentBench and DSBench, which rely on narrow test cases sourced from Kaggle competitions or synthetic datasets. While they go beyond basic tasks, they do not let agents execute code, debug, or interpret results in a live setting. Other environments, such as SWE-Gym, focus exclusively on software engineering and lack support for machine learning-specific workflows. These limitations have slowed the creation of versatile, high-performing MLE agents that can handle real-time project complexities.

Researchers from Georgia Institute of Technology and Stanford University have introduced MLE-Dojo, a framework with an interactive environment that connects LLM agents to real-world machine learning tasks derived from over 200 Kaggle competitions. The framework supports tabular data analysis, computer vision, natural language processing, and time-series forecasting challenges. The researchers designed MLE-Dojo to let agents write, execute, and revise code in a sandboxed, feedback-rich setting, replicating the interactive cycles that human engineers follow and enabling structured learning for agents. The environment ships with pre-installed dependencies and evaluation metrics, and supports both supervised fine-tuning and reinforcement learning strategies.

    MLE-Dojo’s structure consists of modular components that support a wide range of MLE challenges. Each task runs within its own Docker container, isolating it for safety and reproducibility. Agents interact with the environment through a Partially Observable Markov Decision Process, receiving observations, performing actions, and gaining rewards based on performance. The environment supports five primary action types: requesting task information, validating code, executing code, retrieving interaction history, and resetting the environment. It also provides a detailed observation space that includes datasets, execution results, and error messages. The agent receives structured feedback after every interaction, allowing for step-wise improvement. This modular setup helps maintain interoperability and simplifies adding new tasks to the system.
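The interaction loop described above can be sketched as a gym-style environment. The class and method names below are hypothetical stand-ins, not MLE-Dojo's actual API; the sketch only mirrors the five action types and the structured observation (output, error, reward) the article describes, with a fake scoring rule in place of real Kaggle evaluation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Action(Enum):
    """The five primary action types described for MLE-Dojo."""
    REQUEST_INFO = auto()
    VALIDATE_CODE = auto()
    EXECUTE_CODE = auto()
    GET_HISTORY = auto()
    RESET = auto()

@dataclass
class Observation:
    """Structured feedback returned after every interaction."""
    stdout: str = ""
    error: str = ""
    reward: float = 0.0

@dataclass
class MLETask:
    """Toy stand-in for one sandboxed competition task (the real system uses Docker)."""
    description: str
    history: list = field(default_factory=list)
    best_score: float = 0.0

    def step(self, action: Action, code: str = "") -> Observation:
        if action is Action.RESET:
            self.history.clear()
            self.best_score = 0.0
            return Observation()
        if action is Action.REQUEST_INFO:
            obs = Observation(stdout=self.description)
        elif action is Action.GET_HISTORY:
            obs = Observation(stdout="\n".join(self.history))
        elif action is Action.VALIDATE_CODE:
            # Syntax-check the agent's code without running it.
            try:
                compile(code, "<agent>", "exec")
                obs = Observation(stdout="syntax ok")
            except SyntaxError as exc:
                obs = Observation(error=str(exc))
        else:  # EXECUTE_CODE: run and score; here the score is a placeholder
            score = float(len(code) % 10) / 10
            obs = Observation(stdout=f"score={score}",
                              reward=max(0.0, score - self.best_score))
            self.best_score = max(self.best_score, score)
        self.history.append(f"{action.name}: {obs.stdout or obs.error}")
        return obs
```

An agent policy would alternate VALIDATE_CODE and EXECUTE_CODE calls, using the returned errors and rewards to revise its next submission, which is the partially observable decision process the framework formalizes.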

    The evaluation included eight frontier LLMs—Gemini-2.5-Pro, DeepSeek-r1, o3-mini, GPT-4o, GPT-4o-mini, Gemini-2.0-Pro, Gemini-2.0-Flash, and DeepSeek-v3—across four core machine learning domains. Gemini-2.5-Pro achieved the highest Elo rating of 1257, followed by DeepSeek-r1 at 1137 and o3-mini at 1108. Regarding HumanRank, Gemini-2.5-Pro led with 61.95%, indicating its superior performance over human benchmarks. Models like GPT-4o-mini executed code only 20% of the time, adopting conservative strategies, while o3-mini performed executions in over 90% of the cases. The average failure rate for Gemini-2.5-Pro remained the lowest across validation and execution phases, reinforcing its robustness. Among domains, computer vision posed the greatest challenge, with most models scoring under 60 in HumanRank. Reasoning models generally produced longer outputs and maintained stronger performance consistency across iterations.
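The Elo ratings above come from pairwise comparisons between models. The paper's exact pairing scheme is not described here, but the standard logistic Elo model gives a sense of what the numbers mean: a 1257-rated model is expected to beat a 1108-rated one roughly 70% of the time.

```python
def elo_expected(r_a: float, r_b: float) -> float:
    """Expected score of A against B under the standard logistic Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Updated ratings after one comparison; score_a is 1 (A wins), 0.5 (draw), or 0."""
    e_a = elo_expected(r_a, r_b)
    return (r_a + k * (score_a - e_a),
            r_b + k * ((1.0 - score_a) - (1.0 - e_a)))
```

With the reported ratings, `elo_expected(1257, 1108)` is about 0.70, so the gap between Gemini-2.5-Pro and o3-mini corresponds to winning roughly two of every three head-to-head task comparisons.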

    The research highlights the difficulty of applying LLMs to full machine learning workflows. It outlines a comprehensive solution in MLE-Dojo that enables learning through interaction, not just completion. MLE-Dojo sets a new standard for training and evaluating autonomous MLE agents by simulating engineering environments more accurately.


Check out the Paper, Project Page and GitHub Page. All credit for this research goes to the researchers of this project.

    The post Georgia Tech and Stanford Researchers Introduce MLE-Dojo: A Gym-Style Framework Designed for Training, Evaluating, and Benchmarking Autonomous Machine Learning Engineering (MLE) Agents appeared first on MarkTechPost.
