    Georgia Tech and Stanford Researchers Introduce MLE-Dojo: A Gym-Style Framework Designed for Training, Evaluating, and Benchmarking Autonomous Machine Learning Engineering (MLE) Agents

    May 15, 2025

    Machine learning engineering (MLE) involves developing, tuning, and deploying machine learning systems that require iterative experimentation, model optimization, and robust handling of data pipelines. As model complexity increases, so do the challenges associated with orchestrating end-to-end workflows efficiently. Researchers have explored the automation of MLE tasks using AI agents to handle these demands. Large Language Models (LLMs), particularly those with strong coding and problem-solving abilities, have shown potential to enhance this process significantly. Their role in automating structured workflows is now being tested through rigorous benchmarks and environments tailored to emulate real-world MLE scenarios.

    A primary hurdle in automating machine learning engineering lies in the work’s inherently iterative and feedback-driven nature. Tasks such as hyperparameter tuning, model debugging, and data preprocessing cannot be resolved in one step; they require repeated modifications and evaluations. Traditional evaluation tools for AI models often rely on static datasets and do not allow for real-time error feedback or interactive problem-solving. This limitation prevents LLM agents from learning through trial and error, an essential component for mastering engineering tasks that evolve or require multiple attempts for success.

    Earlier tools for evaluating LLMs on engineering or coding tasks have mostly focused on individual subtasks or isolated challenges. These include MLAgentBench and DSBench, which rely on narrow test cases sourced from Kaggle competitions or synthetic datasets. While they go beyond basic tasks, they do not let agents execute code, debug, or interpret results in a live setting. Other environments, like SWE-Gym, focus exclusively on software engineering and lack support for machine learning-specific workflows. These limitations have slowed the creation of versatile, high-performing MLE agents that can handle real-time project complexities.

    Researchers from the Georgia Institute of Technology and Stanford University have introduced MLE-Dojo, a framework with an interactive environment that connects LLM agents to real-world machine learning tasks derived from over 200 Kaggle competitions. The framework supports tabular data analysis, computer vision, natural language processing, and time-series forecasting challenges. The researchers designed MLE-Dojo to let agents write, execute, and revise code in a sandboxed, feedback-rich setting, replicating the interactive cycles that human engineers follow and enabling structured learning for agents. The environment includes pre-installed dependencies and evaluation metrics, and it supports both supervised fine-tuning and reinforcement learning strategies.
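
    To make the Gym-style design concrete, here is a minimal sketch of the write-execute-revise loop such an environment implies. The names used (SimpleAgent, run_episode, and the reset/step interface) are illustrative assumptions, not MLE-Dojo's actual API.

    # A minimal sketch of the Gym-style interaction cycle MLE-Dojo emulates.
    # SimpleAgent and the env interface below are hypothetical stand-ins,
    # not the framework's real classes.

    class SimpleAgent:
        def act(self, observation: dict) -> dict:
            # A real agent would prompt an LLM with the observation and
            # return code to validate or execute; this stub only asks
            # for task information.
            return {"type": "request_info", "payload": "task_description"}

    def run_episode(env, agent, max_steps: int = 20) -> float:
        """Drive one write-execute-revise episode, accumulating reward."""
        observation = env.reset()       # task spec, dataset summary, metrics
        total_reward = 0.0
        for _ in range(max_steps):
            action = agent.act(observation)
            observation, reward, done, info = env.step(action)
            total_reward += reward      # performance-based feedback signal
            if done:
                break
        return total_reward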

    MLE-Dojo’s structure consists of modular components that support a wide range of MLE challenges. Each task runs within its own Docker container, isolating it for safety and reproducibility. Agents interact with the environment through a Partially Observable Markov Decision Process, receiving observations, performing actions, and gaining rewards based on performance. The environment supports five primary action types: requesting task information, validating code, executing code, retrieving interaction history, and resetting the environment. It also provides a detailed observation space that includes datasets, execution results, and error messages. The agent receives structured feedback after every interaction, allowing for step-wise improvement. This modular setup helps maintain interoperability and simplifies adding new tasks to the system.
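
    The five action types and structured feedback described above map naturally onto a small action space. The sketch below models them with illustrative names; MLE-Dojo's real identifiers may differ.

    from enum import Enum, auto

    # The five primary action types described above, with illustrative names.
    class Action(Enum):
        REQUEST_INFO = auto()    # request task information
        VALIDATE_CODE = auto()   # sanity-check code before running it
        EXECUTE_CODE = auto()    # run code inside the task's Docker container
        GET_HISTORY = auto()     # retrieve the interaction history
        RESET = auto()           # reset the environment

    def step(action: Action, payload: str = "") -> dict:
        """Return a structured observation for one action (stubbed)."""
        if action is Action.EXECUTE_CODE:
            # A real environment would run `payload` in the sandbox and
            # return execution results, error messages, and a reward.
            return {"result": "stdout here", "reward": 0.0, "error": None}
        return {"result": f"handled {action.name}", "reward": 0.0, "error": None}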

    The evaluation included eight frontier LLMs—Gemini-2.5-Pro, DeepSeek-r1, o3-mini, GPT-4o, GPT-4o-mini, Gemini-2.0-Pro, Gemini-2.0-Flash, and DeepSeek-v3—across four core machine learning domains. Gemini-2.5-Pro achieved the highest Elo rating of 1257, followed by DeepSeek-r1 at 1137 and o3-mini at 1108. Regarding HumanRank, Gemini-2.5-Pro led with 61.95%, indicating its superior performance over human benchmarks. Models like GPT-4o-mini executed code only 20% of the time, adopting conservative strategies, while o3-mini performed executions in over 90% of the cases. The average failure rate for Gemini-2.5-Pro remained the lowest across validation and execution phases, reinforcing its robustness. Among domains, computer vision posed the greatest challenge, with most models scoring under 60 in HumanRank. Reasoning models generally produced longer outputs and maintained stronger performance consistency across iterations.
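
    For context on the reported numbers: Elo ratings are typically computed from pairwise comparisons using the standard update rule sketched below. This is the textbook formula, offered only to clarify what the ratings mean; the paper defines the exact procedure MLE-Dojo uses.

    def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
        """Standard Elo update for one pairwise comparison.

        score_a is 1.0 if A wins, 0.5 for a draw, 0.0 if A loses.
        Textbook formula for illustration; not necessarily the paper's setup.
        """
        expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
        r_a_new = r_a + k * (score_a - expected_a)
        r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
        return r_a_new, r_b_new

    # For example, a 1257-rated model beating a 1108-rated one:
    # elo_update(1257.0, 1108.0, 1.0) -> (~1266.5, ~1098.5)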

    The research highlights the difficulty of applying LLMs to full machine learning workflows and presents MLE-Dojo as a comprehensive answer: an environment that enables learning through interaction rather than one-shot completion. By simulating engineering workflows more faithfully, MLE-Dojo sets a new standard for training and evaluating autonomous MLE agents.


    Check out the Paper, Project Page, and GitHub Page. All credit for this research goes to the researchers of this project.
