
    Study could lead to LLMs that are better at complex reasoning

    July 8, 2025

    For all their impressive capabilities, large language models (LLMs) often fall short when given challenging new tasks that require complex reasoning skills.

    While an accounting firm’s LLM might excel at summarizing financial reports, that same model could fail unexpectedly if tasked with predicting market trends or identifying fraudulent transactions.

    To make LLMs more adaptable, MIT researchers investigated how a certain training technique can be strategically deployed to boost a model’s performance on unfamiliar, difficult problems.

    They show that test-time training, a method that involves temporarily updating some of a model’s inner workings during deployment, can lead to a sixfold improvement in accuracy. The researchers developed a framework for implementing a test-time training strategy that uses examples of the new task to maximize these gains.

Their work could improve a model’s flexibility, enabling an off-the-shelf LLM to adapt to complex tasks that require planning or abstraction. This could lead to LLMs that are more accurate in applications requiring logical deduction, from medical diagnostics to supply chain management.

    “Genuine learning — what we did here with test-time training — is something these models can’t do on their own after they are shipped. They can’t gain new skills or get better at a task. But we have shown that if you push the model a little bit to do actual learning, you see that huge improvements in performance can happen,” says Ekin Akyürek PhD ’25, lead author of the study.

    Akyürek is joined on the paper by graduate students Mehul Damani, Linlu Qiu, Han Guo, and Jyothish Pari; undergraduate Adam Zweiger; and senior authors Yoon Kim, an assistant professor of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Jacob Andreas, an associate professor in EECS and a member of CSAIL. The research will be presented at the International Conference on Machine Learning.

    Tackling hard domains

LLM users often try to improve the performance of their model on a new task using a technique called in-context learning. They feed the model a few examples of the new task as text prompts, which guide the model’s outputs.
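To make the contrast concrete, here is a minimal sketch of in-context learning: the model’s weights stay fixed, and the task examples are simply prepended to the prompt as text. The `generate` call is a hypothetical stand-in for any LLM completion API.

```python
def build_few_shot_prompt(examples, query):
    """Concatenate worked examples before the new problem; the model's
    weights never change -- the examples only shape the prompt."""
    parts = [f"Problem: {p}\nSolution: {s}" for p, s in examples]
    parts.append(f"Problem: {query}\nSolution:")
    return "\n\n".join(parts)

demos = [
    ("2, 4, 6, ?", "8"),    # toy pattern-completion demonstrations
    ("3, 6, 9, ?", "12"),
]
prompt = build_few_shot_prompt(demos, "5, 10, 15, ?")
# answer = generate(prompt)  # hypothetical completion call
```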

    But in-context learning doesn’t always work for problems that require logic and reasoning.

    The MIT researchers investigated how test-time training can be used in conjunction with in-context learning to boost performance on these challenging tasks. Test-time training involves updating some model parameters — the internal variables it uses to make predictions — using a small amount of new data specific to the task at hand.
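A minimal sketch of that idea, assuming a PyTorch model whose forward pass returns a loss (as in Hugging Face-style interfaces); the paper’s exact recipe may differ:

```python
import torch

def test_time_train(model, task_examples, lr=1e-4, steps=10):
    """Briefly fine-tune on the task's demonstration pairs at deployment.
    Assumes model(inputs, labels=...) returns an object with a .loss field."""
    optimizer = torch.optim.AdamW(
        [p for p in model.parameters() if p.requires_grad], lr=lr
    )
    model.train()
    for _ in range(steps):
        for inputs, targets in task_examples:
            loss = model(inputs, labels=targets).loss  # standard LM loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    model.eval()
    return model
```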

    The researchers explored how test-time training interacts with in-context learning. They studied design choices that maximize the performance improvements one can coax out of a general-purpose LLM.

    “We find that test-time training is a much stronger form of learning. While simply providing examples can modestly boost accuracy, actually updating the model with those examples can lead to significantly better performance, particularly in challenging domains,” Damani says.

    In-context learning requires a small set of task examples, including problems and their solutions. The researchers use these examples to create a task-specific dataset needed for test-time training.

    To expand the size of this dataset, they create new inputs by slightly changing the problems and solutions in the examples, such as by horizontally flipping some input data. They find that training the model on the outputs of this new dataset leads to the best performance.
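The augmentation step can be illustrated with a toy grid task, where flipping the input and the output together preserves the underlying mapping; the transformations used in the paper may be more varied.

```python
def hflip(grid):
    """Mirror a 2D grid (a list of rows) left to right."""
    return [list(reversed(row)) for row in grid]

def augment(examples):
    """Expand a few demonstrations into a larger test-time training set
    by applying label-consistent transformations."""
    augmented = list(examples)
    for inp, out in examples:
        augmented.append((hflip(inp), hflip(out)))  # flip both sides together
    return augmented

demos = [([[1, 0], [0, 1]], [[0, 1], [1, 0]])]
dataset = augment(demos)  # two training pairs from one demonstration
```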

In addition, the researchers only update a small number of model parameters using a technique called low-rank adaptation, which improves the efficiency of the test-time training process.
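In spirit, low-rank adaptation wraps each frozen weight matrix with a trainable product of two thin matrices, so only r × (d_in + d_out) numbers are updated instead of d_in × d_out. A minimal sketch follows; the rank r and scaling alpha are illustrative defaults, not the paper’s values.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pretrained layer
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # starts at zero
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```

Because B starts at zero, the wrapped layer initially behaves exactly like the original model, and only the small A and B matrices accumulate task-specific changes.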

    “This is important because our method needs to be efficient if it is going to be deployed in the real world. We find that you can get huge improvements in accuracy with a very small amount of parameter training,” Akyürek says.

    Developing new skills

    Streamlining the process is key, since test-time training is employed on a per-instance basis, meaning a user would need to do this for each individual task. The updates to the model are only temporary, and the model reverts to its original form after making a prediction.
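Put together, the per-instance workflow looks roughly like the sketch below, reusing the hypothetical helpers above. With low-rank adapters, reverting is even cheaper, since the base weights were never touched and the adapters can simply be discarded.

```python
import copy

def answer_with_ttt(model, task_examples, query):
    """Adapt, answer, then restore the shipped model."""
    snapshot = copy.deepcopy(model.state_dict())   # save original weights
    test_time_train(model, augment(task_examples)) # temporary adaptation
    answer = model.generate(query)                 # assumes a generate() method
    model.load_state_dict(snapshot)                # revert: updates are per-instance
    return answer
```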

    A model that usually takes less than a minute to answer a query might take five or 10 minutes to provide an answer with test-time training, Akyürek adds.

“We wouldn’t want to do this for all user queries, but it is useful if you have a very hard task that you want the model to solve well. There also might be tasks that are too challenging for an LLM to solve without this method,” he says.

    The researchers tested their approach on two benchmark datasets of extremely complex problems, such as IQ puzzles. It boosted accuracy as much as sixfold over techniques that use only in-context learning.

Tasks that involved structured patterns or completely unfamiliar types of data showed the largest performance improvements.

    “For simpler tasks, in-context learning might be OK. But updating the parameters themselves might develop a new skill in the model,” Damani says.

    In the future, the researchers want to use these insights toward the development of models that continually learn.

    The long-term goal is an LLM that, given a query, can automatically determine if it needs to use test-time training to update parameters or if it can solve the task using in-context learning, and then implement the best test-time training strategy without the need for human intervention.

    This work is supported, in part, by the MIT-IBM Watson AI Lab and the National Science Foundation.

