
    The unique, mathematical shortcuts language models use to predict dynamic scenarios

    July 21, 2025

    Let’s say you’re reading a story, or playing a game of chess. You may not have noticed, but every step of the way, your mind kept track of how the situation (or “state of the world”) was changing. You can imagine this as a running list of events that we use to update our prediction of what will happen next.

    Language models like ChatGPT also track changes inside their own “mind” when finishing off a block of code or anticipating what you’ll write next. They typically make educated guesses using transformers — internal architectures that help the models understand sequential data — but the systems are sometimes incorrect because of flawed thinking patterns. Identifying and tweaking these underlying mechanisms helps language models become more reliable prognosticators, especially with more dynamic tasks like forecasting weather and financial markets.

    But do these AI systems process developing situations like we do? A new paper from researchers in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Department of Electrical Engineering and Computer Science shows that the models instead use clever mathematical shortcuts between each progressive step in a sequence, eventually making reasonable predictions. The team made this observation by going under the hood of language models, evaluating how closely they could keep track of objects that change position rapidly. Their findings show that engineers can control when language models use particular workarounds as a way to improve the systems’ predictive capabilities.

    Shell games

    The researchers analyzed the inner workings of these models using a clever experiment reminiscent of the classic shell game. Ever had to guess the final location of an object after it’s placed under a cup and shuffled among identical containers? The team used a similar test, where the model guessed the final arrangement of particular digits (also called a permutation). The models were given a starting sequence, such as “42135,” and instructions about when and where to move each digit, like moving the “4” to the third position and onward, without knowing the final result.
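
    To make the task concrete, here is a minimal sketch of that ground truth in Python. The instruction format (move the digit at one position to another) and the helper name are illustrative assumptions rather than the paper’s exact specification; the point is simply that the true answer comes from applying each move one step at a time.

        # A minimal sketch of the shuffling task. The (src, dst) instruction
        # format is an assumption for illustration, not the paper's setup.
        def apply_moves(state: str, moves: list[tuple[int, int]]) -> str:
            """Apply moves one at a time -- the step-by-step ground truth
            the models are trained to reproduce."""
            digits = list(state)
            for src, dst in moves:
                digits.insert(dst, digits.pop(src))
            return "".join(digits)

        print(apply_moves("42135", [(0, 2), (4, 1)]))  # -> "25143"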

    In these experiments, transformer-based models gradually learned to predict the correct final arrangements. Instead of shuffling the digits based on the instructions they were given, though, the systems aggregated information between successive states (or individual steps within the sequence) and calculated the final permutation.

    One go-to pattern the team observed, called the “Associative Algorithm,” essentially organizes nearby steps into groups and then calculates a final guess. You can think of this process as being structured like a tree, where the initial numerical arrangement is the “root.” As you move up the tree, adjacent steps are grouped into different branches and multiplied together. At the top of the tree is the final combination of numbers, computed by multiplying each resulting sequence on the branches together.
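
    Because permutation composition is associative, the steps can be grouped in any order without changing the final answer, which is what makes this tree structure possible. The sketch below is a hedged illustration of that grouped computation (the tuple representation and helper names are mine, not the paper’s notation): each move is first written as a permutation, and adjacent steps are folded together pairwise, so n moves collapse in roughly log2(n) rounds instead of n sequential ones.

        from functools import reduce

        def compose(p, q):
            """Compose two permutations: apply p first, then q.
            A permutation is a tuple where output[i] = input[perm[i]]."""
            return tuple(p[q[i]] for i in range(len(p)))

        def tree_compose(perms):
            """Combine adjacent steps pairwise, like branches of a tree."""
            while len(perms) > 1:
                paired = [compose(perms[i], perms[i + 1])
                          for i in range(0, len(perms) - 1, 2)]
                if len(perms) % 2:      # an odd step out joins the next round
                    paired.append(perms[-1])
                perms = paired
            return perms[0]

        # Sanity check: the tree grouping matches strict left-to-right order.
        steps = [(1, 0, 2, 3, 4), (0, 2, 1, 3, 4), (0, 1, 2, 4, 3)]
        assert tree_compose(steps) == reduce(compose, steps)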

    The other way language models guessed the final permutation was through a crafty mechanism called the “Parity-Associative Algorithm,” which essentially whittles down options before grouping them. It determines whether the final arrangement is the result of an even or odd number of rearrangements of individual digits. Then, the mechanism groups adjacent sequences from different steps before multiplying them, just like the Associative Algorithm.
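
    The parity of a permutation, whether it is built from an even or odd number of pairwise swaps, can be computed cheaply from its cycle structure. Here is a minimal sketch of such a check, using the standard cycle-counting trick rather than anything taken from the paper:

        def parity(perm):
            """Return 0 if perm is even, 1 if odd: a cycle of length k
            takes k - 1 swaps to build, so count swaps cycle by cycle."""
            seen = [False] * len(perm)
            swaps = 0
            for start in range(len(perm)):
                i, length = start, 0
                while not seen[i]:
                    seen[i] = True
                    i = perm[i]
                    length += 1
                swaps += max(length - 1, 0)
            return swaps % 2

        print(parity((1, 0, 2, 3, 4)))  # one swap -> 1 (odd)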

    “These behaviors tell us that transformers perform simulation by associative scan. Instead of following state changes step-by-step, the models organize them into hierarchies,” says MIT PhD student and CSAIL affiliate Belinda Li SM ’23, a lead author on the paper. “How do we encourage transformers to learn better state tracking? Instead of imposing that these systems form inferences about data in a human-like, sequential way, perhaps we should cater to the approaches they naturally use when tracking state changes.”

    “One avenue of research has been to expand test-time computing along the depth dimension, rather than the token dimension — by increasing the number of transformer layers rather than the number of chain-of-thought tokens during test-time reasoning,” adds Li. “Our work suggests that this approach would allow transformers to build deeper reasoning trees.”

    Through the looking glass

    Li and her co-authors observed how the Associative and Parity-Associative algorithms worked using tools that allowed them to peer inside the “mind” of language models. 

    They first used a method called “probing,” which shows what information flows through an AI system. Imagine you could look into a model’s brain to see its thoughts at a specific moment — in a similar way, the technique maps out the system’s mid-experiment predictions about the final arrangement of digits.
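
    In practice, probing typically means freezing the model and fitting a small classifier on its hidden activations to test whether a piece of information, here, where a given digit currently sits, can be read out at some layer. The sketch below is a generic PyTorch illustration under that assumption; the shapes, layer choice, and names are placeholders, not the paper’s actual probe.

        import torch
        import torch.nn as nn

        hidden_size, num_positions = 512, 5   # illustrative sizes
        probe = nn.Linear(hidden_size, num_positions)
        optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        def probe_step(hidden_states, true_positions):
            """hidden_states: (batch, hidden_size) activations captured
            mid-sequence from the frozen model; true_positions: (batch,)
            where a chosen digit actually sits at that step."""
            logits = probe(hidden_states)     # only the probe is trained
            loss = loss_fn(logits, true_positions)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()

        # e.g. probe_step(torch.randn(32, hidden_size),
        #                 torch.randint(0, num_positions, (32,)))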

    A tool called “activation patching” was then used to show where the language model processes changes to a situation. It involves meddling with some of the system’s “ideas,” injecting incorrect information into certain parts of the network while keeping other parts constant, and seeing how the system adjusts its predictions.
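
    A common way to implement this is with forward hooks: record a layer’s output during a run on a corrupted input, then overwrite the same layer’s output during a clean run and measure how far the prediction moves. The following is a hedged PyTorch sketch in which model and layer stand in for any transformer and one of its blocks:

        import torch

        cache = {}

        def save_hook(module, inputs, output):
            cache["corrupted"] = output.detach()   # record the "wrong ideas"

        def patch_hook(module, inputs, output):
            return cache["corrupted"]              # inject them wholesale

        @torch.no_grad()
        def patched_logits(model, layer, clean_ids, corrupted_ids):
            handle = layer.register_forward_hook(save_hook)
            model(corrupted_ids)                   # pass 1: cache activations
            handle.remove()

            handle = layer.register_forward_hook(patch_hook)
            logits = model(clean_ids)              # pass 2: patched clean run
            handle.remove()
            return logits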

    These tools revealed when the algorithms would make errors and when the systems “figured out” how to correctly guess the final permutations. They observed that the Associative Algorithm learned faster than the Parity-Associative Algorithm, while also performing better on longer sequences. Li attributes the latter’s difficulties with more elaborate instructions to an over-reliance on heuristics (rules that let us quickly compute a reasonable solution) to predict permutations.

    “We’ve found that when language models use a heuristic early on in training, they’ll start to build these tricks into their mechanisms,” says Li. “However, those models tend to generalize worse than ones that don’t rely on heuristics. We found that certain pre-training objectives can deter or encourage these patterns, so in the future, we may look to design techniques that discourage models from picking up bad habits.”

    The researchers note that their experiments were done on small-scale language models fine-tuned on synthetic data, but found that model size had little effect on the results. This suggests that fine-tuning larger language models, like GPT-4.1, would likely yield similar results. The team plans to examine their hypotheses more closely by testing language models of different sizes that haven’t been fine-tuned, evaluating their performance on dynamic real-world tasks such as tracking code and following how stories evolve.

    Harvard University postdoc Keyon Vafa, who was not involved in the paper, says that the researchers’ findings could create opportunities to advance language models. “Many uses of large language models rely on tracking state: anything from providing recipes to writing code to keeping track of details in a conversation,” he says. “This paper makes significant progress in understanding how language models perform these tasks. This progress provides us with interesting insights into what language models are doing and offers promising new strategies for improving them.”

    Li wrote the paper with MIT undergraduate student Zifan “Carl” Guo and senior author Jacob Andreas, who is an MIT associate professor of electrical engineering and computer science and CSAIL principal investigator. Their research was supported, in part, by Open Philanthropy, the MIT Quest for Intelligence, the National Science Foundation, the Clare Boothe Luce Program for Women in STEM, and a Sloan Research Fellowship.

    The researchers presented their findings at the International Conference on Machine Learning (ICML) this week.
