
    Making AI-generated code more accurate in any language

    April 18, 2025

    Programmers can now use large language models (LLMs) to generate computer code more quickly. However, this only makes programmers’ lives easier if that code follows the rules of the programming language and doesn’t cause a computer to crash.

    Some methods exist for ensuring LLMs conform to the rules of whatever language they are generating text in, but many of these methods either distort the model’s intended meaning or are too time-consuming to be feasible for complex tasks.

    A new approach developed by researchers at MIT and elsewhere automatically guides an LLM to generate text that adheres to the rules of the relevant language, such as a particular programming language, and is also error-free. Their method allows an LLM to allocate efforts toward outputs that are most likely to be valid and accurate, while discarding unpromising outputs early in the process. This probabilistic approach boosts computational efficiency.

    Due to these efficiency gains, the researchers’ architecture enabled small LLMs to outperform much larger models in generating accurate, properly structured outputs for several real-world use cases, including molecular biology and robotics.

    In the long run, this new architecture could help nonexperts control AI-generated content. For instance, it could allow businesspeople to write complex queries in SQL, a language for database manipulation, using only natural language prompts.

    “This work has implications beyond research. It could improve programming assistants, AI-powered data analysis, and scientific discovery tools by ensuring that AI-generated outputs remain both useful and correct,” says João Loula, an MIT graduate student and co-lead author of a paper on this framework.

    Loula is joined on the paper by co-lead authors Benjamin LeBrun, a research assistant at the Mila-Quebec Artificial Intelligence Institute, and Li Du, a graduate student at Johns Hopkins University; co-senior authors Vikash Mansinghka ’05, MEng ’09, PhD ’09, a principal research scientist and leader of the Probabilistic Computing Project in the MIT Department of Brain and Cognitive Sciences; Alexander K. Lew SM ’20, an assistant professor at Yale University; Tim Vieira, a postdoc at ETH Zurich; and Timothy J. O’Donnell, an associate professor at McGill University and a Canada CIFAR AI Chair at Mila, who led the international team; as well as several others. The research will be presented at the International Conference on Learning Representations.

    Enforcing structure and meaning

    One common approach for controlling the structured text generated by LLMs involves checking an entire output, like a block of computer code, to make sure it is valid and will run error-free. If not, the user must start again, wasting computational resources.
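
    As a rough sketch of that generate-and-restart baseline (not the researchers' method), the Python loop below uses a hypothetical generate_code() stand-in for an LLM call and simply discards any whole candidate that fails a syntax check:

        import ast

        def generate_code(prompt: str) -> str:
            """Hypothetical stand-in for a single LLM completion."""
            raise NotImplementedError

        def generate_valid_program(prompt: str, max_attempts: int = 10) -> str | None:
            # Naive rejection loop: produce a full program, check it, start over on failure.
            for _ in range(max_attempts):
                candidate = generate_code(prompt)
                try:
                    ast.parse(candidate)   # structural check: is this valid Python at all?
                except SyntaxError:
                    continue               # throw away the whole output and retry
                return candidate
            return None                    # gave up after max_attempts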

    On the other hand, a programmer could stop to check the output along the way. While this can ensure the code adheres to the programming language and is structurally valid, incrementally correcting the code may cause it to drift from the meaning the user intended, hurting its accuracy in the long run.

    “It is much easier to enforce structure than meaning. We can quickly check whether something is in the right programming language, but to check its meaning you have to execute the code. Our work is also about dealing with these different types of information,” Loula says.
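
    To make that distinction concrete, here is a minimal sketch: the structural check only needs a parser, while the semantic check has to execute the code against the behavior the user intended. The solve() entry point and the test case are hypothetical placeholders, not part of the researchers' system.

        import ast

        def is_structurally_valid(source: str) -> bool:
            # Cheap check: does the text even parse as Python?
            try:
                ast.parse(source)
                return True
            except SyntaxError:
                return False

        def is_semantically_correct(source: str, test_input, expected) -> bool:
            # Expensive check: run the code and compare behavior to intent
            # (only safe to do in a sandbox for untrusted generations).
            namespace: dict = {}
            try:
                exec(source, namespace)
                return namespace["solve"](test_input) == expected   # "solve" is a hypothetical entry point
            except Exception:
                return False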

    The researchers’ approach involves engineering knowledge into the LLM to steer it toward the most promising outputs. These outputs are more likely to follow the structural constraints defined by a user, and to have the meaning the user intends.

    “We are not trying to train an LLM to do this. Instead, we are engineering some knowledge that an expert would have and combining it with the LLM’s knowledge, which offers a very different approach to scaling than you see in deep learning,” Mansinghka adds.

    They accomplish this using a technique called sequential Monte Carlo, which enables multiple parallel generations from an LLM to compete with each other. The model dynamically allocates resources to different threads of parallel computation based on how promising their output appears.

    Each output is given a weight that represents how likely it is to be structurally valid and semantically accurate. At each step in the computation, the model focuses on those with higher weights and throws out the rest.
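
    A heavily simplified sketch of that sequential Monte Carlo loop is shown below. It assumes two hypothetical helpers, sample_next_token() (the LLM proposing the next token for a prefix) and prefix_score() (how promising a partial output looks under the user's structural and semantic checks); the real system's weighting is more careful than this.

        import random

        # sample_next_token() and prefix_score() are hypothetical stand-ins for the
        # LLM and the user's checks; they are not defined here.

        def smc_generate(prompt: str, num_particles: int = 8, max_steps: int = 64) -> str:
            # Each particle is a partial output paired with an importance weight.
            particles = [{"text": prompt, "weight": 1.0} for _ in range(num_particles)]

            for _ in range(max_steps):
                for p in particles:
                    p["text"] += sample_next_token(p["text"])   # extend this thread by one token
                    # Reweight by how promising the extended prefix looks under the checks.
                    p["weight"] *= prefix_score(p["text"])

                total = sum(p["weight"] for p in particles)
                if total == 0:
                    break   # every thread failed the checks
                # Resample in proportion to weight: strong prefixes get duplicated,
                # weak ones are dropped, so compute concentrates on promising threads.
                weights = [p["weight"] / total for p in particles]
                particles = [
                    {"text": random.choices(particles, weights=weights)[0]["text"], "weight": 1.0}
                    for _ in range(num_particles)
                ]

            # Return the completion the checks rate highest.
            return max(particles, key=lambda p: prefix_score(p["text"]))["text"]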

    In a sense, it is like the LLM has an expert looking over its shoulder to ensure it makes the right choices at each step, while keeping it focused on the overall goal. The user specifies their desired structure and meaning, as well as how to check the output, and the researchers’ architecture then guides the LLM to do the rest.

    “We’ve worked out the hard math so that, for any kinds of constraints you’d like to incorporate, you are going to get the proper weights. In the end, you get the right answer,” Loula says.
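
    For readers who know sequential Monte Carlo, “proper weights” has a standard meaning. Stated generically (this is the textbook sequential importance sampling identity, not necessarily the paper’s exact formulation): if \gamma_t is the unnormalized target over length-t prefixes, for example the LLM’s probability multiplied by a constraint potential, and q is the proposal used to extend a prefix, then each particle’s weight is updated as

        w_t = w_{t-1} \cdot \frac{\gamma_t(x_{1:t})}{\gamma_{t-1}(x_{1:t-1}) \, q(x_t \mid x_{1:t-1})}

    which keeps the particle population properly weighted for the constrained distribution, so resampling can prune low-weight prefixes without distorting the final answer.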

    Boosting small models

    To test their approach, they applied the framework to LLMs tasked with generating four types of outputs: Python code, SQL database queries, molecular structures, and plans for a robot to follow.

    When compared to existing approaches, the researchers’ method performed more accurately while requiring less computation.

    In Python code generation, for instance, the researchers’ architecture enabled a small, open-source model to outperform a specialized, commercial closed-source model that is more than double its size.

    “We are very excited that we can allow these small models to punch way above their weight,” Loula says.

    Moving forward, the researchers want to use their technique to control larger chunks of generated text, rather than working one small piece at a time. They also want to combine their method with learning, so that as they control the outputs a model generates, it learns to be more accurate.

    In the long run, this project could have broader applications for non-technical users. For instance, it could be combined with systems for automated data modeling and for querying generative models of databases.

    The approach could also enable machine-assisted data analysis systems, where the user can converse with software that accurately models the meaning of the data and the questions asked by the user, adds Mansinghka.

    “One of the fundamental questions of linguistics is how the meaning of words, phrases, and sentences can be grounded in models of the world, accounting for uncertainty and vagueness in meaning and reference. LLMs, predicting likely token sequences, don’t address this problem. Our paper shows that, in narrow symbolic domains, it is technically possible to map from words to distributions on grounded meanings. It’s a small step towards deeper questions in cognitive science, linguistics, and artificial intelligence needed to understand how machines can communicate about the world like we do,” says O’Donnell.

    This research is funded, in part, by the Canada CIFAR AI Chairs Program, and by the Siegel Family Foundation via a gift to the MIT Siegel Family Quest for Intelligence.
