
    Researchers glimpse the inner workings of protein language models

    August 19, 2025

    Within the past few years, models that can predict the structure or function of proteins have been widely used for a variety of biological applications, such as identifying drug targets and designing new therapeutic antibodies.

    These models, which are based on large language models (LLMs), can make very accurate predictions of a protein’s suitability for a given application. However, there’s no way to determine how these models make their predictions or which protein features play the most important role in those decisions.

    In a new study, MIT researchers used a novel technique to open up that “black box” and determine what features a protein language model takes into account when making predictions. Understanding what is happening inside that black box could help researchers choose better models for a particular task and streamline the process of identifying new drugs or vaccine targets.

    “Our work has broad implications for enhanced explainability in downstream tasks that rely on these representations,” says Bonnie Berger, the Simons Professor of Mathematics, head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory, and the senior author of the study. “Additionally, identifying features that protein language models track has the potential to reveal novel biological insights from these representations.”

    Onkar Gujral, an MIT graduate student, is the lead author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Mihir Bafna, an MIT graduate student, and Eric Alm, an MIT professor of biological engineering, are also authors of the paper.

    Opening the black box

    In 2018, Berger and former MIT graduate student Tristan Bepler PhD ’20 introduced the first protein language model. Their model, like subsequent protein models that accelerated the development of AlphaFold, such as ESM2 and OmegaFold, was based on LLMs. These models, which include ChatGPT, can analyze huge amounts of text and figure out which words are most likely to appear together.

    Protein language models use a similar approach, but instead of analyzing words, they analyze amino acid sequences. Researchers have used these models to predict the structure and function of proteins, and for applications such as identifying proteins that might bind to particular drugs.
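    Protein language models expose what they learn as numeric embeddings of a sequence. As a rough illustration of how such an embedding is obtained in practice, the sketch below pulls a per-protein vector from an ESM2-style model through the Hugging Face transformers library; the checkpoint name, the toy sequence, and the mean-pooling step are illustrative assumptions, not details from the study.

```python
# Minimal sketch: embedding a protein sequence with an ESM2-style model via
# Hugging Face `transformers`. Checkpoint and sequence are illustrative only.
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "facebook/esm2_t6_8M_UR50D"  # small public ESM2 checkpoint (assumed)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy amino acid sequence

with torch.no_grad():
    inputs = tokenizer(sequence, return_tensors="pt")
    outputs = model(**inputs)

# Mean-pool the per-residue hidden states into a single fixed-length vector,
# the kind of representation downstream structure/function predictors consume.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # (1, hidden_dim)
```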

    In a 2021 study, Berger and colleagues used a protein language model to predict which sections of viral surface proteins are less likely to mutate in a way that enables viral escape. This allowed them to identify possible targets for vaccines against influenza, HIV, and SARS-CoV-2.

    However, in all of these studies, it has been impossible to know how the models were making their predictions.

    “We would get out some prediction at the end, but we had absolutely no idea what was happening in the individual components of this black box,” Berger says.

    In the new study, the researchers wanted to dig into how protein language models make their predictions. Just like LLMs, protein language models encode information as representations that consist of a pattern of activation of different “nodes” within a neural network. These nodes are analogous to the networks of neurons that store memories and other information within the brain.

    The inner workings of LLMs are not easy to interpret, but within the past couple of years, researchers have begun using a type of algorithm known as a sparse autoencoder to help shed some light on how those models make their predictions. The new study from Berger’s lab is the first to use this algorithm on protein language models.

    Sparse autoencoders work by adjusting how a protein is represented within a neural network. Typically, a given protein will be represented by a pattern of activation of a constrained number of neurons, for example, 480. A sparse autoencoder will expand that representation into a much larger number of nodes, say 20,000.

    When information about a protein is encoded by only 480 neurons, each node lights up for multiple features, making it very difficult to know what features each node is encoding. However, when the neural network is expanded to 20,000 nodes, this extra space along with a sparsity constraint gives the information room to “spread out.” Now, a feature of the protein that was previously encoded by multiple nodes can occupy a single node.
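    To make that expansion concrete, below is a minimal sparse autoencoder sketch in PyTorch using the sizes cited above (480 inputs, 20,000 latent nodes). The specific architecture, the L1 penalty, and the loss weighting are assumptions for illustration; the paper's exact training setup may differ.

```python
# Minimal sparse autoencoder sketch: expand a 480-dim protein representation
# into a 20,000-dim latent layer and encourage most latents to stay at zero.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, input_dim: int = 480, latent_dim: int = 20_000):
        super().__init__()
        self.encoder = nn.Linear(input_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, input_dim)

    def forward(self, x):
        # ReLU keeps latent activations non-negative; the L1 term in the loss
        # pushes most of them to zero, so each protein activates few nodes.
        z = torch.relu(self.encoder(x))
        return self.decoder(z), z

sae = SparseAutoencoder()
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_weight = 1e-3  # assumed sparsity coefficient

embeddings = torch.randn(64, 480)  # stand-in batch of protein embeddings
optimizer.zero_grad()
reconstruction, latents = sae(embeddings)
loss = nn.functional.mse_loss(reconstruction, embeddings) + l1_weight * latents.abs().mean()
loss.backward()
optimizer.step()
```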

    “In a sparse representation, the neurons lighting up are doing so in a more meaningful manner,” Gujral says. “Before the sparse representations are created, the networks pack information so tightly together that it’s hard to interpret the neurons.”

    Interpretable models

    Once the researchers obtained sparse representations of many proteins, they used an AI assistant called Claude (related to the popular Anthropic chatbot of the same name) to analyze the representations. In this case, they asked Claude to compare the sparse representations with the known features of each protein, such as molecular function, protein family, or location within a cell.

    By analyzing thousands of representations, Claude can determine which nodes correspond to specific protein features, then describe them in plain English. For example, the algorithm might say, “This neuron appears to be detecting proteins involved in transmembrane transport of ions or amino acids, particularly those located in the plasma membrane.”
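    The article does not detail exactly what is handed to Claude, but one plausible preprocessing step is to summarize, for each latent node, the annotations of the proteins that activate it most strongly. The sketch below illustrates that idea on toy data; the array shapes, labels, and cutoff are hypothetical, not the paper's pipeline.

```python
# Hedged sketch: summarize which known annotations dominate among the proteins
# that most strongly activate a given latent node. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
latents = rng.random((1000, 20_000))                             # proteins x latent nodes
annotations = [f"protein_family_{i % 40}" for i in range(1000)]  # toy labels

def summarize_node(node_idx: int, top_k: int = 20) -> str:
    top_proteins = np.argsort(latents[:, node_idx])[::-1][:top_k]
    labels = [annotations[p] for p in top_proteins]
    unique, counts = np.unique(labels, return_counts=True)
    ranked = sorted(zip(unique.tolist(), counts.tolist()), key=lambda t: -t[1])
    return f"Node {node_idx}: top annotations among strongest activators -> {ranked[:3]}"

# A summary like this, computed across thousands of nodes, could then be passed
# to an LLM to describe each node in plain English.
print(summarize_node(17))
```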

    This process makes the nodes far more “interpretable,” meaning the researchers can tell what each node is encoding. They found that the features most likely to be encoded by these nodes were protein family and certain functions, including several different metabolic and biosynthetic processes.

    “When you train a sparse autoencoder, you aren’t training it to be interpretable, but it turns out that by incentivizing the representation to be really sparse, that ends up resulting in interpretability,” Gujral says.

    Understanding what features a particular protein model is encoding could help researchers choose the right model for a particular task, or tweak the type of input they give the model, to generate the best results. Additionally, analyzing the features that a model encodes could one day help biologists to learn more about the proteins that they are studying.

    “At some point when the models get a lot more powerful, you could learn more biology than you already know, from opening up the models,” Gujral says.

    The research was funded by the National Institutes of Health. 
