
    This AI Paper from Anthropic Introduces Attribution Graphs: A New Interpretability Method to Trace Internal Reasoning in Claude 3.5 Haiku

    April 6, 2025

    While the outputs of large language models (LLMs) appear coherent and useful, the underlying mechanisms guiding these behaviors remain largely unknown. As these models are increasingly deployed in sensitive and high-stakes environments, it has become crucial to understand what they do and how they do it.

The main challenge lies in uncovering the internal steps that lead a model to a specific response. The computations happen across hundreds of layers and billions of parameters, making it difficult to isolate the processes involved. Without a clear understanding of these steps, it becomes harder to trust or debug model behavior, especially in tasks requiring reasoning, planning, or factual reliability. Researchers are thus focused on reverse-engineering these models to identify how information flows and how decisions are made internally.

    Existing interpretability methods like attention maps and feature attribution offer partial views into model behavior. While these tools help highlight which input tokens contribute to outputs, they often fail to trace the full chain of reasoning or identify intermediate steps. Moreover, these tools usually focus on surface-level behaviors and do not provide consistent insight into deeper computational structures. This has created the need for more structured, fine-grained methods to trace logic through internal representations over multiple steps.
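To make that limitation concrete, here is a minimal sketch of gradient-times-input feature attribution, one of the surface-level tools described above. The model and prompt are illustrative stand-ins, not from the paper; any Hugging Face causal LM would behave similarly. Note what it produces: one relevance score per input token, and nothing about the intermediate computations between input and output.

```python
# Minimal gradient-x-input attribution sketch (illustrative; not from the paper).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of the state containing Dallas is"
inputs = tok(prompt, return_tensors="pt")

# Embed tokens manually so gradients can flow back to the inputs.
embeds = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeds.requires_grad_(True)

logits = model(inputs_embeds=embeds).logits
target = logits[0, -1].max()   # score of the most likely next token
target.backward()

# Gradient x input, summed over the embedding dimension, gives one
# relevance score per input token -- but reveals no intermediate steps.
scores = (embeds.grad * embeds).sum(-1).squeeze(0)
for t, s in zip(tok.convert_ids_to_tokens(inputs["input_ids"][0]), scores):
    print(f"{t:>12}  {s.item():+.4f}")
```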

To address this, researchers from Anthropic introduced a technique called attribution graphs. These graphs trace the internal flow of information between features within a model during a single forward pass, making it possible to identify intermediate concepts or reasoning steps that are not visible in the model’s outputs alone. The attribution graphs generate hypotheses about the computational pathways a model follows, which are then tested with perturbation experiments. This approach is a significant step toward revealing the “wiring diagram” of large models, much as neuroscientists map brain activity.
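As a rough illustration of the idea (not Anthropic’s implementation), an attribution graph can be thought of as a directed graph whose nodes are interpretable features observed on one forward pass and whose edges are weighted by estimated direct effects between them. The feature names and effect values in this sketch are invented for the example.

```python
# Toy attribution-graph sketch: all features and effect values are invented.
import networkx as nx

# Hypothetical (feature, layer, activation) records from a single forward pass.
features = {
    "Dallas (token)":  {"layer": 0,  "act": 1.00},
    "Texas (concept)": {"layer": 12, "act": 0.83},
    "capital-of":      {"layer": 14, "act": 0.71},
    "Austin (output)": {"layer": 28, "act": 0.92},
}

# Hypothetical direct-effect estimates between features.
direct_effects = [
    ("Dallas (token)",  "Texas (concept)", 0.64),
    ("Texas (concept)", "Austin (output)", 0.58),
    ("capital-of",      "Austin (output)", 0.41),
    ("Dallas (token)",  "Austin (output)", 0.05),  # weak direct path
]

THRESHOLD = 0.1  # prune negligible edges to keep the graph readable

g = nx.DiGraph()
for name, meta in features.items():
    g.add_node(name, **meta)
for src, dst, w in direct_effects:
    if abs(w) >= THRESHOLD:
        g.add_edge(src, dst, weight=w)

# Paths from an input feature to the output feature are candidate
# "reasoning steps" -- hypotheses to test with perturbation experiments.
for path in nx.all_simple_paths(g, "Dallas (token)", "Austin (output)"):
    print(" -> ".join(path))
```

The point of the graph structure is that each input-to-output path is a testable hypothesis, not a conclusion: only the perturbation experiments confirm whether a pathway is causally involved.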

The researchers applied attribution graphs to Claude 3.5 Haiku, a lightweight language model released by Anthropic in October 2024. The method begins by identifying interpretable features activated by a specific input, then traces those features to determine their influence on the final output. For example, when prompted with a riddle or poem, the model selects a set of rhyming words before writing its lines, a form of planning. In another example, the model identifies “Texas” as an intermediate step in answering the question “What’s the capital of the state containing Dallas?”, which it correctly resolves as “Austin.” The graphs reveal not only what the model outputs but also how it internally represents and transitions between ideas.
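A hedged sketch of such a perturbation test follows: project a hypothesized intermediate feature direction out of the residual stream at one layer and check whether the logit of the expected answer drops. The layer index and feature direction here are stand-ins; in the paper, features come from a learned dictionary, which is not reproduced here.

```python
# Perturbation-test sketch (assumptions: layer choice and feature direction
# are hypothetical stand-ins, using GPT-2 rather than Claude 3.5 Haiku).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.manual_seed(0)
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

LAYER = 6                                      # hypothetical intermediate layer
direction = torch.randn(model.config.n_embd)   # stand-in for a "Texas" feature
direction = direction / direction.norm()

def ablate(module, args, output):
    # Project the feature direction out of the residual stream.
    hidden = output[0]
    coeff = hidden @ direction
    hidden = hidden - coeff.unsqueeze(-1) * direction
    return (hidden,) + output[1:]

prompt = "The capital of the state containing Dallas is"
ids = tok(prompt, return_tensors="pt")

def answer_logit(word):
    tid = tok.encode(" " + word)[0]
    with torch.no_grad():
        return model(**ids).logits[0, -1, tid].item()

baseline = answer_logit("Austin")
handle = model.transformer.h[LAYER].register_forward_hook(ablate)
ablated = answer_logit("Austin")
handle.remove()

# A large drop would support the hypothesis that this feature sits on the
# causal pathway to the answer; a random direction, as here, should not.
print(f"logit('Austin') baseline={baseline:.3f}  ablated={ablated:.3f}")
```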

Attribution graphs uncovered several advanced behaviors within Claude 3.5 Haiku. In poetry tasks, the model pre-plans rhyming words before composing each line, showing anticipatory reasoning. In multi-hop questions, it forms internal intermediate representations, such as associating Dallas with Texas before determining Austin as the answer. For multilingual inputs, it leverages both language-specific and abstract circuits, with the latter more prominent in Claude 3.5 Haiku than in earlier models. In medical reasoning tasks, the model generates diagnoses internally and uses them to inform follow-up questions. These findings suggest that the model can perform abstract planning, internal goal-setting, and stepwise logical deduction without explicit instruction.

    This research presents attribution graphs as a valuable interpretability tool that reveals the hidden layers of reasoning in language models. By applying this method, the team from Anthropic has shown that models like Claude 3.5 Haiku don’t merely mimic human responses—they compute through layered, structured steps. This opens the door to deeper audits of model behavior, allowing more transparent and responsible deployment of advanced AI systems.


Check out the Paper. All credit for this research goes to the researchers of this project.


    The post This AI Paper from Anthropic Introduces Attribution Graphs: A New Interpretability Method to Trace Internal Reasoning in Claude 3.5 Haiku appeared first on MarkTechPost.
