
CoSy (Concept Synthesis): A Novel Architecture-Agnostic Machine Learning Framework to Evaluate the Quality of Textual Explanations for Latent Neurons

June 4, 2024

Modern Deep Neural Networks (DNNs) are inherently opaque; we do not know how or why these models arrive at the predictions they make. This opacity is a major barrier to the broader use of Machine Learning techniques in many domains. The emerging field of Explainable AI (XAI) has arisen to shed light on how DNNs make decisions in a way humans can comprehend. Beyond explaining how DNNs decide locally for specific inputs, typically with saliency maps, XAI has expanded its scope to examining the functional purpose of each model component in order to explain a model's global behavior.
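As a point of reference for the local explanations mentioned above, the following minimal sketch computes a vanilla gradient saliency map for a single prediction. The torchvision ResNet and the random stand-in input are illustrative assumptions, not part of the CoSy work.

```python
# Minimal sketch of a local explanation: a vanilla gradient saliency map
# for one prediction of an image classifier.
import torch
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

# Stand-in for a real, preprocessed image of shape (1, 3, 224, 224).
x = torch.rand(1, 3, 224, 224, requires_grad=True)
logits = model(x)
top_class = logits.argmax(dim=1).item()

# Gradient of the top-class score with respect to the input pixels.
logits[0, top_class].backward()

# Saliency map: per-pixel importance, taking the max over color channels.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
```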

A second strand of global explainability, mechanistic interpretability, includes methods that characterize the particular concepts that individual neurons, the basic computational units of a neural network, have learned to recognize. This makes it possible to examine how these concepts influence the predictions the network makes. A common way to explain a network's latent representations is to label neurons with descriptions humans can understand in prose: each neuron receives a textual description of the concepts it has learned to detect or is most strongly activated by. These techniques have progressed from single-word labels to richer compositional and open-vocabulary explanations. However, the absence of a generally accepted quantitative metric for open-vocabulary neuron descriptions remains a substantial obstacle. As a result, many approaches devised their own evaluation criteria, making thorough, general-purpose comparisons hard to conduct.
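To make the neuron-labeling idea concrete, here is a deliberately simplified sketch that assigns a neuron the concept most common among its top-activating probe images. The forward hook, the spatial averaging, and the concept-label list are illustrative assumptions; real methods such as CLIP-Dissect or MILAN are considerably more sophisticated.

```python
# Simplified sketch of concept labeling for a single convolutional neuron:
# rank a probing dataset by how strongly the neuron fires, then pick the
# concept whose example images dominate the top activations.
import torch
from collections import Counter

def label_neuron(model, layer, neuron_idx, images, concept_labels, top_k=25):
    """images: tensor (N, 3, H, W); concept_labels: list of N strings."""
    activations = []

    def hook(_module, _inp, out):
        # Average the neuron's feature map over space: one value per image.
        activations.append(out[:, neuron_idx].mean(dim=(1, 2)))

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(images)
    handle.remove()

    acts = torch.cat(activations)
    top_imgs = acts.topk(top_k).indices.tolist()

    # Label the neuron with the concept most frequent among its top images.
    counts = Counter(concept_labels[i] for i in top_imgs)
    return counts.most_common(1)[0][0]
```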

To fill this void, researchers from ATB Potsdam, University of Potsdam, TU Berlin, Fraunhofer Heinrich-Hertz-Institute, and BIFOLD present CoSy, a groundbreaking quantitative evaluation approach for assessing open-vocabulary explanations of neurons in computer vision (CV) models. Leveraging recent advances in Generative AI, the method creates synthetic images corresponding to a given concept-based textual description, yielding data points representative of the target explanation. By comparing a neuron's activations on these concept images with its activations on control data, CoSy enables quantitative comparisons of several concept-based textual explanation methods, unlike the current ad hoc approaches. It does so without human intervention, empowering users to assess the accuracy of individual neuron explanations.
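A rough sketch of this idea follows, under stated assumptions: a Stable Diffusion pipeline from the diffusers library stands in for the text-to-image generator, a hypothetical get_neuron_activations() helper returns one activation value per image for the neuron under test, and an AUC-style score compares concept images against controls. The paper's exact protocol may differ.

```python
# Hedged sketch of the CoSy idea: synthesize images for a neuron's textual
# explanation, then check whether the neuron activates more strongly on them
# than on control images drawn from a generic dataset.
import torch
from diffusers import StableDiffusionPipeline  # assumed text-to-image backend
from sklearn.metrics import roc_auc_score

def cosy_score(explanation, get_neuron_activations, control_images, n_synthetic=20):
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Images that should depict the concept named by the explanation.
    synthetic = [pipe(explanation).images[0] for _ in range(n_synthetic)]

    # get_neuron_activations(images) -> one activation per image for the
    # neuron under evaluation (hypothetical helper, not from the paper).
    a_concept = get_neuron_activations(synthetic)
    a_control = get_neuron_activations(control_images)

    # Score how well the neuron separates concept images from controls.
    labels = [1] * len(a_concept) + [0] * len(a_control)
    scores = list(a_concept) + list(a_control)
    return roc_auc_score(labels, scores)
```

Under this reading, a score near 1.0 suggests the explanation names a concept the neuron genuinely responds to, while a score near 0.5 suggests the description is no better than chance.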

Through a thorough meta-analysis, the team shows that CoSy provides a reliable evaluation of explanations. Multiple studies demonstrate that the last layers, where high-level concepts are learned, are the best place to apply concept-based textual explanation methods. In these layers, INVERT, a technique that inverts the process of generating an image from a neural network's internal representation, and CLIP-Dissect, a method that dissects a network's internal representations, produce high-quality neuron descriptions. In contrast, MILAN and FALCON produce lower-quality explanations, sometimes close to random concepts, which could lead to incorrect conclusions about the network. The data therefore make clear that evaluation is crucial when employing concept-based textual explanation approaches.

The researchers highlight that the dependence on a generative model is a major drawback of CoSy. For instance, the concepts appearing in an explanation may not be covered by the text-to-image model's training data, which degrades generative performance; analyzing pre-training datasets and model performance could help address this gap. Worse yet, the model may only produce vague concepts such as 'white objects,' which are not specific enough to provide a comprehensive understanding. More specialized, niche, or constrained generative models may be useful in both situations. Looking ahead, there is a lot of promise in the underexplored field of evaluating non-local explanation approaches, where CoSy is still in its infancy.

The team is optimistic about the future of CoSy and envisions its application in various fields. They hope that future work will define explanation quality in a way that incorporates human judgment, a crucial aspect when judging the plausibility or quality of an explanation with respect to the outcome of a downstream task. They also intend to broaden the evaluation framework to other fields, such as healthcare and natural language processing. The prospect of evaluating the huge, opaque, automatically interpreted large language models (LLMs) developed recently is particularly intriguing. The researchers further believe that applying CoSy to healthcare datasets, where explanation quality is crucial, could be a significant step forward. These future applications of CoSy hold great promise for the advancement of AI research.

Check out the Paper. All credit for this research goes to the researchers of this project.
