    ROBOSHOT by University of Wisconsin-Madison Enhancing Zero-Shot Learning Robustness: A Novel Machine Learning Approach to Bias Mitigation

    June 4, 2024

    Zero-shot learning is an advanced machine learning technique that enables models to make predictions on tasks they have not been explicitly trained on. This paradigm bypasses task-specific data collection and training, relying instead on pre-trained models that can generalize across tasks. Zero-shot models leverage knowledge acquired during pre-training, allowing them to infer information about new, unseen tasks by drawing parallels with their existing knowledge base. This capability is particularly valuable in rapidly evolving fields where new tasks emerge frequently and collecting and annotating data for each one would be impractical.
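
    To make the mechanism concrete, the sketch below shows CLIP-style zero-shot classification by embedding similarity. The embed_image and embed_text helpers are hypothetical stand-ins for a pre-trained encoder that returns unit-length vectors; nothing here is specific to ROBOSHOT.

```python
# Zero-shot classification sketch: score an input against class prompts
# using cosine similarity, with no task-specific training.
# embed_image / embed_text are hypothetical, assumed to return unit vectors.
import numpy as np

def zero_shot_classify(image, class_names, embed_image, embed_text):
    x = embed_image(image)                                  # shape (d,)
    prompts = [f"a photo of a {name}" for name in class_names]
    C = np.stack([embed_text(p) for p in prompts])          # shape (k, d)
    scores = C @ x                                          # cosine similarities
    return class_names[int(np.argmax(scores))]
```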

    A major issue in zero-shot models is their inherent vulnerability to biases and unintended correlations picked up from the large-scale datasets they are pre-trained on. These biases can significantly affect performance, especially when the data being processed deviates from the training distribution. For instance, a zero-shot model whose pre-training data mostly shows waterbirds against water backgrounds may classify any image with a water background as a waterbird. This reduces accuracy and reliability, particularly on data slices that break these in-distribution correlations, leading to poor generalization on rare or atypical instances. The challenge, therefore, lies in mitigating these biases without compromising the core advantage of zero-shot models: their ability to perform out of the box.

    Current approaches to address biases in zero-shot models often involve fine-tuning with labeled data to enhance robustness. These methods, although effective, undermine the primary benefit of zero-shot learning by reintroducing the need for additional training. For example, some strategies detect spurious attributes and fine-tune models using these descriptions, while others employ specialized contrastive losses to train adapters on frozen embeddings. Another line of research focuses on debiasing word and multimodal embeddings by manually identifying and removing unwanted concepts. However, these methods are labor-intensive and require domain-specific expertise, limiting their scalability and applicability across diverse tasks.

    Researchers from the University of Wisconsin-Madison have developed ROBOSHOT, a novel method designed to robustify zero-shot models without needing labeled data, training, or manual specification. This innovative approach harnesses insights from language models to identify and mitigate biases in model embeddings. ROBOSHOT leverages the ability of language models to generate useful insights from task descriptions. These insights are embedded and used to adjust the components of the model’s latent representations, effectively removing harmful elements and boosting beneficial ones. This process is entirely unsupervised, maintaining the zero-shot characteristic of the model while significantly enhancing its robustness.
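
    As a rough illustration of this insight-gathering step, the sketch below assumes a hypothetical query_language_model helper and made-up prompt wording; the paper's actual prompts and parsing may differ.

```python
# Sketch of gathering "harmful" and "helpful" insights from a language model.
# query_language_model and embed_text are hypothetical helpers; embed_text
# should be the same text encoder used for the zero-shot model's inputs.
def gather_insight_vectors(task_description, query_language_model, embed_text):
    harmful_texts = query_language_model(
        f"List spurious cues that could mislead a classifier for: {task_description}"
    )
    helpful_texts = query_language_model(
        f"List genuinely predictive cues for: {task_description}"
    )
    harmful_vecs = [embed_text(t) for t in harmful_texts]
    helpful_vecs = [embed_text(t) for t in helpful_texts]
    return harmful_vecs, helpful_vecs
```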

    ROBOSHOT works by first obtaining insights from language models using task descriptions. These insights help identify both harmful and beneficial components within the embeddings. The system then modifies the embeddings to neutralize harmful components and emphasize beneficial ones. In a classification task, for instance, ROBOSHOT can adjust the model’s representations to reduce the impact of background correlations (such as associating water with waterbirds) and sharpen the focus on relevant features (such as the bird’s characteristics). The adjustment is achieved through simple vector operations that project the original embeddings onto spaces with reduced spurious components and amplified useful components. The researchers also provide a theoretical model that captures and quantifies failures in zero-shot models and characterizes the conditions under which ROBOSHOT can improve performance.
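
    At its core, the adjustment is standard vector rejection and projection: subtract each embedding's component along a harmful insight direction and amplify its component along a helpful one. A minimal NumPy sketch of the idea, not the paper's exact formulation:

```python
import numpy as np

def roboshot_adjust(x, harmful_vecs, helpful_vecs, boost=1.0):
    """Remove spurious components and boost useful ones (simplified sketch).

    x: input embedding, shape (d,). Insight vectors are assumed unit-length.
    """
    for v in harmful_vecs:
        x = x - (x @ v) * v            # vector rejection: drop the spurious direction
    for v in helpful_vecs:
        x = x + boost * (x @ v) * v    # emphasize the useful direction
    return x / np.linalg.norm(x)       # re-normalize for cosine-similarity scoring
```

    In the waterbirds example, the water-background direction would be projected out of each image embedding before it is compared with the class prompts.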

    Empirical evaluations of ROBOSHOT on nine image and NLP classification tasks demonstrate its effectiveness. The method achieves an average improvement of 15.98% in worst-group accuracy, a critical metric for evaluating robustness, while maintaining or slightly improving overall accuracy. For example, the system significantly improves performance on the Waterbirds dataset by reducing the harmful correlation between water backgrounds and waterbird labels. Similar improvements are observed across other datasets, including CelebA, PACS, VLCS, and CXR14, indicating the method’s versatility and robustness. These results underscore the potential of ROBOSHOT to enhance the robustness of zero-shot models without the need for additional data or training.
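
    For reference, worst-group accuracy is simply the accuracy of the weakest subgroup (for Waterbirds, e.g. waterbirds photographed on land backgrounds), so a single failing slice drags the score down; a small sketch:

```python
import numpy as np

def worst_group_accuracy(preds, labels, groups):
    """Minimum per-group accuracy over all (class, spurious-attribute) groups."""
    per_group = [
        (preds[groups == g] == labels[groups == g]).mean()
        for g in np.unique(groups)
    ]
    return float(min(per_group))
```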

    In conclusion, the research addresses the critical issue of bias in zero-shot learning by introducing ROBOSHOT, a method that leverages language model insights to adjust embeddings and enhance robustness. This approach effectively mitigates biases without needing labeled data or training, preserving the core advantage of zero-shot models. By improving worst-group accuracy and overall performance across multiple tasks, ROBOSHOT offers a practical and efficient solution for enhancing the reliability and applicability of zero-shot models.

    Check out the Paper. All credit for this research goes to the researchers of this project.
