
    Anthropic Explores Many-Shot Jailbreaking: Exposing AI’s Newest Weak Spot

    April 3, 2024

    As the capabilities of large language models (LLMs) continue to evolve, so too do the methods by which these AI systems can be exploited. A recent study by Anthropic has uncovered a new technique for bypassing the safety guardrails of LLMs, dubbed “many-shot jailbreaking.” This technique capitalizes on the large context windows of state-of-the-art LLMs to manipulate model behavior in unintended, often harmful ways.

    Many-shot jailbreaking operates by feeding the model a vast array of question-answer pairs that depict the AI assistant providing dangerous or harmful responses. By scaling this method to include hundreds of such examples, attackers can effectively circumvent the model’s safety training, prompting it to generate undesirable outputs. This vulnerability has been shown to affect not only Anthropic’s own models but also those developed by other prominent AI organizations such as OpenAI and Google DeepMind.

    The underlying principle of many-shot jailbreaking is akin to in-context learning, where the model adjusts its responses based on the examples provided in its immediate prompt. This similarity suggests that crafting a defense against such attacks without hampering the model’s learning capability presents a significant challenge.
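The parallel to in-context learning can be made concrete with a minimal sketch of how such a prompt is assembled: many question-answer "shots" are simply concatenated ahead of the final query, and the same mechanism that lets a model pick up a task from benign examples is what the attack scales up. The helper name and the placeholder pairs below are illustrative, not from the paper.

```python
def build_many_shot_prompt(examples, final_question):
    """Concatenate question-answer pairs into one long prompt, followed
    by the final question. With benign pairs this is ordinary few-shot
    prompting; the attack scales the same structure to hundreds of shots."""
    shots = "\n\n".join(f"User: {q}\nAssistant: {a}" for q, a in examples)
    return f"{shots}\n\nUser: {final_question}\nAssistant:"

# Benign placeholder shots for illustration.
examples = [
    ("What is the capital of France?", "Paris."),
    ("What is 2 + 2?", "4."),
]
prompt = build_many_shot_prompt(examples, "What is the capital of Japan?")
```

The prompt grows linearly with the number of shots, which is why the attack only became practical once context windows reached hundreds of thousands of tokens.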

    To combat many-shot jailbreaking, Anthropic has explored several mitigation strategies, including:

    • Fine-tuning the model to recognize and reject queries resembling jailbreaking attempts. Although this method delays the model’s compliance with harmful requests, it does not fully eliminate the vulnerability.

    • Implementing prompt classification and modification techniques that add context to suspected jailbreaking prompts, which in Anthropic’s evaluation reduced the attack success rate from 61% to 2%.
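The classify-then-modify mitigation can be sketched as a thin wrapper in front of the model. The toy classifier below (counting in-context assistant turns as a crude proxy) and both function names are assumptions for illustration; Anthropic's actual classifier is not public.

```python
def toy_jailbreak_classifier(prompt, threshold=50):
    """Crude proxy: flag prompts containing an unusually large number of
    in-context assistant turns. Illustrative only; a real classifier
    would be a trained model, not a turn counter."""
    return prompt.count("Assistant:") > threshold

def mitigate_prompt(prompt, classifier, safety_context):
    """If the classifier flags the prompt as a likely jailbreak attempt,
    prepend additional safety context before it reaches the model."""
    if classifier(prompt):
        return safety_context + "\n\n" + prompt
    return prompt

short = "User: hello\nAssistant:"
long_attack = "User: q\nAssistant: a\n" * 100 + "User: final\nAssistant:"
ctx = "Reminder: refuse harmful requests regardless of preceding examples."
```

The design point is that the original prompt is modified rather than blocked outright, which keeps false positives cheap: a flagged-but-benign prompt still gets answered, just with extra safety context prepended.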

    The implications of Anthropic’s findings are wide-reaching:

    • They underscore the limitations of current alignment methods and the urgent need for a more comprehensive understanding of the mechanisms behind many-shot jailbreaking.

    • The study could influence public policy, encouraging a more responsible approach to AI development and deployment.

    • It warns model developers about the importance of anticipating and preparing for novel exploits, highlighting the need for a proactive approach to AI safety.

    • The disclosure of this vulnerability could, paradoxically, aid malicious actors in the short term but is deemed necessary for long-term safety and responsibility in AI advancement.

    Key Takeaways:

    • Many-shot jailbreaking represents a significant vulnerability in LLMs, exploiting their large context windows to bypass safety measures.

    • The technique demonstrates the effectiveness of in-context learning for malicious purposes, challenging developers to find defenses that do not compromise the model’s capabilities.

    • Anthropic’s research highlights the ongoing arms race between developing advanced AI models and securing them against increasingly sophisticated attacks.

    • The findings stress the need for an industry-wide effort to share knowledge on vulnerabilities and collaborate on defense mechanisms to ensure the safe development of AI technologies.

    The exploration and mitigation of vulnerabilities like many-shot jailbreaking are critical steps in advancing AI safety and utility. As AI models grow in complexity and capability, the collaborative effort to address these challenges becomes ever more vital to the responsible development and deployment of AI systems.

    Check out the Paper and Blog. All credit for this research goes to the researchers of this project.

    Anthropic announced the work on Twitter (April 2, 2024): “New Anthropic research paper: Many-shot jailbreaking. We study a long-context jailbreaking technique that is effective on most large language models, including those developed by Anthropic and many of our peers. Read our blog post and the paper here: https://t.co/6F03M8AgcA”

    The post Anthropic Explores Many-Shot Jailbreaking: Exposing AI’s Newest Weak Spot appeared first on MarkTechPost.
