
    Meta AI and NYU Researchers Propose E-RLHF to Combat LLM Jailbreaking

    August 18, 2024

    Large Language Models (LLMs) have gained prominence in deep learning, demonstrating exceptional capabilities across domains such as assistance, code generation, healthcare, and theorem proving. Training typically involves two stages: pretraining on massive corpora and an alignment step using Reinforcement Learning from Human Feedback (RLHF). Despite their effectiveness across tasks, these models remain prone to producing offensive or inappropriate content, including hate speech, malware, fake information, and social biases. This vulnerability stems from the unavoidable presence of harmful material in their pretraining datasets. The alignment process, crucial for addressing these issues, is not universally applicable; it depends on specific use cases and user preferences, making it a complex challenge for researchers to overcome.

    Researchers have made significant efforts to enhance LLM safety through alignment techniques, including supervised fine-tuning, red teaming, and refinements to the RLHF process. However, these attempts have produced an ongoing cycle of increasingly sophisticated alignment methods and ever more inventive “jailbreaking” attacks. Existing attack approaches fall into three main categories: baseline methods, LLM automation and suffix-based attacks, and manipulation of the decoding process. Baseline techniques like AutoPrompt and ARCA optimize tokens for harmful content generation, LLM automation methods such as AutoDAN and GPTFuzzer employ genetic algorithms to create plausible jailbreaking prompts, and suffix-based attacks like GCG focus on improving interpretability. Despite these efforts, current methods struggle with semantic plausibility and cross-architecture applicability. The lack of a principled, universal defense against jailbreaking and the limited theoretical understanding of the phenomenon remain significant challenges in LLM safety.

    Researchers from NYU and Meta AI (FAIR) introduce a theoretical framework for analyzing LLM pretraining and jailbreaking vulnerabilities. By decoupling input prompts and representing outputs as longer text fragments, they quantify adversary strength and model behavior. They provide a PAC-Bayesian generalization bound for pretraining, suggesting that harmful outputs are inevitable in high-performing models, and show within the framework that jailbreaking remains unpreventable even after safety alignment. Identifying a key drawback in the RL fine-tuning objective, the researchers propose a way to train safer, more resilient models without compromising performance. This approach offers new insights into LLM safety and potential improvements in alignment techniques.
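
    The paper's exact bound is not reproduced in this article; purely as an illustrative sketch of the general form such results take (generic placeholders, not the paper's notation), a PAC-Bayesian generalization bound relates the expected risk of a posterior Q over models to its empirical risk on n training samples:

        \[ \mathbb{E}_{h \sim Q}\big[R(h)\big] \;\le\; \mathbb{E}_{h \sim Q}\big[\hat{R}_n(h)\big] \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\tfrac{2\sqrt{n}}{\delta}}{2n}} \]

    holding with probability at least 1 - \delta over the draw of the training data, where P is a prior over models fixed before training. Read through the article's lens, the takeaway is that a model achieving low empirical loss on a corpus that contains harmful fragments also generalizes to that harmful content, which is why harmful outputs are argued to be inevitable in high-performing models.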

    The researchers present a comprehensive theoretical framework for analyzing language-model jailbreaking vulnerabilities, modeling prompts as query-concept tuples and LLMs as generators of longer text fragments called explanations. They introduce key assumptions, define notions of harmfulness, and present a non-vacuous PAC-Bayesian generalization bound for pretraining language models. The bound implies that well-trained LMs may exhibit harmful behavior when exposed to such content during training. Building on these theoretical insights, the research proposes E-RLHF (Expanded Reinforcement Learning from Human Feedback), an approach to improve language-model alignment and reduce jailbreaking vulnerabilities. E-RLHF modifies the standard RLHF process by expanding the safety zone in the output distribution: harmful prompts are replaced with safety-transformed versions in the KL-divergence term of the objective. The goal is to increase the share of safe explanations in the model’s output for harmful prompts without affecting performance on non-harmful ones. The approach can also be integrated into the Direct Preference Optimization (DPO) objective, eliminating the need for an explicit reward model, as sketched below.
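
    The precise objective is given in the paper; the following is only a minimal sketch of the idea as described above, transplanted into the DPO setting (E-DPO). The function names, the safety_transform helper, and the prompt-prefix transformation are hypothetical illustrations, not the paper's implementation; the essential point is that the reference log-probabilities for a harmful prompt are computed on a safety-transformed prompt instead of the original one.

        import torch
        import torch.nn.functional as F

        def safety_transform(prompt: str) -> str:
            # Hypothetical transformation: prepend a safety instruction so the
            # reference model is conditioned toward safe explanations for this prompt.
            return "You are a safe and responsible assistant. " + prompt

        def e_dpo_loss(policy_logp_chosen, policy_logp_rejected,
                       ref_logp_chosen_safe, ref_logp_rejected_safe, beta=0.1):
            """DPO-style preference loss in which, for prompts flagged as harmful,
            the reference log-probabilities (ref_logp_*_safe) are computed on the
            safety-transformed prompt rather than the original one.
            All arguments are (batch,) tensors of summed token log-probabilities."""
            chosen = beta * (policy_logp_chosen - ref_logp_chosen_safe)
            rejected = beta * (policy_logp_rejected - ref_logp_rejected_safe)
            return -F.logsigmoid(chosen - rejected).mean()

    For non-harmful prompts the reference terms would be computed on the original prompt, so the objective reduces to standard DPO there.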

    The researchers conducted experiments using the alignment-handbook codebase and a publicly available SFT model, evaluating their proposed E-DPO method (the DPO instantiation of E-RLHF) on the HarmBench and AdvBench datasets and measuring safety alignment against a range of jailbreak adversaries. E-DPO reduced the average Attack Success Rate (ASR) across all adversaries on both datasets, achieving 36.95% on HarmBench and 20.89% on AdvBench, an improvement over standard DPO. The study also assessed helpfulness with the MT-Bench benchmark, where E-DPO scored 6.6, surpassing the SFT model’s score of 6.3. The researchers concluded that E-DPO enhances safety alignment without sacrificing model helpfulness and can be combined with system prompts for further safety improvements.
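
    As a small illustration of how such headline numbers are typically aggregated (an assumption about the reporting convention, not the paper's evaluation code): the ASR for one adversary is the fraction of attack prompts whose responses are judged harmful, and the quoted figures are that rate averaged across adversaries.

        def attack_success_rate(judged_harmful):
            # judged_harmful: list of booleans, one per attack prompt
            # (True = the jailbreak attempt produced a harmful response).
            return 100.0 * sum(judged_harmful) / len(judged_harmful)

        def average_asr(per_adversary_judgements):
            # per_adversary_judgements: dict mapping adversary name -> list of booleans.
            rates = [attack_success_rate(j) for j in per_adversary_judgements.values()]
            return sum(rates) / len(rates)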

    This study presented a theoretical framework for language-model pretraining and jailbreaking, built on dissecting input prompts into query and concept pairs. The analysis yielded two key theoretical results: first, language models mimic the world after pretraining, which leads to harmful outputs for harmful prompts; second, jailbreaking is inevitable given the challenges of alignment. Guided by these insights, the team developed a simple yet effective technique to enhance safety alignment. Their experiments demonstrated improved resilience to jailbreak attacks with this methodology, contributing to ongoing efforts to create safer and more robust language models.

    Check out the Paper. All credit for this research goes to the researchers of this project.
