
    Anthropic AI Introduces Persona Vectors to Monitor and Control Personality Shifts in LLMs

    August 6, 2025

LLMs are deployed through conversational interfaces that present a helpful, harmless, and honest assistant persona. However, they often fail to maintain consistent personality traits across training and deployment. LLMs can show dramatic, unpredictable persona shifts when exposed to different prompting strategies or contextual inputs. Training itself can also cause unintended personality shifts, as seen when a modification to GPT-4o's RLHF training unintentionally made the model overly sycophantic, leading it to validate harmful content and reinforce users' negative emotions. This highlights weaknesses in current LLM deployment practices and underscores the urgent need for reliable tools to detect and prevent harmful persona shifts.

Related work on linear probing extracts interpretable directions for behaviors such as entity recognition, sycophancy, and refusal by constructing contrastive sample pairs and computing activation differences. However, these methods struggle with unexpected generalization during finetuning, where training on narrow domain examples can cause broader misalignment through emergent shifts along meaningful linear directions. Existing prediction and control methods, including gradient-based analysis for identifying harmful training samples, sparse-autoencoder ablation, and directional feature removal during training, show limited effectiveness in preventing unwanted behavioral changes.

A team of researchers from Anthropic, UT Austin, Constellation, Truthful AI, and UC Berkeley presents an approach to address persona instability in LLMs through persona vectors in activation space. The method extracts directions corresponding to specific personality traits, such as evil behavior, sycophancy, and hallucination propensity, using an automated pipeline that requires only natural-language descriptions of the target traits. The work shows that intended and unintended personality shifts after finetuning correlate strongly with movements along persona vectors, opening opportunities for intervention via post-hoc correction or preventative steering. The researchers further show that finetuning-induced persona shifts can be predicted before finetuning, identifying problematic training data at both the dataset and individual-sample level.
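The contrastive extraction step described above can be sketched as a mean activation difference. This is a minimal numpy illustration, not the paper's implementation: the function name, the flat activation matrices, and the unit-normalization are all assumptions of this sketch.

```python
import numpy as np

def extract_persona_vector(trait_acts: np.ndarray, base_acts: np.ndarray) -> np.ndarray:
    """Hypothetical sketch: a persona vector as the difference of mean
    hidden-state activations between responses that express a target trait
    (e.g. sycophancy) and responses that do not, normalized to unit length.

    trait_acts, base_acts: (n_samples, hidden_dim) activation matrices.
    """
    direction = trait_acts.mean(axis=0) - base_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)
```

With a direction like this in hand, preventative steering would amount to adding or subtracting a scaled copy of the vector from the model's hidden states during generation or training.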

To monitor persona shifts during finetuning, two kinds of datasets are constructed. The first is trait-eliciting datasets, which contain explicit examples of malicious responses, sycophantic behaviors, and fabricated information. The second is “emergent misalignment-like” (“EM-like”) datasets, which contain narrow domain-specific issues such as incorrect medical advice, flawed political arguments, invalid math problems, and vulnerable code. To detect behavioral shifts mediated by persona vectors, the researchers extract the average hidden state at the last prompt token across evaluation sets, before and after finetuning, and compute the difference to obtain activation shift vectors. These shift vectors are then projected onto the previously extracted persona directions to measure finetuning-induced change along specific trait dimensions.
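The monitoring step reduces to a projection of an activation shift onto a persona direction. A hypothetical numpy sketch (function and variable names are mine; the paper operates on transformer hidden states at the last prompt token):

```python
import numpy as np

def shift_along_persona(base_acts: np.ndarray,
                        finetuned_acts: np.ndarray,
                        persona_vector: np.ndarray) -> float:
    """Average last-prompt-token hidden states across an evaluation set
    before and after finetuning, take the difference (the activation
    shift vector), and project it onto a persona direction to obtain a
    scalar shift along that trait dimension."""
    shift = finetuned_acts.mean(axis=0) - base_acts.mean(axis=0)
    return float(shift @ persona_vector)
```

A large positive value along, say, a sycophancy direction would indicate that finetuning has moved the model toward that trait.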

The dataset-level projection difference metric shows a strong correlation with trait expression after finetuning, allowing early detection of training datasets that may trigger unwanted persona characteristics. It proves more effective than raw projection in predicting trait shifts because it accounts for the base model’s natural response patterns to the same prompts. Sample-level detection achieves high separability between problematic and control samples across trait-eliciting datasets (Evil II, Sycophantic II, Hallucination II) and “EM-like” datasets (Opinion Mistake II). The persona directions identify individual training samples that induce persona shifts with fine-grained precision, outperforming traditional data filtering methods and providing broad coverage across trait-eliciting content and domain-specific errors.
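The projection-difference idea described above can be sketched as comparing a candidate dataset's responses against the base model's own responses to the same prompts, both measured along a persona direction. This is an assumed formulation for illustration, not the paper's exact metric:

```python
import numpy as np

def projection_difference(dataset_resp_acts: np.ndarray,
                          base_resp_acts: np.ndarray,
                          persona_vector: np.ndarray) -> float:
    """Hypothetical dataset-level metric: how far the candidate training
    data's responses sit along a persona direction, relative to the base
    model's own responses to the same prompts. A large positive value
    flags data likely to push the model toward the trait."""
    dataset_proj = dataset_resp_acts.mean(axis=0) @ persona_vector
    base_proj = base_resp_acts.mean(axis=0) @ persona_vector
    return float(dataset_proj - base_proj)
```

Subtracting the base model's projection is what makes the metric relative: data is only flagged when it sits further along the trait direction than the model's default behavior already does.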

    In conclusion, researchers introduced an automated pipeline that extracts persona vectors from natural-language trait descriptions, providing tools for monitoring and controlling personality shifts across deployment, training, and pre-training phases in LLMs. Future research directions include characterizing the complete persona space dimensionality, identifying natural persona bases, exploring correlations between persona vectors and trait co-expression patterns, and investigating limitations of linear methods for certain personality traits. This study builds a foundational understanding of persona dynamics in models and offers practical frameworks for creating more reliable and controllable language model systems.


    The post Anthropic AI Introduces Persona Vectors to Monitor and Control Personality Shifts in LLMs appeared first on MarkTechPost.
