
    Anthropic proposes transparency framework for frontier AI development

    July 8, 2025

    Anthropic is calling for the creation of an AI transparency framework that can be applied to large AI developers to ensure accountability and safety. 

    “As models advance, we have an unprecedented opportunity to accelerate scientific discovery, healthcare, and economic growth. Without safe and responsible development, a single catastrophic failure could halt progress for decades. Our proposed transparency framework offers a practical first step: public visibility into safety practices while preserving private sector agility to deliver AI’s transformative potential,” Anthropic wrote in a post. 

As such, Anthropic is proposing the framework in the hope that it could be applied at the federal, state, or international level. The initial version includes six core tenets.

First, AI transparency requirements would apply only to the largest frontier model developers, exempting smaller startups whose models pose low risk. Anthropic doesn't specify a particular company size and welcomes input from the start-up community, but says that in internal discussions, example cutoffs could be companies with annual revenue of $100 million or less, or R&D and capital expenditures of $1 billion or less.
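
    The proposal does not turn these cutoffs into any formal rule. Purely as a rough sketch, the check below uses the example figures quoted above ($100 million in revenue, $1 billion in R&D and capital expenditures) and assumes, for illustration only, that falling below either cutoff would exempt a developer; the function name and the way the two criteria combine are assumptions, not part of Anthropic's proposal.

    ```python
    # Rough sketch only: Anthropic's framework does not define these thresholds
    # in code. The dollar figures are the example cutoffs quoted in the article;
    # treating either cutoff as sufficient for exemption is an assumption.

    REVENUE_CUTOFF_USD = 100_000_000      # $100 million or less -> example exemption cutoff
    RD_CAPEX_CUTOFF_USD = 1_000_000_000   # $1 billion or less   -> example exemption cutoff

    def transparency_requirements_apply(annual_revenue_usd: float,
                                        rd_and_capex_usd: float) -> bool:
        """Return True if a developer would fall under the proposed requirements."""
        is_exempt_startup = (annual_revenue_usd <= REVENUE_CUTOFF_USD
                             or rd_and_capex_usd <= RD_CAPEX_CUTOFF_USD)
        return not is_exempt_startup

    # A large frontier lab would be covered; a small startup would be exempt.
    print(transparency_requirements_apply(5_000_000_000, 2_000_000_000))  # True
    print(transparency_requirements_apply(20_000_000, 5_000_000))         # False
    ```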

Second, frontier model developers should create a Secure Development Framework detailing how they assess and mitigate unreasonable risks, including the creation of chemical, biological, radiological, and nuclear harms, as well as harms caused by misalignment.

Third, this Secure Development Framework should be disclosed to the public so that researchers, governments, and the public can stay informed about the models currently deployed. Sensitive information could be redacted.

Fourth, system cards and documentation should summarize testing and evaluation procedures, results, and mitigations. The system card should be published alongside the model when it is deployed and updated whenever the model is updated. As with the Secure Development Framework, sensitive information could be redacted from system cards.
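
    The framework does not prescribe a format for system cards. Purely as an illustration of the kind of information such a card might summarize, here is a minimal sketch; every field name and value below is hypothetical.

    ```python
    # Hypothetical sketch only: the proposal does not define a system card schema.
    # All field names and values below are invented for illustration.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class SystemCard:
        model_name: str
        model_version: str                   # updated whenever the model is updated
        evaluation_procedures: List[str]     # testing and evaluation procedures
        evaluation_results: Dict[str, str]   # summarized results per evaluation
        mitigations: List[str]               # mitigations applied before deployment
        redacted_sections: List[str] = field(default_factory=list)  # sensitive info withheld

    card = SystemCard(
        model_name="example-frontier-model",   # hypothetical model
        model_version="2025-07",
        evaluation_procedures=["CBRN misuse evaluation", "misalignment red-teaming"],
        evaluation_results={"CBRN misuse evaluation": "no unreasonable risk identified"},
        mitigations=["refusal training", "deployment-time content filters"],
        redacted_sections=["detailed misuse test prompts"],
    )
    ```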

Fifth, Anthropic says it should be illegal for an AI lab to lie about its compliance with its own framework. With this legal foundation in place, existing whistleblower protections would apply, and law enforcement resources could be directed at companies engaging in misconduct.

Sixth, there should be a minimum set of standards that can evolve as the technology evolves. According to Anthropic, AI safety and security practices are still in their early stages, so any framework should be able to adapt as best practices emerge.

    “Our approach deliberately avoids being heavily prescriptive. We recognize that as the science of AI continues to evolve, any regulatory effort must remain lightweight and flexible. It should not impede AI innovation, nor should it slow our ability to realize AI’s benefits—including lifesaving drug discovery, swift delivery of public benefits, and critical national security functions. Rigid government-imposed standards would be especially counterproductive given that evaluation methods become outdated within months due to the pace of technological change,” Anthropic wrote.

    The post Anthropic proposes transparency framework for frontier AI development appeared first on SD Times.
