
    Anthropic proposes transparency framework for frontier AI development

    July 8, 2025

    Anthropic is calling for the creation of an AI transparency framework that can be applied to large AI developers to ensure accountability and safety. 

    “As models advance, we have an unprecedented opportunity to accelerate scientific discovery, healthcare, and economic growth. Without safe and responsible development, a single catastrophic failure could halt progress for decades. Our proposed transparency framework offers a practical first step: public visibility into safety practices while preserving private sector agility to deliver AI’s transformative potential,” Anthropic wrote in a post. 

    As such, it is proposing its framework in the hope that it could be applied at the federal, state, or international level. The initial version of the framework includes six core tenets to be followed. 

First, AI transparency requirements would apply only to the largest frontier model developers, exempting smaller startups that create low-risk models. The framework doesn't specify a particular company size, and Anthropic welcomes input from the startup community, but says that in internal discussions at Anthropic, some example cutoffs could be companies with annual revenue of $100 million or less, or R&D and capital expenditures of $1 billion or less. 
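The example cutoffs above can be sketched as a simple check. This is purely illustrative: the threshold values come from the article, but the function, its name, and the "either cutoff exempts" logic are assumptions, not a proposed rule.

```python
# Illustrative sketch of the example exemption cutoffs Anthropic floated
# in internal discussions. These are examples from the article, not a
# finalized standard; the combination logic here is an assumption.
REVENUE_CUTOFF = 100_000_000        # $100 million in revenue
RND_CAPEX_CUTOFF = 1_000_000_000    # $1 billion in R&D and capital expenditures

def likely_exempt(annual_revenue: float, rnd_plus_capex: float) -> bool:
    """A smaller developer under either example cutoff would be exempt
    from the transparency requirements."""
    return annual_revenue <= REVENUE_CUTOFF or rnd_plus_capex <= RND_CAPEX_CUTOFF
```

Under this reading, a startup with $50 million in revenue would be exempt even with large capital spending, while a frontier lab exceeding both cutoffs would be covered.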

    Second, frontier model developers should create a Secure Development Framework detailing how they assess and mitigate unreasonable risks, including creation of chemical, biological, radiological, and nuclear harms, in addition to harms caused by misalignment. 

Third, this Secure Development Framework should be disclosed to the public, so that researchers, government, and the public can stay informed about the models that are currently deployed. Developers would be allowed to redact sensitive information. 

    Fourth, system cards and documentation should summarize testing and evaluation procedures, results, and mitigations. The system card should be deployed alongside the model and should be updated when the model is updated. Again, redaction of sensitive information from system cards could be allowed. 
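To make the system-card tenet concrete, the record below sketches what such a card might contain, covering the elements the proposal lists: testing and evaluation procedures, results, and mitigations. Every field name and value here is a hypothetical illustration, not any official schema.

```python
# Purely illustrative system-card sketch. Field names, the model name, and
# the version-matching rule are assumptions, not Anthropic's specification.
system_card = {
    "model": "example-frontier-model",  # hypothetical model name
    "model_version": "1.0",
    "evaluations": [
        {
            "procedure": "misuse red-teaming",          # testing procedure
            "result": "no uplift over public baselines",  # evaluation result
            "mitigations": ["refusal training", "output filtering"],
        },
    ],
    "redactions": ["sensitive details withheld"],  # redaction would be allowed
}

def card_is_stale(card: dict, deployed_model_version: str) -> bool:
    """The proposal says the card should be updated when the model is
    updated; a version mismatch signals a stale card."""
    return card["model_version"] != deployed_model_version
```

The version check reflects the requirement that the card be deployed alongside the model and kept in sync with it.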

    Fifth, Anthropic says it should be illegal for an AI lab to lie about its compliance with its framework. By putting this legal foundation in place, existing whistleblower protections would apply and law enforcement resources could be appropriately allocated to companies engaging in misconduct.

Sixth, there should be a minimum set of standards that can evolve as technology evolves. According to Anthropic, AI safety and security practices are still in their early stages, so any framework should be able to adapt as best practices emerge. 

    “Our approach deliberately avoids being heavily prescriptive. We recognize that as the science of AI continues to evolve, any regulatory effort must remain lightweight and flexible. It should not impede AI innovation, nor should it slow our ability to realize AI’s benefits—including lifesaving drug discovery, swift delivery of public benefits, and critical national security functions. Rigid government-imposed standards would be especially counterproductive given that evaluation methods become outdated within months due to the pace of technological change,” Anthropic wrote.

    The post Anthropic proposes transparency framework for frontier AI development appeared first on SD Times.

