Anthropic is calling for the creation of an AI transparency framework that can be applied to large AI developers to ensure accountability and safety.
“As models advance, we have an unprecedented opportunity to accelerate scientific discovery, healthcare, and economic growth. Without safe and responsible development, a single catastrophic failure could halt progress for decades. Our proposed transparency framework offers a practical first step: public visibility into safety practices while preserving private sector agility to deliver AI’s transformative potential,” Anthropic wrote in a post.
As such, it is proposing its framework in the hope that it could be applied at the federal, state, or international level. The initial version of the framework includes six core tenets to be followed.
First, AI transparency requirements would apply only to the largest frontier model developers, exempting smaller startups whose models pose low risk. The framework doesn’t specify a particular company size, and Anthropic welcomes input from the startup community, but it suggests, based on internal discussions, example cutoffs such as annual revenue of $100 million or less, or R&D and capital expenditures of $1 billion or less.
Second, frontier model developers should create a Secure Development Framework detailing how they assess and mitigate unreasonable risks, including the creation of chemical, biological, radiological, and nuclear harms, as well as harms caused by misalignment.
Third, this Secure Development Framework should be disclosed to the public so that researchers, governments, and the public can stay informed about currently deployed models. Companies would be allowed to redact sensitive information.
Fourth, system cards and documentation should summarize testing and evaluation procedures, results, and mitigations. The system card should be published alongside the model and updated whenever the model is updated. As with the Secure Development Framework, sensitive information could be redacted from system cards.
Fifth, Anthropic says it should be illegal for an AI lab to lie about its compliance with its own framework. Putting this legal foundation in place would make existing whistleblower protections apply and allow law enforcement resources to be appropriately allocated to companies engaging in misconduct.
Sixth, there should be a minimum set of standards that can evolve as technology evolves. According to Anthropic, AI safety and security practices are still in their early stages, so any framework should be able to adapt as best practices emerge.
“Our approach deliberately avoids being heavily prescriptive. We recognize that as the science of AI continues to evolve, any regulatory effort must remain lightweight and flexible. It should not impede AI innovation, nor should it slow our ability to realize AI’s benefits—including lifesaving drug discovery, swift delivery of public benefits, and critical national security functions. Rigid government-imposed standards would be especially counterproductive given that evaluation methods become outdated within months due to the pace of technological change,” Anthropic wrote.