Two former OpenAI researchers wrote a letter in response to OpenAI’s opposition to California’s controversial SB 1047 AI safety bill.
The proposed bill has been making its way through the state’s legislative process, and if it passes the full Senate by the end of the month, it will head to Governor Gavin Newsom for his signature.
The bill calls for extra safety checks for AI models that cost more than $100m to train, as well as a ‘kill switch’ in case a model misbehaves. Former OpenAI employees and whistleblowers William Saunders and Daniel Kokotajlo say they are “disappointed but not surprised” by OpenAI’s opposition to the bill.
OpenAI’s letter to the bill’s author, Senator Scott Wiener, explained that while it supports the intent behind the bill, federal laws regulating AI development are a better option.
OpenAI says national security implications, such as potential chemical, biological, radiological, and nuclear harms, are “best managed by the federal government and agencies.”
The letter says that if “states attempt to compete with the federal government for scarce talent and resources, it will dilute the already limited expertise across agencies, leading to a less effective and more fragmented policy for guarding against national security risks and critical harms.”
The letter also quoted Representative Zoe Lofgren’s concerns that if the bill were signed into law, there “is a real risk that companies will decide to incorporate in other jurisdictions or simply not release models in California.”
OpenAI whistleblower response
The former OpenAI employees aren’t buying OpenAI’s reasoning. They explained: “We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing. But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems.”
The authors of the letter were also behind the “Right to Warn” letter, released earlier this year.
Explaining their support of SB 1047, the letter says, “Developing frontier AI models without adequate safety precautions poses foreseeable risks of catastrophic harm to the public.”
OpenAI has seen an exodus of AI safety researchers, but the company’s models haven’t delivered any of the doomsday scenarios many have been concerned about. The whistleblowers say, “That’s only because truly dangerous systems have not yet been built, not because companies have safety processes that could handle truly dangerous systems.”
They also don’t believe OpenAI CEO Sam Altman when he says he’s committed to AI safety. “Sam Altman, our former boss, has repeatedly called for AI regulation. Now, when actual regulation is on the table, he opposes it,” they explained.
OpenAI isn’t the only company opposing the bill. Anthropic also had concerns, but now appears to support it after amendments were made.
Anthropic CEO Dario Amodei said in his letter to California Governor Gavin Newsom on Aug. 21, “In our assessment, the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs.
“However, we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us… Our initial concerns about the bill potentially hindering innovation due to the rapidly evolving nature of the field have been greatly reduced in the amended version.”
If SB 1047 is signed into law, it could force companies like OpenAI to devote far more resources to AI safety, but it could also prompt a migration of tech companies out of Silicon Valley.