
    Whistleblowers criticize OpenAI’s opposition to AI safety bill

    August 29, 2024

    Two former OpenAI researchers wrote a letter in response to OpenAI’s opposition to California’s controversial SB 1047 AI safety bill.

The proposed bill has been making its way through the state's legislative process, and if it passes the full Senate by the end of the month, it will head to Governor Gavin Newsom for his signature.

The bill calls for extra safety checks for AI models that cost more than $100m to train, as well as a "kill switch" in case a model misbehaves. Former OpenAI employees and whistleblowers William Saunders and Daniel Kokotajlo say they are "disappointed but not surprised" by OpenAI's opposition to the bill.

    OpenAI’s letter to the bill’s author, Senator Scott Wiener, explained that while it supports the intent behind the bill, federal laws regulating AI development are a better option.

    OpenAI says the national security implications such as potential chemical, biological, radiological, and nuclear harms are “best managed by the federal government and agencies.”

    The letter says that if “states attempt to compete with the federal government for scarce talent and resources, it will dilute the already limited expertise across agencies, leading to a less effective and more fragmented policy for guarding against national security risks and critical harms.”

    The letter also quoted Representative Zoe Lofgren’s concerns that if the bill was signed into law, there “is a real risk that companies will decide to incorporate in other jurisdictions or simply not release models in California.”

    OpenAI whistleblower response

The former OpenAI employees aren't buying OpenAI's reasoning. They explained: "We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing. But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems."

    The authors of the letter were also behind the “Right to Warn” letter, released earlier this year.

    Explaining their support of SB 1047, the letter says “Developing frontier AI models without adequate safety precautions poses foreseeable risks of catastrophic harm to the public.”

    OpenAI has seen an exodus of AI safety researchers, but the company’s models haven’t delivered any of the doomsday scenarios many have been concerned about. The whistleblowers say “That’s only because truly dangerous systems have not yet been built, not because companies have safety processes that could handle truly dangerous systems.”

    They also don’t believe OpenAI CEO Sam Altman when he says that he’s committed to AI safety. “Sam Altman, our former boss, has repeatedly called for AI regulation. Now, when actual regulation is on the table, he opposes it,” they explained.

    OpenAI isn’t the only company opposing the bill. Anthropic also had concerns, but now appears to support it after amendments were made.

    Anthropic CEO Dario Amodei said in his letter to California Governor Gavin Newsom on Aug. 21, “In our assessment, the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs.

    “However, we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us…Our initial concerns about the bill potentially hindering innovation due to the rapidly evolving nature of the field have been greatly reduced in the amended version.”

If SB 1047 is signed into law, it could force companies like OpenAI to devote far more resources to AI safety, but it could also trigger a migration of tech companies out of Silicon Valley.

    The post Whistleblowers criticize OpenAI’s opposition to AI safety bill appeared first on DailyAI.


