Not releasing a safety report for this new, clearly frontier model goes completely against OpenAI's prior commitments, and it makes me incredibly pessimistic about the future of AI safety. There's a new scandal at OpenAI every few weeks, but you rarely hear of any from other labs. I wonder if it's particular to OpenAI, if it's Sam Altman, or if the other companies are just better at hiding it. I hope it's only an OpenAI problem and that safety work at the other labs is moving fast enough to eventually be adopted by OpenAI and the rest of the field. Fingers crossed for you mech interp researchers out there 🤞
Everyone should be alarmed by what’s happening at frontier AI labs. These companies aren’t just tech firms—they’re now functioning as de facto defense contractors, and their machine learning engineers are, in effect, weapons researchers. The sudden departure of key safety personnel only underscores how unstable and opaque these institutions have become.
This isn’t just a U.S. issue. The global implications are massive. For example: Canada is currently in the middle of an election, and not one major party is addressing the immediate economic threat of AI-driven workforce collapse or the strategic risk posed by nationalist artificial superintelligence developed by U.S. firms under the Trump administration.
When the stakes are this high and the silence is this loud, the danger isn’t speculative—it’s systemic.
I wrote about this in more detail here:
https://sonderuncertainly.substack.com/p/the-real-threats-your-leaders-wont
The Real Threats Your Leaders Won’t Acknowledge (Until It’s Too Late)
At OpenAI, AI risk mitigation outside of cybersecurity is becoming an afterthought. Last week, 12 former OpenAI employees filed a brief in Elon Musk's case against OpenAI, arguing the company would be incentivized to cut even more corners on safety should it complete its planned corporate restructuring.