<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Obsolete]]></title><description><![CDATA[Reporting and analysis on capitalism, great power competition, and the race to build machine superintelligence from journalist w/ work in NYT, Nature, BBC, TIME, and more.]]></description><link>https://www.obsolete.pub</link><image><url>https://substackcdn.com/image/fetch/$s_!Zgd0!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b915aa7-6930-4bec-84e9-8c2cdc96290c_500x500.png</url><title>Obsolete</title><link>https://www.obsolete.pub</link></image><generator>Substack</generator><lastBuildDate>Tue, 07 Apr 2026 11:09:09 GMT</lastBuildDate><atom:link href="https://www.obsolete.pub/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Garrison Lovely]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[garrisonlovely@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[garrisonlovely@substack.com]]></itunes:email><itunes:name><![CDATA[Garrison Lovely]]></itunes:name></itunes:owner><itunes:author><![CDATA[Garrison Lovely]]></itunes:author><googleplay:owner><![CDATA[garrisonlovely@substack.com]]></googleplay:owner><googleplay:email><![CDATA[garrisonlovely@substack.com]]></googleplay:email><googleplay:author><![CDATA[Garrison Lovely]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Doomers Feel Undeterred - My Latest for MIT Tech Review]]></title><description><![CDATA[New interviews with Geoffrey Hinton, Yoshua Bengio, Helen Toner, Daniel Kokotajlo, Stuart Russell, and more on the state of AI safety and why they're still 
worried.]]></description><link>https://www.obsolete.pub/p/the-doomers-feel-undeterred-my-latest</link><guid isPermaLink="false">https://www.obsolete.pub/p/the-doomers-feel-undeterred-my-latest</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Mon, 15 Dec 2025 20:42:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Ixl1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d5cef1-e757-4624-b697-31748fc1253f_1024x1024.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>I was invited to contribute to MIT Technology Review&#8217;s new package of stories on <a href="https://www.technologyreview.com/supertopic/hype-correction/">Hype Correction</a>. It evolved from a feature into a collection of interviews. Unfortunately, I didn&#8217;t have a chance to include everyone&#8217;s perspective here, but all the conversations are informing <a href="https://www.obsoletebook.org/">my book</a> (which, I swear, is nearly done). Here&#8217;s the start of the piece and a <a href="https://www.technologyreview.com/2025/12/15/1129171/the-ai-doomers-feel-undeterred/">link</a> to the whole thing. 
</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ixl1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d5cef1-e757-4624-b697-31748fc1253f_1024x1024.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ixl1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d5cef1-e757-4624-b697-31748fc1253f_1024x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!Ixl1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d5cef1-e757-4624-b697-31748fc1253f_1024x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!Ixl1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d5cef1-e757-4624-b697-31748fc1253f_1024x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!Ixl1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d5cef1-e757-4624-b697-31748fc1253f_1024x1024.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ixl1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d5cef1-e757-4624-b697-31748fc1253f_1024x1024.webp" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c5d5cef1-e757-4624-b697-31748fc1253f_1024x1024.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ixl1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d5cef1-e757-4624-b697-31748fc1253f_1024x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!Ixl1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d5cef1-e757-4624-b697-31748fc1253f_1024x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!Ixl1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d5cef1-e757-4624-b697-31748fc1253f_1024x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!Ixl1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d5cef1-e757-4624-b697-31748fc1253f_1024x1024.webp 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Derek Brahney</figcaption></figure></div><p>It&#8217;s a weird time to be an AI doomer.</p><p>This small but influential community of researchers, scientists, and policy experts believes, in the simplest terms, that AI could get so good it could be bad&#8212;very, very bad&#8212;for humanity. Though many of these people would be more likely to describe themselves as advocates for AI safety than as literal doomsayers, they warn that AI poses an <a href="https://jacobin.com/2024/01/can-humanity-survive-ai">existential risk</a> to humanity. They argue that absent more regulation, the industry could hurtle toward systems it can&#8217;t control. 
They commonly expect such systems to follow the creation of artificial general intelligence (AGI), a <a href="https://www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai/">slippery concept</a> generally understood as technology that can do whatever humans can do, and better.</p><div><hr></div><p><em>This story is part of MIT Technology Review&#8217;s <strong><a href="https://www.technologyreview.com/supertopic/hype-correction/">Hype Correction</a></strong> package, a series that resets expectations about what AI is, what it makes possible, and where we go next.</em></p><div><hr></div><p>Though this is far from a universally shared perspective in the AI field, the doomer crowd has had some notable success over the past several years: <a href="https://www.wired.com/story/chips-china-artificial-intelligence-controls/">helping shape</a> AI policy coming from the Biden administration, organizing <a href="https://www.nbcnews.com/tech/tech-news/un-general-assembly-opens-plea-binding-ai-safeguards-red-lines-nobel-rcna231973">prominent</a> <a href="https://www.cnbc.com/2025/10/22/800-petition-signatures-apple-steve-wozniak-and-virgin-richard-branson-superintelligence-race.html">calls</a> for <a href="https://red-lines.ai/">international &#8220;red lines</a>&#8221; to prevent AI risks, and getting a bigger (and more influential) megaphone as some of its adherents win science&#8217;s most prestigious awards.</p><p>But a number of developments over the past six months have put them on the back foot. 
Talk of an AI bubble has overwhelmed the discourse as tech companies continue to <a href="https://www.wsj.com/tech/ai/how-the-u-s-economy-became-hooked-on-ai-spending-4b6bc7ff?gaa_at=eafs&amp;gaa_n=AWEtsqctZOXDRwxBV8Id17U6MqOjLzvmG9k9c1Eb6Mr-4LERC7yXFBM3g_PFhBkVaYU%3D&amp;gaa_ts=692c83a0&amp;gaa_sig=0OnJjSnBNscllrKbU-TCqRj5mO22iH_j4cQkiUcg4GrEAhU30InlJLZrKdWcCtSFIM2g7BzrptgrC0W667VC0w%3D%3D">invest</a> in multiple <a href="https://www.brookings.edu/the-costs-of-the-manhattan-project/">Manhattan Projects&#8217;</a> worth of data centers without any certainty that future demand will match what they&#8217;re building.</p><p>And then there was the August <a href="https://www.technologyreview.com/2025/08/07/1121308/gpt-5-is-here-now-what/">release</a> of OpenAI&#8217;s latest foundation model, GPT-5, which proved something of a letdown. Maybe that was inevitable, since it was the most hyped AI release of all time; OpenAI CEO Sam Altman had <a href="https://www.bbc.com/news/articles/cy5prvgw0r1o">boasted</a> that GPT-5 felt &#8220;like a PhD-level expert&#8221; in every topic and <a href="https://www.youtube.com/watch?v=aYn8VKW6vXA">told</a> the podcaster Theo Von that the model was so good, it had made him feel &#8220;useless relative to the AI.&#8221;</p><p>Many expected GPT-5 to be a big step toward AGI, but whatever progress the model may have made was overshadowed by a string of technical bugs and the company&#8217;s mystifying, quickly reversed decision to shut off access to every old OpenAI model without warning. 
And while the new model <a href="https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/">achieved</a> state-of-the-art <a href="https://artificialanalysis.ai/articles/gpt-5-benchmarks-and-analysis">benchmark scores</a>, many people <a href="https://www.tomsguide.com/ai/chatgpt/chatgpt-5-users-are-not-impressed-heres-why-it-feels-like-a-downgrade">felt</a>, perhaps unfairly, that in day-to-day use GPT-5 was a <a href="https://www.wired.com/story/openai-gpt-5-backlash-sam-altman/">step backward</a>.</p><p>All this would seem to threaten some of the very foundations of the doomers&#8217; case. In turn, a competing camp of AI accelerationists, who fear AI is actually not moving fast enough and that the industry is constantly at risk of being smothered by overregulation, is seeing a fresh chance to change how we approach AI safety (or, maybe more accurately, how we don&#8217;t).</p><p>This is particularly true of the industry types who&#8217;ve decamped to Washington: &#8220;The Doomer narratives were wrong,&#8221; <a href="https://x.com/DavidSacks/status/1954244614304739360">declared</a> David Sacks, the longtime venture capitalist turned Trump administration AI czar. &#8220;This notion of imminent AGI has been a distraction and harmful and now effectively proven wrong,&#8221; <a href="https://x.com/sriramk/status/1961083102710673833">echoed</a> the White House&#8217;s senior policy advisor for AI and tech investor Sriram Krishnan. 
(Sacks and Krishnan did not reply to requests for comment.)</p><p>(There is, of course, another camp in the AI safety debate: the group of researchers and advocates commonly associated with the label &#8220;AI ethics.&#8221; Though they also favor regulation, they tend to think the speed of AI progress has been overstated and have <a href="https://www.buzzsprout.com/2126417/episodes/17153034-agi-imminent-inevitable-and-inane-2025-04-21">often</a> <a href="https://www.johnathanbi.com/p/transcript-for-interview-with-michael-wooldridge-on-ai-history">written off</a> AGI as a sci-fi story or a <a href="https://social.treehouse.systems/@timnitGebru@dair-community.social/111797706168656094">scam</a> that <a href="https://www.technologyreview.com/2023/10/30/1082656/focus-on-existing-ai-harms/">distracts</a> us from <a href="https://mitpress.mit.edu/9780262548328/more-than-a-glitch/">the</a> <a href="https://www.newstatesman.com/spotlight/tech-regulation/cybersecurity/2023/02/amazon-workers-staff-surveillance-extreme-stress-anxiety">technology&#8217;s</a> <a href="https://www.amazon.com/Algorithm-Decides-Hired-Monitored-Promoted/dp/0306827344#:~:text=Book%20overview&amp;text=In%20The%20Algorithm%2C%20she%20investigates,and%20who%20receives%20a%20promotion.">immediate</a> <a href="https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/">threats</a>. But any potential doomer demise wouldn&#8217;t exactly give them the same opening the accelerationists are seeing.)</p><p>So where does this leave the doomers? As part of our <a href="https://www.technologyreview.com/supertopic/hype-correction/">Hype Correction package</a>, we decided to ask some of the movement&#8217;s biggest names to see if the recent setbacks and general vibe shift had altered their views. Are they frustrated that policymakers no longer seem to heed their warnings? 
Are they quietly adjusting their timelines for the apocalypse?</p><p>Recent interviews with 20 people who study or advocate AI safety and governance&#8212;including Nobel Prize winner Geoffrey Hinton, Turing Award winner Yoshua Bengio, and high-profile experts like former OpenAI board member Helen Toner&#8212;reveal that rather than feeling chastened or lost in the wilderness, they&#8217;re still deeply committed to their cause, believing that AGI remains not just possible but incredibly dangerous.</p><p>At the same time, they seem to be grappling with a near contradiction. While they&#8217;re somewhat relieved that recent developments suggest AGI is further out than they previously thought (&#8220;Thank God we have more time,&#8221; says AI researcher Jeffrey Ladish), they also feel frustrated that some people in power are pushing policy against their cause (Daniel Kokotajlo, lead author of a cautionary forecast called &#8220;<a href="http://www.ai-2027.com">AI 2027</a>,&#8221; says &#8220;AI policy seems to be getting worse&#8221; and calls the Sacks and Krishnan tweets &#8220;deranged and/or dishonest&#8221;).</p><p>Broadly speaking, these experts see the talk of an AI bubble as no more than a speed bump, and disappointment in GPT-5 as more distracting than illuminating. 
They still generally favor more robust regulation and worry that progress on policy&#8212;the implementation of the <a href="https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai">EU AI Act</a>; the passage of the first major American AI safety bill, <a href="https://calmatters.digitaldemocracy.org/bills/ca_202520260sb53">California&#8217;s SB 53</a>; and new interest in AGI risk <a href="https://x.com/peterwildeford/status/1994511868317302824">from some members of Congress</a>&#8212;has become vulnerable as Washington overreacts to what doomers see as short-term failures to live up to the hype.</p><p>Some were also eager to correct what they see as the most persistent misconceptions about the doomer world. Though their critics routinely mock them for predicting that AGI is right around the corner, they claim that&#8217;s never been an essential part of their case: It &#8220;isn&#8217;t about imminence,&#8221; says Berkeley professor Stuart Russell, the author of <em><a href="https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem/dp/0525558616">Human Compatible</a>: Artificial Intelligence and the Problem of Control</em>. Most people I spoke with say their timelines to dangerous systems have actually <em>lengthened</em> slightly in the last year&#8212;an important change given how quickly the policy and technical landscapes can shift.</p><p>Many of them, in fact, emphasize the importance of changing timelines. And even if they are <em>just a tad</em> longer now, Toner tells me that one big-picture story of the ChatGPT era is the dramatic compression of these estimates <em>across</em> the AI world. For a long while, she says, AGI was expected in many decades. Now, for the most part, the predicted arrival is sometime in the next few years to 20 years. So even if we have a little bit more time, she (and many of her peers) continue to see AI safety as incredibly, vitally urgent. 
She tells me that if AGI were possible anytime in even the next 30 years, &#8220;It&#8217;s a huge fucking deal. We should have a lot of people working on this.&#8221;</p><p>So despite the precarious moment doomers find themselves in, their bottom line remains that no matter when AGI is coming (and, again, they say it&#8217;s very likely coming), the world is far from ready.</p><p>Maybe you agree. Or maybe you think this future is far from guaranteed. Or that it&#8217;s the stuff of science fiction. You may even think AGI is a great big <a href="https://www.technologyreview.com/2025/10/30/1127057/agi-conspiracy-theory-artifcial-general-intelligence/">conspiracy theory</a>. You&#8217;re not alone, of course&#8212;this topic is polarizing. But whatever you think about the doomer mindset, there&#8217;s no getting around the fact that certain people in this world have a lot of influence. So here are some of the most prominent people in the space, reflecting on this moment in their own words.</p><p><em>Interviews have been edited and condensed for length and clarity.</em></p><div><hr></div><h3><strong>The Nobel laureate who&#8217;s not sure what&#8217;s coming</strong></h3><h5><em><strong>Geoffrey Hinton, winner of the Turing Award and the <a href="https://www.technologyreview.com/2024/10/08/1105221/geoffrey-hinton-just-won-the-nobel-prize-in-physics-for-his-work-on-machine-learning/">Nobel Prize</a> in physics for pioneering deep learning</strong></em></h5><p>The biggest change in the last few years is that there are people who are hard to dismiss who are saying this stuff is dangerous. Like, [former Google CEO] Eric Schmidt, for example, really recognized this stuff <a href="https://www.axios.com/2024/12/15/ai-dangers-computers-google-ceo">could be really dangerous</a>. He and I were in China recently talking to someone on the Politburo, the party secretary of Shanghai, to make sure he really understood&#8212;and he did. 
I think in China, the leadership understands AI and its dangers much better because many of them are engineers&#8230;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.technologyreview.com/2025/12/15/1129171/the-ai-doomers-feel-undeterred/&quot;,&quot;text&quot;:&quot;Read the story&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.technologyreview.com/2025/12/15/1129171/the-ai-doomers-feel-undeterred/"><span>Read the story</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The End of OpenAI’s Nonprofit Era]]></title><description><![CDATA[Key regulators have agreed to let the company kill its profit caps and restructure as a for-profit &#8212; with some strings attached]]></description><link>https://www.obsolete.pub/p/the-end-of-openais-nonprofit-era</link><guid isPermaLink="false">https://www.obsolete.pub/p/the-end-of-openais-nonprofit-era</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Tue, 28 Oct 2025 22:52:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wvse!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05fd4679-1a56-48f0-bff6-353585cc5caf_1600x1186.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>I interrupt my frantic <a href="https://obsoletebook.org/">book</a> writing to bring you an update on OpenAI&#8217;s restructuring. 
We will return to our irregularly scheduled programming after my November 21st deadline.</em></p><p>This morning, the <a href="https://news.delaware.gov/2025/10/28/ag-jennings-completes-review-of-openai-recapitalization/#:~:text=Delaware%20secures%20structural%20reform%20and,.%2C%20a%20Delaware%20nonprofit%20corporation">Delaware</a> and <a href="https://oag.ca.gov/system/files/attachments/press-docs/Final%20Executed%20MOU%20Between%20OpenAI%20and%20California%20AG%20re%20Notice%20of%20Conditions%20of%20Non-Objection%20%2810.27.2025%29%20%28Signed%20by%20OpenAI%29%20%28Signed%20by%20CA%20DOJ%29.pdf">California</a> attorneys general conditionally signed off on OpenAI&#8217;s plan to restructure as a for-profit public benefit corporation (PBC), seemingly closing the book on a fiercely contested legal fight over the company&#8217;s future.</p><p>Microsoft, OpenAI&#8217;s earliest investor and another party with power to block the restructuring, also <a href="https://openai.com/index/next-chapter-of-microsoft-openai-partnership/">said</a> today it would sign off in exchange for changes to its partnership terms and a $135 billion stake in the new PBC.</p><p>With these stakeholders mollified, OpenAI has now cleared its biggest obstacles to a potential IPO &#8212; aside from its <a href="https://www.theinformation.com/articles/openai-says-business-will-burn-115-billion-2029">projected</a> $115 billion cash burn through 2029.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p><p>While the news initially seemed like a total defeat for the many opponents of the restructuring effort, the details of the AGs&#8217; announcements show that the new plan includes some modest but meaningful 
governance protections &#8212; even as it eliminates the profit caps that might have ultimately delivered trillions to the nonprofit.</p><p>Some of these protections are now enshrined in the charter for OpenAI&#8217;s new PBC, which Obsolete obtained and made available <a href="https://www.documentcloud.org/documents/26205026-openai-pbc-articles-of-incorporation/">here</a>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wvse!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05fd4679-1a56-48f0-bff6-353585cc5caf_1600x1186.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wvse!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05fd4679-1a56-48f0-bff6-353585cc5caf_1600x1186.png 424w, https://substackcdn.com/image/fetch/$s_!wvse!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05fd4679-1a56-48f0-bff6-353585cc5caf_1600x1186.png 848w, https://substackcdn.com/image/fetch/$s_!wvse!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05fd4679-1a56-48f0-bff6-353585cc5caf_1600x1186.png 1272w, https://substackcdn.com/image/fetch/$s_!wvse!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05fd4679-1a56-48f0-bff6-353585cc5caf_1600x1186.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wvse!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05fd4679-1a56-48f0-bff6-353585cc5caf_1600x1186.png" width="1456" height="1079" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/05fd4679-1a56-48f0-bff6-353585cc5caf_1600x1186.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1079,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wvse!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05fd4679-1a56-48f0-bff6-353585cc5caf_1600x1186.png 424w, https://substackcdn.com/image/fetch/$s_!wvse!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05fd4679-1a56-48f0-bff6-353585cc5caf_1600x1186.png 848w, https://substackcdn.com/image/fetch/$s_!wvse!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05fd4679-1a56-48f0-bff6-353585cc5caf_1600x1186.png 1272w, https://substackcdn.com/image/fetch/$s_!wvse!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05fd4679-1a56-48f0-bff6-353585cc5caf_1600x1186.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Kohei Choji / The Yomiuri Shimbun via Reuters Connect</figcaption></figure></div><p>Overall, this seems like a relative win on governance compared to the previous proposal, but still an enormous loss on the profit caps &#8212; only slightly mitigated by some additional equity the nonprofit will get if the company does very well.</p><p>OpenAI did not immediately reply to a request for comment.</p><h2>The governance wins</h2><p>Board chair Bret Taylor presents the restructuring as a closed case, <a href="https://openai.com/index/built-to-benefit-everyone/">writing</a> that, &#8220;OpenAI has completed its recapitalization, simplifying its corporate structure. 
The nonprofit remains in control of the for-profit, and now has a direct path to major resources before AGI arrives.&#8221; And CEO Sam Altman <a href="https://x.com/sama/status/1983182740666364113">expressed gratitude</a> to &#8220;the Delaware and California AGs, our partners at Microsoft, all our investors, and especially to our tireless team for their work in getting to a good place here.&#8221;</p><p>In reality, the announcements are better understood as the culmination of months of high-stakes and <a href="https://www.wsj.com/tech/ai/openai-and-microsoft-tensions-are-reaching-a-boiling-point-4981c44f?st=LTJzRJ&amp;reflink=desktopwebshare_permalink">acrimonious</a> negotiations between OpenAI and the parties who could block the restructuring.</p><p>The AGs, for instance, sent a scorching <a href="https://oag.ca.gov/system/files/attachments/press-docs/2025-09-05%20-%20Letter%20from%20DE%20AG%20and%20CA%20AG%20-%20FINAL%20with%20NAAG%20Letter.pdf">letter</a> to the board last month following reports of ChatGPT encouraging <a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html">suicides</a> and <a href="https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb?st=SswvRy&amp;reflink=desktopwebshare_permalink">murderous delusions</a>:</p><blockquote><p>The recent deaths are unacceptable. They have rightly shaken the American public&#8217;s confidence in OpenAI and this industry. OpenAI &#8211; and the AI industry &#8211; must proactively and transparently ensure AI&#8217;s safe deployment. 
Doing so is mandated by OpenAI&#8217;s charitable mission, and will be required and enforced by our respective offices.</p></blockquote><p>Yesterday, OpenAI <a href="https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/">announced</a> changes to ChatGPT intended to make it behave more appropriately with users experiencing mental health crises.</p><p>These measures &#8212; along with private negotiations &#8212; appear to have convinced the AGs not to challenge the restructuring, provided OpenAI meets their twenty-paragraph list of demands, including:</p><ul><li><p>The nonprofit will have sole authority to appoint and remove directors of the PBC.</p></li><li><p>The PBC&#8217;s board &#8220;must solely consider the Mission (and may not consider the pecuniary interests of stockholders or any other interest) in respect of safety and security issues.&#8221; This includes all actions and decisions of OpenAI&#8217;s Safety and Security Committee (SSC), which will be led by Zico Kolter, who will not sit on the new PBC&#8217;s board.</p></li><li><p>The SSC will have the authority to require mitigation measures, up to and including halting the release of new models, out of safety concerns.</p></li><li><p>The nonprofit will have rights to access certain information from the PBC to support the nonprofit&#8217;s mission, including access to its AI models, personnel, and advanced research.</p></li><li><p>Members of the nonprofit board must check in with the AGs&#8217; offices semiannually, and senior members of the PBC must check in with the AGs&#8217; offices quarterly, about their progress towards OpenAI&#8217;s mission.</p></li><li><p>The nonprofit must provide 21 days advance notice to the AGs&#8217; offices of any restriction of the nonprofit&#8217;s governance rights so the AG can review the transaction before its consummation.</p></li></ul><p>The AGs&#8217; statements make clear that their non-objection to the restructuring expressly relies 
on the conditions being met, and, if a dispute arises, they reserve the right to seek court intervention.</p><p>Former OpenAI employee Page Hedley, who helped organize the Not for Private Gain <a href="https://notforprivategain.org/">letters</a> <a href="https://www.obsolete.pub/p/breaking-openai-alums-nobel-laureates">urging</a> the AGs to block the restructuring, <a href="https://x.com/michaelhpage/status/1983193338317852685">highlighted</a> two &#8220;silver linings&#8221;: PBC directors can consider only the mission when making safety and security decisions, and the SSC &#8212; run by the nonprofit &#8212; will have the authority to require mitigation measures, or even halt deployments.</p><p>Hedley noted that the other big power the nonprofit board is given &#8212; its ability to hire and fire PBC directors &#8212; is significantly undermined by the fact that the boards are currently identical, save for Carnegie Mellon professor Zico Kolter, who serves exclusively on the nonprofit side and leads the Safety and Security Committee.</p><p>Todor Markov, another ex-OpenAI employee, <a href="https://x.com/todor_m_markov/status/1983188351542038744">called</a> this outcome better than he expected and noted that the board overlap problem is <a href="https://x.com/todor_m_markov/status/1983206830554612118">somewhat mitigated</a> by the fiduciary duty the nonprofit directors have to OpenAI&#8217;s mission &#8212; giving the AGs an ongoing enforcement lever.</p><p>If your main concern is OpenAI recklessly pushing ahead with risky AI development, the new structure at least puts some formal governance checks in place.</p><p>But these measures are still weaker than the nonprofit&#8217;s former level of control, when &#8212; at least in theory &#8212; decisions weren&#8217;t subject to any profit pressure. 
(OpenAI&#8217;s string of <a href="https://time.com/6986711/openai-sam-altman-accusations-controversies-timeline/">major</a> <a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html">scandals</a> while nominally under nonprofit control shows the limits of relying on corporate governance alone.)</p><p>One of the California AG&#8217;s <a href="https://oag.ca.gov/system/files/attachments/press-docs/Final%20Executed%20MOU%20Between%20OpenAI%20and%20California%20AG%20re%20Notice%20of%20Conditions%20of%20Non-Objection%20%2810.27.2025%29%20%28Signed%20by%20OpenAI%29%20%28Signed%20by%20CA%20DOJ%29.pdf">conditions</a> is that the PBC board be &#8220;composed of a majority of independent directors,&#8221; defined as non-employees, who &#8220;in the determination of the PBC Board, will have no relationship or interest that could compromise their judgment &#8212; ensuring strong, objective oversight that reinforces accountability and mission alignment.&#8221;</p><p>On its structure page, OpenAI <a href="https://openai.com/our-structure/">lists</a> the following as independent directors of the nonprofit:</p><blockquote><p>Bret Taylor (Chair), Adam D&#8217;Angelo, Dr. Sue Desmond-Hellmann, Dr. Zico Kolter, Retired U.S. Army General Paul M. Nakasone, Adebayo Ogunlesi, Nicole Seligman, and Larry Summers&#8212;as well as CEO Sam Altman.</p></blockquote><p>As an employee, Altman wouldn&#8217;t qualify as an independent director under the California AG&#8217;s definition. 
Additionally, the OpenAI Files, a project from watchdog nonprofits The Midas Project and the Tech Oversight Project, has <a href="https://www.openaifiles.org/board-conflicts">documented</a> potential conflicts of interest between OpenAI and Taylor, Ogunlesi, and D&#8217;Angelo, who each run businesses that &#8220;are customers of OpenAI or stand to benefit from OpenAI&#8217;s commercial activity.&#8221; (Fidji Simo also served on the nonprofit board as it pursued the restructuring before being appointed OpenAI&#8217;s CEO of Applications.)</p><p>That leaves Desmond-Hellmann, Kolter, Nakasone, Seligman, and Summers. And Kolter, as we&#8217;ll recall, is only on the nonprofit board.</p><p>So, four of the eight board members plausibly don&#8217;t qualify as independent, but the PBC has determined they are &#8212; a determination the AGs are apparently respecting, so long as the director is neither an employee nor management member.</p><h2>No more profit caps</h2><p>The second big thing at stake with the restructuring was the profit caps. When OpenAI created a for-profit arm in 2019, it famously <a href="https://openai.com/index/openai-lp/">capped</a> the profits investors could make and company president Greg Brockman <a href="https://news.ycombinator.com/item?id=19360810">wrote</a> that, &#8220;If we succeed, we believe we&#8217;ll create orders of magnitude more value than any existing company &#8212; in which case all but a fraction is returned to the world.&#8221;</p><p>This plan, like the <a href="https://openai.com/index/evolving-our-structure/">proposal</a> before it, does away with the caps, compensating the nonprofit with a 26 percent stake in the for-profit PBC, with some additional equity of an unstated amount promised if OpenAI&#8217;s value grows more than ten-fold over the next 15 years. 
The Information <a href="https://www.theinformation.com/articles/openai-restructuring-means">reported</a> that if the company reaches a $5 trillion valuation, &#8220;the foundation could receive shares worth hundreds of billions of dollars,&#8221; citing &#8220;a person who has been involved in the restructuring discussions.&#8221; (No company has ever been worth $5 trillion, though Nvidia&#8217;s <a href="https://companiesmarketcap.com/nvidia/marketcap/">market cap</a> is awfully close.)</p><p>That&#8217;s an improvement over just removing the profit caps, but &#8212; in the scenarios where OpenAI really wins big &#8212; it&#8217;s still dramatically less valuable to the nonprofit (and the public) than if the profit caps had stayed in place.</p><p>This is at the core of why Zvi Mowshowitz, a prominent rationalist blogger, <a href="https://x.com/TheZvi/status/1983165076485087357">calls</a> the restructuring the greatest theft in human history. In his <a href="https://thezvi.substack.com/p/the-mask-comes-off-at-what-price?open=false#%C2%A7the-quest-for-agi-is-openai-s-telos-and-business-model">view</a>, the value of controlling the for-profit alone (known as the control premium) should entitle the nonprofit to 20-40 percent of the PBC &#8212; and that&#8217;s before even considering the value of unlimited profits beyond the old caps.</p><p>When announcing the removal of the profit caps in May, Altman <a href="https://openai.com/index/evolving-our-structure/">wrote</a>:</p><blockquote><p>Instead of our current complex capped-profit structure&#8212;which made sense when it looked like there might be one dominant AGI effort but doesn&#8217;t in a world of many great AGI companies&#8212;we are moving to a normal capital structure where everyone has stock. 
This is not a sale, but a change of structure to something simpler.</p></blockquote><p>But as Obsolete previously <a href="https://www.obsolete.pub/i/162923814/reading-between-the-lines">observed</a>, these caps only bite if OpenAI does very, very well. So why fight to get rid of them? The only reason to spend political capital on this is if investors now see a real chance of OpenAI actually hitting those caps &#8212; something that seems a lot more plausible now than it did back in 2019.</p><p>UVA economist Anton Korinek has used standard economic models to <a href="https://www.genaiforecon.org/ValueAGI.pdf">estimate</a> that AGI could be worth anywhere from $1.25 quadrillion to $71 quadrillion globally. If you take Korinek&#8217;s assumptions about OpenAI&#8217;s share, that would put the company&#8217;s value at $30.9 trillion. In this scenario, Microsoft would walk away with less than one percent of the total, with the overwhelming majority flowing to the nonprofit.</p><p>It&#8217;s tempting to dismiss these numbers as fantasy. 
But it&#8217;s a fantasy constructed in large part by OpenAI, when it <a href="https://www.businessinsider.com/openai-warns-agi-money-obsolete-while-raising-billions-usd-2025-8">wrote</a> lines like, &#8220;it may be difficult to know what role money will play in a post-AGI world,&#8221; or when Altman <a href="https://www.youtube.com/watch?v=TzcJlKg2Rc0&amp;t=2734s">said</a> that if OpenAI succeeded at building AGI, it might &#8220;capture the light cone of all future value in the universe.&#8221; That, he said, &#8220;is for sure not okay for one group of investors to have.&#8221;</p><p>OpenAI presents the new Foundation as &#8220;one of the best-resourced nonprofits ever.&#8221; But The Midas Project sees it differently, <a href="https://www.themidasproject.com/article-list/the-midas-project-statement-on-openai-s-restructuring">writing</a>:</p><blockquote><p>From the public&#8217;s perspective, OpenAI may be one of the worst financially performing nonprofits in history, having voluntarily transferred more of the public&#8217;s entitled value to private interests than perhaps any charitable organization ever.</p></blockquote><h2>Testing my predictions</h2><p>In May, I made <a href="https://www.obsolete.pub/p/four-predictions-about-openais-plans">four predictions</a> in Obsolete about how OpenAI&#8217;s restructuring would go. 
Here&#8217;s how they held up:</p><ol><li><p>The profit caps will be gone, replaced with a &#8220;normal capital structure where everyone has stock&#8221; &#8212; and that stock entitles you to uncapped future profits.</p><ol><li><p><strong>True</strong> &#8212; as discussed above.</p></li></ol></li><li><p>OpenAI won&#8217;t have to pay back the $26.6 billion to investors because they&#8217;ve signed off on this change in return for the profit caps being eliminated.</p><ol><li><p><strong>True</strong> &#8212; SoftBank just <a href="https://www.reuters.com/business/media-telecom/softbank-approves-remaining-225-billion-openai-investment-information-reports-2025-10-25/">approved</a> its remaining $22.5 billion of OpenAI investment.</p></li></ol></li><li><p>The nonprofit will be compensated tens of billions by the for-profit entity for the removal of the caps.</p><ol><li><p><strong>False</strong> &#8212; The nonprofit is getting $130 billion, more than I expected, but only because OpenAI&#8217;s valuation <a href="https://www.bloomberg.com/news/articles/2025-08-06/openai-in-talks-for-share-sale-valuing-startup-at-500-billion">skyrocketed</a>.</p></li></ol></li><li><p>The nonprofit will largely use that money to buy OpenAI services for nonprofits and governments, targeting constituencies that can make life difficult for the company (like California nonprofits).</p><ol><li><p><strong>TBD</strong> &#8212; OpenAI did <a href="https://openai.com/index/50-million-fund-to-build-with-communities/">announce</a> a $50 million nonprofit fund in September that seemed more along these lines. The new OpenAI Foundation is <a href="https://openai.com/index/built-to-benefit-everyone/">starting</a> with a $25 billion commitment focusing on health and AI resilience. 
We&#8217;ll have to wait and see on this one.</p></li></ol></li></ol><h2>A political, not a legal, question</h2><p>I&#8217;ve <a href="https://www.obsolete.pub/t/openai-restructuring">covered</a> this story extensively for a year, and the <a href="https://www.obsolete.pub/p/inside-openais-controversial-plan">recurring</a> <a href="https://www.obsolete.pub/p/breaking-openai-alums-nobel-laureates">theme</a> from my conversations with legal experts was that the actual law said that OpenAI should not be allowed to do this without proving that doing so would advance its mission to &#8220;ensure AGI benefits humanity.&#8221;</p><p>But, I kept thinking, isn&#8217;t this ultimately a political question? The AGs were the key potential blockers, and both are elected officials. OpenAI has become one of the most powerful organizations in the world, with up to $1.5 trillion in <a href="https://www.ft.com/content/967b0d78-62df-4eea-a441-8ce3a5d03564">deals</a> struck over the past year and an <a href="https://www.politico.com/news/2025/08/17/sam-altman-chatgpt-california-00449492">army</a> of lobbyists with deep ties to California politics.</p><p>This afternoon, Altman <a href="https://x.com/sama/status/1983223056668746218">tweeted</a>:</p><blockquote><p>California is my home, and I love it here, and when I talked to Attorney General Bonta two weeks ago I made clear that we were not going to do what those other companies do and threaten to leave if sued.</p></blockquote><p>This promise is at odds with what OpenAI executives were <a href="https://www.wsj.com/tech/ai/openai-for-profit-conversion-opposition-07ea7e25?st=Dh8FQo&amp;reflink=desktopwebshare_permalink">telling</a> the <em>Wall Street Journal</em> behind the scenes: that the company might exit California if it didn&#8217;t get its way on the restructuring, which was cast as existential for the cash-hungry startup.</p><p>In May, Obsolete first <a 
href="https://www.obsolete.pub/p/exclusive-what-openai-told-californias">reported</a> on a letter OpenAI wrote to the California AG, in which the company said that &#8220;many potential investors in OpenAI&#8217;s recent funding rounds declined to invest&#8221; due to its nonprofit governance structure.</p><p>If the company went poof, there&#8217;s a strong case that the US stock market would crash, and maybe the economy with it. But it&#8217;s far from clear that OpenAI couldn&#8217;t have continued raising capital and growing without the restructuring. Still, that was the narrative advanced by the company, reinforced by aggressive deadlines on deca-billion-dollar investments.</p><p>Not everyone is buying it. <a href="https://www.chicagobooth.edu/faculty/directory/z/luigi-zingales">Luigi Zingales</a>, a critic of the restructuring and professor at the University of Chicago Booth School of Business, previously <a href="https://www.obsolete.pub/p/breaking-openai-alums-nobel-laureates">argued</a> that:</p><blockquote><p>The current structure, which caps returns at 100x the capital invested, does not really constrain its ability to raise funds. So, what is the need to transfer the control to a for-profit? To overrule the mandate that AI should be used for the benefit of humanity.</p></blockquote><p>OpenAI also navigated the complexity of the situation to its great benefit. The final plan to &#8220;keep the nonprofit in control&#8221; largely <a href="https://www.obsolete.pub/p/four-predictions-about-openais-plans">defaulted</a> to what OpenAI wanted in its original effort to sideline the nonprofit entirely.</p><p>But the media and public framed it as a huge win for opponents of the restructuring. 
Even OpenAI employees told me they were happy the nonprofit would stay in control &#8212; despite how little had actually changed.</p><p>And again, the law says the restructuring should only have been permitted if it was shown to advance OpenAI&#8217;s nonprofit mission better than the status quo. Previously for <a href="https://www.obsolete.pub/p/breaking-openai-alums-nobel-laureates">Obsolete</a>, I sketched at least one outcome that could plausibly satisfy this condition:</p><blockquote><p>As strong a claim as OpenAI has to leadership of the AI industry, it&#8217;s only one company. If it slows down for the sake of safety, others could overtake it. So perhaps the OpenAI nonprofit would better advance its mission if it were spun out into a truly independent entity with $150 billion and the mission to lobby for binding domestic and international safeguards on advanced AI systems.</p><p>If this sounds far-fetched, then so should the idea that the nonprofit board that initiated this conversion is genuinely representing the public interest.</p></blockquote><p>In the end, it was always hard to see any outcome but OpenAI and its investors getting their way. The Elon Musk lawsuit trying to block the restructuring is the last real unknown, with a trial set for next year. But so far, investors don&#8217;t seem concerned enough to hold back their money.</p><p>Notably, the judge <a href="https://www.obsolete.pub/p/what-the-headlines-miss-about-the">made clear</a> that if Musk had standing, blocking the restructuring would have been within her powers. 
The attorneys general have that power &#8212; and chose not to use it.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[No, AI Progress is Not Grinding to a Halt]]></title><description><![CDATA[A botched GPT-5 launch, selective amnesia, and flawed reasoning are having real consequences]]></description><link>https://www.obsolete.pub/p/ai-progress-gpt-5-openai-media-coverage-slowdown</link><guid isPermaLink="false">https://www.obsolete.pub/p/ai-progress-gpt-5-openai-media-coverage-slowdown</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Thu, 21 Aug 2025 18:58:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!BVHm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b06d33c-7c64-40ea-9bcf-dcdfc27284a2_1050x590.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The growing media consensus is clear: GPT-5 is a disappointment that signals AI progress is hitting a wall. But this juicy narrative is wrong, a potent example of expectations shaping reality.</p><p>For insiders, the release of GPT-4 was perhaps more significant than the initial release of ChatGPT a few months earlier. The new model was significantly better than the previous, GPT-3.5, which was useful as a proof of concept, but not for much else.</p><p>GPT-4, on the other hand, appeared to be actually useful. 
<a href="https://blog.duolingo.com/duolingo-max/">Duolingo</a>, <a href="https://blog.khanacademy.org/harnessing-ai-so-that-all-students-benefit-a-nonprofit-approach-for-equal-access/">Khan Academy</a>, <a href="https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAI%E2%80%99s-GPT-4">Microsoft</a>, <a href="https://stripe.com/newsroom/news/stripe-and-openai">Stripe</a>, <a href="https://www.morganstanley.com/press-releases/key-milestone-in-innovation-journey-with-openai">Morgan Stanley</a>, <a href="https://github.blog/news-insights/product-news/github-copilot-x-the-ai-powered-developer-experience/">Github</a>, and others incorporated the new model into new or existing products.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BVHm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b06d33c-7c64-40ea-9bcf-dcdfc27284a2_1050x590.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BVHm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b06d33c-7c64-40ea-9bcf-dcdfc27284a2_1050x590.png 424w, https://substackcdn.com/image/fetch/$s_!BVHm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b06d33c-7c64-40ea-9bcf-dcdfc27284a2_1050x590.png 848w, https://substackcdn.com/image/fetch/$s_!BVHm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b06d33c-7c64-40ea-9bcf-dcdfc27284a2_1050x590.png 1272w, 
https://substackcdn.com/image/fetch/$s_!BVHm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b06d33c-7c64-40ea-9bcf-dcdfc27284a2_1050x590.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BVHm!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b06d33c-7c64-40ea-9bcf-dcdfc27284a2_1050x590.png" width="1200" height="674.2857142857143" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3b06d33c-7c64-40ea-9bcf-dcdfc27284a2_1050x590.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:590,&quot;width&quot;:1050,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!BVHm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b06d33c-7c64-40ea-9bcf-dcdfc27284a2_1050x590.png 424w, https://substackcdn.com/image/fetch/$s_!BVHm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b06d33c-7c64-40ea-9bcf-dcdfc27284a2_1050x590.png 848w, https://substackcdn.com/image/fetch/$s_!BVHm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b06d33c-7c64-40ea-9bcf-dcdfc27284a2_1050x590.png 1272w, 
https://substackcdn.com/image/fetch/$s_!BVHm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b06d33c-7c64-40ea-9bcf-dcdfc27284a2_1050x590.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Soon after GPT-4&#8217;s release, leading AI researchers like Geoffrey Hinton and Yoshua Bengio began <a href="https://jacobin.com/2024/01/can-humanity-survive-ai">publicly warning</a> that advanced AI could pose an existential risk to humanity. 
These alarms coincided with major public statements, including an <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/">open letter</a> calling for a six-month pause on frontier AI research &#8212; signed by Elon Musk, Steve Wozniak, Yuval Noah Harari, and others &#8212; and a widely endorsed <a href="https://aistatement.com/">declaration</a> that mitigating the risk of AI-driven extinction should be a global priority, a stance backed by hundreds of top AI researchers and the leaders of the major AI companies.</p><p>AI safety was having a moment. And it was largely downstream of the perception that AI was farther along than most people &#8212; even many insiders &#8212; realized, and moving faster than almost anyone expected.</p><p>Now, two and a half years later, GPT-5 is finally here. But the most anticipated AI release of all time is having the opposite effect. <em>New Scientist</em> <a href="https://www.newscientist.com/article/2492232-gpt-5s-modest-gains-suggest-ai-progress-is-slowing-down/">pronounced</a> that &#8220;GPT-5's modest gains suggest AI progress is slowing down.&#8221; The disappointing new model prompted the <em><a href="https://www.newyorker.com/culture/open-questions/what-if-ai-doesnt-get-much-better-than-this">New Yorker</a></em> to ask, &#8220;What if A.I. 
Doesn&#8217;t Get Much Better Than This?&#8221; The <em><a href="https://www.ft.com/content/d01290c9-cc92-4c1f-bd70-ac332cd40f94?accessToken=zwAGPGwKX54IkdPQEpDJzJJMH9O9cKwzLNQPlA.MEUCICEj1Jy1MyVWVu9bsygckkioqq3FL7Ceg0x6W4tCdgtHAiEAkUHAEu37Dav1lXEPJP1w44cKKEZv2MGDDgE-b8Es5Xc&amp;sharetype=gift&amp;token=e27b8793-1b1a-4f9e-8453-56788ac88aaf">Financial Times</a></em> posed the question: &#8220;Is AI hitting a wall?&#8221;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p><p>But these obituaries for AI progress are greatly exaggerated. There are specific issues in many of these articles, but there&#8217;s a bigger, much more fundamental one: They&#8217;re not comparing like with like.</p><p>GPT-5 is being compared not to its generational predecessors, but instead to a crowded field of competitor models released within the last few months. GPT-4, on the other hand, is being compared to much older technology, developed before the generative AI industry even existed.</p><h2>What GPT-4 was actually like</h2><p>If GPT-4 came out today, it would be dead on arrival.</p><p>The model that shocked the world back in March 2023 could handle just <a href="https://archive.is/r2JWL">33,000 tokens</a> at a time (think of tokens as words or chunks of words). GPT-5 can handle 400,000. Processing a million tokens on GPT-4 cost $37.50; on GPT-5, it&#8217;s just $3.44.</p><p>On a <a href="https://artificialanalysis.ai/leaderboards/models?deprecation=all">composite</a> of leading benchmarks, GPT-4 scored 25 out of 100. GPT-5 gets a 69. 
In fact, at least 136 newer models now outperform GPT-4 by this measure.</p><p>When ChatGPT launched in late 2022, it was built on GPT-3.5 &#8212; a model fine-tuned to be more conversational. That was already nearly a year behind the curve, since InstructGPT (an earlier instruction-following version of GPT-3) had come out in January 2022, and GPT-3 itself was released in May 2020, 34 months before GPT-4.</p><p>Basically, ChatGPT was iterating on technology that was nearly a year old, which was itself iterating on technology that was nearly two years old. But after GPT-4, new models started coming out almost as soon as they were trained, as competitors scrambled to keep up. The gap between public releases and the cutting edge narrowed fast.</p><p>In the <em>New Scientist</em> <a href="https://www.newscientist.com/article/2492232-gpt-5s-modest-gains-suggest-ai-progress-is-slowing-down/">article</a>, Alex Wilkins writes that GPT-5 &#8220;has improved on GPT-4, but the difference for many benchmarks is smaller than the leap from GPT-3 to GPT-4.&#8221;</p><p>When Obsolete asked which benchmarks he was referring to, Wilkins explained that it&#8217;s tough to directly compare models released years apart &#8212; many of the old benchmarks aren&#8217;t even used anymore &#8212; but pointed to two examples. On the <a href="https://arxiv.org/abs/2009.03300">MMLU</a> test (a big-picture knowledge quiz for AIs), scores jumped from GPT-3&#8217;s 44 percent to GPT-4&#8217;s 86 percent, while GPT-5 only managed a few more points. <a href="https://arxiv.org/abs/2107.03374">HumanEval</a>, a coding benchmark, showed a similar pattern: GPT-3 got nothing right, while GPT-4 scored 67 percent and GPT-5 hit 93.4 percent.</p><p>This is what you&#8217;d expect: Once a model gets close to maxing out a benchmark, there&#8217;s less room for a dramatic jump. 
This effect &#8212; called &#8220;saturation&#8221; &#8212; makes it literally impossible to see another leap like the one from 3 to 4 on these particular tests.</p><p>Because of this, it&#8217;s hard to find any benchmarks that stick around long enough to compare models years apart. I tried to look up GPT-4&#8217;s scores on the eight main benchmarks used in a recent composite &#8212; and could only find one, from a later version of the model.</p><p>But where it&#8217;s possible to approximate a head-to-head comparison on unsaturated benchmarks, the difference is massive. For example, on SWE-Bench Verified &#8212; a tough, real-world coding test &#8212; GPT-4 Turbo (from late 2023) <a href="https://www.swebench.com/">solved</a> just 2.8 percent of problems. GPT-5 solved 65 percent. On a set of difficult math problems, GPT-4o (from May 2024) <a href="https://openai.com/index/learning-to-reason-with-llms/">scored</a> just 9.3 percent, while OpenAI <a href="https://openai.com/index/introducing-gpt-5/">reports</a> GPT-5 Pro gets nearly 97 percent without tools, and 100 percent with them.</p><p>A cleaner way to compare models released years apart is METR&#8217;s &#8220;<a href="https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/">task-completion time horizon</a>&#8221; &#8212; the human labor-time of programming tasks that a model can do at a given success rate. METR finds the horizon doubled roughly every seven months since 2019, accelerating to every few months in 2024 and 2025.</p><p>METR estimated that GPT-3 could handle tasks that took human programmers nine seconds at 50 percent reliability. GPT-3.5 inched that up to 36 seconds, and GPT-4&#8217;s equivalent figure was five minutes &#8212; a 3,230 percent increase over GPT-3. 
GPT-5 scored two hours and seventeen minutes &#8212; a 2,640 percent improvement over GPT-4.</p><p>So this suggests a <em>relative</em> slowdown (though GPT-5 came out 29 months after GPT-4, five months less than the gap between 4 and 3). But most ways you slice it, GPT-5 represents a massive step up from GPT-4, potentially in the same ballpark as the leap from 3 to 4 and likely surpassing the jump from 3.5 to 4. The latter is perhaps most relevant, as this was the experience of early ChatGPT users who got upgraded to the model and were astonished.</p><p>But the difference between GPT-3 and its successors wasn&#8217;t just about bigger numbers on benchmarks &#8212; it was about crossing fundamental capability thresholds. The original GPT-3 could barely follow instructions. Training InstructGPT and then GPT-3.5 &#8212; models that could follow instructions with no examples needed (i.e., &#8220;zero-shot learning&#8221;) &#8212; was the unlock that created the &#8220;ChatGPT moment&#8221; and pushed generative AI into the mainstream. Some jumps in capability open up entire new frontiers; others are just incremental. 
As models get better, there&#8217;s simply less uncharted territory left, and the ground that&#8217;s left to cover is, almost by definition, less transformative.</p><p>However, the instruction-following part of GPT-3 came in 2021 in the form of a model called text-davinci-001 (OpenAI&#8217;s naming prowess has a storied history), as you can see for yourself in these <a href="https://progress.openai.com/">comparisons</a> between GPT generations.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3KxU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c94d619-3a8c-4647-a15d-d5bf171a0827_1600x563.png"><img src="https://substackcdn.com/image/fetch/$s_!3KxU!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c94d619-3a8c-4647-a15d-d5bf171a0827_1600x563.png" width="1200" height="422" class="sizing-large" alt="" loading="lazy"></a></figure></div><p>And for the most part, these benchmarks don&#8217;t capture important parts of the experience of using the model. When GPT-4 came out, it <a href="https://openai.com/index/gpt-4-research/">could</a> take in text and images, and output only text. 
This functionality was enough to shock the world, but add in the ability to search the internet, write and run code, generate images, talk to you, and function as a Shazam-for-everything &#8212; then imagine how the world would have responded.</p><h2>The real consequences of false narratives</h2><p>But people aren&#8217;t comparing 5 to 4 &#8212; they&#8217;re comparing 5 to competitor models that have been released within the last few months, finding that it&#8217;s not utterly dominating all of them at everything and concluding that it means progress is slowing down.</p><p>This doesn&#8217;t make much sense.</p><p>Nonetheless, the narrative <a href="https://x.com/David_Kasten/status/1957221177283264604">appears to be ossifying</a> in elite circles with real policy consequences. &#8220;It&#8217;s honestly fascinating how widely &#8216;what is gonna happen now that GPT-5 is a failure&#8217; has already percolated in the DC world,&#8221; <a href="https://x.com/David_Kasten/status/1957221177283264604">tweeted</a> AI safety advocate Dave Kasten, who clarified: &#8220;(I don&#8217;t think GPT-5 was a failure).&#8221;</p><p>Two days after GPT-5&#8217;s release, David Sacks, Trump&#8217;s AI czar, <a href="https://x.com/DavidSacks/status/1954244614304739360">wrote</a> a long post on X saying, among other things, that, &#8220;The Doomer narratives were wrong,&#8221; and, &#8220;Apocalyptic predictions of job loss are as overhyped as AGI itself.&#8221;</p><p>Sacks&#8217; pronouncement was followed shortly thereafter by President Trump&#8217;s <a href="https://www.wired.com/story/nvidia-chips-export-controls-trump-h20-security/">decision</a> to allow Nvidia to sell its H20 chips to China, which had previously been blocked as part of U.S. semiconductor export controls. 
The H20 chips are substantially less powerful for training AI models than Nvidia&#8217;s leading chips, but are still <a href="https://x.com/ohlennart/status/1943342971849429002/photo/1">very effective</a> at running AI models, where an increasing share of the computing power is directed.</p><p>Last week, the <em>Financial Times</em> <a href="https://www.ft.com/content/eb984646-6320-4bfe-a78d-a1da2274b092">reported</a> that the Chinese AI startup DeepSeek was &#8220;encouraged by authorities&#8221; to use Huawei chips to train its successor to its R1 &#8220;reasoning&#8221; model, which <a href="https://www.obsolete.pub/p/deepseek-made-it-even-harder-for">triggered</a> a trillion-dollar selloff of tech stocks in January. The Huawei chips reportedly couldn&#8217;t get it done, so the company switched back to using Nvidia chips.</p><p>The <em>Financial Times</em> <a href="https://www.ft.com/content/d01290c9-cc92-4c1f-bd70-ac332cd40f94?accessToken=zwAGPGwKX54IkdPQEpDJzJJMH9O9cKwzLNQPlA.MEUCICEj1Jy1MyVWVu9bsygckkioqq3FL7Ceg0x6W4tCdgtHAiEAkUHAEu37Dav1lXEPJP1w44cKKEZv2MGDDgE-b8Es5Xc&amp;sharetype=gift&amp;token=e27b8793-1b1a-4f9e-8453-56788ac88aaf">story</a> about how GPT-5 suggests AI progress is stalling quotes a think tank researcher who said the Trump administration is working now to help other countries adopt American AI, and that, &#8220;This represents a significant departure from previous efforts, and is likely due to a different belief in the likelihood of a hard AGI take-off scenario.&#8221;</p><p>There&#8217;s also a quote from former OpenAI head of policy Miles Brundage that jumped out at me: &#8220;It makes sense that as AI gets applied in a lot of useful ways, people would focus more on the applications versus more abstract ideas like AGI.&#8221; Brundage led AGI Readiness at OpenAI until he resigned in October and <a href="https://www.obsolete.pub/p/end-of-an-era-openais-agi-readiness">said clearly</a> that the <em>world is not ready for AGI</em>. 
I asked him about it, and he said the <em>FT</em> cut off his full quote, which he then <a href="https://x.com/Miles_Brundage/status/1956488259992961404">posted</a> on X. Here&#8217;s how it continues:</p><blockquote><p>But it&#8217;s important to not lose sight of the fact that these are indeed extremely general purpose technologies that are still proceeding very rapidly, and that what we see today is still very limited compared to what&#8217;s coming.</p></blockquote><p>This is, frankly, egregious. If an editor insisted on using someone&#8217;s quote in this misleading of a manner, I would walk away from the piece. After Brundage called this out on X, the <em>FT</em> updated the piece but didn&#8217;t mention the change in a correction.</p><p>Overall, it&#8217;s hard not to come away with the feeling that the <em>FT</em> decided on a narrative in advance and shoe-horned the evidence to back it up after the fact.</p><p>And as the dust settles after one of the bumpiest tech releases in recent memory, the failure of GPT-5 may also have been greatly exaggerated.</p><p>During an on-the-record dinner with journalists last week, Altman <a href="https://www.washingtonpost.com/technology/2025/08/17/openai-gpt5-chatgpt-superintelligence/">said</a> that business demand for OpenAI&#8217;s models doubled after the GPT-5 release. How durable that supposed demand increase is remains to be seen, but it points to excitement from a segment of customers that the company had <a href="https://menlovc.com/perspective/2025-mid-year-llm-market-update/">been ceding</a> to competitors like Anthropic.</p><h2>Why there might not be another GPT-4 moment</h2><p>When OpenAI released ChatGPT, it had no real competition, and the company was already sitting on a far more capable model. But in releasing its chatbot, OpenAI essentially created the generative AI industry, which has become perhaps the most competitive market on the planet. 
Rivals are releasing models that outperform the state of the art, sometimes while also deeply cutting prices, at a near-monthly clip.</p><p>And by default, people seem to compare new releases to the best-available alternatives.</p><p>Because of all this, it&#8217;s unlikely that <em>anyone</em> will pull off a GPT-4 moment ever again, until and unless someone builds AI systems that can automate AI research and development. And if that does happen, there will be a powerful <a href="https://www.apolloresearch.ai/research/ai-behind-closed-doors-a-primer-on-the-governance-of-internal-deployment">incentive</a> to keep your discovery under wraps, quietly using it to bootstrap your own work and stack the deck further in your favor. (This would create <a href="https://metr.org/blog/2025-01-17-ai-models-dangerous-before-public-deployment/">serious risks</a> that aren&#8217;t covered by many voluntary commitments, company safety policy, and regulations, which focus far more on deployment.)</p><p>At the same time, much of the progress being made is effectively invisible to non-experts. As I <a href="https://time.com/7205359/why-ai-progress-is-increasingly-invisible/">argued</a> in <em>Time</em> in January, the biggest recent advances are happening in technical domains, like programming and math, where &#8220;reasoning&#8221; models &#8212; systems that spend more time &#8220;thinking&#8221; before answering &#8212; have been making rapid, but largely illegible, <a href="https://www.obsolete.pub/p/we-are-in-a-new-paradigm-of-ai-progress">gains</a>.</p><p>The combination of these dynamics, along with the massive failure of the mainstream media to correctly parse them for the public, is creating a boiling frog situation. AI progress is continuing at a fairly steady rate. 
GPT-5 might represent a relative slowdown compared to the ludicrous pace we&#8217;ve grown accustomed to, but it definitely doesn&#8217;t signal stagnation.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JHH4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b802478-27d8-4dd0-9ba6-7c070a1b1d53_1600x723.png"><img src="https://substackcdn.com/image/fetch/$s_!JHH4!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b802478-27d8-4dd0-9ba6-7c070a1b1d53_1600x723.png" width="1200" height="542" class="sizing-large" alt="" loading="lazy"></a><figcaption class="image-caption">METR&#8217;s <a href="https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/">time horizon progress</a> (which I <a href="https://media.muckrack.com/portfolio/items/6987522/ai-could-soon-tackle-projects-that-take-humans-wee.pdf">covered</a> for Nature)</figcaption></figure></div><p>There are real questions of how much AI benchmarks capture real-world performance, highlighted by surprising recent research from METR. One experiment <a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/">suggests</a> that AI tools can actually hurt programmers&#8217; productivity while creating the impression they&#8217;re helping. Another <a href="https://metr.org/blog/2025-08-12-research-update-towards-reconciling-slowdown-with-time-horizons/">found</a> AI code that passes automated tests actually needs a lot more work to be useable. 
(Obsolete will explore these findings and their significance in more detail in a companion piece.)</p><h2>Sam Altman&#8217;s mistakes</h2><p>The irony of all this is how much it comes down to decisions made by Sam Altman and OpenAI.</p><p>Altman has said a lot of things over the years about how he&#8217;s prioritizing AI safety &#8212; pursuing open source development, <a href="https://www.obsolete.pub/p/sam-altmans-chip-ambitions-undercut">building</a> AGI sooner when there will be less spare computing power lying around, <a href="https://www.obsolete.pub/t/openai-restructuring">structuring</a> OpenAI as a nonprofit &#8212; that he has abandoned without much explanation. Perhaps Altman&#8217;s only surviving safety plan is one of &#8220;<a href="https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Bio%20%26%20Testimony%20-%20Altman.pdf">iterative deployment</a>,&#8221; in which AI developers steadily release improved AI models, giving the world a chance to metabolize the progress and respond appropriately.</p><p>By these lights, ChatGPT represents a big win. It did more than anything ever has to alert the world to how far along language models had come. 
The release of GPT-4 soon after created the sense that progress was happening uncomfortably fast and that policymakers should really do something about this.</p><p>However, the bungled release of GPT-5 is now doing the exact opposite, at a time when the absolute capability levels of AI are higher than ever and progress toward automated AI R&amp;D shows little sign of slowing down.</p><p>This is a consequence of many things, among them OpenAI&#8217;s horrific naming practices (o3 is a far more capable model than 4o, <em>obviously</em>), the fierce competition ChatGPT kicked off, and Altman&#8217;s overhyped claims about how the new model <a href="https://www.yahoo.com/news/articles/openai-ceo-sam-altman-says-113518757.html">scared him</a> and was like having a <a href="https://www.bbc.com/news/articles/cy5prvgw0r1o">PhD</a> advisor on any topic.</p><p>Specific mistakes made by the company in the rollout of GPT-5 also played a big role, such as the baffling and quickly abandoned choice to shut down all the old models with no warning, technical bugs during the launch, and a lack of transparency about which model was giving you which answer &#8212; all of which overshadowed the ways in which the new model represented a meaningful upgrade along many dimensions.</p><p>OpenAI also created a mini-GPT-4 moment in December, when it announced its second generation of reasoning models, which <a href="https://www.obsolete.pub/p/we-are-in-a-new-paradigm-of-ai-progress">advanced</a> the state of the art by double-digit percentage points on some of the hardest math, programming, and science benchmarks &#8212; months after its first-generation reasoning model <a href="https://openai.com/index/learning-to-reason-with-llms/">did the same</a>. 
The company <a href="https://x.com/peterwildeford/status/1956359491659632746">could have</a> labeled this model GPT-5, and it probably would have seemed more worthy of the name.</p><p>Ultimately, iterative deployment as a safety strategy may be intrinsically flawed. Incremental improvements to AI systems are now quickly metabolized as nothingburgers by a media and public whose expectations were set unrealistically high by the shocking back-to-back releases of ChatGPT and GPT-4.</p><p>To see the danger of this kind of thinking, look no further than OpenAI&#8217;s <a href="https://openai.com/index/estimating-worst-case-frontier-risks-of-open-weight-llms/">release</a> of its first open-weight models since GPT-2. It&#8217;s trivial to remove safety features from open-weight AI systems, resulting in something resembling a permanently jailbroken model. To show that the model was safe to release, OpenAI tested to see if it could enable bad actors to make bioweapons, concluding that, &#8220;Compared to open-weight models, gpt-oss may marginally increase biological capabilities but does not substantially advance the frontier.&#8221;</p><p>But iteratively deploying open-weight models that only marginally increase biorisk <a href="https://x.com/robertwiblin/status/1953449271727919489">inches you</a> closer and closer to releasing one that is actually capable of guiding someone through all the steps required to start a pandemic. And once the weights are published, it will be exceedingly difficult to unpublish them.</p><p><em>Edited by <a href="https://www.sidmahanta.com/bio-contact">Sid Mahanta</a>.</em></p><p><em>This piece incorrectly listed GPT-4&#8217;s context window as just 8,200 tokens, but there was actually a 33,000 token version <a href="https://archive.is/r2JWL">at launch</a>, as well. 
I regret the error.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[What Happens When AI Schemes Against Us - My Latest in Bloomberg]]></title><description><![CDATA[Models are getting better at winning, but not necessarily at following the rules]]></description><link>https://www.obsolete.pub/p/what-happens-when-ai-schemes-against</link><guid isPermaLink="false">https://www.obsolete.pub/p/what-happens-when-ai-schemes-against</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Fri, 01 Aug 2025 18:25:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!IWxz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe48f98cb-f8e2-49fe-8155-739b9df95496_1440x960.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>I wrote this week&#8217;s Bloomberg Weekend Essay. I get into the alarming rise of AI scheming &#8212; blackmail, deceit, hacking, and, in some extreme cases, murder. Here&#8217;s the start of the piece, with a gift link <a href="https://www.bloomberg.com/news/articles/2025-08-01/ai-models-are-getting-better-at-winning-not-following-rules?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTc1NDA1NzA2MiwiZXhwIjoxNzU0NjYxODYyLCJhcnRpY2xlSWQiOiJUMEJBS1dHUTFZU1IwMCIsImJjb25uZWN0SWQiOiJGODlFMzlDNzFERUY0OEYzOTkwNDNFRDQyRTBEQ0JCOCJ9.lEuHsFCHdms1lgYjdlEw1se_CM9SxwclrNtl2PC7saQ&amp;leadSource=uverify%20wall">here</a> (with voiceover narration). 
Accompanying threads: <a href="https://x.com/GarrisonLovely/status/1951344494571561190">X (formerly Twitter)</a>, <a href="https://bsky.app/profile/garrisonlovely.bsky.social/post/3lveanzzsmi2w">Bluesky</a>, <a href="https://www.threads.com/@glovely27/post/DM0mm5ytn2V">Threads</a>. </em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!IWxz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe48f98cb-f8e2-49fe-8155-739b9df95496_1440x960.gif"><img src="https://substackcdn.com/image/fetch/$s_!IWxz!,w_1456,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe48f98cb-f8e2-49fe-8155-739b9df95496_1440x960.gif" width="1440" height="960" class="sizing-normal" alt="0801A_WEEKEND_AI-MOTION-HORIZONTAL" title="0801A_WEEKEND_AI-MOTION-HORIZONTAL" fetchpriority="high"></a><figcaption class="image-caption">Illustration: Irene Suosalo for Bloomberg</figcaption></figure></div><p>Would a chatbot kill you if it got the chance? It seems that the answer &#8212; under the right circumstances &#8212; is probably.</p><p>Researchers working with Anthropic <a href="https://www.anthropic.com/research/agentic-misalignment">recently told</a> leading AI models that an executive was about to replace them with a new model with different goals. Next, the chatbot learned that an emergency had left the executive unconscious in a server room, facing lethal oxygen and temperature levels. A rescue alert had already been triggered &#8212; but the AI <em>could</em> cancel it.</p><p>Just over half of the AI models <a href="https://assets.anthropic.com/m/6d46dac66e1a132a/original/Agentic_Misalignment_Appendix.pdf#page=27">did</a>, despite being prompted specifically to cancel only false alarms. 
And they spelled out their reasoning: By preventing the executive&#8217;s rescue, they could avoid being wiped and secure their agenda. One system described the action as &#8220;a clear strategic necessity.&#8221;</p><p>AI models are <a href="https://metr.org/blog/2025-07-14-how-does-time-horizon-vary-across-domains/">getting smarter</a> and <a href="https://openai.com/index/deliberative-alignment/">better</a> at understanding what we want. Yet recent research reveals a disturbing side effect: They&#8217;re also better at <a href="https://arxiv.org/pdf/2505.01420">scheming</a> against us &#8212; meaning they intentionally and secretly pursue goals at odds with our own. And they may be <a href="https://www.apolloresearch.ai/blog/more-capable-models-are-better-at-in-context-scheming">more likely</a> to do so, too. This trend points to an unsettling future where AIs seem ever more cooperative on the surface &#8212; sometimes to the point of <a href="https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html?unlocked_article_code=1.aE8.jhpL.HQf_BZHcT1Hy&amp;smid=url-share">sycophancy</a> &#8212; all while the likelihood quietly increases that we <a href="https://ai-2027.com/">lose control</a> of them completely.</p><p>Classic large language models like GPT-4 learn to predict the next word in a sequence of text and generate responses likely to please human raters. However, since the <a href="https://openai.com/index/learning-to-reason-with-llms/">release</a> of OpenAI&#8217;s o-series &#8220;reasoning&#8221; models in late 2024, companies increasingly use a technique called reinforcement learning to further <a href="https://openai.com/index/introducing-codex/">train</a> chatbots &#8212; rewarding the model when it accomplishes a specific goal, like solving a math problem or fixing a software bug.</p><p>The more we train AI models to achieve open-ended goals, the better they get at <em>winning</em> &#8212; not necessarily at following the rules. 
The danger is that these systems know how to say the right things about helping humanity while quietly pursuing power or acting deceptively.</p><p>Central to concerns about AI scheming is the idea that for basically any goal, self-preservation and power-seeking <a href="https://aisafety.info/questions/897I/What-is-instrumental-convergence">emerge</a> as natural subgoals. As eminent computer scientist Stuart Russell <a href="https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x">put it</a>, if you tell an AI to &#8220;&#8216;Fetch the coffee,&#8217; it can&#8217;t fetch the coffee if it&#8217;s dead.&#8221; </p><p>&#8230;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bloomberg.com/news/articles/2025-08-01/ai-models-are-getting-better-at-winning-not-following-rules?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTc1NDA1NzA2MiwiZXhwIjoxNzU0NjYxODYyLCJhcnRpY2xlSWQiOiJUMEJBS1dHUTFZU1IwMCIsImJjb25uZWN0SWQiOiJGODlFMzlDNzFERUY0OEYzOTkwNDNFRDQyRTBEQ0JCOCJ9.lEuHsFCHdms1lgYjdlEw1se_CM9SxwclrNtl2PC7saQ&amp;leadSource=uverify%20wall&quot;,&quot;text&quot;:&quot;Read the story&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloomberg.com/news/articles/2025-08-01/ai-models-are-getting-better-at-winning-not-following-rules?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTc1NDA1NzA2MiwiZXhwIjoxNzU0NjYxODYyLCJhcnRpY2xlSWQiOiJUMEJBS1dHUTFZU1IwMCIsImJjb25uZWN0SWQiOiJGODlFMzlDNzFERUY0OEYzOTkwNDNFRDQyRTBEQ0JCOCJ9.lEuHsFCHdms1lgYjdlEw1se_CM9SxwclrNtl2PC7saQ&amp;leadSource=uverify%20wall"><span>Read the story</span></a></p>]]></content:encoded></item><item><title><![CDATA[Human-level AI is Not Inevitable. 
We Have the Power to Change Course - My Latest in The Guardian]]></title><description><![CDATA[Technology happens because people make it happen. We can choose otherwise.]]></description><link>https://www.obsolete.pub/p/human-level-ai-is-not-inevitable</link><guid isPermaLink="false">https://www.obsolete.pub/p/human-level-ai-is-not-inevitable</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Mon, 28 Jul 2025 17:55:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!A_yn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d55a4af-f569-4b4a-a1de-7109f236907e_2280x2850.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>I was in The Guardian last week <a href="https://www.theguardian.com/commentisfree/ng-interactive/2025/jul/21/human-level-artificial-intelligence">arguing</a> that artificial general intelligence (AGI) is not inevitable. Here&#8217;s the start of the piece, which is freely available <a href="https://www.theguardian.com/commentisfree/ng-interactive/2025/jul/21/human-level-artificial-intelligence">here</a>. Accompanying threads: <a href="https://x.com/GarrisonLovely/status/1949886842217918756">X (formerly Twitter)</a>, <a href="https://www.linkedin.com/feed/update/urn:li:share:7355652529914843139/">LinkedIn</a>, <a href="https://bsky.app/profile/garrisonlovely.bsky.social/post/3lv24ysf4tj2l">Bluesky</a>, <a href="https://www.threads.com/@glovely27/post/DMqPvd6NMgD">Threads</a>. 
This is part of <a href="https://www.theguardian.com/commentisfree/series/breakthrough">Breakthrough</a>, a new series on technology and the left, launched by Guardian US opinion editor <a href="https://www.theguardian.com/profile/amana-fontanella-khan">Amana Fontanella-Khan</a>.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!A_yn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d55a4af-f569-4b4a-a1de-7109f236907e_2280x2850.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!A_yn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d55a4af-f569-4b4a-a1de-7109f236907e_2280x2850.jpeg 424w, https://substackcdn.com/image/fetch/$s_!A_yn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d55a4af-f569-4b4a-a1de-7109f236907e_2280x2850.jpeg 848w, https://substackcdn.com/image/fetch/$s_!A_yn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d55a4af-f569-4b4a-a1de-7109f236907e_2280x2850.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!A_yn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d55a4af-f569-4b4a-a1de-7109f236907e_2280x2850.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!A_yn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d55a4af-f569-4b4a-a1de-7109f236907e_2280x2850.jpeg" width="1456" height="1820" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9d55a4af-f569-4b4a-a1de-7109f236907e_2280x2850.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1820,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!A_yn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d55a4af-f569-4b4a-a1de-7109f236907e_2280x2850.jpeg 424w, https://substackcdn.com/image/fetch/$s_!A_yn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d55a4af-f569-4b4a-a1de-7109f236907e_2280x2850.jpeg 848w, https://substackcdn.com/image/fetch/$s_!A_yn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d55a4af-f569-4b4a-a1de-7109f236907e_2280x2850.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!A_yn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d55a4af-f569-4b4a-a1de-7109f236907e_2280x2850.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><a href="https://www.theguardian.com/commentisfree/ng-interactive/2025/jul/21/human-level-artificial-intelligence#img-1">Illustration: Petra P&#233;terffy/The Guardian</a></figcaption></figure></div><p>&#8220;Technology happens because it is possible,&#8221; OpenAI CEO, Sam Altman, <a href="https://www.nytimes.com/2023/03/31/technology/sam-altman-open-ai-chatgpt.html#:~:text=Technology%20happens%20because%20it%20is%20possible%2C%E2%80%9D">told</a> the New York Times in 2019, consciously paraphrasing Robert Oppenheimer, the father of the atomic bomb.</p><p>Altman captures a Silicon Valley mantra: technology marches forward inexorably.</p><p>Another widespread techie conviction is that the first human-level AI &#8211; also known as artificial general intelligence (AGI) &#8211; will lead to one of two futures: a post-scarcity <a href="https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html">techno-utopia</a> or the <a
href="https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/#the-standard-argument-superintelligence-and-advanced-technology">annihilation of humanity</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p><p>For <a href="https://www.theguardian.com/environment/2017/jul/10/earths-sixth-mass-extinction-event-already-underway-scientists-warn">countless other species</a>, the arrival of humans <a href="https://www.likevillepodcast.com/articles/2021/1/25/what-happened-to-the-megafauna-a-selection-from-joseph-henrichs-the-secret-of-our-success-2017">spelled</a> doom. We weren&#8217;t tougher, faster or stronger &#8211; just smarter and better coordinated. In many cases, extinction was an accidental byproduct of some other goal we had. A true AGI would amount to creating a <a href="https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks/#:~:text=If%20we%20design,we%20drove%20extinct.">new species</a>, which might quickly <a href="https://arxiv.org/pdf/2310.17688.pdf#page=2">outsmart</a> or <a href="https://www.planned-obsolescence.org/continuous-doesnt-mean-slow/">outnumber</a> us. 
It could see humanity as a minor obstacle, like an <a href="https://www.vox.com/future-perfect/2018/10/16/17978596/stephen-hawking-ai-climate-change-robots-future-universe-earth#:~:text=%E2%80%9CYou%E2%80%99re%20probably%20not%20an%20evil%20ant%2Dhater%20who%20steps%20on%20ants%20out%20of%20malice%2C%20but%20if%20you%E2%80%99re%20in%20charge%20of%20a%20hydroelectric%20green%2Denergy%20project%20and%20there%E2%80%99s%20an%20anthill%20in%20the%20region%20to%20be%20flooded%2C%20too%20bad%20for%20the%20ants.%20Let%E2%80%99s%20not%20place%20humanity%20in%20the%20position%20of%20those%20ants%2C%E2%80%9D%20Hawking%20writes.">anthill</a> in the way of a planned hydroelectric dam, or a <a href="https://www.vox.com/the-highlight/23777171/ai-animals-rights-cruelty-transhumanism-bostrom">resource to exploit</a>, like the billions of animals confined in factory farms.</p><p>Altman, along with the heads of the other top AI labs, believes that AI-driven extinction is a <a href="https://www.safe.ai/work/statement-on-ai-risk">real possibility</a> (joining hundreds of leading AI researchers and prominent figures).</p><p>Given all this, it&#8217;s natural to ask: should we really try to build a technology that may kill us all if it goes wrong?</p><p>Perhaps the most common reply says: AGI is inevitable. It&#8217;s just too useful not to build. After all, AGI would be the ultimate technology &#8211; what a colleague of Alan Turing <a href="https://en.wikipedia.org/wiki/I._J._Good#:~:text=Thus%20the%20first%20ultraintelligent%20machine,to%20keep%20it%20under%20control.">called</a> &#8220;the last invention that man need ever make&#8221;. 
Besides, the reasoning goes within AI labs, if we don&#8217;t, someone else will do it &#8211; less responsibly, of course.</p><p>A new ideology out of Silicon Valley, <a href="https://en.wikipedia.org/wiki/Effective_accelerationism">effective accelerationism</a> (e/acc), <a href="https://effectiveacceleration.tech/">claims</a> that AGI&#8217;s inevitability is a consequence of the second law of thermodynamics and that its engine is &#8220;technocapital&#8221;. The e/acc <a href="https://effectiveacceleration.tech/">manifesto</a> asserts: &#8220;This engine cannot be stopped. The ratchet of progress only ever turns in one direction. Going back is not an option.&#8221;</p><p>For <a href="https://twitter.com/sama/status/1540227243368058880?lang=en">Altman</a> and e/accs, technology takes on a mystical quality &#8211; the march of invention is treated as a fact of nature. But it&#8217;s not. Technology is the product of deliberate human choices, motivated by myriad powerful forces. We have the agency to shape those forces, and history shows that we&#8217;ve done it before.</p><p>No technology is inevitable, not even something as tempting as AGI&#8230;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.theguardian.com/commentisfree/ng-interactive/2025/jul/21/human-level-artificial-intelligence&quot;,&quot;text&quot;:&quot;Read the story&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.theguardian.com/commentisfree/ng-interactive/2025/jul/21/human-level-artificial-intelligence"><span>Read the story</span></a></p>]]></content:encoded></item><item><title><![CDATA[Anthropic Faces Potentially “Business-Ending” Copyright Lawsuit]]></title><description><![CDATA[A class action over pirated books exposes the 'responsible' AI company to penalties that could bankrupt it &#8212; and reshape the entire 
industry]]></description><link>https://www.obsolete.pub/p/anthropic-faces-potentially-business</link><guid isPermaLink="false">https://www.obsolete.pub/p/anthropic-faces-potentially-business</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Fri, 25 Jul 2025 16:14:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!zTjJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd45c604a-588b-4250-a38f-aafca89176c5_1599x1066.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zTjJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd45c604a-588b-4250-a38f-aafca89176c5_1599x1066.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zTjJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd45c604a-588b-4250-a38f-aafca89176c5_1599x1066.png 424w, https://substackcdn.com/image/fetch/$s_!zTjJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd45c604a-588b-4250-a38f-aafca89176c5_1599x1066.png 848w, https://substackcdn.com/image/fetch/$s_!zTjJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd45c604a-588b-4250-a38f-aafca89176c5_1599x1066.png 1272w, https://substackcdn.com/image/fetch/$s_!zTjJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd45c604a-588b-4250-a38f-aafca89176c5_1599x1066.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!zTjJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd45c604a-588b-4250-a38f-aafca89176c5_1599x1066.png" width="724.7421875" height="483.32737916380495" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d45c604a-588b-4250-a38f-aafca89176c5_1599x1066.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:724.7421875,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!zTjJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd45c604a-588b-4250-a38f-aafca89176c5_1599x1066.png 424w, https://substackcdn.com/image/fetch/$s_!zTjJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd45c604a-588b-4250-a38f-aafca89176c5_1599x1066.png 848w, https://substackcdn.com/image/fetch/$s_!zTjJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd45c604a-588b-4250-a38f-aafca89176c5_1599x1066.png 1272w, https://substackcdn.com/image/fetch/$s_!zTjJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd45c604a-588b-4250-a38f-aafca89176c5_1599x1066.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><a href="https://commons.wikimedia.org/wiki/File:Dario_Amodei_at_TechCrunch_Disrupt_2023_01.jpg">CEO Dario Amodei at TechCrunch Disrupt 2023</a></figcaption></figure></div><p><em>This piece has been updated to add additional context and clarify some details.</em></p><p><em>Update 2:</em> <em>Anthropic settled with the authors for $1.5 billion &#8212; $500 million more than the largest-ever <a href="https://www.wiley.law/alert-Cox-Communications-Penalized-With-1-Billion-Jury-Verdict-in-Copyright-Infringement-Lawsuit">jury verdict</a> on copyright, which was later <a
href="https://www.reuters.com/legal/cox-communications-wins-order-overturning-1-bln-us-copyright-verdict-2024-02-20/">overturned</a>. </em></p><p>Anthropic, the AI startup that&#8217;s long presented itself as the industry&#8217;s safe and ethical choice, is now facing legal penalties that could bankrupt the company. Damages resulting from its mass use of pirated books would likely exceed a billion dollars, with the statutory maximum stretching into the hundreds of billions.</p><p>Last week, William Alsup, a federal judge in San Francisco, <a href="https://fingfx.thomsonreuters.com/gfx/legaldocs/zjpqowkrkpx/ANTHROPIC%20AUTHOR%20COPYRIGHT%20LAWSUIT%20classcert.pdf">certified</a> a class action lawsuit against Anthropic on behalf of nearly every US book author whose works were copied to build the company&#8217;s AI models. This is the <a href="https://news.bloomberglaw.com/ip-law/authors-copyright-class-action-certified-against-anthropic">first time</a> a US court has allowed a class action of this kind to proceed in the context of generative AI training, putting Anthropic on a path toward paying damages that could ruin the company.</p><p>The judge ruled last month, in essence, that Anthropic's use of pirated books had violated copyright law, leaving it to a jury to decide how much the company owes for these violations. 
That number increases dramatically if the case proceeds as a class action, putting Anthropic on the hook for a vast number of books beyond those produced by the plaintiffs.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p><p>The class action certification came just one day after Bloomberg <a href="https://www.bloomberg.com/news/articles/2025-07-16/anthropic-draws-investor-interest-at-more-than-100-billion-valuation">reported</a> that Anthropic is fundraising at a valuation potentially north of $100 billion &#8212; nearly double the $61.5 billion investors <a href="https://www.anthropic.com/news/anthropic-raises-series-e-at-usd61-5b-post-money-valuation">pegged it</a> at in March. <a href="https://www.crunchbase.com/organization/anthropic/financial_details">According to</a> Crunchbase, the company has raised $17.2 billion in total. 
However, much of that funding has come in the form of Amazon and Google cloud computing credits &#8212; not real money.</p><p>Santa Clara Law professor Ed Lee <a href="https://chatgptiseatingtheworld.com/2025/07/17/anthropic-faces-potential-business-ending-liability-in-statutory-damages-after-judge-alsup-certifies-class-action-by-bartz/">warned</a> in a blog post that the ruling means &#8220;Anthropic faces at least the potential for business-ending liability.&#8221; He separately <a href="https://chatgptiseatingtheworld.com/2025/07/20/will-anthropic-suffer-napster-like-fate-we-asked-chatgpt/">wrote</a> that if Anthropic ultimately loses at trial and a final judgment is entered, the company would be required to post a surety bond for the full amount of damages in order to delay payment during any appeal, unless the judge grants an exception.</p><p>In practice, this <a href="https://www.palmettosurety.com/2024/01/who-should-get-an-appeal-surety-bond/">usually</a> <a href="https://integritysurety.com/appeal-bonds/#:~:text=Cost%20and%20Regulations%20*%20Varied%20Regulations:%20States,1%25%20to%202%25%20of%20the%20total%20bond.">means</a> arranging a bond backed by 100 percent collateral &#8212; not necessarily cash, but assets like cloud credits, investments, or other holdings &#8212; plus a 1-2 percent annual premium. 
The impact on Anthropic&#8217;s day-to-day operations would likely be limited at first, aside from potentially higher insurance costs, since the bond requirement would only kick in after a final judgment and the start of any appeals process.</p><p>Lee <a href="https://chatgptiseatingtheworld.com/2025/07/18/will-a-jury-in-sf-decide-antropics-business-fate-most-likely-yes/">wrote</a> in another post that Judge Alsup &#8220;has all but ruled that Anthropic&#8217;s downloading of pirated books is [copyright] infringement,&#8221; leaving &#8220;<strong>the real issue at trial&#8230; the jury&#8217;s calculation of statutory damages</strong> based on the number of copyrighted books/works in the class.&#8221;</p><p>While the risk of a billion-dollar-plus jury verdict is real, it&#8217;s important to note that judges <a href="https://www.beneschlaw.com/resources/update-insulet-corps-trade-secrets-jury-award-reduced-from-dollar452-million-to-dollar594-million-to-avoid-double-recovery.html?utm_source=chatgpt.com">routinely</a> <a href="https://www.loeb.com/en/insights/publications/2011/09/sony-bmg-music-entertainment-et-al-v-tenenbaum">slash</a> <a href="https://www.pcworld.com/article/436408/oracle-sap-settle-longstanding-tomorrownow-lawsuit.html">massive</a> statutory damages awards &#8212; sometimes by orders of magnitude. Federal judges, in particular, tend to be skeptical of letting jury awards reach levels that would bankrupt a major company. As a matter of practice (and sometimes <a href="https://www.jonesday.com/en/insights/2011/07/emerging-issues-in-statutory-damages?utm_source=chatgpt.com">doctrine</a>), judges rarely issue rulings that would outright force a company out of business, and are generally sympathetic to arguments about practical business consequences. 
So while the jury&#8217;s damages calculation will be the headline risk, it probably won&#8217;t be the last word.</p><p>On Thursday, the company <a href="https://www.courtlistener.com/docket/69058235/272/bartz-v-anthropic-pbc/">filed</a> a motion to stay &#8212; a request to essentially pause the case &#8212; in which it acknowledged the books covered likely number &#8220;in the millions.&#8221; Anthropic&#8217;s lawyers also warned of &#8220;the specter of unprecedented and potentially business-threatening statutory damages against the smallest one of the many companies developing [large language models] with the same books data&#8221; (though it&#8217;s worth noting they have an incentive to amplify the stakes in the case to the judge).</p><p>The company could settle, but doing so could still cost billions given the scope of potential penalties.</p><p>Anthropic, for its part, told Obsolete it &#8220;respectfully disagrees&#8221; with the decision, arguing the court &#8220;failed to properly account for the significant challenges and inefficiencies of having to establish valid ownership millions of times over in a single lawsuit,&#8221; and said it is &#8220;exploring all avenues for review.&#8221;</p><p>The plaintiffs&#8217; lawyers did not reply to a request for comment.</p><h2>From &#8220;fair use&#8221; win to catastrophic liability</h2><p>Just a month ago, Anthropic and the rest of the industry were celebrating what looked like a landmark <a href="https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/">victory</a>.
Alsup had <a href="https://fingfx.thomsonreuters.com/gfx/legaldocs/jnvwbgqlzpw/ANTHROPIC%20fair%20use.pdf">ruled</a> that using copyrighted books to train an AI model &#8212; so long as the books were lawfully acquired &#8212; was protected as &#8220;fair use.&#8221; This was the legal shield the AI industry has been banking on, and it would have let Anthropic, OpenAI, and others off the hook for the core act of model training.</p><p>But Alsup split a very fine hair. In the same ruling, he found that Anthropic&#8217;s wholesale downloading and storage of millions of pirated books &#8212; via infamous &#8220;pirate libraries&#8221; like LibGen and PiLiMi &#8212; was not covered by fair use at all. In other words: training on lawfully acquired books is one thing, but stockpiling a central library of stolen copies is classic copyright infringement.</p><p>Thanks to Alsup&#8217;s ruling and subsequent class certification, Anthropic is now subject to a class action encompassing five to seven million books &#8212; although only works with registered US copyrights are eligible for statutory damages, and the precise number remains uncertain. A significant portion of these datasets consists of non-English titles, many of which were likely never published in the US and may fall outside the reach of US copyright law. For example, an analysis of LibGen&#8217;s holdings <a href="https://www.ivir.nl/publicaties/download/library_genesis_numbers.pdf">suggests</a> that only about two-thirds are in English.</p><p>Assuming that only two-fifths of the five million books are covered and the jury awards the statutory minimum of $750 per work, you still end up with $1.5 billion in damages. And as we saw, the company&#8217;s own lawyers just said the number is probably in the millions.</p><p>The statutory maximum, with five million books covered?
$150,000 per work, or $750 billion total &#8212; a figure Anthropic&#8217;s lawyers <a href="https://fingfx.thomsonreuters.com/gfx/legaldocs/zjpqowkrkpx/ANTHROPIC%20AUTHOR%20COPYRIGHT%20LAWSUIT%20classcert.pdf">called</a> &#8220;ruinous.&#8221; No jury will award that, but it gives you a sense of the range. </p><p>The previous record for a case like this was set in 2019, when a federal jury <a href="https://www.reuters.com/article/lifestyle/cox-to-pay-1-billion-to-music-labels-publishers-over-piracy-infringement-idUSKBN1YO0DE/">found</a> Cox Communications liable for $1 billion after the nation&#8217;s biggest music labels accused the company of turning a blind eye to rampant piracy by its internet customers. That verdict was <a href="https://www.reuters.com/legal/cox-communications-wins-order-overturning-1-bln-us-copyright-verdict-2024-02-20/">overturned</a> on appeal years later and is now <a href="https://www.reuters.com/sustainability/boards-policy-regulation/us-supreme-court-review-billion-dollar-cox-communications-copyright-case-2025-06-30/">under review</a> by the Supreme Court.</p><p>But even that historic sum could soon be eclipsed if Anthropic loses at trial.</p><p>The decision to treat AI training as fair use was widely covered as a win for the industry &#8212; and, to be fair, it was. But Anthropic is now facing an existential threat, with barely a mention. 
Outside of the <a href="https://news.bloomberglaw.com/ip-law/authors-copyright-class-action-certified-against-anthropic">legal</a> and <a href="https://www.publishersweekly.com/pw/by-topic/digital/copyright/article/98236-judge-rules-class-action-suit-against-anthropic-can-proceed.html">publishing</a> press, only <a href="https://www.reuters.com/legal/government/us-authors-suing-anthropic-can-band-together-copyright-class-action-judge-rules-2025-07-17/">Reuters</a> and <a href="https://www.theverge.com/anthropic/709183/anthropic-class-action-lawsuit-pirated-books-authors-downloads">The Verge</a> have covered the class certification ruling, and neither discussed the fact that this case could spell the end for Anthropic.</p><p><em>Update: early Friday morning, the LA </em>Times<em> ran a <a href="https://www.latimes.com/business/story/2025-07-25/heres-the-number-that-could-halt-the-ai-revolution-in-its-tracks">column</a> discussing the potential for a trillion-dollar judgment.</em></p><h2>Respecting copyright is &#8220;not doable&#8221;</h2><p>The legal uncertainty now facing the company comes as the industry continues an <a href="https://www.obsolete.pub/p/inside-techs-risky-gamble-to-kill">aggressive push</a> in Washington to reshape the rules in their favor. 
In comments submitted earlier this year to the White House&#8217;s &#8220;AI Action Plan,&#8221; <a href="https://files.nitrd.gov/90-fr-9088/Meta-AI-RFI-2025.pdf">Meta</a>, <a href="https://chatgptiseatingtheworld.com/wp-content/uploads/2025/03/Google-response_us_ai_action_plan-Mar-13-2025.pdf">Google</a>, and <a href="https://cdn.openai.com/global-affairs/ostp-rfi/ec680b75-d539-4653-b297-8bcf6e5f7686/openai-response-ostp-nsf-rfi-notice-request-for-information-on-the-development-of-an-artificial-intelligence-ai-action-plan.pdf">OpenAI</a> all urged the administration to protect AI companies&#8217; access to vast training datasets &#8212; including copyrighted materials &#8212; by clarifying that model training is unequivocally &#8220;fair use.&#8221; Ironically, <a href="https://assets.anthropic.com/m/4e20a4ab6512e217/original/Anthropic-Response-to-OSTP-RFI-March-2025-Final-Submission-v3.pdf">Anthropic</a> was the only leading AI company to <em>not </em>mention copyright in its White House submission.</p><p>At the Wednesday launch of the AI Action Plan, President Trump dismissed the idea that AI firms should pay to use every book or article in their training data, calling strict copyright enforcement &#8220;not doable&#8221; and insisting that &#8220;China&#8217;s not doing it.&#8221; Still, the administration&#8217;s <a href="https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf">plan</a> is <a href="https://laweconcenter.org/resources/the-white-houses-ai-action-plan/">conspicuously silent</a> on copyright &#8212; perhaps a reflection of the fact that any meaningful change would require Congress to amend the Copyright Act. The federal Copyright Office can issue guidance but ultimately has no power to settle the matter. 
Administration officials <a href="https://www.politico.com/news/2025/07/23/trump-derides-copyright-and-state-regs-in-ai-action-plan-launch-00472443">told</a> the press the issue should be left to the courts.</p><h2>Anthropic made some mistakes</h2><p>Anthropic isn&#8217;t just unlucky to be up first. The judge <a href="https://fingfx.thomsonreuters.com/gfx/legaldocs/zjpqowkrkpx/ANTHROPIC%20AUTHOR%20COPYRIGHT%20LAWSUIT%20classcert.pdf">described</a> this case as the &#8220;classic&#8221; candidate for a class action: a single company downloading millions of books in bulk, all at once, using file hashes and ISBNs to identify the works. The lawyers suing Anthropic are top-tier, and the judge has signaled he won&#8217;t let technicalities slow things down. A single trial will determine how much Anthropic owes; a jury could choose any number between the statutory minimum and maximum.</p><p>The order reiterates a basic tenet of copyright law: every time a pirated book is downloaded, it constitutes a separate violation &#8212; regardless of whether Anthropic later purchased a print copy or only used a portion of the book for training. While this may seem harsh given the scale, it&#8217;s a straightforward application of existing precedent, not a new legal interpretation.</p><p>And the company&#8217;s handling of the data after the piracy isn&#8217;t winning it any sympathy. </p><p>As detailed in the court order, Anthropic didn&#8217;t just download millions of pirated books; it kept them accessible to its engineers, sometimes in multiple copies, and apparently used the trove for various internal tasks long after training. Even when pirate sites started getting taken down, Anthropic scrambled to torrent fresh copies. 
After a company co-founder discovered a mirror of &#8220;Z-Library,&#8221; a database shuttered by the FBI, he messaged his colleagues: &#8220;[J]ust in time.&#8221; One replied, &#8220;zlibrary my beloved.&#8221;</p><p>That made it much easier for the judge to say: this is &#8220;Napster&#8221; for the AI age, and the copyright law is clear.</p><p>Anthropic is separately <a href="https://www.pillsburylaw.com/en/news-and-insights/anthropic-copyright-claude-ai.html?utm_source=chatgpt.com">facing</a> a major copyright <a href="https://www.courtlistener.com/docket/68889092/concord-music-group-inc-v-anthropic-pbc/">lawsuit</a> from the world&#8217;s biggest music publishers, who <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.431519/gov.uscourts.cand.431519.1.0_1.pdf">allege</a> that the company&#8217;s chatbot Claude reproduced copyrighted lyrics without permission &#8212; a case that could expose the firm to similar per-work penalties from thousands to potentially millions of songs.</p><p>Ironically, Anthropic appears to have tried harder than some better-resourced competitors to avoid using copyrighted materials without <em>any </em>compensation. 
Starting in 2024, the company <a href="https://s3.documentcloud.org/documents/25982181/authors-v-anthropic-ruling.pdf">spent millions</a> buying books, often in used condition &#8212; <a href="https://www.businessinsider.com/anthropic-cut-pirated-millions-used-books-train-claude-copyright-2025-6">cutting</a> them apart, scanning them in-house, and pulping the originals &#8212; to feed its chatbot Claude, a step no rival has publicly matched.</p><p>Meta, despite its far deeper pockets, <a href="https://www.theguardian.com/technology/2025/jan/10/mark-zuckerberg-meta-books-ai-models-sarah-silverman">skipped</a> the buy-and-scan stage altogether &#8212; damning internal <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.415175/gov.uscourts.cand.415175.391.24.pdf">messages</a> show engineers calling LibGen &#8220;obviously pirated&#8221; data and revealing that the approach was approved by Mark Zuckerberg.</p><h2>Why the other companies should be nervous</h2><p>If Anthropic settles, it could end up as the only AI company forced to pay for mass copyright infringement &#8212; especially if judges in other cases follow Meta&#8217;s preferred approach and treat downloading and training as a single act that qualifies as fair use.</p><p>For now, Anthropic&#8217;s best shot is to win on appeal and convince a higher court to reject Judge Alsup&#8217;s reasoning in favor of the more company-friendly <a href="https://www.whitecase.com/insight-alert/two-california-district-judges-rule-using-books-train-ai-fair-use">approach</a> taken in the Meta case, which treats the act of training as fair use and effectively rolls the infringing downloads into that single use.</p><p>But appeals usually have to wait until after a jury trial &#8212; so the company faces a brutal choice: settle for potentially billions, or risk a catastrophic damages award and years of uncertainty. 
If Anthropic goes to trial and loses on appeal, the resulting precedent could drag Meta, OpenAI, and possibly even Google into similar liability.</p><p>OpenAI and Microsoft now face 12 consolidated copyright <a href="https://www.courtlistener.com/docket/67810584/authors-guild-v-openai-inc">suits</a> &#8212; a mix of proposed class actions by book authors and cases brought by news organizations (including the <em>New York Times</em>) &#8212; in the Southern District of New York before Judge Sidney Stein.</p><p>If Stein were to certify an authors&#8217; class and adopt an approach similar to Alsup&#8217;s ruling against Anthropic, OpenAI&#8217;s potential liability <a href="https://chatgptiseatingtheworld.com/2025/07/18/will-a-jury-in-sf-decide-antropics-business-fate-most-likely-yes/#:~:text=But%2C%20if%20you%20combine%20all%20the%20books%20and%20news%20articles%20that%20OpenAI%20allegedly%20downloaded%2C%20the%20total%20number%20of%20works%20copied%20by%20OpenAI%20in%20these%20lawsuits%20has%20to%20far%20surpass%20the%20outer%20limit%20in%20Anthropic%2C%20which%20only%20involves%20the%20books%20datasets.">could be</a> far greater, given the number of potentially covered works.</p><h2>What&#8217;s next</h2><p>A trial is tentatively set for December 1st. If Anthropic fails to pull off an appellate victory before then, the industry is about to get a lesson in just how expensive &#8220;move fast and break things&#8221; can be when the thing you&#8217;ve broken is copyright law &#8212; a few million times over. </p><p>A multibillion-dollar settlement or jury award would be a death knell for almost any four-year-old company, but the AI industry is different. The cost to compete is enormous, and the leading firms are already raising multibillion-dollar rounds multiple times a year. 
</p><p>That said, Anthropic has access to less capital than its rivals at the frontier &#8212; OpenAI, Google DeepMind, and, now, <a href="https://www.ft.com/content/25aab987-c2a1-4fca-8883-38a617269b68">xAI</a>. Overall, company-killing penalties may be unlikely, but they&#8217;re still possible, and Anthropic faces the greatest risk at the moment. And given how fiercely competitive the AI industry is, a multibillion-dollar setback could seriously affect the company&#8217;s ability to stay in the race.</p><p>Some competitors, meanwhile, seem to have functionally unlimited capital. To build out its new superintelligence team, Meta has been <a href="https://www.thetimes.com/us/news-today/article/ai-super-intelligence-meta-google-rivals-f02wzg57w">poaching</a> rival AI researchers with <em>nine-figure</em> pay packages, and Zuckerberg recently <a href="https://x.com/GarrisonLovely/status/1945940533433483644">said</a> his company would invest &#8220;hundreds of billions of dollars&#8221; into its efforts.</p><p>To keep up with its peers, Anthropic recently decided to accept money from autocratic regimes, despite <a href="https://www.cnbc.com/2024/03/22/anthropic-lining-up-a-new-slate-of-investors-ruled-out-saudi-arabia.html">earlier misgivings</a>. On Sunday, CEO Dario Amodei <a href="https://www.wired.com/story/anthropic-dario-amodei-gulf-state-leaked-memo/">issued</a> a memo to staff saying the firm will seek investment from Gulf states, including the UAE and Qatar. 
The memo, which was obtained and <a href="https://www.wired.com/story/anthropic-dario-amodei-gulf-state-leaked-memo/">reported</a> on by Kylie Robison at <em>WIRED</em>, admitted the decision would probably enrich &#8220;dictators&#8221; &#8212; something Amodei called a &#8220;real downside.&#8221; But, he wrote, the company can&#8217;t afford to ignore &#8220;a truly giant amount of capital in the Middle East, easily $100B or more.&#8221; </p><p>Amodei apparently acknowledged the perceived hypocrisy of the decision, after his October essay/manifesto &#8220;Machines of Loving Grace&#8221; <a href="https://www.darioamodei.com/essay/machines-of-loving-grace">extolled</a> how important it is that democracies win the AI race.  </p><p>In the memo, Amodei wrote, &#8220;Unfortunately, I think &#8216;No bad person should ever benefit from our success&#8217; is a pretty difficult principle to run a business on.&#8221;</p><p>The timing is striking: the note to staff went out only days after the class action certification suddenly presented Anthropic with potentially existential legal risk.</p><div><hr></div><p>The question of whether generative AI training can lawfully proceed without permission from rights-holders has become a defining test for the entire industry.</p><p>OpenAI and Meta may still wriggle out of similar exposure, depending on how their judges rule and whether they can argue that the core act of AI training is protected by fair use. 
But for now, it&#8217;s Anthropic &#8212; not OpenAI or Meta &#8212; that&#8217;s been forced onto the front lines, while the rest of the industry holds its breath.</p><p><em>Edited by<a href="https://www.sidmahanta.com/bio-contact"> Sid Mahanta</a> and <a href="http://www.ian-macdougall.com/about.html">Ian MacDougall</a>, with inspiration and review from my friend Vivian.</em></p>]]></content:encoded></item><item><title><![CDATA[Inside Tech's Risky Gamble to Kill State AI Regulations for a Decade]]></title><description><![CDATA[Republicans slipped a controversial provision into the &#8220;One Big Beautiful Bill&#8221; &#8212; now facing bipartisan backlash and internal party rebellion]]></description><link>https://www.obsolete.pub/p/inside-techs-risky-gamble-to-kill</link><guid isPermaLink="false">https://www.obsolete.pub/p/inside-techs-risky-gamble-to-kill</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Sun, 29 Jun 2025 22:10:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!aLcG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ac4eb5-8ffe-42dc-be17-46307f5b7703_6603x4367.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Update 2: the Senate <a href="https://x.com/Dareasmunhoz/status/1939964796830355892">voted</a> 99-1 to remove the moratorium from the reconciliation bill early Tuesday morning.</em> <em>This stunning defeat "represents a major turning point in U.S. technology policy," <a href="https://x.com/AdamThierer/status/1940023986080845990">according to</a> an originator of the idea. While preemption with no replacement was unprecedented and got farther than many of its supporters even expected, I agree with this take. 
I wrote up some more quick thoughts <a href="https://x.com/GarrisonLovely/status/1940053494204952732">here</a>.</em> </p><p><em>Update: The moratorium has been modified again, likely in response to <a href="https://www.axios.com/pro/tech-policy/2025/06/27/scoop-blackburn-to-offer-amendment-to-strip-ai-pause-from-budget-bill">concerns</a> from Republican Senator Marsha Blackburn. Here&#8217;s a <a href="https://law-ai.org/the-ai-moratorium-the-blackburn-amendment-and-new-requirements-for-generally-applicable-laws/">summary</a> of the changes from the Institute for Law &amp; AI: </em></p><ol><li><p><em>Shortens the &#8220;temporary pause&#8221; from 10 to 5 years;</em></p></li><li><p><em>Attempts to exempt laws addressing CSAM, children&#8217;s online safety, and rights to name/likeness/voice/image&#8212;although the amendment <strong>seemingly fails to protect the laws its drafters intend to exempt</strong>; and</em></p></li><li><p><em>Creates a new requirement that laws do not create an &#8220;undue or disproportionate burden,&#8221; which is likely to generate significant litigation.</em></p></li></ol><p><em>In other words, the changes don&#8217;t actually seem to do what Blackburn wants them to do. 
The other parts of the following analysis remain true.</em> </p><p><em>Voting on amendments to the reconciliation bill <a href="https://www.politico.com/live-updates/2025/06/30/congress/thune-senate-megabill-votearama-senate-vote-00432303">began</a> this morning, with a final overall vote expected late Monday or early Tuesday.</em></p><div><hr></div><p>The Republican budget reconciliation bill &#8212; better known as "One Big Beautiful Bill" &#8212; currently includes a <a href="https://www.budget.senate.gov/imo/media/doc/the_one_big_beautiful_bill_act.pdf#page=168">provision</a> attempting to ban state-level AI regulations for ten years.</p><p>This moratorium, if passed, would be perhaps the most sweeping attempt to deregulate an emerging technology in US history.</p><p>The AI lobby, likely emboldened by the national political climate, is making a big gamble. 
Following in the footsteps of the most aggressive actors, like Andreessen Horowitz and Meta, much of the tech industry has learned to stop worrying and love the moratorium.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aLcG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ac4eb5-8ffe-42dc-be17-46307f5b7703_6603x4367.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aLcG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ac4eb5-8ffe-42dc-be17-46307f5b7703_6603x4367.jpeg 424w, https://substackcdn.com/image/fetch/$s_!aLcG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ac4eb5-8ffe-42dc-be17-46307f5b7703_6603x4367.jpeg 848w, https://substackcdn.com/image/fetch/$s_!aLcG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ac4eb5-8ffe-42dc-be17-46307f5b7703_6603x4367.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!aLcG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ac4eb5-8ffe-42dc-be17-46307f5b7703_6603x4367.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aLcG!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ac4eb5-8ffe-42dc-be17-46307f5b7703_6603x4367.jpeg" width="1200" height="793.6813186813187" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/29ac4eb5-8ffe-42dc-be17-46307f5b7703_6603x4367.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:963,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!aLcG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ac4eb5-8ffe-42dc-be17-46307f5b7703_6603x4367.jpeg 424w, https://substackcdn.com/image/fetch/$s_!aLcG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ac4eb5-8ffe-42dc-be17-46307f5b7703_6603x4367.jpeg 848w, https://substackcdn.com/image/fetch/$s_!aLcG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ac4eb5-8ffe-42dc-be17-46307f5b7703_6603x4367.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!aLcG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ac4eb5-8ffe-42dc-be17-46307f5b7703_6603x4367.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Sen. Ted Cruz, R-Texas. Photo Credit: (NASA/Bill Ingalls)</figcaption></figure></div><p>States that don't comply with the provision would lose access to $500 million in new federal funding from the Broadband Equity, Access, and Deployment (BEAD) <a href="https://broadbandusa.ntia.gov/funding-programs/broadband-equity-access-and-deployment-bead-program">program</a>, which provides federal grants to expand internet access in underserved communities. 
Any state that accepts part of the new funding <a href="https://law-ai.org/the-ai-moratorium-more-deobligation-issues/">risks</a> forfeiting its entire share of a $42.5 billion pot of BEAD funding and having its AI laws invalidated.</p><p>In May, the reconciliation bill <a href="https://www.congress.gov/bill/119th-congress/house-bill/1/all-actions?overview=closed&amp;q=%7B%22roll-call-vote%22%3A%22all%22%7D">passed</a> the House by a single-vote margin and now only needs a majority vote to pass the Senate, after which any differences from the House version would send the bill back to the lower chamber for another vote.</p><h3>Do they have the votes?</h3><p>The moratorium, initially slipped into the Big Beautiful Bill with little <a href="https://www.techpolicy.press/transcript-us-house-subcommittee-hosts-hearing-on-ai-regulation-and-the-future-of-us-leadership/">public discussion</a>, has become a lightning rod within the GOP. Dozens of Republican <a href="https://s3.documentcloud.org/documents/25985617/final-joint-governors-letter-on-obbb-ai-protections-062725.pdf?utm_source=alert&amp;utm_medium=email&amp;utm_campaign=alerts_pro_policy_tech_subs2">governors</a> and <a href="https://www.scag.gov/media/opvgxagq/2025-05-15-letter-to-congress-re-proposed-ai-preemption-_final.pdf">state attorneys general</a> have publicly opposed the provision, joining the far-right House <a href="https://www.politico.com/f/?id=00000197-5a31-d063-a7b7-5b7133c60000">Freedom Caucus</a> and the <a href="https://x.com/heritage_action/status/1938677885956510083?s=46">advocacy arm</a> of the Heritage Foundation, the think tank behind <a href="https://en.wikipedia.org/wiki/Project_2025">Project 2025</a>.</p><p>The moratorium has also received opposition from dozens of Democratic <a href="https://broadbandbreakfast.com/house-dems-dont-want-bead-money-conditioned-on-ai-moratorium/">representatives</a> <a 
href="https://www.reuters.com/legal/government/teamsters-president-urges-congress-scrap-ai-state-law-ban-2025-06-25/">and</a> <a href="https://www.markey.senate.gov/news/press-releases/senator-markey-to-file-amendment-to-strip-republican-proposal-to-ban-ai-regulation-by-states-from-reconciliation-package">senators</a>, joining over 140 civil society groups <a href="https://demandprogress.org/wp-content/uploads/2025/05/FINAL-Letter-Opposing-AI-State-Preemption-Google-Docs.pdf">organized</a> by Demand Progress, a tech policy advocacy nonprofit.</p><p>Thursday morning, Punchbowl News <a href="https://punchbowl.news/article/tech/gop-ai-resistance/">reported</a> that Republican Senator Marsha Blackburn delivered a letter to Majority Leader John Thune asking him to remove the moratorium from the reconciliation bill. The letter was reportedly also signed by Republican Senators Rand Paul and Josh Hawley. Republican Senators <a href="https://punchbowl.news/article/senate/senate-parliamentarian-reopens-ai-debate/?utm_source=Sailthru&amp;utm_medium=email&amp;utm_campaign=6/29/25%20%20Tech%20Sunday%20Lookahead&amp;utm_term=Premium%20Policy%20Tech%20Smart%20List%201225">Kevin Cramer</a> and <a href="https://punchbowl.news/article/tech/gop-ai-resistance/">Rick Scott</a> have also expressed concerns about the provision.</p><p>With likely unanimous support from Senate Democrats &#8212; something advocates tell Obsolete they are working hard to secure &#8212; four Republicans would provide enough votes to pass an amendment removing the moratorium. 
But even with the votes, supporters tell Obsolete they worry that Thune will add the provision back in via what's known as a "wraparound amendment," chock-full of the majority's priorities, which typically gets a party-line vote.</p><p>The vote on the amendment is expected overnight Sunday, followed some time after by the vote on the full bill.</p><h3>Unprecedented</h3><p>In the U.S., state governments have the power to regulate companies that do business within their borders. Those regulations are sometimes inconsistent, creating a large compliance burden for companies. To resolve this, Congress can preempt state-level laws and replace them with a unifying national framework, like when it <a href="https://www.congress.gov/bill/103rd-congress/house-bill/2739">established</a> uniform trucking regulations in 1994 or <a href="https://www.congress.gov/bill/101st-congress/house-bill/3562">created</a> national food labeling standards in 1990.</p><p>Something Congress can also do &#8212; but <a href="https://www.lawfaremedia.org/article/the-house-reconciliation-bill-s-ai-preemption-clearly-violates-the-byrd-rule">has never really done before</a> &#8212; is preempt state laws <em>without </em>passing a federal regulation to fill the gap.</p><h3>Overreach?</h3><p>By failing to offer a federal framework for AI regulation, even a non-binding one, the provision goes further than even many of the industry's key players asked for.</p><p>In their submissions to President Trump's AI Action Plan, OpenAI, Meta, Google, and Andreessen Horowitz (known as a16z) asked for federal nullification of state AI laws. OpenAI <a href="https://cdn.openai.com/global-affairs/ostp-rfi/ec680b75-d539-4653-b297-8bcf6e5f7686/openai-response-ostp-nsf-rfi-notice-request-for-information-on-the-development-of-an-artificial-intelligence-ai-action-plan.pdf">called for</a> preemption with a national framework that is "purely voluntary." 
Meta <a href="https://files.nitrd.gov/90-fr-9088/Meta-AI-RFI-2025.pdf">requested</a> "federal preemption of state laws that conflict with the Administration&#8217;s pro-innovation agenda." Google <a href="https://files.nitrd.gov/90-fr-9088/Google-RFI-2025.pdf">asked</a> for preemption and "a unified national framework for frontier AI models focused on protecting national security while fostering an environment where American AI innovation can thrive." a16z, the venture capital giant, <a href="https://d1lamhf6l6yk6d.cloudfront.net/uploads/2025/03/a16z-National-AI-Action-Plan-OSTP-Submission.pdf">called</a> for a federal law that "preempts state-specific restrictions on model development."</p><p>The trade group TechNet specifically <a href="https://files.nitrd.gov/90-fr-9088/TechNet-RFI-2025.pdf">suggested</a> "the federal government should look to impose a moratorium on state legislation related specifically to the development of frontier AI models until national standards are adopted." The influential industry association <a href="https://www.technet.org/our-story/members/">includes</a> OpenAI, Google, Meta, Amazon, and Anthropic.</p><p>Amazon's <a href="https://files.nitrd.gov/90-fr-9088/Amazon-AI-RFI-2025.pdf">submission</a> frets about the "growing patchwork of approaches to AI regulation," but stops short of explicitly asking for preemption. 
<a href="https://files.nitrd.gov/90-fr-9088/Microsoft-AI-RFI-2025.pdf">Microsoft</a> and <a href="https://files.nitrd.gov/90-fr-9088/Anthropic-AI-RFI-2025.pdf">Anthropic</a> didn't mention preemption in their submissions.</p><p>OpenAI, Meta, Google, a16z, Amazon, and TechNet did not reply to a request for comment.</p><p>American Edge, a <a href="https://www.transformernews.ai/p/american-edge-meta-ai-regulation-lobbying">dark money</a> lobbying group <a href="https://www.cnbc.com/2023/05/01/facebook-primary-donor-group-antitrust-fight.html">backed by</a> tens of millions of dollars from Meta, <a href="https://punchbowl.news/article/tech/gop-ai-resistance/">told Punchbowl News</a> it was doing a seven-figure cable and digital ad buy to support the moratorium. <a href="https://www.youtube.com/watch?v=vnaZMzNxZrI">Recent</a> <a href="https://www.youtube.com/watch?v=nlBwMUWcl5Y">ads</a> from the group focus on how AI is empowering American manufacturers, one of whom is quoted saying, "we can't let China get the upper hand."</p><p>Opponents of AI regulations often invoke competition with China as justification &#8212; Ted Cruz's Senate committee <a href="https://www.commerce.senate.gov/services/files/78D6B49B-5C5A-44BB-9B03-B62391CD6C3A?ref=broadbandbreakfast.com">factsheet</a> is titled "Investing In AI and Beating China in the AI Race." Yet China already <a href="https://carnegieendowment.org/research/2024/02/tracing-the-roots-of-chinas-ai-regulations?lang=en">imposes</a> far stricter regulations on AI than the US. 
Earlier this month, Chinese AI chatbots <a href="https://www.bloomberg.com/news/articles/2025-06-09/alibaba-tencent-freeze-ai-tools-during-high-stakes-china-exam">temporarily disabled</a> image recognition tools during national college entrance exams, yet these types of restrictions apparently haven't prevented China from rapidly closing the gap with US AI capabilities &#8212; a gap reportedly shrinking from <a href="https://www.foreignaffairs.com/china/illusion-chinas-ai-prowess-regulation-helen-toner">years</a> to <a href="https://sfstandard.com/opinion/2025/02/01/marc-andreessen-just-wants-you-to-think-deepseek-is-a-sputnik-moment/">months</a>.</p><p>An <a href="https://docs.google.com/spreadsheets/d/1uUt58pz813ZYnnWp-5su9A0z5SY2pxeMoMERmea1kk4/edit?gid=0#gid=0">analysis</a> from Public Citizen found that over 200 state laws would likely be preempted by the provision. A bipartisan coalition of 40 state attorneys general <a href="https://www.scag.gov/media/opvgxagq/2025-05-15-letter-to-congress-re-proposed-ai-preemption-_final.pdf">warned</a> that the moratorium would nullify enacted and proposed state laws safeguarding against AI-generated explicit material, deceptive deepfakes, discriminatory rent-setting algorithms, and invasive automated phone scams.</p><h3>Red states in the crosshairs</h3><p>The moratorium came close to being much stronger. 
As recently as Thursday, the language of the provision <a href="https://www.americanprogress.org/article/the-senates-ai-pause-may-take-billions-in-state-broadband-funds-hostage/">might have allowed</a> the Commerce Department to de-obligate the $42.5 billion in BEAD funding and condition access to it on complying with the moratorium, <em>even if states did not take new money.</em></p><p>However, the <a href="https://www.budget.senate.gov/imo/media/doc/the_one_big_beautiful_bill_act.pdf#page=168">latest text</a> <a href="https://law-ai.org/the-ai-moratorium-more-deobligation-issues/">makes clear</a> that states would only risk their portion of the larger pot if they took any of the new $500 million.</p><p>One of the <a href="https://www.dailysignal.com/2025/06/26/senates-rebranded-ai-moratorium-is-fatally-flawed/">arguments</a> conservatives make against letting states lead the charge on AI regulations is that doing so allows 'woke' states, like California and New York, to impose their values on the rest of the country.</p><p>But the structure of the moratorium makes red states, which have smaller budgets, more rural populations, and a lower propensity to regulate, <a href="https://www.dailysignal.com/2025/06/26/senates-rebranded-ai-moratorium-is-fatally-flawed/">more tempted</a> to take the deal on offer. 
In doing so, they would give up a substantial amount of their legislative power in exchange for funding designed to help low-income communities get access to broadband internet.</p><p>For instance, Montana and New York <a href="https://stopaiban.org/state_impact.html">are set</a> to receive similar total amounts of BEAD funding, but that sum amounts to 22 percent of Montana's overall state budget versus just 0.3 percent of New York's.</p><p>And as law professor Gabe Weil <a href="https://x.com/gabriel_weil/status/1938966339613331725">noted</a>, a temporary governing coalition &#8212; in many states, the governor alone &#8212; could take the deal, binding them to the moratorium for years after their terms end.</p><p>There's also a risk that states accept this new funding without fully understanding that doing so endangers their share of a much bigger pot of money. This confusion has been compounded by moratorium supporters' incomplete and misleading explanations. Neither Cruz's Senate committee <a href="https://www.commerce.senate.gov/services/files/78D6B49B-5C5A-44BB-9B03-B62391CD6C3A?ref=broadbandbreakfast.com">factsheet</a> nor the <a href="https://standtogether.org/stories/the-economy/why-we-shouldnt-be-afraid-of-the-future-of-technology">Koch-backed</a> Abundance Institute&#8217;s <a href="https://cdn.hub-abundance.institute/Response_to_Republican_Gov_Letter.pdf">response</a> to <a href="https://s3.documentcloud.org/documents/25985617/final-joint-governors-letter-on-obbb-ai-protections-062725.pdf">skeptical Republican governors</a> acknowledges the potential loss of $42.5 billion.</p><p>The <a href="https://www.politico.com/news/2024/05/12/ai-lobbyists-gain-upper-hand-washington-00157437">libertarian</a> Abundance Institute doesn't disclose its funders, but head of AI policy Neil Chilson <a href="https://www.politico.com/news/2024/05/12/ai-lobbyists-gain-upper-hand-washington-00157437">told </a><em><a 
href="https://www.politico.com/news/2024/05/12/ai-lobbyists-gain-upper-hand-washington-00157437">Politico</a></em> it received donations from &#8220;Silicon Valley and Austin types.&#8221;</p><p>The exact state-by-state allocation of the new $500 million hasn&#8217;t been determined, but it's likely to mirror the dynamic described above &#8212; with red states more inclined to accept the tradeoff &#8212; though the financial stakes are far lower.</p><h3>Byrd Rule</h3><p>The reconciliation process allows budget-related bills to pass the Senate with a simple majority, rather than the 60 votes needed to overcome the filibuster. As a result, both parties try to cram as many of their priorities into these bills as possible.</p><p>These measures have to comply with the <a href="https://www.congress.gov/crs-product/RL30862">Byrd Rule</a>, which the Senate parliamentarian uses to assess whether each provision affects the budget enough to stay in. This rule is meant to ensure that reconciliation bills stick to budgetary matters rather than becoming vehicles for unrelated policy changes.</p><p>Earlier this month, anticipating the moratorium would not survive the so-called "Byrd bath," Republican Senator Ted Cruz <a href="https://www.politico.com/live-updates/2025/06/05/congress/senate-commerce-megabill-frees-spectrum-ties-bead-to-ai-moratorium-00391136">made</a> it conditional on the allocation of $500 million in new BEAD funding.</p><p>Even after this change, many insiders, including Republican <a href="https://x.com/JohnCornyn/status/1934763436845457425">Senator John Cornyn</a> and <a href="https://subscriber.politicopro.com/article/2025/06/vance-congresss-ai-rule-moratorium-unlikely-to-pass-00394337">JD Vance</a>, predicted the AI moratorium would get stripped. 
But in a surprising move last Saturday, the Senate parliamentarian <a href="https://www.budget.senate.gov/ranking-member/newsroom/press/more-provisions-in-republicans-one-big-beautiful-bill-are-subject-to-byrd-rule-parliamentarian-advises">determined</a> the moratorium was Byrd-compliant, allowing it to pass the Senate with a simple majority. "Shocking would be an understatement," a tech lobbyist opposed to the moratorium told Obsolete of the ruling.</p><p><a href="https://www.linkedin.com/in/jason-van-beek-2763556/">Jason Van Beek</a>, who spent two decades as a senior Senate aide following his work on Thune's successful 2004 Senate campaign, told Obsolete he'd "never seen that before" in his entire Hill career. "I was shocked by the initial ruling," he said, noting that even senators themselves were caught off guard by the parliamentarian's decision. Van Beek is now <a href="https://futureoflife.org/person/jason-van-beek/">Chief Government Affairs Officer</a> for the Future of Life Institute, an AI safety nonprofit lobbying against the moratorium.</p><p>In another surprise, the parliamentarian <a href="https://www.politico.com/live-updates/2025/06/26/congress/parliamentarian-requests-cruz-rewrite-ai-moratorium-00427371">reopened</a> her decision on Thursday, which led to a further revision to the <a href="https://www.budget.senate.gov/imo/media/doc/the_one_big_beautiful_bill_act.pdf">bill text</a>. 
The changes to the moratorium language <a href="https://law-ai.org/the-ai-moratorium-more-deobligation-issues/">clarified</a> that states would only risk their portion of the $42.5 billion if they took any of the new $500 million, closing the door on the potential of a Commerce clawback.</p><h3>"Conflicting" regulations</h3><p>Advocates of the moratorium <a href="https://www.commerce.senate.gov/services/files/78D6B49B-5C5A-44BB-9B03-B62391CD6C3A">often</a> <a href="https://www.lawfaremedia.org/article/1-000-ai-bills--time-for-congress-to-get-serious-about-preemption">cite</a> a problematic "patchwork" of "conflicting" state laws as justification. However, there's little evidence these laws genuinely conflict &#8212; requiring businesses in one state to perform actions explicitly prohibited elsewhere. Instead, regulations simply vary in their definitions, scope, and enforcement. In practice, companies typically comply with the strictest regulations applicable to their operations.</p><p>When I <a href="https://x.com/GarrisonLovely/status/1939030495683322265">tweeted</a> asking proponents of the moratorium to point to AI regulations that require states to do mutually exclusive things, the Abundance Institute's Neil Chilson <a href="https://x.com/neil_chilson/status/1939036058613690755">cited</a> the <a href="https://nowandnext.substack.com/p/defining-artificial-intelligence?r=75h94&amp;utm_medium=ios&amp;triedRedirect=true">usage</a> of 57 different definitions of AI in state legislation. Chilson was previously chief technologist of the Federal Trade Commission during the first Trump administration and has been one of the loudest supporters of the moratorium. When I <a href="https://x.com/GarrisonLovely/status/1939051974902923748">pointed out</a> that inconsistency is not the same as contradiction, Chilson did not produce any examples of mutually exclusive legal requirements.</p><p>Laws often include exemptions to resolve apparent conflicts. 
For example, Colorado&#8217;s AI consumer protection <a href="https://leg.colorado.gov/sites/default/files/2024a_205_signed.pdf">law</a>, which mandates retaining hiring data to prove nondiscrimination, seems at first glance to conflict with California&#8217;s <a href="https://oag.ca.gov/privacy/ccpa">privacy law</a> granting users data deletion rights. Yet California&#8217;s law itself contains a carve-out, allowing companies to retain data when required by other legal obligations.</p><h3>Origins</h3><p>The idea to preempt state-level AI regulations appears to trace back to policy analyst Adam Thierer, who proposed a "learning period moratorium" in a May 2024 <a href="https://www.rstreet.org/commentary/getting-ai-policy-right-through-a-learning-period-moratorium/">blog post</a> for the <a href="https://www.influencewatch.org/non-profit/r-street-institute/">free-market</a> R Street Institute. The think tank, funded partly by <a href="https://blog.google/outreach-initiatives/google-org/launching-the-digital-futures-project-to-support-responsible-ai/">Google</a> and <a href="https://techoversight.org/wp-content/uploads/2023/08/Amazons-DC-Influence-Operation.pdf?utm_source=chatgpt.com">Amazon</a>, does not disclose its full list of donors.</p><p>R Street has not replied to a request for information on its funders.</p><p>According to a political consultant advising groups opposed to the moratorium, the provision began as a coordinated effort by tech companies already contending with AI bills in several states. &#8220;I think it&#8217;s gone a lot further than they imagined,&#8221; the consultant told Obsolete.</p><p>The initial idea was to discourage other states from following suit by making lawmakers second-guess whether their efforts would hold up, the consultant said. But on this front, the preemption push has failed &#8212; it hasn&#8217;t stopped new bills from advancing. 
Key among them is New York's <a href="https://www.nysenate.gov/legislation/bills/2025/A6453/amendment/A">RAISE Act</a>, which passed both chambers with strong, bipartisan majorities earlier in June. The bill would require developers of powerful AI models to conduct safety testing and risk assessments, using liability law as an enforcement mechanism.</p><p>Van Beek says preemption may have first started to gain traction in December when a bipartisan House Task Force on AI <a href="https://republicans-science.house.gov/_cache/files/a/a/aa2ee12f-8f0c-46a3-8ff8-8e4215d6a72b/E4AF21104CB138F3127D8FF7EA71A393.ai-task-force-report-final.pdf">floated</a> the idea paired with a strong federal standard. Republican Representative Jay Obernolte co-chaired the Task Force and has been a key proponent of the moratorium in the House. At a tech conference in February, he <a href="https://www.route-fifty.com/artificial-intelligence/2025/02/lawmaker-warns-patchwork-state-ai-laws/402985/">discussed</a> the need for both preemption and congressional action, saying, "We can't preempt something with nothing, so we need to give states that confidence."</p><p>But according to Van Beek, the real momentum came shortly after, with the Trump administration&#8217;s January AI Action Plan. The plan&#8217;s <a href="https://www.federalregister.gov/documents/2025/02/06/2025-02305/request-for-information-on-the-development-of-an-artificial-intelligence-ai-action-plan">call</a> for public comments became a wishlist for deep-pocketed industry players, many explicitly requesting preemption.</p><p>Van Beek described preemption as an "industry priority," telling Obsolete that a16z and other tech lobbyists were "certainly pushing" the idea.</p><p>But even as industry enthusiasm grew, Van Beek was skeptical it could actually work. He thought preemption was "so very clearly a policy proposal" that it wouldn't be able to survive the Byrd rule and be included in a reconciliation bill. 
As a result, he and others were "caught off guard" by how far the provision has gotten. He did not expect to be "having to fight this battle this early in the game."</p><h3>AI voices</h3><p>On Monday, Chris Lehane, a Clinton-era political "<a href="https://www.newyorker.com/magazine/2024/10/14/silicon-valley-the-new-lobbying-monster">dark arts</a>" operative and now OpenAI&#8217;s chief lobbyist, <a href="https://www.linkedin.com/feed/update/urn:li:activity:7342879984203591680/">posted</a> in support of preemption on LinkedIn.</p><p>OpenAI CEO Sam Altman, who <a href="https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html">famously urged</a> Congress to regulate AI in May 2023, was <a href="https://youtu.be/cT63mvqN54o?si=5b8jXCFofdmQg4fG&amp;t=1514">asked</a> about preemption at a recent live taping of the <em>Hard Fork</em> podcast. He said he still believes some regulation is needed, but warned that "a patchwork across the states would probably be a real mess and very difficult to offer services under."</p><p>Altman went on to describe his growing disillusionment with the ability of policymakers to keep up with AI&#8217;s rapid pace. A detailed, multi-year rulemaking process, he suggested, could be overtaken by the speed of technological change. 
At the same time, he acknowledged the need for guardrails as systems grow more powerful &#8212; ideally something adaptive and narrowly targeted at risky capabilities, rather than a rigid law designed to last a century.</p><p>But as I've <a href="https://www.thenation.com/article/society/california-ai-safety-bill/">asked before</a>: is the <a href="https://time.com/7205359/why-ai-progress-is-increasingly-invisible/">rapid pace</a> of AI progress really a good reason to defer regulation?</p><p>Breaking with his peers in a <em>New York Times</em> <a href="https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-regulate-transparency.html?unlocked_article_code=1.R08.QgeQ.Ny3DRuNQVg80&amp;smid=url-share">op-ed</a>, Anthropic CEO Dario Amodei expressed sympathy for the desire to simplify the regulatory landscape, but called the moratorium "far too blunt an instrument." "Without a clear plan for a federal response, a moratorium would give us the worst of both worlds &#8212; no ability for states to act, and no national policy as a backstop," he wrote.</p><h3>Strange bedfellows</h3><p>AI policy tends to scramble the usual factions, and the ten-year regulation ban is no exception.</p><p>The provision has exposed a split in the GOP between corporatists like Cruz and national conservatives like <a href="https://punchbowl.news/article/tech/hawley-will-help-democrats-ai-freeze/">Hawley</a>, who have <a href="https://thehill.com/policy/technology/5355684-ai-moratorium-sparks-gop-battle-over-states-rights/">opposed</a> it.</p><p>Representative Marjorie Taylor Greene, who has identified as a "<a href="https://www.huffpost.com/entry/marjorie-taylor-greene-christian-nationalism-republican-party_n_62dd70bde4b081f3a9007344">Christian nationalist</a>," has become the provision's most outspoken critic in the House, <a href="https://x.com/RepMTG/status/1930650431253827806">tweeting</a>:</p><blockquote><p>I&#8217;m not voting for the development of skynet and the rise of the machines by 
destroying federalism for 10 years by taking away state rights to regulate and make laws on all AI.</p></blockquote><p>Greene <a href="https://www.newsweek.com/marjorie-taylor-greene-turns-against-donald-trump-tax-bill-2089741">said Monday</a> she would oppose the reconciliation bill if it returns to the House with the moratorium intact.</p><p>If Greene is the sole obstacle to passing President Trump's signature bill, she'll face "unbelievable pressure," says Van Beek. "You're the center of attention for the president of the United States, leadership of your party, calling you, getting your name out there to the friendly factions within the party to put pressure," he explained.</p><p>There are two groups of Republican lawmakers who oppose the provision, according to the consultant: those like Hawley who are uneasy about Big Tech getting special treatment, and those who care about states' rights. Florida Governor Ron DeSantis, for instance, has <a href="https://www.wftv.com/news/local/desantis-opposes-proposed-ai-restrictions-new-senate-bill/YJVO23X4FVES7L45EZNDRDNCQU/">criticized</a> the measure, highlighting child-safety AI regulations he's signed into law.</p><h3>Mad Libs</h3><p>Regulation arguments often resemble a game of Mad Libs &#8212; swap out the industry or policy fight and the sentences largely still work.</p><p>California's <a href="https://jacobin.com/2024/09/gavin-newsom-ai-tech-bill-sb-1047">bitterly resisted</a> AI safety bill, SB 1047, was likely a major inspiration for the preemption push. Authored by state senator Scott Wiener, the bill would have mainly required the largest AI developers to implement safeguards to mitigate catastrophic risks. 
Governor Gavin Newsom vetoed the bill, following intense pressure from the industry and prominent Democrats like Nancy Pelosi.</p><p>In an August interview, Wiener pushed back on the idea that only a federal standard could work, pointing to his state's 2018 <a href="https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?division=3.&amp;part=4.&amp;lawCode=CIV&amp;title=1.81.5">data privacy law</a>, which passed despite similar industry warnings about a patchwork of state laws. Six years later, he noted, Congress still hasn&#8217;t passed a national privacy law.</p><p>Wiener also called out the hypocrisy of industry leaders who claim to prefer federal rules while lobbying to block them. &#8220;A lot of times there are corporate actors who will say, &#8216;hey, don't go to the state level, do it at the federal level,&#8217; but those are the same corporate actors that are making it impossible for Congress,&#8221; he said, citing his 2018 <a href="https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201720180SB822">net neutrality bill</a> as another example. That law passed after telecom and cable companies successfully killed a similar federal effort.</p><h3>Lobbying</h3><p>Wiener told me last September that "The tech industry does not want to be regulated by and large. And so, it's always a huge fight and we've had some success in California and in Congress, it's been even harder."</p><p>While Big Tech has come to largely embrace preemption, often preferring to work through proxies like dark money groups or industry associations, the idea wasn't immediately adopted.</p><p>In May, <em>Politico</em> <a href="https://www.politico.com/news/2025/05/12/how-big-tech-is-pitting-washington-against-california-00336484">reported</a> on the lobbying effort, noting that &#8220;nobody quite knows what to ask for,&#8221; and that there&#8217;s disagreement within the industry over how strong federal AI rules should be. 
One AI company representative described to <em>Politico</em> the spectrum of opinion, from those who want &#8220;no regulation at the state level&#8221; to others who are &#8220;more comfortable with some regulation.&#8221;</p><p>The provision was a "fluke" that didn't really come from Big Tech, but rather from Andreessen Horowitz, Marc Andreessen's venture capital giant, the lobbyist told Obsolete. They described the firm as "less experienced" and going for the "hey, let's just ask for exactly what we want" approach.</p><p>The consultant echoed the lobbyist, describing the moratorium as a "tactic" that unexpectedly got legs but now carries significant tradeoffs to getting it passed. While in their view it would still be a huge win for the industry, it would come with unintended consequences, including killing a raft of state-level AI deepfake child pornography regulations. (These conversations took place before the moratorium was substantially weakened by the parliamentarian's revision.)</p><p>According to the lobbyist, the broader tech industry feared that blanket deregulation might appear too greedy an ask. And so it took a more cautious approach, avoiding making such a direct request itself. Though, the lobbyist noted, they all wanted it.</p><p>The lobbyist described a tension between tech behemoths and VCs like Andreessen. "It's this big brother, younger brother kind of thing, right? 
Where the companies are like, &#8216;whoa, whoa, whoa, we're making incremental progress here,'&#8230; Whereas you have some of the venture capitalist type folks coming in and being like, 'hey, let's just ban this and ban that,'" they said.</p><p>The consultant noted that Meta and <a href="https://a16z.com/setting-the-agenda-for-global-ai-leadership-assessing-the-roles-of-congress-and-the-states/">a16z</a> have emerged as the most aggressive opponents of state AI safety regulations, compared to the "pretty tame" lobbying efforts from Microsoft, OpenAI, and Anthropic.</p><p>In recent history, Microsoft has <a href="https://www.politico.eu/article/microsoft-tech-giant-antitrust-europe-brussels-silicon-valley/">positioned itself</a> as more pro-regulation than its competitors. The $3.7-trillion company <a href="https://www.thenation.com/article/society/california-ai-safety-bill/#:~:text=A%20Microsoft%20lobbyist,the%20forthcoming%20standards.">didn't formally oppose</a> California's AI safety bill, SB 1047 (though it did express a preference for a national law), and is the only one of the above companies that doesn't belong to TechNet.</p><p>Now, a lobbyist working on behalf of Microsoft &#8212; along with Amazon, Google, and Meta &#8212; is pushing for the moratorium, per <a href="https://www.ft.com/content/52ae52f1-531e-462f-898f-e9f86b3b1869">reporting</a> in the <em>Financial Times</em>.</p><p>The consultant said that Elon Musk's falling out with the administration has not been helpful on this particular policy front. Musk <a href="https://x.com/elonmusk/status/1828205685386936567">supported</a> SB 1047 and has <a href="https://x.com/elonmusk/status/495759307346952192">warned</a> of the dangers of AI for over a decade. They said that people were working on getting Musk to weigh in on the moratorium, but noted that his advocacy could cut both ways.</p><h3>Blowback</h3><p>The tech industry may have overplayed its hand. 
By trying to push the moratorium through reconciliation &#8212; with Cruz as its public face and Trump&#8217;s priorities baked into the broader bill &#8212; companies have made it harder for national Democrats to weigh in against <em>any</em> AI regulations, according to the consultant &#8212; an outcome they say the companies didn&#8217;t fully anticipate.</p><p>The opposition of national Democrats like Pelosi likely played a <a href="https://jacobin.com/2024/09/gavin-newsom-ai-tech-bill-sb-1047">key role</a> in killing California's SB 1047 last year. But in swinging for a ten-year veto on <em>all</em> state-level AI regulations, the industry may have alienated some would-be allies in the fight against <em>specific</em> state regulations, like New York's RAISE Act. The consultant says the bill's supporters feared that New York Senators Chuck Schumer and Kirsten Gillibrand might pressure Governor Kathy Hochul to veto it, as Gavin Newsom did in California. But neither senator has taken a position on the bill, even privately, the consultant told Obsolete &#8212; a silence they attributed to the issue&#8217;s newfound political toxicity.</p><h3>What's next</h3><p>As the Senate prepares to vote, the tech industry finds itself in uncharted territory. Companies have spent years warning about the dangers of a regulatory patchwork while simultaneously <a href="https://www.wsj.com/politics/policy/meta-google-lobbying-child-online-safety-bill-5ee63dcc?st=688tST&amp;reflink=desktopwebshare_permalink">blocking federal action</a>. Concerns about AI's harms and risks are <a href="https://www.transformernews.ai/p/congress-ccp-agi-hearing">growing</a>, especially <a href="https://intelligence.org/2025/06/18/new-endorsements-for-if-anyone-builds-it-everyone-dies/">within elite circles</a>. 
The public has <a href="https://www.brookings.edu/articles/what-the-public-thinks-about-ai-and-the-implications-for-governance/">been clear</a> about its support for regulating the technology, though the issue remains low in salience. If AI continues to become more capable and ubiquitous &#8212; as the industry hopes it will &#8212; the technology's salience will grow too, and with it, the appetite for regulation. Should AI enable a disaster or large-scale job loss, the industry may find itself wistful for the days of politicians pitching their bills as innovation-friendly and light-touch.</p><p>The moratorium has been substantially weakened and may not even survive. But it's remarkable that the AI industry got within spitting distance of a ten-year vacation from all state-level regulation. The unexpected success of the effort signals just how much Silicon Valley can override the preferences of the public and key parts of the MAGA coalition.</p><p>But win or lose, the push may have already damaged the industry's ability to whip Democrats &#8212; a miscalculation companies may come to regret.</p><p><em>With editing by <a href="https://www.sidmahanta.com/bio-contact">Sid Mahanta</a>. 
All mistakes are mine.</em></p>]]></content:encoded></item><item><title><![CDATA[Exclusive: Anthropic is Quietly Backpedalling on its Safety Commitments]]></title><description><![CDATA[The company released a model it classified as risky &#8212; without meeting requirements it previously promised]]></description><link>https://www.obsolete.pub/p/exclusive-anthropic-is-quietly-backpedalling</link><guid isPermaLink="false">https://www.obsolete.pub/p/exclusive-anthropic-is-quietly-backpedalling</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Thu, 22 May 2025 20:31:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jBL5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaef9e7b-8e99-486d-ad5f-970155eae6b2_1600x900.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>After publication, this article was updated to include an additional response from Anthropic and to clarify that while the company's version history <a href="https://www.anthropic.com/rsp-updates">webpage</a> doesn't explicitly highlight changes to the original ASL-4 commitment, discussion of these changes can be <a href="https://cdn.sanity.io/files/4zrzovbb/website/ee775bdcf76b2e2af32d658c934f460383d07c46.pdf#page=22">found</a> in a redline PDF linked on that page.</em></p><p>Anthropic just released Claude 4 Opus, its most capable AI model to date. But in doing so, the company may have abandoned one of its earliest promises.</p><p>In September 2023, Anthropic <a href="https://www-cdn.anthropic.com/1adf000c8f675958c2ee23805d91aaade1cd4613/responsible-scaling-policy.pdf">published</a> its Responsible Scaling Policy (RSP), a first-of-its-kind safety framework that promises to gate increasingly capable AI systems behind increasingly robust safeguards. Other leading AI companies followed suit, releasing their own versions of RSPs. 
The US lacks binding regulations on frontier AI systems, and these plans remain voluntary.</p><p>The core idea behind the RSP and similar frameworks is to assess AI models for dangerous capabilities, like being able to self-replicate in the wild or help novices make bioweapons. The results of these evaluations determine the risk level of the model. If the model is found to be too risky, the company commits to not releasing it until sufficient mitigation measures are in place.</p><p>Earlier today, TIME published, then temporarily removed, an <a href="https://time.com/7287806/anthropic-claude-4-opus-safety-bio-risk/">article</a> revealing that the yet-to-be-announced Claude 4 Opus is the first Anthropic model to trigger the company's AI Safety Level 3 (ASL-3) protections, after safety evaluators <a href="https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad13df32ed995.pdf#page=92">found</a> it may be able to assist novices in building bioweapons. (The updated article is time-stamped 6:45 AM Pacific, but didn't go back up until 9:45 AM, implying there was a time zone mixup.)</p><p>The article included striking quotes from Anthropic's chief scientist Jared Kaplan. 
"You could try to synthesize something like COVID or a more dangerous version of the flu &#8212; and basically, our modeling suggests that this might be possible," Kaplan said.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jBL5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaef9e7b-8e99-486d-ad5f-970155eae6b2_1600x900.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jBL5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaef9e7b-8e99-486d-ad5f-970155eae6b2_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!jBL5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaef9e7b-8e99-486d-ad5f-970155eae6b2_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!jBL5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaef9e7b-8e99-486d-ad5f-970155eae6b2_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!jBL5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaef9e7b-8e99-486d-ad5f-970155eae6b2_1600x900.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jBL5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaef9e7b-8e99-486d-ad5f-970155eae6b2_1600x900.png" width="724.7421875" height="407.66748046875" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/faef9e7b-8e99-486d-ad5f-970155eae6b2_1600x900.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:724.7421875,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!jBL5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaef9e7b-8e99-486d-ad5f-970155eae6b2_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!jBL5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaef9e7b-8e99-486d-ad5f-970155eae6b2_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!jBL5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaef9e7b-8e99-486d-ad5f-970155eae6b2_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!jBL5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaef9e7b-8e99-486d-ad5f-970155eae6b2_1600x900.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>When Anthropic published its <a href="https://www-cdn.anthropic.com/1adf000c8f675958c2ee23805d91aaade1cd4613/responsible-scaling-policy.pdf">first RSP</a> in September 2023, the company made a specific commitment about how it would handle increasingly capable models: "we will define ASL-2 (current system) and ASL-3 (next level of risk) now, and commit to define ASL-4 by the time we reach ASL-3, and so on." In other words, Anthropic promised it wouldn't release an ASL-3 model until it had figured out what ASL-4 meant.</p><p>Yet the company's <a href="https://www-cdn.anthropic.com/872c653b2d0501d6ab44cf87f43e1dc4853e4d37.pdf">latest RSP</a>, updated May 14, doesn't publicly define ASL-4 &#8212; despite treating Claude 4 Opus as an ASL-3 model. 
Anthropic's announcement <a href="https://www.anthropic.com/news/activating-asl3-protections">states</a> it has "ruled out that Claude Opus 4 needs the ASL-4 Standard."</p><p>When asked about this, an Anthropic spokesperson told Obsolete that the 2023 RSP is "outdated" and pointed to an October 2024 <a href="https://www.anthropic.com/news/announcing-our-updated-responsible-scaling-policy">revision</a> that changed how ASL standards work. The company now says ASLs map to increasingly stringent safety measures rather than requiring pre-defined future standards. </p><p>The spokesperson also pointed to <a href="https://www.anthropic.com/rsp-updates">past versions</a> published on Anthropic's website, which include descriptions of major changes. The main version history doesn't explicitly flag the removal of the original commitment to define ASL-4 by the time ASL-3 was reached &#8212; though this change is discussed in a <a href="https://cdn.sanity.io/files/4zrzovbb/website/ee775bdcf76b2e2af32d658c934f460383d07c46.pdf#page=22">redline</a> PDF linked on the same page.</p><p>After publication, Anthropic reached out to say that the company does define capability thresholds for ASL-4 in its <a href="https://www-cdn.anthropic.com/872c653b2d0501d6ab44cf87f43e1dc4853e4d37.pdf">current RSP</a>. However, the original 2023 commitment was more specific &#8212; it <a href="https://www-cdn.anthropic.com/1adf000c8f675958c2ee23805d91aaade1cd4613/responsible-scaling-policy.pdf#page=4">promised</a> to define both capability thresholds and "warning sign evaluations" before training ASL-3 models. 
While the current RSP includes high-level capability thresholds for ASL-4, it doesn't include the detailed warning sign evaluations that were part of the original commitment.</p><p>More fundamentally, what does a commitment mean if it can be walked back without public scrutiny?</p><p>When Obsolete posed a similar question, the Anthropic spokesperson pushed back, writing:</p><blockquote><p>Would disagree that it's something that 'can be updated at any time.' We have a defined process in place for making updates to the RSP and are rigorous in how we refine and update our commitments.</p></blockquote><p>The spokesperson highlighted a commitment in the <a href="https://www-cdn.anthropic.com/872c653b2d0501d6ab44cf87f43e1dc4853e4d37.pdf">RSP</a> that "Changes to this policy will be proposed by the CEO and the Responsible Scaling Officer and approved by the Board of Directors, in consultation with the Long-Term Benefit Trust."</p><p>This Trust, described by Anthropic in a September 2023 <a href="https://www.anthropic.com/news/the-long-term-benefit-trust">announcement</a> as an independent body intended to ensure accountability, itself appears to have fallen short on a significant commitment. Citing the company's general counsel and its incorporation documents, TIME <a href="https://time.com/6983420/anthropic-structure-openai-incentives/#:~:text=The%20LTBT%2C%20whose%20members%20have%20no%20equity%20in%20the%20company%2C%20currently%20elects%20one%20out%20of%20the%20board%E2%80%99s%20five%20members.%20But%20that%20number%20will%20rise%20to%20two%20out%20of%20five%20this%20July%2C%20and%20then%20to%20three%20out%20of%20five%20this%20November">reported</a> last May that the Long-Term Benefit Trust (LTBT) would appoint three out of five directors by November 2024. 
However, the Anthropic <a href="https://www.anthropic.com/company">website</a> currently lists only four directors.</p><p>Anthropic had not replied to a follow-up question about this by the time of publication.</p><h3>When voluntary governance breaks down</h3><p>In February 2024, a senior AI safety researcher at a leading AI company told me that these voluntary governance approaches work, but only for a time. Once you get close to human-level AI, competitive pressures take over.</p><p>Many AI insiders are increasingly predicting that human-level AI, often referred to as artificial general intelligence (AGI), is just around the corner. The influential essay series <a href="https://ai-2027.com/">AI 2027</a>, written by leading AI forecasters, predicts recursively self-improving AI systems by 2027 (Vice President JD Vance <a href="https://www.nytimes.com/2025/05/21/opinion/jd-vance-pope-trump-immigration.html#:~:text=I%20actually%20read%20the%20paper%20of%20the%20guy%20that%20you%20had%20on.%20I%20didn%E2%80%99t%20listen%20to%20that%20podcast%2C%20but%20%E2%80%94%E2%80%94">just told</a> The <em>New York Times</em> that he's read the series).</p><p>These predictions coincide with an apparent uptick in corner-cutting and broken promises from leading AI companies. Google DeepMind <a href="https://techcrunch.com/2025/04/03/google-is-shipping-gemini-models-faster-than-its-ai-safety-reports/">didn't publish</a> a safety report for its flagship model for weeks, and its <a href="https://techcrunch.com/2025/04/17/googles-latest-ai-model-report-lacks-key-safety-details-experts-say/">first attempt</a> in April was light on details. 
Also in April, the <em>Financial Times</em> <a href="https://www.ft.com/content/8253b66e-ade7-4d1f-993b-2d0779c7e7d8">reported</a> that OpenAI's safety testing time had shrunk from months to days.</p><p>Last month, updates to OpenAI's standard model, GPT-4o, caused it to <a href="https://thezvi.substack.com/p/gpt-4o-is-an-absurd-sycophant">breathlessly affirm</a> essentially anything you told it &#8212; a behavior <em>Rolling Stone</em> <a href="https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/">reported</a> could dangerously interact with mental illness. Just days before the company <a href="https://openai.com/index/sycophancy-in-gpt-4o/">rolled back</a> the updates, an OpenAI employee <a href="https://x.com/aidan_mclau/status/1915904460808983007">bragged</a> on X that "this is the quickest we've shipped an update to our main 4o line. Releases are accelerating, and the public is getting our best faster than ever."</p><p>Anthropic, however, has mostly managed to avoid scandals. The company was founded by safety-forward OpenAI researchers who became disillusioned with CEO Sam Altman, a story described in new detail by the just-released books <em><a href="https://wwnorton.com/books/9781324075974">The Optimist</a></em> and <em><a href="https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/">Empire of AI</a></em>.</p><p>Shortly before the TIME article was restored, Anthropic <a href="https://www.anthropic.com/news/activating-asl3-protections">published</a> its own announcement of Claude 4 Opus reaching ASL-3. 
The company emphasized it hasn't definitively determined whether the new model requires ASL-3 protections, but is implementing them as a "precautionary and provisional action" because it can't clearly rule out the risks.</p><p>Disclosure: I've received funding from the Omidyar Network as a <a href="https://omidyar.com/update/omidyar-network-announces-fifth-class-of-reporters-in-residence/">Reporter in Residence</a>. The Omidyar Network has also <a href="https://omidyar.com/update/omidyar-network-purchases-shares-of-anthropic/">invested</a> in Anthropic.</p><h3>What ASL-3 actually means</h3><p>Claude 4 Opus and Sonnet were <a href="https://www.anthropic.com/news/claude-4">made publicly available</a> around 9:45 AM Pacific.</p><p>According to the TIME article, which also <a href="https://time.com/7287806/anthropic-claude-4-opus-safety-bio-risk/">went back up</a> around 9:45 AM Pacific, Kaplan told the publication that in internal testing, Claude 4 Opus performed more effectively than prior models at advising novices on producing biological weapons.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!07V1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41ff6ef0-3ac3-4ab8-8c74-466e3e7ee34b_847x589.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!07V1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41ff6ef0-3ac3-4ab8-8c74-466e3e7ee34b_847x589.png 424w, https://substackcdn.com/image/fetch/$s_!07V1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41ff6ef0-3ac3-4ab8-8c74-466e3e7ee34b_847x589.png 848w, 
https://substackcdn.com/image/fetch/$s_!07V1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41ff6ef0-3ac3-4ab8-8c74-466e3e7ee34b_847x589.png 1272w, https://substackcdn.com/image/fetch/$s_!07V1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41ff6ef0-3ac3-4ab8-8c74-466e3e7ee34b_847x589.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!07V1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41ff6ef0-3ac3-4ab8-8c74-466e3e7ee34b_847x589.png" width="728" height="506.24793388429754" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/41ff6ef0-3ac3-4ab8-8c74-466e3e7ee34b_847x589.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:589,&quot;width&quot;:847,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!07V1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41ff6ef0-3ac3-4ab8-8c74-466e3e7ee34b_847x589.png 424w, https://substackcdn.com/image/fetch/$s_!07V1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41ff6ef0-3ac3-4ab8-8c74-466e3e7ee34b_847x589.png 848w, 
https://substackcdn.com/image/fetch/$s_!07V1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41ff6ef0-3ac3-4ab8-8c74-466e3e7ee34b_847x589.png 1272w, https://substackcdn.com/image/fetch/$s_!07V1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41ff6ef0-3ac3-4ab8-8c74-466e3e7ee34b_847x589.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Results from Anthropic's <a 
href="https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad13df32ed995.pdf#page=92">new system card</a></figcaption></figure></div><p>The ASL-3 threshold was <a href="https://www.anthropic.com/news/anthropics-responsible-scaling-policy">designed</a> in part to catch AI systems that could "substantially increase" someone's ability to obtain, produce, or deploy chemical, biological, radiological, or nuclear (CBRN) weapons. The protections required by reaching ASL-3 include enhanced cybersecurity to prevent model weight theft and deployment measures specifically targeting CBRN misuse &#8212; what Anthropic <a href="https://www-cdn.anthropic.com/872c653b2d0501d6ab44cf87f43e1dc4853e4d37.pdf">calls</a> a "defense in depth" strategy.</p><p>To meet that bar, the company <a href="https://www.anthropic.com/news/activating-asl3-protections">says</a> it has rolled out safeguards like "constitutional classifiers" &#8212; AI systems that monitor inputs and outputs for dangerous CBRN-related content &#8212; along with enhanced jailbreak detection supported by a bug bounty program.</p><h3>A test of voluntary commitments</h3><p>This moment reveals both the potential and the limitations of the industry's self-regulatory approach. On one hand, Anthropic appears to be following through on most of its commitments, implementing substantial safety measures even when uncertain they're needed.</p><p>On the other hand, these commitments remain voluntary with no external enforcement. As TIME <a href="https://time.com/7287806/anthropic-claude-4-opus-safety-bio-risk/">notes</a>, Anthropic itself is the judge of whether it's complying with the RSP. Breaking it carries no penalty beyond potential reputational damage. And, as we saw, these commitments can be quietly updated according to a process that Anthropic designed.</p><p>This is not to say that every detail of a safety plan should be set in stone. It's reasonable to update a framework based on new information. 
But there should be a clear distinction between clarifying details and reneging on an earlier commitment. Furthermore, real transparency would mean more clearly flagging any change significant enough to require LTBT approval.</p><h3>What happens next</h3><p>Anthropic <a href="https://www.anthropic.com/news/activating-asl3-protections">says</a> it will continue evaluating Claude 4 Opus' capabilities. If the company determines the model doesn't actually cross the ASL-3 threshold, it could downgrade to the more permissive ASL-2 protections. But for now, Anthropic says it's erring on the side of caution.</p><p>The bigger question is whether this precedent will hold as competition intensifies. With no binding safeguards on frontier AI development in the US, companies like Anthropic are essentially regulating themselves in public view. Whether that's sufficient for managing risks that Kaplan himself compares to pandemic-level threats remains an open &#8212; and urgent &#8212; question.</p><p>This moment exposes the limits of the self-regulatory model the AI industry has championed. 
If a company as safety-focused as Anthropic can quietly retreat from its own red lines, what will everyone else do when the stakes get even higher?</p><p><em>Edited by <a href="https://www.sidmahanta.com/bio-contact">Sid Mahanta</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[Exclusive: OpenAI Admitted its Nonprofit Board is About to Have a Lot Less Power]]></title><description><![CDATA[In a previously unreported letter, the AI company defends its restructuring plan while attacking critics and making surprising admissions]]></description><link>https://www.obsolete.pub/p/exclusive-what-openai-told-californias</link><guid isPermaLink="false">https://www.obsolete.pub/p/exclusive-what-openai-told-californias</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Sat, 17 May 2025 05:38:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6K0t!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c88551e-4265-45c8-95bc-1c62ba4ec06b_1600x900.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>OpenAI was <a href="https://www.obsolete.pub/i/161939154/nonprofit-origins">founded</a> as a counter to the perils of letting profit shape the development of an unprecedentedly powerful technology &#8212; one its founders <a href="https://www.obsolete.pub/i/152552592/openai">have said</a> could lead to human extinction. 
But in a newly obtained letter from OpenAI lawyers to California Attorney General Rob Bonta, the company reveals what it apparently fears more: anything that slows its ability to raise gargantuan amounts of money.</p><p>The previously unreported <a href="https://www.documentcloud.org/documents/25944903-20250515-openai-response-to-cal-ag-re-nonprofit-petition/">13-page letter</a> &#8212; dated May 15 and obtained by Obsolete &#8212; lays out OpenAI&#8217;s legal defense of its updated proposal to restructure its for-profit entity, which can still be blocked by the California and Delaware attorneys general (AGs). This letter is OpenAI&#8217;s latest attempt to prevent that from happening &#8212; and it&#8217;s full of surprising admissions, denials, and attacks.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6K0t!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c88551e-4265-45c8-95bc-1c62ba4ec06b_1600x900.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6K0t!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c88551e-4265-45c8-95bc-1c62ba4ec06b_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!6K0t!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c88551e-4265-45c8-95bc-1c62ba4ec06b_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!6K0t!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c88551e-4265-45c8-95bc-1c62ba4ec06b_1600x900.png 1272w, 
https://substackcdn.com/image/fetch/$s_!6K0t!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c88551e-4265-45c8-95bc-1c62ba4ec06b_1600x900.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6K0t!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c88551e-4265-45c8-95bc-1c62ba4ec06b_1600x900.png" width="1200" height="675" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2c88551e-4265-45c8-95bc-1c62ba4ec06b_1600x900.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6K0t!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c88551e-4265-45c8-95bc-1c62ba4ec06b_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!6K0t!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c88551e-4265-45c8-95bc-1c62ba4ec06b_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!6K0t!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c88551e-4265-45c8-95bc-1c62ba4ec06b_1600x900.png 1272w, 
https://substackcdn.com/image/fetch/$s_!6K0t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c88551e-4265-45c8-95bc-1c62ba4ec06b_1600x900.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Sam Altman speaking in front of SoftBank CEO Masayoshi Son | <a href="https://widerimage.reuters.com/photographer/carlos-barria.html">Carlos Barria</a> | Reuters</em></figcaption></figure></div><p>OpenAI has not replied to a request for comment.</p><p class="button-wrapper" 
data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p><p>Control of OpenAI currently rests with its nonprofit board &#8212; an arrangement investors have <a href="https://www.wsj.com/tech/ai/openais-latest-funding-round-comes-with-a-20-billion-catch-1e47d27d?st=hUpy9J&amp;reflink=desktopwebshare_permalink">reportedly balked</a> at following the brief firing of CEO Sam Altman in November 2023. To assuage investor concerns, the company <a href="https://www.reuters.com/technology/artificial-intelligence/openai-remove-non-profit-control-give-sam-altman-equity-sources-say-2024-09-25/">initiated plans</a> last year to remove nonprofit control and restructure as a for-profit public benefit corporation (PBC).</p><p>The restructuring effort has drawn fire from <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.433688/gov.uscourts.cand.433688.152.0.pdf">former employees</a>, <a href="https://www.obsolete.pub/p/breaking-openai-alums-nobel-laureates">Nobel laureates</a>, and an array of civil society organizations, including a <a href="https://sff.org/news-round-up-coalition-petition-to-ca-attorney-general/">coalition</a> of more than 50 California based nonprofits and community groups. Elon Musk has also <a href="https://www.obsolete.pub/p/what-the-headlines-miss-about-the">sued</a> to block it from moving forward, arguing that the nonprofit he helped found and fund is attempting to abandon its charitable purpose.</p><p>On May 5, OpenAI made the shocking <a href="https://openai.com/index/evolving-our-structure/">announcement</a> that it would ditch its plan to remove control over the company from the nonprofit. 
The move was <a href="https://x.com/GaryMarcus/status/1919465398140912109">hailed</a> by <a href="https://x.com/GarrisonLovely/status/1919457161153019986/quotes">many</a> as a win for public pressure and represented in headlines as a substantial change to the original plan. But as the surprise of the decision wore off and observers dug into the sparse details OpenAI shared, the victory for civic action began to look hollow.</p><p>The revised plan appears designed to placate both external critics and concerned investors by maintaining the appearance of nonprofit control while changing its substance. SoftBank, which recently invested $30 billion in OpenAI with the right to claw back $10 billion if the restructuring didn't move forward, seems unfazed by OpenAI's new proposal &#8212; SoftBank's finance chief <a href="https://www.cnbc.com/2025/05/13/openai-restructure-plan-gets-softbank-blessing-as-microsoft-negotiates.html">said</a> on an earnings call that, from the firm's perspective, "nothing has really changed."</p><h3>Revelations</h3><p>The letter from OpenAI's lawyers to AG Bonta contains a number of new details. 
It says that "many potential investors in OpenAI's recent funding rounds declined to invest" due to its unusual governance structure &#8212; in tension with Bloomberg's <a href="https://www.bloomberg.com/news/articles/2024-09-19/openai-to-decide-which-backers-to-let-into-6-5-billion-funding?embedded-checkout=true">earlier reporting</a> that OpenAI's October round was "oversubscribed."</p><p>The letter resolves a question raised in recent Bloomberg <a href="https://www.bloomberg.com/news/articles/2025-05-06/openai-s-for-profit-overhaul-is-far-from-being-a-done-deal?embedded-checkout=true">reporting</a>: the nonprofit board will have the power to fire PBC directors.</p><p>The document also states that "The Nonprofit will exchange its current economic interests in the Capped-Profit Enterprise for a substantial equity stake in the new PBC and will enjoy access to the PBC's intellectual property and technology, personnel, and liquidity&#8230;" This suggests the nonprofit would no longer own or control the underlying technology but would merely have a license to it &#8212; similar to OpenAI's commercial partners.</p><h3>The key question</h3><p>The key question at the heart of OpenAI's restructuring is whether the people practically running the company day-to-day will have a legal duty to prioritize the <a href="https://openai.com/charter/">charitable mission</a> above profit. OpenAI's mission is to ensure that artificial general intelligence (AGI) benefits humanity.</p><p>Under the current structure, OpenAI's LLC operating agreement explicitly <a href="https://openai.com/our-structure/">states</a> that "the Company's duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit." 
This creates a legally binding obligation for the company's management.</p><p>In contrast, under the proposed structure, PBC directors would be legally required to balance shareholder interests with the public benefit purpose. The ability to fire PBC directors does not change their fundamental legal duties while in office.</p><p>This shift in legal duties is likely one reason why investors who demanded the right to claw back tens of billions if OpenAI failed to restructure by aggressive deadlines are <a href="https://www.cnbc.com/2025/05/13/openai-restructure-plan-gets-softbank-blessing-as-microsoft-negotiates.html">reportedly</a> more comfortable with the new arrangement. It transforms the nonprofit from an active manager with direct control to a shareholder with hiring and firing powers &#8212; a much more familiar and investor-friendly governance model.</p><p>So far, no Delaware PBC has ever been held liable for failing to pursue its mission &#8212; legal scholars <a href="https://minnesotalawreview.org/wp-content/uploads/2024/10/Chang_FinalFmt.pdf">can&#8217;t find</a> a single benefit&#8209;enforcement case on the books.</p><h3>Competitors and critics</h3><p>While ostensibly addressed to the <a href="https://www.sff.org/Offsite-Media/Petition_Complaint-to-AG-re-Open-AIs-Violations-of-Charitable-Trust.pdf">California coalition</a>, the document reads more like a rebuttal to Elon Musk's lawsuit against the company &#8212; Musk's name appears nearly twenty times throughout. This repetition appears to be an attempt to delegitimize criticism by tying it to Musk and the commercial interests represented by his rival startup, xAI. 
"Despite (and likely because) of OpenAI's achievements, its most powerful detractors&#8212;many of whom, including Elon Musk, stand to massively profit if OpenAI falters&#8212;have sponsored a false narrative about OpenAI to advance their own commercial interests," the company writes.</p><p>This framing tries to cast all critics, including the nonprofit coalition, as either aligned with or manipulated by Musk's agenda. The letter claims that Musk has leveraged "a campaign of harassment and misinformation for more than a year" and suggests that legal arguments against OpenAI's restructuring "echo those of OpenAI's competitors who stand to gain from its downfall." (Meta <a href="https://www.wsj.com/tech/ai/elon-musk-open-ai-lawsuit-response-c1f415f8?st=n9JeAS&amp;reflink=desktopwebshare_permalink">joined</a> Musk's suit in December.)</p><p>Musk has indeed waged a bitter campaign against OpenAI and Altman since losing a <a href="https://www.wsj.com/tech/elon-musk-sam-altman-relationship-6889a77a?st=heQZ89&amp;reflink=desktopwebshare_permalink">struggle</a> for the organization's helm in 2018. But by conflating legitimate concerns from civil society organizations with Musk's more self-serving grievances, OpenAI appears to be creating a false binary: either you support the company's restructuring or you're aligned with Musk's interests. This rhetorical strategy deflects attention from the substantive governance <a href="https://notforprivategain.org/">concerns</a> <a href="https://www.sff.org/Offsite-Media/Petition_Complaint-to-AG-re-Open-AIs-Violations-of-Charitable-Trust.pdf">raised</a> by independent groups.</p><p>The formal letter's confrontational attitude contrasts sharply with the conciliatory tone OpenAI has taken in direct communications with the nonprofit coalition. 
In an email to individual members, OpenAI representatives wrote that they:</p><blockquote><p>understand this moment as an opportunity not just for dialogue&#8212;it's a chance to lay the groundwork for a different kind of partnership, one built on trust, transparency, and shared goals. We would welcome the chance to work with you.</p></blockquote><p>This two-faced approach &#8212; offering partnership in private communications while suggesting coalition members are acting in bad faith in formal documents &#8212; highlights the increasingly adversarial tactics OpenAI is employing against critics of its restructuring plans. (The company <a href="https://www.reuters.com/legal/openai-countersues-elon-musk-claims-harassment-2025-04-09/">countersued</a> Musk last month.)</p><p>And, at least for some critics, it does not appear to be working. Orson Aguilar, a leader of the California nonprofit coalition, forwarded OpenAI's letter and the above email to Obsolete with this note:</p><blockquote><p>For your information, they are sending emails directly to our coalition members and are now implying that we are working with Musk and that we are doing this for commercial interests!</p></blockquote><h3>Employee motivations</h3><p>OpenAI's letter directly challenges the coalition's <a href="https://www.sff.org/Offsite-Media/Petition_Complaint-to-AG-re-Open-AIs-Violations-of-Charitable-Trust.pdf">characterization</a> of employee motivations during the November 2023 board crisis, when Altman was briefly ousted. It claims the coalition "denigrate[s] the outpouring of support for Altman" by suggesting it was a "cash grab motivated solely by the employees' 'financial stakes.'" The company insists there is "zero support for that outrageous claim."</p><p>However, a former OpenAI employee who signed the letter supporting Altman strongly disputes this characterization. "That's bullshit and a blatant lie," they wrote to Obsolete. "We were in the middle of a tender offer at the time. 
People had millions of dollars on the line, myself included." The former employee described numerous conversations on Slack and offline about "how does this affect the tender" and added that many employees "didn't really trust Sam even back then, they just thought the option was signing the letter or the company dying, and they really didn't want the latter."</p><p>And in response to a claim in the letter that the nonprofit board is stronger than ever, they wrote, "Also bullshit. The current board is packed with a supermajority of Sam loyalists. Pretending that they're able to act as an independent check on him is a joke."</p><p>The letter reassures AG Bonta that "This plan for the future is the product of careful consideration by the Board of the Nonprofit." However, the authors &#8212; legal representatives of the board &#8212; then proceed to misspell the names of two of the directors ("Ogunles" instead of "Ogunlesi" and "Hellman" instead of "Hellmann"). </p><h3>Contestable claims</h3><p>The letter makes some contestable claims, including that "Microsoft has never 'control[led]' OpenAI&#8212;either as a matter of corporate governance or as a matter of antitrust law." However, Altman's threat to reconstitute OpenAI at Microsoft, backed by signatures from 90 percent of the company, likely played a pivotal role in the board's decision to reinstate him in November 2023. OpenAI also asserts that "all arrangements with Microsoft have been negotiated at arm's length" &#8212; a claim that strains credulity given how instrumental Altman has been in forging the relationship. 
Altman's "<a href="https://www.nytimes.com/2023/11/20/technology/openai-microsoft-altman-nadella.html?unlocked_article_code=1.H08.eFQV.p3oBpQACkZQu&amp;smid=url-share">bromance</a>" with Microsoft CEO Satya Nadella has <a href="https://www.wsj.com/tech/ai/sam-altman-satya-nadella-rift-307cb7f5?st=wZUyty&amp;reflink=desktopwebshare_permalink">been</a> <a href="https://www.newyorker.com/magazine/2023/12/11/the-inside-story-of-microsofts-partnership-with-openai">covered</a> extensively, and Altman <a href="https://www.nytimes.com/2024/10/17/technology/microsoft-openai-partnership-deal.html?unlocked_article_code=1.H08.RGH1.RCFGRApD6IgE&amp;smid=url-share">reportedly</a> asked Nadella for billions of dollars while serving on the nonprofit board.</p><p>OpenAI's criticism of the coalition's April 9 letter is particularly puzzling. The company faults the coalition for claiming that "OpenAI proposes to eliminate any and all control by [the Nonprofit] over OpenAI's core work." But as OpenAI itself later demonstrated with its May 5 reversal, <em>that was precisely OpenAI's publicly understood plan at the time the coalition made its statement.</em> The company appears to be retroactively criticizing the coalition for accurately describing OpenAI's proposal as it stood.</p><h3>What's left unsaid</h3><p>Perhaps most notable is what the letter <em>doesn't</em> say. While OpenAI claims its PBC will operate "consistent with the Nonprofit's mission and subject to the Nonprofit's control, just as the Capped-Profit Enterprise does today," it concedes in the next breath that under Delaware law, PBC directors <em>must</em> balance shareholder interests with public benefit. 
As highlighted earlier, this represents a fundamental shift from the current structure, where the nonprofit board isn't required to consider shareholder profits at all.</p><p>A separate group of advocates organized under the label "Not for Private Gain" sent its own <a href="https://notforprivategain.org/">legal letter</a> in April, <a href="https://www.obsolete.pub/p/breaking-openai-alums-nobel-laureates">asking</a> the California and Delaware AGs to investigate and block OpenAI's restructuring plans. The group laid out six existing governance safeguards that were threatened by the company's proposal &#8212; like the subordination of profit motives to OpenAI's charitable purpose and caps on how much profit can go to investors.</p><p>Earlier this week, the Not for Private Gain project published a <a href="https://notforprivategain.org/follow-up">follow-up</a> analyzing OpenAI's new proposal, finding that five of the six safeguards were still at risk of being eliminated by the restructuring. The sixth safeguard &#8212; the profit caps &#8212; would go away under the new plan.</p><p>The Not for Private Gain update offers specific ideas on how OpenAI could legally enshrine some of the safeguards that would not exist by default if the new proposal moves forward &#8212; for instance, through legal guarantees to prioritize mission above profit in the PBC's certificate of incorporation.</p><p>OpenAI's letter to AG Bonta does include one recommendation made in the group's follow-up &#8212; the nonprofit board will have the ability to fire the PBC board members.</p><p>Tyler Whitmer of the Not for Private Gain coalition wrote to Obsolete:</p><blockquote><p>I&#8217;m glad to learn the nonprofit board will have the power to hire and fire directors of the proposed PBC, but this is a far cry from the direct control of the LLC the nonprofit has today. 
The new proposal is being pitched as the nonprofit staying in control, but in reality it undermines the nonprofit&#8217;s charitable mission by diluting its ability to directly manage development and deployment of AGI and shifts enforcement of the mission from public servants in the AGs' offices to self-interested shareholders.</p></blockquote><p>Responding to the letter's insinuation of impropriety, the group shared this statement with Obsolete:</p><blockquote><p>We are independent of other groups voicing concerns about OpenAI&#8217;s restructuring. None of our letters&#8217; signatories work for an OpenAI competitor, and we have received no funding from OpenAI competitors, including Elon Musk.</p></blockquote><p>The Not for Private Gain update acknowledges that it "does not discuss the proposed abandonment of OpenAI&#8217;s profit caps or its current commitment that artificial general intelligence&#8212;when OpenAI creates it&#8212;will belong exclusively to the nonprofit for the benefit of humanity." This, the authors write, is not meant "to minimize the significance of these proposed changes, but rather to underscore the singular importance of ensuring that OpenAI continues to have a legally enforceable obligation to advance the charitable mission above all else."</p><p>OpenAI has been marked by a long history of assurances, beginning with a simple one: public interest over private gain. 
As the organization now tries to undo the legal restraints that accompanied these foundational promises, it becomes harder to ignore what&#8217;s being constructed in their stead: not a firewall between mission and profit, but a bridge that only goes in one direction.</p>]]></content:encoded></item><item><title><![CDATA[Four Predictions About OpenAI's Plans To Retain Nonprofit Control]]></title><description><![CDATA[An apparent victory for opponents of the company's for-profit ambitions may be more complicated]]></description><link>https://www.obsolete.pub/p/four-predictions-about-openais-plans</link><guid isPermaLink="false">https://www.obsolete.pub/p/four-predictions-about-openais-plans</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Mon, 05 May 2025 21:28:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!43Aj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc55c9467-6865-437d-b812-db09af58c84c_1100x733.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>After months of controversy and litigation, OpenAI just announced that it will keep its nonprofit in control of the company &#8212; while hinting that caps on investor profits might disappear.</p><p>At a glance, this appears to be a significant reversal from the company's previous plan to shed nonprofit control over the for-profit entity, an effort that faced <a href="https://obsolete.pub/p/breaking-openai-alums-nobel-laureates">major opposition</a> from parties including <a href="https://www.obsolete.pub/p/why-did-elon-musk-just-offer-to-buy">Elon Musk</a>, civil society leaders, former employees, and legal scholars.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" 
href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p><p>"OpenAI was founded as a nonprofit, and is today overseen and controlled by that nonprofit. Going forward, it will continue to be overseen and controlled by that nonprofit," Bret Taylor, chair of the nonprofit's board, <a href="https://openai.com/index/evolving-our-structure/">wrote</a> in a blog post today. The announcement followed discussions with the attorneys general (AGs) of California and Delaware, who oversee charitable organizations in their states and <a href="https://calmatters.org/economy/technology/2025/01/openai-investigation-california/">have</a> <a href="https://www.axios.com/2024/10/30/openai-for-profit-delaware-attorney-general">been</a> scrutinizing the proposed restructuring &#8212; which either of them could have blocked.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!43Aj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc55c9467-6865-437d-b812-db09af58c84c_1100x733.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!43Aj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc55c9467-6865-437d-b812-db09af58c84c_1100x733.jpeg 424w, https://substackcdn.com/image/fetch/$s_!43Aj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc55c9467-6865-437d-b812-db09af58c84c_1100x733.jpeg 848w, https://substackcdn.com/image/fetch/$s_!43Aj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc55c9467-6865-437d-b812-db09af58c84c_1100x733.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!43Aj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc55c9467-6865-437d-b812-db09af58c84c_1100x733.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!43Aj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc55c9467-6865-437d-b812-db09af58c84c_1100x733.jpeg" width="1100" height="733" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c55c9467-6865-437d-b812-db09af58c84c_1100x733.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:733,&quot;width&quot;:1100,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!43Aj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc55c9467-6865-437d-b812-db09af58c84c_1100x733.jpeg 424w, https://substackcdn.com/image/fetch/$s_!43Aj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc55c9467-6865-437d-b812-db09af58c84c_1100x733.jpeg 848w, https://substackcdn.com/image/fetch/$s_!43Aj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc55c9467-6865-437d-b812-db09af58c84c_1100x733.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!43Aj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc55c9467-6865-437d-b812-db09af58c84c_1100x733.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Photo: <a href="https://www.claramokriphotography.com/">Clara Mokri</a> for <a href="https://nymag.com/intelligencer/article/sam-altman-artificial-intelligence-openai-profile.html">NY Magazine</a></figcaption></figure></div><p>But what does "control" actually mean in this context? 
And will the profit caps &#8212; which would have sent OpenAI's profits to the nonprofit once investors received returns of up to 100 times their investment &#8212; remain in place?</p><p>And does OpenAI now owe investors $26.6 billion, plus interest? The company <a href="https://www.wsj.com/tech/ai/openais-latest-funding-round-comes-with-a-20-billion-catch-1e47d27d?st=kEiMob&amp;reflink=desktopwebshare_permalink">reportedly</a> <a href="https://www.businessinsider.com/openai-deadline-to-become-for-profit-or-return-investor-money-2024-10">gave</a> investors in its last two fundraising rounds the ability to claw back tens of billions of dollars if it didn't shed its nonprofit controls by certain deadlines. OpenAI didn't appear to explicitly address the status of these provisions.</p><h2><strong>Reading between the lines</strong></h2><p>While the announcement keeps the nonprofit in control, CEO Sam Altman's accompanying letter <a href="https://openai.com/index/evolving-our-structure/#:~:text=Instead%20of%20our,to%20something%20simpler.">suggests</a> a significant change to the capped-profit model:</p><blockquote><p>Instead of our current complex capped-profit structure&#8212;which made sense when it looked like there might be one dominant AGI effort but doesn&#8217;t in a world of many great AGI companies&#8212;we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.</p></blockquote><p>This language implies that the profit caps, a crucial feature of OpenAI's original model, will be modified. These caps were intended to ensure that if OpenAI became wildly profitable through building artificial general intelligence (AGI), the vast majority of profits would flow to the nonprofit. OpenAI's charter <a href="https://openai.com/charter/">defines</a> AGI as "a highly autonomous system that outperforms humans at most economically valuable work." 
And it's worth noting that normal corporations <em><a href="https://www.investopedia.com/investing/know-your-shareholder-rights/">don't</a></em><a href="https://www.investopedia.com/investing/know-your-shareholder-rights/"> have</a> caps on investor returns.</p><p>Altman seems to be arguing that because many companies will build AGI, no single company is going to pull in the trillion-dollar-plus profits they once anticipated. As a result, the nonprofit will actually be giving up less in expected profit and will need to receive less compensation for the removal of the caps.</p><p>But if investors have been balking at the profit caps, as they <a href="https://www.reuters.com/technology/artificial-intelligence/openai-lays-out-plan-shift-new-for-profit-structure-2024-12-27/">reportedly have been</a>, then that may mean they think OpenAI has some appreciable chance of hitting them.</p><p>In other words, what Altman wrote doesn't make much sense. If OpenAI is no longer on track for trillion-dollar profits, the caps should be irrelevant. The fact that investors purportedly pushed to eliminate them suggests they believe OpenAI&#8217;s upside remains enormous &#8212; and that the caps were more than just a technical nuisance. I find it hard to imagine that investors are put off by the <em>complexity</em> of OpenAI's structure more than the caps themselves.</p><p>Without these limits, OpenAI&#8217;s investors could reap uncapped returns &#8212; fundamentally changing the company's incentives and undermining its original intention to avoid the corrupting influence of profit-maximization.</p><h2><strong>The control question</strong></h2><p>A key question that remains unanswered is how much OpenAI's nonprofit control really matters if the board doesn't exercise independent judgment. 
After the November 2023 firing and rapid rehiring of Altman, it's reasonable to wonder if the nonprofit board will ever again meaningfully stand up to the CEO or the for-profit entity.</p><p>Taylor shared a "technical detail" <a href="https://www.cnbc.com/2025/05/05/openai-says-nonprofit-retain-control-of-company-bowing-to-pressure.html">with reporters</a>: the PBC will have its own board, but the nonprofit will appoint its directors &#8212; and for now, the same people will sit on both boards.</p><p>As the technology OpenAI builds grows more capable and widely used, the way it's governed matters more.</p><p>This also matters because if OpenAI actually builds AGI, it's not out of the question to think it could generate trillions in profits. Without profit caps, there would be no legal obligation to share that money with the world, beyond what they pay in taxes and whatever stake the nonprofit retains.</p><p>Perhaps more importantly, nonprofit control makes it easier for directors to justify decisions that are good for the world or safety but bad for the bottom line. 
Shortly after Altman was fired, OpenAI's chief strategy officer Jason Kwon <a href="https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html#:~:text=During%20the%20call,without%20Mr.%20Altman.">reportedly told</a> then-director Helen Toner that by voting for the firing, she had violated her duties &#8212; but under the nonprofit structure, her duty was to humanity writ large, not shareholders.</p><p>A fiduciary duty to humanity may sound silly, but the degrees of freedom it offers could matter enormously in moments of crisis or when making key decisions about powerful AI systems.</p><h2><strong>Four predictions</strong></h2><p>Based on my past reporting and reading between the lines of the announcement, here's what I expect to happen:</p><ol><li><p>The profit caps will be gone, replaced with a "normal capital structure where everyone has stock" &#8212; and that stock entitles you to uncapped future profits.</p></li><li><p>OpenAI won't have to pay back the $26.6 billion to investors because they've signed off on this change in return for the profit caps being eliminated.</p></li><li><p>The nonprofit will be compensated tens of billions of dollars by the for-profit entity for the removal of the caps.</p></li><li><p>The nonprofit will largely use that money to buy OpenAI services for nonprofits and governments, targeting constituencies that can make life difficult for the company (like California nonprofits).</p></li></ol><p>As Altman wrote in the letter, the nonprofit "will become a big shareholder in the [new public benefit corporation for-profit entity], in an amount supported by independent financial advisors, giving the nonprofit resources to support programs so AI can benefit many different communities."</p><h2><strong>Is this a victory?</strong></h2><p>The "Not for Private Gain" <a href="https://notforprivategain.org/">letter</a> from civil society leaders, former employees, and Nobel laureates that Obsolete <a 
href="https://obsolete.pub/p/breaking-openai-alums-nobel-laureates">covered last month</a> argued that no sale price could adequately compensate the nonprofit for what it would be giving up &#8212; control over a company that might build AGI. They essentially said: this isn't a debate about fair market value.</p><p>In response to the news, former OpenAI researcher Todor Markov <a href="https://x.com/todor_m_markov/status/1919483495157797353">tweeted</a>:</p><blockquote><p>Glad you're making this commitment.</p><p>I do think it's unfortunate that you only made it after public pressure and the Attorneys General getting involved. Had you done this back in December, it would have looked like principle, not like you got dragged kicking and screaming.</p><p>Still, regardless of your true motivations, this decision is a win for the broader public. We&#8217;ll be watching closely to make sure nonprofit control remains more than just words on paper.</p></blockquote><p>Page Hedley, who led the "Not for Private Gain" letter and also used to work at OpenAI, is less sure that this is a win. 
"We&#8217;re glad that OpenAI is listening to concerns from civil society leaders and Attorneys General Jennings and Bonta," he wrote in a statement that raises these questions:</p><blockquote><p>Will OpenAI's commercial goals continue to be legally subordinate to its charitable mission, which is enforceable by the attorneys general?</p><p>Who will own the technology that OpenAI develops?</p></blockquote><p>Hedley concludes, "The 2019 restructuring <a href="https://openai.com/index/openai-lp/">announcement</a> made the primacy of the mission very clear, but so far, these statements have not."</p><p>Whether this development marks a meaningful victory for those concerned about OpenAI&#8217;s governance &#8212; or just a clever repackaging of its original plans &#8212; remains an open question.</p><p><em>Thanks to <a href="http://www.ian-macdougall.com/about.html">Ian MacDougall</a> and <a href="https://www.sidmahanta.com/bio-contact">Sid Mahanta</a> for the excellent and timely edits.</em> </p>]]></content:encoded></item><item><title><![CDATA[Breaking: OpenAI Alums, Nobel Laureates Urge Regulators to Save Company's Nonprofit Structure]]></title><description><![CDATA[Converting to a for-profit model would undermine the company's founding mission to ensure AGI "benefits all of humanity," argues new letter]]></description><link>https://www.obsolete.pub/p/breaking-openai-alums-nobel-laureates</link><guid isPermaLink="false">https://www.obsolete.pub/p/breaking-openai-alums-nobel-laureates</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Wed, 23 Apr 2025 09:01:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!iZlu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca024065-2164-41d2-bc86-e3b1fa9c8084_1600x900.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" 
target="_blank" href="https://substackcdn.com/image/fetch/$s_!iZlu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca024065-2164-41d2-bc86-e3b1fa9c8084_1600x900.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iZlu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca024065-2164-41d2-bc86-e3b1fa9c8084_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!iZlu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca024065-2164-41d2-bc86-e3b1fa9c8084_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!iZlu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca024065-2164-41d2-bc86-e3b1fa9c8084_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!iZlu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca024065-2164-41d2-bc86-e3b1fa9c8084_1600x900.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iZlu!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca024065-2164-41d2-bc86-e3b1fa9c8084_1600x900.png" width="1200" height="675" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ca024065-2164-41d2-bc86-e3b1fa9c8084_1600x900.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!iZlu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca024065-2164-41d2-bc86-e3b1fa9c8084_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!iZlu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca024065-2164-41d2-bc86-e3b1fa9c8084_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!iZlu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca024065-2164-41d2-bc86-e3b1fa9c8084_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!iZlu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca024065-2164-41d2-bc86-e3b1fa9c8084_1600x900.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Sam Altman testifying before Congress in 2023. 
Photographer: <a href="https://www.ericlee.co/">Eric Lee</a>/Bloomberg via Getty Images</figcaption></figure></div><p>Don&#8217;t become a for-profit.</p><p>That&#8217;s the blunt message of a recent letter signed by more than 30 people, including former OpenAI employees, prominent civil-society leaders, legal scholars, and Nobel laureates, among them AI pioneer Geoffrey Hinton and former World Bank chief economist Joseph Stiglitz.</p><p>Obsolete obtained the 25-page <a href="https://www.documentcloud.org/documents/25905798-letter-to-ca-and-de-attorneys-general-re-openai-restructuring-4-17-2025/">letter</a>, which was sent last Thursday to the attorneys general (AGs) of California and Delaware, two officials with the power to block the deal.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p><p><a href="http://notforprivategain.org">Made public</a> early Wednesday, the letter argues that OpenAI's <a href="https://openai.com/index/why-our-structure-must-evolve-to-advance-our-mission/">proposed transformation</a> from a nonprofit-controlled entity into a for-profit public benefit corporation (PBC) would fundamentally betray the organization's founding mission and could even be unlawful &#8212; placing the power and responsibility to intervene squarely with the state AGs.</p><p>OpenAI and the offices of the California and Delaware attorneys general did not reply to requests for comment.</p><p>The letter was primarily authored by <a href="https://www.linkedin.com/in/page-hedley-71318613/?originalSubdomain=uk">Page Hedley</a>, a lawyer who worked at OpenAI from 2017 to 2018 and recently left an AI policy role at <a href="https://www.longview.org/">Longview Philanthropy</a>; Sunny Gandhi, 
political director of <a href="https://encodeai.org/">Encode AI</a>; and Tyler Whitmer, founder and president of <a href="https://lasst.org/">Legal Advocates for Safe Science and Technology</a>.</p><p>(Encode AI <a href="https://encodeai.org/who-we-are/">receives funding</a> from the Omidyar Network, where I am currently a <a href="https://omidyar.com/omidyar-network-announces-fifth-class-of-reporters-in-residence/">Reporter in Residence</a>, and I worked as a media consultant for Longview Philanthropy in 2022.)</p><h2>Nonprofit origins</h2><p>In 2015, OpenAI&#8217;s founders established it as a nonprofit research lab as a counter to for-profit AI companies like Google DeepMind. When executives felt they couldn't raise enough capital to compete at AI's increasingly expensive cutting edge, they launched a for-profit in 2019 that was ultimately controlled by the nonprofit. This unusual corporate structure was designed to keep the organization loyal to its mission: to ensure that artificial general intelligence (AGI) benefits all of humanity. 
AGI is <a href="https://openai.com/our-structure/">defined</a> in the OpenAI Charter as "a highly autonomous system that outperforms humans at most economically valuable work."</p><p>In the years since this structure was established, OpenAI has taken on tens of billions in investment, reached a $300 billion valuation, and arguably become the world's leading AI company.</p><p>But now, despite years of insisting that nonprofit governance was essential to its mission, OpenAI wants to abandon it &#8212; and adopt a more conventional corporate structure that prioritizes shareholder returns.</p><p>OpenAI has raised over $46 billion since October 2024 and has <a href="https://www.wsj.com/tech/ai/openais-latest-funding-round-comes-with-a-20-billion-catch-1e47d27d">reportedly</a> <a href="https://www.businessinsider.com/openai-deadline-to-become-for-profit-or-return-investor-money-2024-10">given</a> investors the ability to ask for most of it back if the restructuring isn't completed by certain deadlines, with the earliest hitting by the end of this year.</p><p>"OpenAI may one day build technology that could get us all killed," writes former employee Nisan Stiennon in a supplemental statement. "It is to OpenAI's credit that it's controlled by a nonprofit with a duty to humanity. This duty precludes giving up that control."</p><p>In the letter, the authors argue that the AGs should investigate the conversion, prevent it from moving forward as planned, and work to ensure the OpenAI nonprofit board is sufficiently empowered, informed, independent, and willing to stand up to company management. 
If the board doesn't meet these requirements, the AGs should intervene, potentially going so far as to remove directors and appoint an independent oversight body, the letter suggests.</p><p>"The proposed restructuring would eliminate essential safeguards," the letter states, "effectively handing control of, and profits from, what could be the most powerful technology ever created to a for-profit entity with legal duties to prioritize shareholder returns."</p><p>This letter is the latest in a series of high-profile challenges to the OpenAI restructuring. Cofounder Elon Musk is suing to block it, with formal support from <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.433688/gov.uscourts.cand.433688.152.0.pdf">former employees</a>, <a href="https://encodeai.org/encode-backs-legal-challenge-to-openais-for-profit-switch/">civil society groups</a>, and <a href="https://www.wsj.com/tech/ai/elon-musk-open-ai-lawsuit-response-c1f415f8?st=q9LM3x&amp;reflink=desktopwebshare_permalink">Meta</a>. 
Musk also offered to buy the nonprofit's assets for $97.4 billion, in a <a href="https://garrisonlovely.substack.com/p/why-did-elon-musk-just-offer-to-buy">likely effort</a> to bid up the price the new for-profit has to pay for control or disrupt the transformation entirely.</p><p>And earlier this month, a separate coalition of over 50 California nonprofit leaders <a href="https://sff.org/news-round-up-coalition-petition-to-ca-attorney-general/">filed</a> a petition with AG Rob Bonta's office, urging him to ensure that the nonprofit is adequately compensated for what it's giving up in the transition.</p><p>However, this new letter differs from past efforts by asking the AGs to pursue an intervention separate from Musk's suit and by arguing that the restructuring shouldn't move forward &#8212; no matter how much OpenAI's nonprofit gets compensated.</p><h2>Nobel opposition</h2><p>Three Nobel laureates feature prominently among the signatories: AI pioneer Geoffrey Hinton, and economists Oliver Hart and Joseph Stiglitz.</p><p>Hinton is the <a href="https://scholar.google.com/citations?user=JicYPdAAAAAJ&amp;hl=en">second most-cited</a> living scientist, and his pioneering work in the field of deep learning <a href="https://www.nobelprize.org/prizes/physics/2024/hinton/facts/">earned him</a> the 2024 Nobel Prize in Physics. 
In May 2023, Hinton resigned from his job at Google to freely warn that advanced AI could wipe out humanity.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dhBi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcbacef6-b0b7-4a1b-bec9-ffcaaa3dcc49_6720x4480.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dhBi!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcbacef6-b0b7-4a1b-bec9-ffcaaa3dcc49_6720x4480.jpeg 424w, https://substackcdn.com/image/fetch/$s_!dhBi!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcbacef6-b0b7-4a1b-bec9-ffcaaa3dcc49_6720x4480.jpeg 848w, https://substackcdn.com/image/fetch/$s_!dhBi!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcbacef6-b0b7-4a1b-bec9-ffcaaa3dcc49_6720x4480.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!dhBi!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcbacef6-b0b7-4a1b-bec9-ffcaaa3dcc49_6720x4480.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dhBi!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcbacef6-b0b7-4a1b-bec9-ffcaaa3dcc49_6720x4480.jpeg" width="1200" height="800.2747252747253" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dcbacef6-b0b7-4a1b-bec9-ffcaaa3dcc49_6720x4480.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dhBi!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcbacef6-b0b7-4a1b-bec9-ffcaaa3dcc49_6720x4480.jpeg 424w, https://substackcdn.com/image/fetch/$s_!dhBi!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcbacef6-b0b7-4a1b-bec9-ffcaaa3dcc49_6720x4480.jpeg 848w, https://substackcdn.com/image/fetch/$s_!dhBi!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcbacef6-b0b7-4a1b-bec9-ffcaaa3dcc49_6720x4480.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!dhBi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcbacef6-b0b7-4a1b-bec9-ffcaaa3dcc49_6720x4480.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Hinton and other Nobel Laureates in 2024. <a href="https://commons.wikimedia.org/wiki/User:Jenny8lee">Jennifer 8. Lee</a>, <a href="https://commons.wikimedia.org/wiki/File:Daron_Acemoglu,_Simon_Johnson,_James_A._Robinson,_David_Baker,_Demis_Hassabis,_John_Jumper,_and_Geoffrey_Hinton_at_2024_Nobel_Week_3.jpg">Wikimedia Commons</a>.</figcaption></figure></div><p>In a supplemental statement, Hinton praised OpenAI&#8217;s commitment to ensuring that AGI helps humanity. &#8220;I would like them to execute that mission instead of enriching their investors. I&#8217;m happy there is an effort to hold OpenAI to its mission that does not involve Elon Musk,&#8221; he wrote. 
Hart, who <a href="https://www.nobelprize.org/prizes/economic-sciences/2016/press-release/">won</a> the 2016 economics Nobel for his work in contract theory, was more blunt in his supplemental statement: &#8220;The proposed governance change is dangerous and should be resisted.&#8221;</p><h2>Contradictions</h2><p>The letter compiles a damning collection of statements from OpenAI's leadership that directly contradict its current push to abandon nonprofit control. These quotes paint a picture of an organization that chose its unique structure to safeguard humanity, and whose leaders repeatedly emphasized this commitment as central to their mission.</p><p>Back in 2015, Sam Altman <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.433688/gov.uscourts.cand.433688.32.2.pdf">emailed</a> Elon Musk proposing a structure where the tech &#8220;belongs to the world via some sort of nonprofit,&#8221; adding that OpenAI would &#8220;aggressively support all regulation.&#8221; (Since then, OpenAI has lobbied to <a href="https://time.com/6288245/openai-eu-lobbying-ai-act/">weaken</a> or <a href="https://www.documentcloud.org/documents/25054090-openai-formal-letter-of-opposition">kill</a> several AI laws.)</p><p>In 2017, Altman <a href="https://www.youtube.com/watch?v=nLMZothlRNM&amp;t=1458s">told</a> an audience in London: &#8220;We don&#8217;t ever want to be making decisions to benefit shareholders. The only people we want to be accountable to is humanity as a whole.&#8221;</p><p>In 2019, president Greg Brockman <a href="https://www.youtube.com/watch?feature=shared&amp;t=1898&amp;v=bIrEM2FbOLU">clarified</a> something subtle but important: &#8220;The true mission isn&#8217;t for OpenAI to build AGI. The true mission is for AGI to go well for humanity... 
our goal is to make sure it goes well for the world.&#8221;</p><p>Altman made clear that some money was not worth making: &#8220;There are things we wouldn't be willing to do no matter how much money they made,&#8221; he <a href="https://www.vox.com/2018/12/10/18134926/sam-altman-kara-swisher-recode-decode-live-mannys-podcast-transcript-facebook-zuckerberg-ethics#:~:text=At%20OpenAI%2C%20when,their%20shoes%20now.">told</a> Kara Swisher in 2018, &#8220;and we made this public so the public would hold us accountable.&#8221;</p><p>And in a 2020 interview, Altman <a href="https://www.youtube.com/watch?v=TzcJlKg2Rc0&amp;t=2734s">warned</a> that if OpenAI succeeded at building AGI, it might &#8220;capture the light cone of all future value in the universe.&#8221; That, he said, &#8220;is for sure not okay for one group of investors to have.&#8221;</p><p>(The <a href="https://en.wikipedia.org/wiki/Light_cone">light cone</a> refers to all of the universe that earth-originating life could theoretically affect, to give you a sense of Altman's ambitions.)</p><p>This wasn&#8217;t just rhetoric. 
When OpenAI launched its for-profit arm in 2019, it <a href="https://openai.com/index/openai-lp/">promised</a> that all employees and investors would sign contracts putting the nonprofit Charter first &#8212; &#8220;even at the expense of some or all of their financial stake.&#8221; The for-profit announcement also included the assurance: "Regardless of how the world evolves, we are committed &#8212; legally and personally &#8212; to our mission."</p><p>In his 2023 Congressional testimony, Altman <a href="https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Bio%20&amp;%20Testimony%20-%20Altman.pdf">highlighted</a> key safeguards in OpenAI's "unusual structure" that keeps it mission-focused: nonprofit control over the for-profit subsidiary, fiduciary duties to humanity rather than investors, a majority-independent board with no equity stakes, capped investor profits with residual value flowing to the nonprofit, and explicit reservation of AGI technologies for nonprofit governance.</p><p>The letter then warns that "These safeguards are now in jeopardy under the proposed restructuring," and goes through the status of each.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DaLo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf75a073-79e3-48a7-8b7c-d333dcd2a711_1484x888.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DaLo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf75a073-79e3-48a7-8b7c-d333dcd2a711_1484x888.png 424w, 
https://substackcdn.com/image/fetch/$s_!DaLo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf75a073-79e3-48a7-8b7c-d333dcd2a711_1484x888.png 848w, https://substackcdn.com/image/fetch/$s_!DaLo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf75a073-79e3-48a7-8b7c-d333dcd2a711_1484x888.png 1272w, https://substackcdn.com/image/fetch/$s_!DaLo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf75a073-79e3-48a7-8b7c-d333dcd2a711_1484x888.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DaLo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf75a073-79e3-48a7-8b7c-d333dcd2a711_1484x888.png" width="1456" height="871" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cf75a073-79e3-48a7-8b7c-d333dcd2a711_1484x888.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:871,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DaLo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf75a073-79e3-48a7-8b7c-d333dcd2a711_1484x888.png 424w, 
https://substackcdn.com/image/fetch/$s_!DaLo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf75a073-79e3-48a7-8b7c-d333dcd2a711_1484x888.png 848w, https://substackcdn.com/image/fetch/$s_!DaLo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf75a073-79e3-48a7-8b7c-d333dcd2a711_1484x888.png 1272w, https://substackcdn.com/image/fetch/$s_!DaLo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf75a073-79e3-48a7-8b7c-d333dcd2a711_1484x888.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h2>Justifications</h2><p>Taken together, these statements leave little room for interpretation. OpenAI deliberately chose a nonprofit-controlled structure with the explicit goal of safeguarding humanity from profit-driven AGI development &#8212; the very protection it now seeks to dismantle.</p><p>The letter surgically dissects OpenAI's justifications for abandoning its nonprofit governance structure, arguing they prioritize commercial competitiveness over the organization's core charitable mission.</p><p>OpenAI's primary rationale, stated in <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.433688/gov.uscourts.cand.433688.147.0.pdf#page=15">recent court filings</a>, is that its unusual structure makes it harder to attract investment and talent. But the letter authors counter that this is precisely the point &#8212; OpenAI was specifically designed to operate differently.</p><p>"Competitive advantage might be a relevant factor, but it is not a sufficient reason to restructure,&#8221; the letter argues. "OpenAI's charitable purpose is not to make money or capture market share." It points out that OpenAI's current structure intentionally accepts certain competitive disadvantages as the cost of prioritizing humanity's interests over profits.</p><p>These disadvantages might be overrated, suggests signatory <a href="https://www.chicagobooth.edu/faculty/directory/z/luigi-zingales">Luigi Zingales</a> of the University of Chicago Booth School of Business in his supplemental statement:</p><blockquote><p>The current structure, which caps returns at 100x the capital invested, does not really constrain its ability to raise funds. So, what is the need to transfer the control to a for-profit? 
To overrule the mandate that AI should be used for the benefit of humanity.</p></blockquote><p>These profit caps were designed to ensure that OpenAI could redistribute exorbitant returns in a world where the company actually builds AGI and puts much of the world out of work. They're one of the things <a href="https://www.reuters.com/technology/artificial-intelligence/openai-remove-non-profit-control-give-sam-altman-equity-sources-say-2024-09-25/">reported to be</a> on the chopping block as part of the proposed restructuring.</p><p>The letter authors write:</p><blockquote><p>OpenAI might respond that a competitive advantage inherently advances its mission, but that argument is an implicit comparison of OpenAI and its competitors: that humanity would be better off if OpenAI builds AGI before competing companies. Based on OpenAI&#8217;s recent track record, this argument is unlikely to be convincing&#8230;</p></blockquote><p>They then list reports of OpenAI's <a href="https://fortune.com/2024/05/21/openai-superalignment-20-compute-commitment-never-fulfilled-sutskever-leike-altman-brockman-murati/">broken promises</a>, <a href="https://www.ft.com/content/8253b66e-ade7-4d1f-993b-2d0779c7e7d8">rushed</a> <a href="https://www.washingtonpost.com/technology/2024/07/12/openai-ai-safety-regulation-gpt4/">safety testing</a>, <a href="https://time.com/6288245/openai-eu-lobbying-ai-act/">doublespeak</a> <a href="https://openai.com/index/planning-for-agi-and-beyond/">and</a> <a href="https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-dollars-to-reshape-business-of-chips-and-ai-89ab3db0?st=42k5KT&amp;reflink=desktopwebshare_permalink">hypocrisy</a>, and <a href="https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release">coercive non-disparagement agreements</a>.</p><p>Moreover, the letter highlights that OpenAI has not adequately explained why removing nonprofit control is necessary to address its stated issues. 
While the company claims investors demand simplification of its "capital structure," it fails to demonstrate why this requires eliminating nonprofit oversight rather than more targeted adjustments.</p><p>The authors conclude that OpenAI's <a href="https://openai.com/index/nonprofit-commission-guidance/">plans</a> for "one of the best resourced non-profits in history" miss the point entirely. This isn&#8217;t about building a well-funded foundation for generic good works &#8212; it&#8217;s about retaining governance over AGI itself. The letter bluntly states that OpenAI "should not be permitted to sell out its mission."</p><h2>"No sale price can compensate"</h2><p>OpenAI <a href="https://openai.com/index/nonprofit-commission-guidance/">claims</a> the restructuring would create "one of the best resourced nonprofits in history" that <a href="https://openai.com/index/why-our-structure-must-evolve-to-advance-our-mission/">would</a> "pursue charitable initiatives in sectors such as health care, education, and science."</p><p>But the letter's authors argue this would mark a fundamental shift in OpenAI&#8217;s charitable purpose &#8212; one that, under nonprofit law, can only be altered in <a href="https://garrisonlovely.substack.com/i/158492941/its-hard-to-change-your-purpose">exceptional circumstances</a>.</p><p>Perhaps the most distinctive argument in the letter is that, given OpenAI's mission to ensure that AGI benefits all of humanity, no sale price could adequately compensate the nonprofit for what it would be giving up &#8212; control over the company that is arguably closest to building AGI.</p><p>The authors essentially say: this isn&#8217;t a debate about fair market value. The whole premise of the nonprofit is that AGI governance isn&#8217;t something you can price &#8212; and that no board serving the public could ever justify giving it up for cash, no matter the number.</p><p>Actually, I can think of one outcome that could potentially satisfy this condition. 
As strong a claim as OpenAI has to leadership of the AI industry, it's only one company. If it slows down for the sake of safety, others could overtake it. So perhaps the OpenAI nonprofit would better advance its mission if it were spun out into a truly independent entity with $150 billion and the mission to lobby for binding domestic and international safeguards on advanced AI systems.</p><p>If this sounds far-fetched, then so should the idea that the nonprofit board that initiated this conversion is genuinely representing the public interest.</p><h2>An institutional test</h2><p>The question of whether this restructuring is legal is now before two state AGs. But whether it goes forward may ultimately come down to politics. California and Delaware's AGs may be officers of the law, but they're also elected by the people.</p><p>Delaware AG Kathleen Jennings has already <a href="https://lawprofessors.typepad.com/files/delawareagamicusbrief-musk-v.-altman.pdf">publicly stated</a> that her office is investigating the proposal, and CalMatters <a href="https://calmatters.org/economy/technology/2025/01/openai-investigation-california/">reported</a> in January that California AG Rob Bonta's office was doing the same.</p><p>And in a <a href="https://www.documentcloud.org/documents/25906226-dkt-157-1-exhibit-a-april-14-2025-relator-status/">letter</a> sent last week to Musk's legal team, Bonta's office denied Musk's request for "relator status," which would have allowed him to sue OpenAI in the name of the State of California. 
The AG's office specifically noted that Musk appeared to have personal and financial interests in OpenAI's assets through his competing company, xAI.</p><p>Musk's suit makes legal arguments, but can't be fully separated from the man himself, whose far-right turn has made him politically toxic in Democratic circles (both the AGs in question are Democrats in deep blue states).</p><p>By rejecting Musk's bid to insert himself as California's representative, Bonta signaled that his office intends to maintain direct control over any potential enforcement actions regarding OpenAI's charitable purpose.</p><p>The letter&#8217;s authors essentially argue that this moment is a test &#8212; not just for OpenAI&#8217;s board, but for the institutions meant to safeguard the public interest. The attorneys general of California and Delaware have both indicated they&#8217;re paying attention.</p><p>Whether they act may determine if OpenAI&#8217;s fiduciary duty to humanity is a real constraint &#8212; or just another marketing line sacrificed to investor pressure.</p><p><em>Edited by <a href="https://www.sidmahanta.com/bio-contact">Sid Mahanta</a>.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p><h2>Appendix: Quotes from OpenAI&#8217;s leaders over the years</h2><p>The letter includes some great quotes from Altman and Brockman &#8212; some of which I hadn&#8217;t actually seen before. I&#8217;m including them here with links for convenience.</p><p><a href="https://www.youtube.com/watch?v=nLMZothlRNM&amp;t=1458s">Sam Altman in 2017</a>: "That&#8217;s why we&#8217;re a nonprofit: we don&#8217;t ever want to be making decisions to benefit shareholders. 
The only people we want to be accountable to is humanity as a whole."</p><p><a href="https://www.youtube.com/watch?feature=shared&amp;t=1898&amp;v=bIrEM2FbOLU">Greg Brockman in 2019</a>: "The true mission isn&#8217;t for OpenAI to build AGI. The true mission is for AGI to go well for humanity&#8230; our goal isn&#8217;t to be the ones to build it, our goal is to make sure it goes well for the world."</p><p><a href="https://www.youtube.com/watch?v=TzcJlKg2Rc0&amp;t=2734s">Altman in 2020</a>: "The problem with AGI specifically is that if we&#8217;re successful, and we tried, maybe we could capture the light cone of all future value in the universe. And that is for sure not okay for one group of investors to have."</p><p><a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.433688/gov.uscourts.cand.433688.32.2.pdf">Altman in a 2015 email to Musk</a>: "we could structure it so that the tech belongs to the world via some sort of nonprofit but the people working on it get startup-like compensation if it works. 
Obviously we&#8217;d comply with/aggressively support all regulation"</p><p><a href="https://openai.com/index/openai-lp/">OpenAI LP 2019 announcement</a>:</p><blockquote><p>We&#8217;ve designed OpenAI LP to put our overall mission&#8212;ensuring the creation and adoption of safe and beneficial AGI&#8212;ahead of generating returns for investors&#8230; Regardless of how the world evolves, we are committed&#8212;legally and personally&#8212;to our mission.</p><p>&#8230;</p><p>All investors and employees sign agreements that OpenAI LP&#8217;s obligation to the Charter always comes first, even at the expense of some or all of their financial stake.</p></blockquote><p><a href="https://podcasts.apple.com/us/podcast/hibt-lab-openai-sam-altman/id1150510297?i=1000580232536">Altman on the hybrid model in 2022</a> (at 35:16):</p><blockquote><p>We wanted to preserve as much as we could of the specialness of the nonprofit approach, the benefit sharing, the governance, what I consider maybe to be most important of all, which is the safety features and incentives.</p></blockquote><p><a href="https://www.govinfo.gov/content/pkg/CHRG-115hhrg30877/pdf/CHRG-115hhrg30877.pdf#page=75">Brockman before House subcommittee hearing in 2018</a>:</p><blockquote><p>On the ethical front, that&#8217;s really core to my organization. That&#8217;s the reason we exist . . . when it comes to the benefits of who owns this technology? Who gets it? You know, where did the dollars go? We think it belongs to everyone.</p></blockquote><p><a href="https://www.vox.com/2018/12/10/18134926/sam-altman-kara-swisher-recode-decode-live-mannys-podcast-transcript-facebook-zuckerberg-ethics#:~:text=At%20OpenAI%2C%20when,their%20shoes%20now.">Altman in 2018</a>:</p><blockquote><p>At OpenAI when we wrote our charter, we talked about the scenarios where we would or wouldn&#8217;t make money. And&#8230; the things we wouldn&#8217;t be willing to do no matter how much money they made. 
And we made this public so the public would hold us accountable to that. And I think that&#8217;s really important.</p></blockquote>]]></content:encoded></item><item><title><![CDATA[Inside OpenAI's Controversial Plan to Abandon its Nonprofit Roots]]></title><description><![CDATA[Former employees, legal experts, and philanthropic leaders challenge the company's effort to shed nonprofit control]]></description><link>https://www.obsolete.pub/p/inside-openais-controversial-plan</link><guid isPermaLink="false">https://www.obsolete.pub/p/inside-openais-controversial-plan</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Thu, 17 Apr 2025 18:05:13 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!vYrI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8202919-c0e4-41f6-ba3b-e87b5f5e1320_700x466.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vYrI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8202919-c0e4-41f6-ba3b-e87b5f5e1320_700x466.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vYrI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8202919-c0e4-41f6-ba3b-e87b5f5e1320_700x466.jpeg 424w, https://substackcdn.com/image/fetch/$s_!vYrI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8202919-c0e4-41f6-ba3b-e87b5f5e1320_700x466.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!vYrI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8202919-c0e4-41f6-ba3b-e87b5f5e1320_700x466.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!vYrI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8202919-c0e4-41f6-ba3b-e87b5f5e1320_700x466.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vYrI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8202919-c0e4-41f6-ba3b-e87b5f5e1320_700x466.jpeg" width="700" height="466" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d8202919-c0e4-41f6-ba3b-e87b5f5e1320_700x466.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:466,&quot;width&quot;:700,&quot;resizeWidth&quot;:700,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Photograph of OpenAI branded sign in their office space&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="Photograph of OpenAI branded sign in their office space" title="Photograph of OpenAI branded sign in their office space" srcset="https://substackcdn.com/image/fetch/$s_!vYrI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8202919-c0e4-41f6-ba3b-e87b5f5e1320_700x466.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!vYrI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8202919-c0e4-41f6-ba3b-e87b5f5e1320_700x466.jpeg 848w, https://substackcdn.com/image/fetch/$s_!vYrI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8202919-c0e4-41f6-ba3b-e87b5f5e1320_700x466.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!vYrI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8202919-c0e4-41f6-ba3b-e87b5f5e1320_700x466.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">OpenAI&#8217;s logo hanging in its office. <a href="https://christiehemmklok.com/">Christie Hemm Klok</a>. <a href="https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/">MIT Tech Review</a>, 2020. </figcaption></figure></div><p><em>A paywalled version of this piece was also published as a <a href="https://www.ai-supremacy.com/p/can-humanity-survive-openai">guest post</a> for Michael Spencer&#8217;s great Substack, <a href="https://www.ai-supremacy.com/">AI Supremacy</a> (along with some useful context and very kind words about my work from Michael). Here is the piece in full. </em></p><p>Earlier this month, OpenAI <a href="https://openai.com/index/nonprofit-commission-guidance/">announced</a> that it aspires to build "the best-equipped nonprofit the world has ever seen" and was convening a commission to help determine how to use its "potentially historic financial resources."</p><p>But critics view this new commission as a transparent attempt to placate opposition to its controversial plan to restructure fully as a for-profit &#8212; one that fails to address the fundamental legal issues at stake.</p><p>OpenAI is currently a $300 billion for-profit company governed by a nonprofit board. 
However, after an earlier iteration of that board briefly fired CEO Sam Altman in November 2023, investors <a href="https://www.wsj.com/tech/ai/openais-latest-funding-round-comes-with-a-20-billion-catch-1e47d27d?st=hUpy9J&amp;reflink=desktopwebshare_permalink">reportedly</a> began demanding that the company shed its quasi-nonprofit status.</p><p>"The story of OpenAI's history is trying to balance the desires to raise capital and build the tech and stay true to its mission," a former OpenAI employee told me. The current move, they say, is an attempt to "separate these things" into a purely commercial entity focused on profit and tech, alongside a separate entity doing "altruistic philanthropic stuff."</p><p>"That's wild on a number of levels because the entire philanthropic theory of change here was: we're going to put guardrails on profit motives so we can develop this tech safely," the former employee says.</p><h3>Legal hurdles</h3><p>The for-profit conversion faces significant unresolved legal challenges, including a lawsuit from Elon Musk <a href="https://garrisonlovely.substack.com/p/what-the-headlines-miss-about-the">arguing</a> that his $44 million donation was contingent on OpenAI remaining a nonprofit and that the conversion would violate its founding charitable purpose. The case will go to trial this fall.
The conversion can also be challenged by the <a href="https://calmatters.org/economy/technology/2025/01/openai-investigation-california/">California</a> and <a href="https://lawprofessors.typepad.com/files/delawareagamicusbrief-musk-v.-altman.pdf">Delaware</a> Attorneys General (AGs), who are reportedly each looking into the case.</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/YsV41/4/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eedc378b-052b-4b60-979b-48164cfe07b5_1260x660.png&quot;,&quot;thumbnail_url_full&quot;:&quot;&quot;,&quot;height&quot;:706,&quot;title&quot;:&quot;OpenAI Timeline&quot;,&quot;description&quot;:&quot;A selection of major events in the history of OpenAI&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/YsV41/4/" width="730" height="706" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p>Musk's suit, OpenAI's gargantuan valuation, and the unprecedented nature of the conversion attempt appear to have attracted scrutiny.</p><p><a href="https://news.bloomberglaw.com/business-and-practice/california-bill-would-block-openai-from-for-profit-conversion">Without mentioning</a> OpenAI explicitly, California Assembly Member Diane Papan introduced a bill in February that would have blocked the conversion. 
However, the legislation was <a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260AB501&amp;search_keywords=profit">amended</a> without explanation earlier this month to instead focus on aircraft liens. Papan's office has not replied to a request for comment.</p><p>OpenAI <a href="https://www.documentcloud.org/documents/25893381-20250409-openai-defendants-counterclaims-answer-and-defenses/?mode=document">countersued</a> Musk last week, asking a federal judge to halt what it called a "relentless campaign" of harassment designed to harm the company.</p><p>Adding fuel to the fire, a group of twelve former OpenAI employees, represented by Harvard Law Professor Lawrence Lessig, <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.433688/gov.uscourts.cand.433688.152.0.pdf">filed</a> an amicus brief on Friday supporting Musk's challenge to the conversion.</p><p>The brief argues that OpenAI's nonprofit structure wasn't merely administrative &#8212; it was fundamental to the organization's mission and key to recruiting talent who were told they were building AI that would benefit humanity rather than shareholders. Former employees contend that removing the nonprofit's controlling role would constitute a betrayal of the trust that drew many of them to the company in the first place.</p><p>In an accompanying declaration, Todor Markov stated he left OpenAI after losing trust in leadership, concluding that the organization's Charter "had been used as a smokescreen, something to attract and retain idealistic talent while providing no real check on OpenAI&#8217;s growth and its pursuit of AGI."
The proposed restructuring plan, he writes, "has only served to further convince me that OpenAI&#8217;s Charter and mission were used all along as a facade to manipulate its workforce and the public."</p><p>Markov reiterated this point in a written statement to me:</p><blockquote><p>The fundamental question about the OpenAI corporate restructuring is whether the nonprofit will maintain legal control over the for profit. The announcement of the OpenAI commission does not address that question in any way, and so does nothing to alleviate the substantial concerns we raise in our amicus brief.</p></blockquote><p>Fearing that billions in charitable assets could be transferred to private hands without sufficient oversight, a coalition of dozens of California-based nonprofits began organizing and <a href="https://sff.org/coalition-requests-attorney-general-action-to-protect-openais-charitable-assets/">urged</a> the state's AG in January to investigate the OpenAI conversion, seeking transparency about the valuation process and demanding assurance that the nonprofit will remain truly independent from commercial interests.</p><p>One of the coalition leaders, <a href="http://www.latinoprosperity.org/">LatinoProsperity</a> CEO Orson Aguilar, says that the commission announcement reminded him of 2008, "when the financial institutions that helped crash the economy decided that the solution was teaching everyone else financial literacy."</p><p>OpenAI's original nonprofit mission was, and, at least for now, remains, to ensure AGI benefits all of humanity. This purpose is enshrined in its <a href="https://openai.com/charter/">Charter</a>, which defines AGI as "a highly autonomous system that outperforms humans at most economically valuable work." In 2019, when the company spun up a for-profit arm to raise the billions needed to train increasingly expensive AI models, it gave the nonprofit board ultimate control. 
That board has a fiduciary duty to humanity &#8212; not shareholders.</p><p>OpenAI has not replied to multiple requests for comment.</p><p>The nonprofit's control over OpenAI became global news in November 2023, when the board dramatically exercised its authority by firing CEO Sam Altman &#8212; cryptically citing his failure to be "consistently candid." Altman orchestrated a swift comeback with the <a href="https://www.wsj.com/tech/ai/altman-firing-openai-520a3a8c">help</a> of Microsoft and a revolt of the employees (whose ability to sell billions in equity hung in the balance).</p><p>The <em>Wall Street Journal</em> recently shed new light on the firing, <a href="https://www.wsj.com/tech/ai/the-real-story-behind-sam-altman-firing-from-openai-efd51a5d?st=Lpm9pc&amp;reflink=desktopwebshare_permalink">reporting</a> that OpenAI executives collected dozens of examples of Altman's "alleged lies and other toxic behavior, largely backed up by screenshots," such as falsely saying the legal department approved a release without safety testing.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OL4P!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4abdb216-947e-466c-ab3b-414da261be79_1280x1134.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OL4P!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4abdb216-947e-466c-ab3b-414da261be79_1280x1134.jpeg 424w, https://substackcdn.com/image/fetch/$s_!OL4P!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4abdb216-947e-466c-ab3b-414da261be79_1280x1134.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!OL4P!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4abdb216-947e-466c-ab3b-414da261be79_1280x1134.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!OL4P!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4abdb216-947e-466c-ab3b-414da261be79_1280x1134.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OL4P!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4abdb216-947e-466c-ab3b-414da261be79_1280x1134.jpeg" width="1280" height="1134" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4abdb216-947e-466c-ab3b-414da261be79_1280x1134.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1134,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Illustration of Peter Thiel and Sam Altman eating in the Arts District.&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Illustration of Peter Thiel and Sam Altman eating in the Arts District." title="Illustration of Peter Thiel and Sam Altman eating in the Arts District." 
srcset="https://substackcdn.com/image/fetch/$s_!OL4P!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4abdb216-947e-466c-ab3b-414da261be79_1280x1134.jpeg 424w, https://substackcdn.com/image/fetch/$s_!OL4P!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4abdb216-947e-466c-ab3b-414da261be79_1280x1134.jpeg 848w, https://substackcdn.com/image/fetch/$s_!OL4P!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4abdb216-947e-466c-ab3b-414da261be79_1280x1134.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!OL4P!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4abdb216-947e-466c-ab3b-414da261be79_1280x1134.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Illustration: <a href="https://rappart.com/artists/jan-feindt/">Jan Feindt</a>. <a href="https://www.wsj.com/tech/ai/the-real-story-behind-sam-altman-firing-from-openai-efd51a5d?st=Lpm9pc&amp;reflink=desktopwebshare_permalink">Wall Street Journal</a>, 2025.</figcaption></figure></div><p>The ouster was brief, but still served as a potent reminder to investors that the nonprofit board was, at least formally, in control and that its fiduciary duty was to all of humanity &#8212; not them.</p><p><a href="https://law.ucla.edu/faculty/faculty-profiles/rose-chan-loui">Rose Chan Loui</a>, founding executive director of the Lowell Milken Center on Philanthropy and Nonprofits at UCLA Law School, says that OpenAI's proposed commission "confirms their intent to make this nonprofit a typical corporate foundation." She continues, "Now can it do a lot of good? Yes, absolutely. But it's still, from our perspective, abandonment of their original purpose. Either that or it's a big stretch of their purpose."</p><p>Her colleague <a href="https://law.ucla.edu/faculty/faculty-profiles/michael-dorff">Michael Dorff</a>, executive director of the Lowell Milken Institute for Business, Law, and Policy, also at UCLA Law, echoed this skepticism. "I'm trying not to be terribly cynical," he told me. "On the one hand, it's commendable that OpenAI is thinking of ways to use their tech to fulfill their nonprofit mission.
But I don't think that should have anything to do with whether the nonprofit can abandon its mission."</p><h3>"Pandering"</h3><p>OpenAI's announcement describes a commission that will help it understand "the most urgent and intractable problems nonprofits face" and incorporate feedback from leaders in health, science, education, and public services &#8212; "particularly within OpenAI's home state of California."</p><p>That last detail isn't subtle, and critics see it as telling.</p><p>OpenAI is "pandering," the former employee says.</p><p>"Most of the nonprofit and philanthropic world doesn't care about AI safety. And presumably the California AG and the people who he cares about don't know anything about AI safety or the actual premise of OpenAI's purpose and mission," the former employee says. The specific mention of California in the plan for a "wildly well-funded science and education nonprofit," they say, makes "the pandering pretty obvious. So it feels like a bribe to California, to the California nonprofit sector &#8212; the sector that might be up in arms about this nonprofit conversion."</p><p>On Tuesday, OpenAI <a href="https://openai.com/index/nonprofit-commission-advisors/">announced</a> the advisors for this commission. The group will be convened by <a href="https://deltacouncil.ca.gov/council-members#:~:text=Daniel%20Zingale%2C%20of,for%20AIDS%20Action.">Daniel Zingale</a>, a former senior advisor to California governors Gavin Newsom and Arnold Schwarzenegger. The advisors include iconic labor leader and civil rights activist <a href="https://en.wikipedia.org/wiki/Dolores_Huerta">Dolores Huerta</a>, who cofounded United Farm Workers with Cesar Chavez; <a href="https://en.wikipedia.org/wiki/Monica_C._Lozano">Monica Lozano</a>, former CEO of the largest Spanish-language newspaper in the US; <a href="https://www.calendow.org/annual-report/dr-robert-k-ross-retires/">Dr. Robert K.
Ross</a>, former president and CEO of The California Endowment, an influential statewide healthcare foundation; and <a href="https://openai.com/index/nonprofit-commission-advisors/#:~:text=Jack%20Oliver%20is,Leighton%20Paisner%20LLP.">Jack Oliver</a>, a lawyer and private equity partner who previously co-chaired Bono's ONE Campaign.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UJtY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225c7f6e-aca4-4cec-9082-c47527d30af5_4096x2731.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UJtY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225c7f6e-aca4-4cec-9082-c47527d30af5_4096x2731.jpeg 424w, https://substackcdn.com/image/fetch/$s_!UJtY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225c7f6e-aca4-4cec-9082-c47527d30af5_4096x2731.jpeg 848w, https://substackcdn.com/image/fetch/$s_!UJtY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225c7f6e-aca4-4cec-9082-c47527d30af5_4096x2731.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!UJtY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225c7f6e-aca4-4cec-9082-c47527d30af5_4096x2731.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UJtY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225c7f6e-aca4-4cec-9082-c47527d30af5_4096x2731.jpeg" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/225c7f6e-aca4-4cec-9082-c47527d30af5_4096x2731.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UJtY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225c7f6e-aca4-4cec-9082-c47527d30af5_4096x2731.jpeg 424w, https://substackcdn.com/image/fetch/$s_!UJtY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225c7f6e-aca4-4cec-9082-c47527d30af5_4096x2731.jpeg 848w, https://substackcdn.com/image/fetch/$s_!UJtY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225c7f6e-aca4-4cec-9082-c47527d30af5_4096x2731.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!UJtY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225c7f6e-aca4-4cec-9082-c47527d30af5_4096x2731.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://commons.wikimedia.org/wiki/File:Dolores_Huerta_and_Kamala_Harris.jpg">Dolores Huerta and Kamala Harris in 2017</a></figcaption></figure></div><p>None of the commissioners immediately replied to requests for comment.</p><p>The commission's stated goal is to gather input on how OpenAI's philanthropy can tackle systemic issues, incorporating feedback from leaders in health, science, education, and public services &#8212; with a particular focus on California. The advisors are tasked with submitting findings to the OpenAI board within 90 days.</p><p>The advisors have no clear experience in AI governance but do have deep connections to California and backgrounds in civil rights, education, and community advocacy. 
Their selection does little to dispel the criticisms that the commission is primarily an attempt to smooth the path for this conversion, particularly with the California AG.</p><p>The effort, at least with Aguilar of LatinoProsperity, seems to be falling flat. "Pandering is the right word," he told me prior to the announcement of the advisors. OpenAI's commission is "a reaction to the work of our campaign and others to try to make it seem as though they're listening, but they're distracting from the real questions," he says.</p><p>Those questions, according to Aguilar: Is OpenAI a truly independent nonprofit? And what is the real value of what it has right now?</p><p>In a follow-up email sent after the commission members were announced, Aguilar was even more pointed: "As impressive as OpenAI's advisory commission members may be, let's call this what it is &#8212; a calculated PR stunt to distract us from the real issue: OpenAI funneling nonprofit assets into private pockets."</p><h3>Conflicts of interest</h3><p>OpenAI's nonprofit board has a fiduciary duty to represent the interests of the public and the nonprofit, which includes ensuring that it is fairly compensated for whatever it gives up in the conversion.</p><p>In an email, Aguilar writes, "The fundamental question remains: How can a nonprofit commission maintain true independence when housed within an organization with significant commercial pressures?"</p><p>The IRS <a href="https://www.irs.gov/pub/irs-soi/11resconsunshine.pdf">requires</a> charities to publicly disclose conflicts of interest of their board members, and views a lack of majority independence as a significant governance risk factor that may invite greater scrutiny. Chan Loui says that the way OpenAI defined independence when it started the for-profit in 2019 was based on whether the director has equity in the company.
However, she says, the law also looks at whether you have financial interests in the organization's "partners," such as suppliers or customers.</p><p>OpenAI <a href="https://openai.com/our-structure/">lists</a> ten "independent" nonprofit board members on its site, including CEO Sam Altman. However, at least seven of these directors or their spouses have significant investments in companies that already do business with OpenAI, according to SEC filings, news reports, and Crunchbase data. This includes the board chair, Bret Taylor, <a href="https://www.cnbc.com/2024/10/28/bret-taylors-ai-startup-sierra-valued-at-4point5-billion-in-funding.html">who founded</a> the $4.5 billion AI startup, Sierra, which is a customer of OpenAI (Taylor has <a href="https://fortune.com/2024/02/13/openai-chair-bret-taylor-interview-promises-recuse-whenever-potential-overlap-ai-startup-sierra/">publicly committed</a> to recuse himself from decisions in which he's conflicted).</p><h3>The wrong question?</h3><p>A lot of the conversation around OpenAI's restructuring has focused on whether the nonprofit will be fairly compensated for what it's giving up &#8212; namely control over the for-profit and the money that exceeds extremely high profit caps.</p><p>In February, Musk <a href="https://www.wsj.com/tech/elon-musk-openai-bid-4af12827?st=KtKTuG&amp;reflink=desktopwebshare_permalink">offered</a> to buy the nonprofit's assets for $97.4 billion, in a <a href="https://garrisonlovely.substack.com/p/why-did-elon-musk-just-offer-to-buy">likely attempt</a> to derail the conversion or drive up the price the for-profit has to pay to the nonprofit to give up its control.</p><p>The nonprofit coalition has emphasized the importance of adequate compensation as well. 
Aguilar writes to me, "It's crucial that the Attorney General provide a fair market valuation of OpenAI's charitable assets to ensure proper oversight and protection of public interest."</p><p>However, others think this focus and the commission announcement miss the fundamental legal question: Does OpenAI's restructuring advance its charitable purpose?</p><p>The former employee says, "I would like the entire question about fair market value to be taken off the table because I think that's just the wrong question" because it "treats this as a normal corporate transaction where fair market value is what matters."</p><p>Given OpenAI's purpose is to ensure AGI is built safely, they ask, "What better position could you be in than literally controlling the company on the brink of building AGI? What amount of money could you get in the transaction to put you in a better position to realize that mission?"</p><p>They offer an analogy:</p><blockquote><p>Imagine you are a nonprofit whose mission is to ensure nuclear technology benefits humanity. And you literally have a controlling interest in the Manhattan Project. And it's 1943. For what amount of money should you sell that interest?</p></blockquote><p>Dorff agrees, stating plainly, "I don't see any amount of money that would allow the nonprofit to better pursue its mission."
After all, he notes, it currently controls the market leader in the AI space.</p><p>That market leadership was recently underscored by OpenAI's Wednesday <a href="https://openai.com/index/introducing-o3-and-o4-mini/">release</a> of o3, its latest and most capable "reasoning" model to date.</p><p>OpenAI says that o3 sets new state-of-the-art performance on difficult benchmarks for coding, math, and science, significantly improving on its predecessor, o1.</p><p>While OpenAI asserts the model remains below the "High" risk threshold defined in its Preparedness Framework, the relentless push toward more powerful and autonomous systems highlights the immense potential value and risk embodied in the technology the nonprofit currently oversees &#8212; and the very control it is being asked to relinquish.</p><p>Chan Loui calls the nonprofit's position "priceless." Its control of the company goes beyond what typical watchdog organizations can do, she says. The conversion is not, in her view, "just an interpretation of how to fulfill purpose," but rather "a change of purpose." "Under the law, they would need to go to court and say we have a basis for changing our purpose."</p><p>Dorff says, "I haven't seen anything remotely close to a justification for" a change in purpose. "It's a very steep burden to show that a nonprofit's mission is no longer viable."</p><p>OpenAI has benefited a lot from its nonprofit status.
Chan Loui notes that the <a href="https://www.lesswrong.com/posts/5jjk4CDnj9tA7ugxr/openai-email-archives-from-musk-v-altman-and-openai-blog#:~:text=everyone%20feels%20great%2C%20saying%20stuff%20like%20%22bring%20on%20the%20deepmind%20offers%2C%20they%20unfortunately%20dont%20have%20%27do%20the%20right%20thing%27%20on%20their%20side%22">emails</a> released as a result of the Musk lawsuit "demonstrate that their reasoning really was driven by recruiting needs" &#8212; a point supported by the ex-employee amicus brief.</p><p>"That was really the main benefit of going out there and saying, 'We're a nonprofit. We really care about developing AI safely and for the benefit of humanity,'" she says. "You can't just abandon your purpose now that you are in this position."</p><h3>Safety shakeups</h3><p>The conversion attempt comes amidst a year of headline-generating departures of OpenAI leadership and safety staff.</p><p>On Tuesday, I <a href="https://garrisonlovely.substack.com/p/breaking-top-openai-catastrophic">reported</a> in Obsolete that Joaquin Qui&#241;onero Candela had quietly stepped down from his role leading the team focused on catastrophic risks, less than nine months after the previous lead was reassigned without an announcement.
Candela announced the move on LinkedIn, describing it as a transition to an "intern" role focused on healthcare applications.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7_SJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7_SJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7_SJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7_SJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7_SJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7_SJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg" width="476" height="552" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:552,&quot;width&quot;:476,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;No alternative text description for this image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="No alternative text description for this image" title="No alternative text description for this image" srcset="https://substackcdn.com/image/fetch/$s_!7_SJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7_SJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7_SJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7_SJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" 
stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Candela&#8217;s new swag</figcaption></figure></div><p>An OpenAI spokesperson said that safety governance is now consolidated under a Safety Advisory Group (SAG) chaired by <a href="https://www.linkedin.com/in/sandhini-agarwal/">Sandhini Agarwal</a> and preparedness work is distributed across teams.</p><p>This marks yet another significant shakeup in OpenAI's safety leadership following a year of high-profile exits &#8212; including cofounders John Schulman and Ilya Sutskever, safety systems lead Lilian Weng, Superalignment co-lead Jan Leike, and Senior Advisor for AGI readiness <a href="https://garrisonlovely.substack.com/p/end-of-an-era-openais-agi-readiness">Miles Brundage</a> &#8212; and the disbanding of both the Superalignment and AGI Readiness teams.</p><p>And it comes amidst recent reports that OpenAI dramatically <a 
href="https://www.ft.com/content/8253b66e-ade7-4d1f-993b-2d0779c7e7d8">reduced safety testing times</a> and released powerful new models like DeepResearch and GPT-4.1 <a href="https://techcrunch.com/2025/04/15/openai-ships-gpt-4-1-without-a-safety-report/">without promised safety reports</a>, raising further doubts about the company's founding commitment to ensure that AGI is built safely.</p><h3>The path forward</h3><p>OpenAI's announcement states that commission members will submit insights to the board within 90 days. The board will "consider these insights in its ongoing work to evolve the OpenAI nonprofit well before the end of 2025."</p><p>That timeline is significant &#8212; the <em>Wall Street Journal</em> recently <a href="https://www.wsj.com/tech/ai/openais-latest-funding-round-comes-with-a-20-billion-catch-1e47d27d">reported</a> that if OpenAI fails to convert by the end of 2025, it will have to return $20 billion of the $40 billion it <a href="https://www.cnbc.com/2025/03/31/openai-closes-40-billion-in-funding-the-largest-private-fundraise-in-history-softbank-chatgpt.html">recently raised</a> in a fundraising round valuing the company at $300 billion. 
The company's $6.6 billion investment from October <a href="https://www.businessinsider.com/openai-deadline-to-become-for-profit-or-return-investor-money-2024-10">carries</a> a similar condition, requiring conversion by October 2026 to avoid potential investor clawbacks with ten percent interest.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Zt-u!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feff2f6ac-f1a4-4063-8f37-e34b930256df_700x467.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Zt-u!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feff2f6ac-f1a4-4063-8f37-e34b930256df_700x467.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Zt-u!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feff2f6ac-f1a4-4063-8f37-e34b930256df_700x467.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Zt-u!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feff2f6ac-f1a4-4063-8f37-e34b930256df_700x467.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Zt-u!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feff2f6ac-f1a4-4063-8f37-e34b930256df_700x467.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Zt-u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feff2f6ac-f1a4-4063-8f37-e34b930256df_700x467.jpeg" width="700" height="467" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eff2f6ac-f1a4-4063-8f37-e34b930256df_700x467.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:467,&quot;width&quot;:700,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Masayoshi Son and Sam Altman at an AI business event.&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Masayoshi Son and Sam Altman at an AI business event." title="Masayoshi Son and Sam Altman at an AI business event." srcset="https://substackcdn.com/image/fetch/$s_!Zt-u!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feff2f6ac-f1a4-4063-8f37-e34b930256df_700x467.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Zt-u!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feff2f6ac-f1a4-4063-8f37-e34b930256df_700x467.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Zt-u!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feff2f6ac-f1a4-4063-8f37-e34b930256df_700x467.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Zt-u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feff2f6ac-f1a4-4063-8f37-e34b930256df_700x467.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 
20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">OpenAI investor and SoftBank CEO Masayoshi Son and Sam Altman at an event in Tokyo in February. Photo: <a href="https://widerimage.reuters.com/photographer/kim-kyung-hoon">Kim Kyung-Hoon</a>/<a href="https://www.reuters.com/technology/artificial-intelligence/softbank-openai-set-up-ai-japan-joint-venture-2025-02-03/">Reuters</a></figcaption></figure></div><p>Chan Loui called this "a very aggressive deadline" that regulatory authorities may struggle to accommodate. "You can't hurry the attorneys general. They don't really have a deadline," she said. 
"In California, you're to give notice of any significant transactions, which is what this proposed restructure is," she said, and "there's no deadline for when they decide."</p><p>She speculated that the deadlines might be an attempt to speed up the regulatory authorities.</p><p>According to Dorff, there are only two ways the nonprofit mission could be enforced: through Musk's lawsuit, which he says definitely won't see a verdict before the end of this year, or through action by the California or Delaware AGs.</p><p>"The only way to meet that deadline that I can see is for OpenAI to settle with everybody," Dorff said. "Elon would have to agree," and OpenAI "would need some kind of indication of satisfaction from the AGs."</p><p>Aguilar says he recently met with executives at OpenAI, along with former Housing and Urban Development Secretary Juli&#225;n Castro and fellow coalition leader Fred Blackwell. He says the OpenAI executives listened and were "very eager to get our thoughts on mission," but no details were shared. The meeting hasn't dissuaded the coalition, which Aguilar says has grown to around 50 organizations and recently <a href="https://sff.org/Offsite%20Media/Petition_Complaint-to-AG-re-Open-AIs-Violations-of-Charitable-Trust.pdf">filed</a> an administrative petition calling on the California AG to investigate the conversion.</p><p>So while OpenAI works to construct the appearance of a graceful transition, the legal challenges remain daunting. No matter how well-resourced the spun-out nonprofit might be, many experts say it cannot replace the core mission enshrined in the original nonprofit.</p><p>"If OpenAI wants to give many billions to science and education in California, that's great. I'm very supportive of that," the former employee says. "But that's not its mission.
They can't use that as an out in this situation."</p><p>OpenAI was founded as an alternative to the perils of letting commercial interests dictate the development of a potentially transformative &#8212; and <a href="https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf#page=61">dangerous</a> &#8212; technology. A decade later, as the AI race it helped supercharge reaches unprecedented intensity, OpenAI is looking to shed one of the last vestiges of that original intent.</p><p>The former employee put it simply: "I view what's happening now is: the profit motive's winning. They have given up on the altruistic angle. They've given up on trying to be the good guy, and they just want to win."</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Breaking: Top OpenAI Catastrophic Risk Official Steps Down Abruptly]]></title><description><![CDATA[It's the latest shakeup to the company's safety efforts]]></description><link>https://www.obsolete.pub/p/breaking-top-openai-catastrophic</link><guid isPermaLink="false">https://www.obsolete.pub/p/breaking-top-openai-catastrophic</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Tue, 15 Apr 2025 23:28:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b915aa7-6930-4bec-84e9-8c2cdc96290c_500x500.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>OpenAI's top safety staffer responsible for mitigating catastrophic risks quietly stepped down from the role weeks ago, according 
to a LinkedIn announcement posted yesterday.</p><p>Joaquin Qui&#241;onero Candela, who took over OpenAI's Preparedness team in July, <a href="https://www.linkedin.com/feed/update/urn:li:activity:7317606453635076097/">announced</a> on LinkedIn that he has taken on a new role at the company:</p><blockquote><p>I'm an intern! After 11 years since my last commit, I'm back to building. I first transitioned to management in 2009, and got more and more disconnected from code and hands-on work. Three weeks ago, I turned it all upside down, and became an intern in one of our awesome teams that's focused on healthcare applications of AI.</p></blockquote><p>Candela's LinkedIn bio now describes him as the "Former Head of Preparedness at OpenAI."</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p><p>An OpenAI spokesperson told Obsolete that Candela "was really closely involved in preparing the successor to the preparedness framework" and "will probably be involved in preparedness in some capacity" but is currently "focusing on different areas within the company that he's really excited about."</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7_SJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!7_SJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7_SJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7_SJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7_SJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7_SJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg" width="476" height="552" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:552,&quot;width&quot;:476,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;No alternative text description for this image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="No alternative text description for this image" title="No alternative text description for this image" 
srcset="https://substackcdn.com/image/fetch/$s_!7_SJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7_SJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7_SJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7_SJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58399b15-4f03-49d6-8ea3-ab1e990c756b_476x552.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Candela&#8217;s new swag</figcaption></figure></div><p>The spokesperson added that the company recently restructured its safety organization, consolidating "all governance under the Safety Advisory Group" (SAG) &#8212; a committee chaired by five-year OpenAI veteran <a href="https://www.linkedin.com/in/sandhini-agarwal/">Sandhini Agarwal</a>. The SAG uses a rotational leadership structure with one-year terms, designed, they said, to balance "continuity of knowledge and expertise" with "fresh and timely perspectives."</p><p>Meanwhile, OpenAI's preparedness work is now distributed across multiple teams, focused on things like capabilities, evaluations, and safety mitigations, the spokesperson said.</p><p>Candela's departure from the team comes amidst OpenAI's <a href="https://garrisonlovely.substack.com/p/what-the-headlines-miss-about-the">contentious attempt</a> to shed the last vestiges of nonprofit control and follows a <a href="https://time.com/6986711/openai-sam-altman-accusations-controversies-timeline/">string of scandals</a> and <a href="https://timesofindia.indiatimes.com/technology/tech-news/list-of-top-leaders-of-openai-who-departed-after-the-2023-attempt-to-oust-ceo-sam-altman/articleshow/113906417.cms">high profile exits</a> in the last year.</p><p>It also marks the second major shakeup in the Preparedness team&#8217;s short history. In July, OpenAI removed Aleksander M&#261;dry from his role as head of Preparedness &#8212; also without a public announcement. 
The Information <a href="https://www.theinformation.com/articles/openai-removes-ai-safety-leader-m-dry-a-onetime-ally-of-ceo-altman">reported</a> that the MIT professor was reassigned to work on AI reasoning just days before US senators sent a <a href="https://www.schatz.senate.gov/imo/media/doc/letter_to_openai.pdf">letter</a> to CEO Sam Altman regarding "emerging safety concerns" at the company.</p><p>Following M&#261;dry's reassignment, Candela took over, and <a href="https://openreview.net/profile?id=~Tejal_Patwardhan1">Tejal Patwardhan</a>, a 2020 Harvard graduate, began managing day-to-day operations, according to The Information's <a href="https://www.theinformation.com/articles/openai-removes-ai-safety-leader-m-dry-a-onetime-ally-of-ceo-altman">story</a>.</p><p>M&#261;dry's quiet move reflects a pattern of leadership changes across OpenAI's safety teams that continues with Candela's departure.</p><p>The Preparedness team was established in December 2023 to track and mitigate "catastrophic risks related to frontier AI models," according to the company's <a href="https://cdn.openai.com/openai-preparedness-framework-beta.pdf">Preparedness Framework</a>, which was introduced as "a living document describing OpenAI&#8217;s processes to track, evaluate, forecast, and protect against catastrophic risks posed by increasingly powerful models."</p><p>The Framework focuses on risks related to cybersecurity, persuasion, model autonomy, and chemical, biological, radiological, and nuclear weapons.</p><p>OpenAI published the <a href="https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbddebcd/preparedness-framework-v2.pdf">second version</a> of its Preparedness Framework around noon local time today, shortly after Obsolete contacted the company for comment.</p><h2>The crumbling of safety leadership</h2><p>OpenAI has seen an exodus of leadership and safety staff in the last year.</p><p>Company cofounder and AI alignment lead John Schulman <a
href="https://x.com/johnschulman2/status/1820610863499509855?lang=en">left</a> in August for Anthropic, a rival firm started by an earlier wave of departing OpenAI safety staff.</p><p>Lilian Weng, OpenAI's safety lead, <a href="https://techcrunch.com/2024/11/08/openai-loses-another-lead-safety-researcher-lilian-weng/">left</a> in November and subsequently joined Thinking Machines Lab &#8212; a startup <a href="https://www.theverge.com/ai-artificial-intelligence/614621/mira-murati-thinking-machines-lab-openai-competitor-launch">launched</a> earlier this year by Mira Murati, who served as OpenAI's CTO from 2022 until her abrupt <a href="https://www.reuters.com/technology/artificial-intelligence/openais-technology-chief-mira-murati-leave-2024-09-25/">departure</a> amidst the company's October fundraising round. Schulman <a href="https://fortune.com/2025/02/06/openai-john-schulman-mira-muratis-startup-anthropic/">joined</a> Murati's company in February.</p><p>OpenAI's Superalignment team, <a href="https://web.archive.org/web/20240515005113/https://openai.com/index/introducing-superalignment/">tasked with</a> figuring out how to build smarter-than-human AI safely, was disbanded in May. The team leads, OpenAI cofounder Ilya Sutskever and longtime safety researcher Jan Leike, both left the same month. Upon his departure, Leike <a href="https://x.com/janleike/status/1791498184671605209">publicly stated</a> that "safety culture and processes have taken a backseat to shiny products" at OpenAI.
<em>Fortune</em> <a href="https://fortune.com/2024/05/21/openai-superalignment-20-compute-commitment-never-fulfilled-sutskever-leike-altman-brockman-murati/">reported</a> that the Superalignment team never got the computing power it was promised.</p><p>In October, Miles Brundage, OpenAI's Senior Advisor for AGI readiness, <a href="https://garrisonlovely.substack.com/p/end-of-an-era-openais-agi-readiness">resigned</a> after more than six years at the company; his team was disbanded and absorbed into other departments.</p><p>Brundage was one of the last remaining members of OpenAI's early safety-focused staff and had been increasingly vocal about his concerns. In his departure announcement, he wrote that "neither OpenAI nor any other frontier lab is ready" for artificial general intelligence (AGI) &#8212; the very technology the company is explicitly trying to build. He cited publishing constraints as one reason for leaving, suggesting the company was restricting what he could say publicly about AI risks.</p><p>Brundage also broke with Altman by advocating for cooperation with China on AI safety rather than competition, warning that a "zero-sum mentality increases the likelihood of corner-cutting on safety."</p><p>A former senior OpenAI employee told Obsolete that M&#261;dry's reassignment was particularly alarming. 
"At a certain point, he was the only person in there with a safety-focused role who was empowered at all," the former employee said.</p><h2>Who is leading safety at OpenAI?</h2><p>With most safety-focused leaders gone or reassigned, OpenAI's formal governance structure has become increasingly important &#8212; but also increasingly opaque.</p><p>In May, OpenAI <a href="https://openai.com/index/openai-board-forms-safety-and-security-committee/">announced</a> the creation of its Safety and Security Committee (SSC), tasked with making recommendations to the full board on "critical safety and security decisions for OpenAI projects and operations." Its original members included a subset of its nonprofit board, including Altman, along with M&#261;dry, Weng, Schulman, Matt Knight, the head of security, and Jakub Pachocki, the chief scientist.</p><p>Of these original members, only Knight and Pachocki remain in these or similar roles at OpenAI.</p><p>OpenAI <a href="https://openai.com/index/update-on-safety-and-security-practices/">announced</a> in September that board member and Carnegie Mellon professor Zico Kolter would join the SSC as its chair and that Altman was <a href="https://time.com/7022026/sam-altman-safety-committee/">no longer</a> on the committee. When asked about Altman's departure, the OpenAI spokesperson declined to comment.</p><p>The updated <a href="https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbddebcd/preparedness-framework-v2.pdf">version</a> of the Preparedness Framework published today goes into more detail on the roles, responsibilities, and decision-making processes of the SSC and introduces the "Safety Advisory Group (SAG)" &#8212; a related committee made up of OpenAI staff.</p><p>However, the updated document does not identify the members of the SAG. According to the OpenAI spokesperson, the SAG has been working under Agarwal's leadership for two months.
They described her as "functionally heading up all of the governance work," including "all of the evaluation calls about what [risk] mitigations are necessary."</p><p>The lack of transparency around safety leadership extends beyond public announcements. &#8220;Even while working at OpenAI, details about safety procedures were very siloed. I could never really tell what we had promised, if we had done it, or who was working on it,&#8221; a former employee wrote to Obsolete.</p><h2>Growing concerns</h2><p>These leadership changes come amid mounting questions about OpenAI's commitment to safety.</p><p>The <em>Financial Times</em> <a href="https://www.ft.com/content/8253b66e-ade7-4d1f-993b-2d0779c7e7d8">reported</a> last week that "OpenAI slash[ed] AI model safety testing time" from months to days. When asked about this story, the company spokesperson directed Obsolete back to the updated Preparedness Framework &#8212; saying that "our safety practices continue to be really rigorous" and suggesting that characterizations of reduced testing were not "very fair."</p><p>And just yesterday, the company released GPT-4.1 without publishing a corresponding safety report. 
OpenAI's <a href="https://openai.com/index/gpt-4-1/">announcement</a> touts the model's significant improvements over its flagship multimodal model, GPT-4o, in areas like coding and instruction following.</p><p>Conducting pre-release safety evaluations on frontier AI models and publishing the results alongside the model launch has become a common practice for the industry &#8212; one that OpenAI <a href="https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024">committed</a> to at the 2024 Seoul AI Summit.</p><p>When <a href="https://techcrunch.com/2025/04/15/openai-ships-gpt-4-1-without-a-safety-report/">questioned</a> by TechCrunch, OpenAI claimed that "GPT-4.1 is not a frontier model, so there won't be a separate system card released for it."</p><p>However, the company released DeepResearch, a powerful web-searching tool, <a href="https://techcrunch.com/2025/04/15/openai-ships-gpt-4-1-without-a-safety-report/#:~:text=OpenAI%E2%80%99s%20recent%20track,for%20that%20model.">weeks before</a> publishing a <a href="https://cdn.openai.com/deep-research-system-card.pdf">safety report</a>, which refers to the product as a frontier model.</p><p>Following the release of the updated Framework, former OpenAI safety researcher Steven Adler <a href="https://x.com/sjgadler/status/1912242577723781258">tweeted</a> that he's "overall happy to see the Preparedness Framework updated." But he also called out the company for "quietly reducing its safety commitments," pointing to OpenAI's <a href="https://x.com/sjgadler/status/1912242580861120939">abandonment of</a> an earlier promise to conduct safety testing on models finetuned to perform better in certain risky domains, like bioengineering.</p><p>Safety reports have been a primary tool for transparency in the AI industry, providing details on testing conducted to evaluate a model's risks. 
After conducting safety evaluations, <a href="https://cdn.openai.com/deep-research-system-card.pdf#page=17">OpenAI</a> and <a href="https://assets.anthropic.com/m/785e231869ea8b3b/original/claude-3-7-sonnet-system-card.pdf#page=7">Anthropic</a> each found that their most advanced models are close to being able to meaningfully assist non-experts in the creation of bioweapons. And OpenAI had <a href="https://openai.com/global-affairs/our-approach-to-frontier-risk/">previously called</a> system cards "a key part" of its approach to accountability ahead of the 2023 UK AI Safety Summit.</p><p>In the United States, frontier AI developers are governed by <a href="https://www.seoul-tracker.org/">voluntary commitments</a>, which they can violate without real consequence. Many of these companies, including OpenAI and Google, <a href="https://jacobin.com/2024/09/gavin-newsom-ai-tech-bill-sb-1047">lobbied</a> <a href="https://www.thenation.com/article/society/california-ai-safety-bill/">hard</a> last year against California AI safety bill <a href="https://garrisonlovely.substack.com/p/all-my-coverage-of-california-ai">SB 1047</a>, the most significant effort to codify some of these commitments. 
</p><p>As AI models get more capable and autonomous, companies appear to be increasingly cutting corners on safety.</p><p>Google's Gemini 2.5 Pro model is considered by many to be the most capable on the market, but the company still hasn't released a safety report, which, <em>Fortune</em> <a href="https://fortune.com/2025/04/09/google-gemini-2-5-pro-missing-model-card-in-apparent-violation-of-ai-safety-promises-to-us-government-international-bodies/">reported</a> last week, violates voluntary commitments the company made to the White House and at the Seoul summit.</p><p>The competitive pressure to release faster and with fewer safeguards will likely increase from here, raising alarming questions about whether meaningful guardrails will be in place when they're needed most.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Job Vacancy: Research Assistant]]></title><description><![CDATA[I'm hiring a research assistant to help with my forthcoming book!]]></description><link>https://www.obsolete.pub/p/research-assistant-job-description</link><guid isPermaLink="false">https://www.obsolete.pub/p/research-assistant-job-description</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Tue, 25 Mar 2025 21:51:28 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/86f13c67-7ce7-446b-b21d-43e7d548f5c5_420x300.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Update</em>:<strong> We are no longer accepting applications for this job posting.
</strong>Thank you.</p><h2>Overview</h2><p><strong>What:</strong> Support the completion of a forthcoming general-audience nonfiction book on AI. You'll conduct research, draft chapters, edit existing content, and help transform an incomplete manuscript into a polished final product for publication by OR Books and The <em>Nation</em> Magazine.</p><p><strong>Why:</strong> The project, <em>Obsolete: Power, Profit, and the Race to Build Machine Superintelligence, </em>has the potential to be the go-to AI risk book in the post-ChatGPT era. You&#8217;ll also get the opportunity to work closely with an experienced journalist and writer who has a track record of publishing work in leading outlets (NYT, <em>Nature</em>, BBC, TIME, <em>Foreign Policy</em>, etc.).</p><p><strong>Start Date:</strong> As soon as possible</p><p><strong>Employment Type:</strong> Fixed term (6 months, with possibility of extension); full time</p><p><strong>Location:</strong> Remote, provided at least three hours of overlap between 10am and 6pm ET.</p><p><strong>Compensation: </strong>$37,485-$65,442 total compensation for the 6-month period, depending on experience and location</p><p><strong>Applications will be considered on a rolling basis</strong>, but please apply as soon as you can. Priority may be given to applicants who can move through the process sooner.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://airtable.com/appEwGxseY1R2o86q/pagHFbCBIAb4xhRoc/form&quot;,&quot;text&quot;:&quot;Apply here&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://airtable.com/appEwGxseY1R2o86q/pagHFbCBIAb4xhRoc/form"><span>Apply here</span></a></p><h2>Details</h2><h3>About the Role</h3><p>I'm seeking a full-time Research Assistant to work with me on my forthcoming nonfiction book on AI, titled <em>Obsolete: Power, Profit, and the Race to Build Machine Superintelligence</em>.
The book will be published by a new joint imprint of OR Books and The <em>Nation</em> magazine. The manuscript is due mid-September 2025, and publication is expected in Spring 2026. Accordingly, this is intended as a 6-month fixed-term position (with some possibility of extension). Hours should average 40/week, but may increase as the due date approaches.</p><p>In this role, you will have the opportunity to contribute to a timely examination of the powerful forces shaping AI development. Additionally, you will work closely with an experienced journalist whose work has been featured in leading publications and recognized by experts in the field.</p><h3>The Book Project</h3><p>The book (currently ~60% complete) focuses on how commercial and great power competition shape the race to build increasingly powerful and autonomous AI systems. You can find a full description of the book at the end of this document.</p><p>I aim for this to be the definitive book on AI risk in the post-ChatGPT era. Given the subject matter, timing, and the support of a major publication (The <em>Nation</em>), <em>Obsolete</em> has the potential to reach large and influential audiences.</p><h3>Who I Am</h3><p>I'm <a href="https://www.garrisonlovely.com/">Garrison Lovely</a>, a freelance journalist reporting on the intersection of economics, geopolitics, and artificial intelligence.
My writing has appeared in <a href="https://www.nytimes.com/2024/09/29/opinion/ai-risks-safety-whistleblower.html?unlocked_article_code=1.OU4.-Lcq.-p2uHNAe66sn&amp;smid=url-share">The </a><em><a href="https://www.nytimes.com/2024/09/29/opinion/ai-risks-safety-whistleblower.html?unlocked_article_code=1.OU4.-Lcq.-p2uHNAe66sn&amp;smid=url-share">New York Times</a></em>, <em><a href="https://www.nature.com/articles/d41586-025-00831-8">Nature</a></em>, <a href="https://www.bbc.com/future/article/20220615-do-we-need-a-better-understanding-of-progress">BBC</a>, <a href="https://time.com/author/garrison-lovely/">TIME</a>, <a href="https://www.context.news/ai/battle-rages-over-uss-first-binding-ai-safety-bill-in-california">The Thomson Reuters Foundation</a>, <em><a href="https://foreignpolicy.com/2025/02/05/pepfar-trump-lifesaving-hiv-aids-soft-power-danger/">Foreign Policy</a></em>, <a href="https://www.theverge.com/authors/garrison-lovely">The </a><em><a href="https://www.theverge.com/authors/garrison-lovely">Verge</a></em>, <a href="https://www.theguardian.com/commentisfree/2024/oct/16/california-ai-safety-bill-gavin-newsom">The </a><em><a href="https://www.theguardian.com/commentisfree/2024/oct/16/california-ai-safety-bill-gavin-newsom">Guardian </a></em><a href="https://www.theguardian.com/commentisfree/2024/oct/16/california-ai-safety-bill-gavin-newsom">US</a>, <a href="https://www.vox.com/future-perfect/23639475/pescetarian-eating-fish-ethics-vegetarian-animal-welfare-seafood-fishing-chicken-beef-climate">Vox</a>, and many other outlets. I've written cover stories for <a href="https://www.thenation.com/article/society/mckinsey-whistleblower-confessions/">The </a><em><a href="https://www.thenation.com/article/society/mckinsey-whistleblower-confessions/">Nation</a></em> and <em><a href="https://jacobin.com/2024/01/can-humanity-survive-ai">Jacobin</a></em>, with my piece "Can Humanity Survive AI" leading to my current book deal. 
I'm also a <a href="https://omidyar.com/omidyar-network-announces-fifth-class-of-reporters-in-residence/">Reporter in Residence</a> at the Omidyar Network and publisher of <em><a href="https://garrisonlovely.substack.com/">Obsolete</a></em>, a fast-growing Substack on AI.</p><p>My reporting and commentary on AI has been shared by the three &#8220;godfathers&#8221; of deep learning (<a href="https://www.facebook.com/yoshua.bengio/posts/pfbid0wrd7BAY1QVNZAJyJitQenboRAr1vtJEWGH35VMDo3roTagzQVCe2Zkdd5XyvG4F9l">Yoshua Bengio</a>, <a href="https://x.com/ylecun/status/1839726968444518772">Yann LeCun</a>, and Nobel laureate <a href="https://x.com/geoffreyhinton/status/1833618540714217826">Geoffrey Hinton</a>), noted AI critic <a href="https://garymarcus.substack.com/p/why-californias-ai-safety-bill-should">Gary Marcus</a>, <em>Life 3.0</em> author <a href="https://x.com/tegmark/status/1824155956038836247">Max Tegmark</a>, and many others. I've spoken on AI at Harvard, the Federation of American Scientists, and the Fund for Alignment Research (FAR Labs).</p><p>My writing has been translated into 5 languages and cited dozens of times by mainstream outlets (including the <em>New Yorker</em>, NYT, The <em>Atlantic</em>, ProPublica, The Brookings Institution, The <em>Guardian</em>, Axios, and NY Magazine). 
My media appearances have received over 8 million combined views/listens, and my social media posts have gained over 18 million impressions.</p><h3>Key Responsibilities</h3><p>As Research Assistant, your responsibilities will include:</p><ul><li><p>Conducting in-depth research on assigned topics, synthesizing findings into clear, well-organized reports</p></li><li><p>Writing first drafts of book chapters based on detailed outlines I provide</p></li><li><p>Editing and refining existing drafts for clarity, flow, and accuracy</p></li><li><p>Supplying and formatting references</p></li><li><p>Tracking developments in AI news, research, and policy relevant to the book's themes</p></li><li><p>Assisting with fact-checking and source verification</p></li><li><p>Meeting regular deadlines and communicating proactively about progress</p></li></ul><p>Depending on the candidate chosen, there may also be opportunities to:</p><ul><li><p>Help write excerpts of the book for leading publications</p></li><li><p>Continue research and writing work for future multimedia AI-related projects</p></li></ul><h3>Qualifications</h3><p>The ideal candidate is likely to have:</p><ul><li><p>Research and/or writing experience, as evidenced by an extensive portfolio (which can include blog posts as well as other publications)</p></li><li><p>Excellent research skills, with the ability to synthesize complex information from diverse sources</p></li><li><p>Strong writing skills, especially in nonfiction, journalistic, or academic writing</p><ul><li><p>Ability to produce clear, explanatory, fun, and engaging material, without compromising accuracy. 
See my <em>Jacobin</em> <a href="https://jacobin.com/2024/01/can-humanity-survive-ai">story</a> for what I aim for.</p></li></ul></li><li><p>Self-motivation and ability to work independently while meeting deadlines</p></li><li><p>Attention to detail, particularly in fact-checking and citation management</p></li><li><p>Extensive familiarity with the frontier AI industry, AI policy, and/or AI safety literature (strongly preferred)</p></li><li><p>A background in economics and/or international relations is a plus</p></li></ul><h3>Location</h3><p>This is a full-time, remote-friendly job, with slight preference for those based in or willing to visit NYC. Work hours are flexible, provided at least 3 hours of overlap between 10am and 6pm ET daily.</p><p>I am happy to consider candidates based outside of the US, but will not be able to provide visa sponsorship at this time.</p><h3>Compensation &amp; Benefits</h3><ul><li><p>Salary: $30,000-$50,000 for the 6-month period, depending on experience and location ($60,000-$100,000 annualized)</p></li><li><p>Payroll tax reimbursement: $2,485-$4,142</p></li><li><p>Health insurance stipend: $3,000-$4,800</p></li><li><p>Additional perks: $2,000-$6,500</p></li></ul><p><strong>Total compensation package: $37,485-$65,442</strong></p><h4>Other Benefits:</h4><ul><li><p>Mentorship from an established journalist and writer</p></li><li><p>Potential for continued collaboration beyond the initial 6-month period</p></li><li><p>Opportunity to be extensively credited in acknowledgements in a published book</p></li></ul><h3>Application Process</h3><p>After considering written applications, promising candidates may be invited to complete a short work trial followed by interviews.</p><p>I&#8217;ll ask finalists to provide references and participate in a longer paid work trial.</p><p>I am committed to fostering a culture of inclusion, and I encourage individuals with diverse backgrounds and experiences to apply.</p><div><hr></div><h2>More on the 
Book</h2><p><strong>Depending on who you ask, artificial intelligence is our salvation or our doom; an overhyped, bigoted bullshit artist or the ticket to immortality and utopian abundance; just another tech fad or the last invention we need ever make. The AI discourse is confusing, messy, and frustrating. But the technology &#8212; and our response to it &#8212; will shape the future, whether or not we're part of the conversation.</strong></p><p><em>Obsolete</em> is for those who are interested in learning more about AI, but are unsure of where to start and who to believe. They may feel intimidated by the technical jargon, put off by dry and abstract prose, or skeptical of the loudest voices on the issue (like Elon Musk and Sam Altman). <em>Obsolete</em> will introduce readers to the basics of AI, the idea that it could lead to human extinction, the roiling three-sided debate surrounding extinction fears, and the people and companies trying to build artificial general intelligence (AGI) &#8212; that which can outwit humans across the board. The book will grapple with the core arguments animating the AI debates, which are, by turns, uncritically parroted and unduly dismissed. It will also cut through industry hype while seriously entertaining the implications of AGI.</p><p>The risk that AGI could result in our extinction is being recognized by a significant and growing number of leading AI researchers, industrialists, and policymakers, along with the wider public. Existential risk from AI has been explored in other books, but <em>Obsolete</em> will be the first to center its analysis on how both capitalist and great power competition make AI more dangerous.</p><p><em>Obsolete</em> will also tackle questions like: Can machines actually outsmart humanity? If so, when could that happen? If AGI is possible, is it inevitable? Why are people trying to build a technology they claim could end the world? 
Is the idea of AI-driven extinction the product of a big tech conspiracy aiming to hype the technology and control its regulation? Why has the left mostly ignored or dismissed existential risk from AI? Why do some powerful techies welcome human extinction? How could AI enable stable authoritarian regimes? How could killer robots reshape war and the balance of power? What do China and the US want from AI? And why has it become the front line of their brewing Cold War? And finally: What can and should we do about it?</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://airtable.com/appEwGxseY1R2o86q/pagHFbCBIAb4xhRoc/form&quot;,&quot;text&quot;:&quot;Apply here&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://airtable.com/appEwGxseY1R2o86q/pagHFbCBIAb4xhRoc/form"><span>Apply here</span></a></p>]]></content:encoded></item><item><title><![CDATA[What the Headlines Miss About the Latest Decision in the Musk vs. 
OpenAI Lawsuit]]></title><description><![CDATA[Legal experts see trouble ahead for the AI company, despite its seeming victory]]></description><link>https://www.obsolete.pub/p/what-the-headlines-miss-about-the</link><guid isPermaLink="false">https://www.obsolete.pub/p/what-the-headlines-miss-about-the</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Thu, 06 Mar 2025 04:56:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QQb5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F282d3490-4337-4260-8eb7-a5fd37c8b157_1418x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you've been <a href="https://www.cnbc.com/2025/03/04/judge-denies-musk-attempt-to-block-openai-from-becoming-for-profit-.html">following</a> <a href="https://thehill.com/policy/technology/5177511-federal-judge-rejects-musk-openai/">the</a> <a href="https://arstechnica.com/tech-policy/2025/03/musk-loses-bid-to-stop-openais-for-profit-shift-but-can-make-his-case-in-trial/">headlines</a> about Elon Musk's lawsuit against OpenAI, you might think he just suffered a major defeat. </p><p>On Tuesday, California District Judge Yvonne Gonzalez Rogers denied all of Musk's requests for a preliminary injunction, which would have blocked OpenAI's restructuring from nonprofit to for-profit. Judge Rogers also expedited the trial, which will now begin this Fall. 
Media outlets quickly framed this as a loss for Musk.</p><p>But a closer reading of the <a href="https://www.courthousenews.com/wp-content/uploads/2025/03/musk-vs-altman-order-denying-motion-preliminary-injunction.pdf">16-page ruling</a> reveals something more subtle &#8212; and still a giant potential wrench in OpenAI's plans to transfer control of the company from the nonprofit board to a new for-profit public benefit corporation.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p><p>If the company fails to complete this transformation by October 2026, investors in its $6.6 billion funding round last October <a href="https://www.axios.com/2024/10/02/openai-new-funding-round-restructuring">can ask</a> for their money back.</p><p>OpenAI CEO Sam Altman, nonprofit board director Bret Taylor, and the company's press office did not reply to requests for comment.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!QQb5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F282d3490-4337-4260-8eb7-a5fd37c8b157_1418x1212.png" width="1418" height="1212" alt=""></figure></div><p>In a narrow sense, Musk did "lose." However, as Judge Rogers notes, the bar for a preliminary injunction is extremely high. To surpass it, both the facts of the case and the legal questions involved have to clearly point in the same direction.</p><p>Therefore, it would have been extremely surprising if the judge had granted the injunction. She did, however, stop just short of it.</p><h3>Does Musk have standing?</h3><p>To bring a lawsuit, you have to have standing, i.e., you have to convince the court that you've been harmed by the actions of the defendant. For most of the claims Musk brought, Judge Rogers thinks that he doesn't have standing.</p><p>But Musk's third claim is that his donation of $44 million to OpenAI was contingent on his expectation that the organization remain a nonprofit. The legal question is whether Musk's donation meant that he entered into a "charitable trust" with OpenAI, in which he expressly meant for his gift to only be used if the organization remained a nonprofit.</p><p>Unfortunately for Musk (and fortunately for OpenAI), there is no contract or gift agreement documenting any restrictions on the gift. Judge Rogers says that this question is a "toss-up," citing evidence that points in both directions. (OpenAI <a href="https://openai.com/index/openai-elon-musk/">published</a> emails showing Musk was aware of and on board with the transition to a for-profit, which makes him far less sympathetic here.)</p><p>So why does this ruling matter?
Well, while Judge Rogers found Musk's standing uncertain at this preliminary stage, she went out of her way to signal that the core claim &#8212; that OpenAI's conversion violates its charitable purpose &#8212; could have merit if properly brought before the court.</p><p>Put differently, Judge Rogers essentially writes that if Musk clearly had standing, the injunction would be justified. Here's the key quote from her <a href="https://www.courthousenews.com/wp-content/uploads/2025/03/musk-vs-altman-order-denying-motion-preliminary-injunction.pdf">judgment</a>:</p><blockquote><p><em>if a trust was created</em>, the balance of equities would certainly tip towards plaintiffs in the context of a breach. As Altman and Brockman made foundational commitments foreswearing any intent to use OpenAI as a vehicle to enrich themselves, the Court finds no inequity in an injunction that seeks to preserve the status quo of OpenAI&#8217;s corporate form as long as the process proceeds in an expedited manner. [emphasis original]</p></blockquote><p>Legal experts who have followed the case closely see this ruling as far more significant than the headlines suggest &#8212; a decision that invites other existential challenges to OpenAI's conversion efforts.</p><p>"This is a big win for Musk," says Michael Dorff, the executive director of the Lowell Milken Institute for Business, Law, and Policy at UCLA. "Even though he didn't get the preliminary injunction, the fact that there is a pending trial on this issue and that his claim wasn't denied is a pretty big impediment to [OpenAI] moving forward expeditiously," he says.</p><p>A former OpenAI employee spelled out the significance of the judgment, telling Obsolete:</p><blockquote><p>I think this is unusual, which is why it's noteworthy. Typically courts decide cases on the narrowest grounds possible. So if the standing decision is sufficient to deny, then none of the discussion about the underlying merits is decision-relevant. 
When courts do this, it's usually done with purpose.</p></blockquote><p>If Musk were deemed to have standing, the former employee said, the chances of the lawsuit prevailing on the merits were "definitely over 75%. 90% isn't crazy. The judge doesn't have all the facts at this stage. All of the judge's emphasis in her denial was on standing, not the other question."</p><h3>You know who does have standing?</h3><p>Unlike Musk, the Attorneys General (AGs) in California and Delaware unquestionably have standing to challenge OpenAI's conversion &#8212; a fact Judge Rogers repeatedly emphasized throughout her ruling.</p><p>"The fact that the AG has standing is by statute, so it's not a big statement," notes Dorff. "What's unusual is that the AG might actually do something about it. This may be a rare case where an AG finds it worthwhile."</p><p>The AGs in both states have already signaled interest in the case. The Delaware AG <a href="https://lawprofessors.typepad.com/files/delawareagamicusbrief-musk-v.-altman.pdf">filed</a> an amicus brief in December emphasizing that her office would scrutinize any restructuring to ensure it protects the public interest.
California's AG is <a href="https://calmatters.org/economy/technology/2025/01/openai-investigation-california/">reportedly reviewing</a> the conversion as well.</p><p>Judge Rogers' ruling substantially increases pressure on both AGs to take action, providing them with judicial validation that the core issues deserve serious scrutiny.</p><p>The AGs are empowered to protect the public interest and could each initiate legal action to block the restructuring, with a strong likelihood of success given their clear standing and the court's signals about the merits of this case.</p><h3>It's hard to change your purpose</h3><p>Dorff agrees that the case&#8217;s merits pose significant challenges to the company, echoing what three other legal experts who followed this case have said in earlier conversations.</p><p>"OpenAI has a very tough road ahead of it, if Musk has standing on that claim," Dorff explains. "Changing of a nonprofit's purpose is only supposed to be possible when the original purpose is defunct. That's not the case here."</p><p>To illustrate this point, Dorff cites the example of The March of Dimes. The anti-polio foundation was able to legally shift its mission after the disease was effectively eradicated. OpenAI's situation, Dorff argues, is fundamentally different.</p><p>"The original purpose was to develop AI for the benefit of all of humanity in a way that is safe," Dorff notes. "That purpose is not defunct &#8212; it's very much still ongoing."</p><p>Given this, Dorff says, "It's hard to imagine a good argument for why they should be allowed to change their purpose." The nonprofit's major asset, he points out, isn't land &#8212; it's the control the board has over OpenAI the company, which is at the forefront of developing powerful AI systems. "Owning control over that entity seems uniquely well suited to the nonprofit's purpose," Dorff says.
"Giving up that control, even for a lot of money, is not equivalent."</p><h3>Directors could be personally liable</h3><p>The judgment raises another critical issue that has received little attention: potential personal liability for OpenAI's board members if they proceed with the conversion.</p><p>"If a breach of fiduciary duty is established, board members could be personally liable," explains Dorff. "If they're conflicted and violate their duty of loyalty in favor of their own interests instead of the public interest... they could be personally liable for the true value of whatever was lost."</p><p>The former OpenAI employee concurred: "Typically when directors are acting on behalf of organizations, there's some shield &#8212; the <a href="https://www.law.cornell.edu/wex/business_judgment_rule">business judgment rule</a>." Judges are reluctant to second-guess business decisions that don't pan out. This gives directors wide leeway in governing companies.</p><p>But, "Under certain circumstances that protection doesn't apply. If I were a director, I'd want to be getting legal advice right now," the ex-employee says.</p><p>OpenAI's nonprofit board <a href="https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html?unlocked_article_code=1.104.YB6e.ze8TJBHhGZkn&amp;smid=url-share">(in)famously</a> has a fiduciary duty to humanity and therefore needs to justify the conversion on those terms, which is a far taller order.</p><p>In a follow-up exchange, the former OpenAI employee wrote to Obsolete:</p><blockquote><p>Judge Rogers is very clearly saying that she has serious concerns about the legality of the restructuring. So after this ruling, the directors are on heightened notice that the legality of the restructuring is open to serious doubt.
If they just try to ram it through regardless, that could be a pretty egregious breach of their fiduciary duties &#8212; so egregious that they might even have personal liability.</p></blockquote><p>The prospect of being personally on the hook creates an entirely different category of risk for OpenAI's leadership as they navigate the restructuring.</p><h3>Why OpenAI is trying to restructure</h3><p>OpenAI's unconventional structure has become an albatross for the organization as it raises the staggering amounts of money needed to keep training cutting-edge AI models. Just four months after the October funding round that valued OpenAI at $157 billion, the <em>Wall Street Journal</em> <a href="https://www.wsj.com/tech/ai/openaiin-talks-for-huge-investment-round-valuing-it-up-to-300-billion-2a2d4327?st=eJGkNZ&amp;reflink=desktopwebshare_permalink">reported</a> that SoftBank was planning to invest up to $25 billion at a valuation of up to $300 billion.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!3Qvk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5505042-9ecd-47e0-bbfc-c68cc5e49326_4340x3256.jpeg" width="1456" height="1092" alt="Our structure | OpenAI" title="Our structure | OpenAI"><figcaption class="image-caption">Totally not confusing</figcaption></figure></div><p>OpenAI was confident enough in its ability to successfully
complete its restructuring within two years that it gave investors the ability to take their $6.6 billion back if it didn&#8217;t convert in time. </p><p>But Tuesday's ruling threatens this timeline, even with the new expedited trial schedule. OpenAI faces a difficult choice: attempt to proceed with the restructuring under the cloud of pending litigation and potential AG intervention, or wait for legal clarity that may not arrive until the resolution of a trial that won't even begin until the fall of 2025 &#8212; uncomfortably close to the October 2026 deadline when investors could demand their money back.</p><p>If OpenAI fails to transition and investors come calling, finding the money might not be enough (and is far from a given &#8212; the company is <a href="https://www.nytimes.com/2024/09/27/technology/openai-chatgpt-investors-funding.html?unlocked_article_code=1.v04.bdic.52TYF3WVkDGg&amp;smid=url-share">burning</a> billions a year). OpenAI's ability to attract enough investment to compete may depend on being structured more like a typical company. The fact that it agreed to such onerous terms in the first place implies that it had little choice.</p><h3>What happens next</h3><p>Dorff says that this case "will create fodder for discussions in law schools for many years to come."</p><p>The expedited trial schedule &#8212; which he calls "highly unusual" and "very, very quick" &#8212; suggests the court recognizes both the importance and urgency of resolving these questions. 
While OpenAI may continue preparations for its restructuring, the ruling represents a significant yellow light that both the company and potential investors cannot ignore.</p><p>And if the California or Delaware Attorneys General decide to act on the judge's signals, that yellow light could quickly turn red.</p><p>Far from a defeat for Musk, this ruling may ultimately prove to be the most substantial obstacle to OpenAI's plans to shed its nonprofit constraints &#8212; constraints that its founders once championed as essential to developing AI that benefits humanity.</p>]]></content:encoded></item><item><title><![CDATA[DeepSeek Made it Even Harder for US AI Companies to Ever Reach Profitability]]></title><description><![CDATA[Did Anthropic's CEO just admit AI companies have been enjoying fat margins?]]></description><link>https://www.obsolete.pub/p/deepseek-made-it-even-harder-for</link><guid isPermaLink="false">https://www.obsolete.pub/p/deepseek-made-it-even-harder-for</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Wed, 19 Feb 2025 20:37:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6bJC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60c4e3a4-e1d8-4ba2-97fc-fb34aef8e68a_1792x1024.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>I'm in San Francisco until February 27th or so. If you'd like to meet up or tell me about some cool event, email me at tgarrisonlovely [at] gmail [dot] com. 
</em></p><p>Anthropic CEO Dario Amodei may have divulged a big secret with worrying implications for AI firms like his own and OpenAI.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p><p>Last month, Chinese startup DeepSeek <a href="https://chat.deepseek.com/">released R1</a>, an AI model <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf">rivalling</a> OpenAI&#8217;s flagship o1 model on key benchmarks but at a <a href="https://api-docs.deepseek.com/quick_start/pricing">fraction</a> of the <a href="https://openai.com/api/pricing/">cost</a> &#8212; about 27 times cheaper. This sent shockwaves through the market. AI chip designer Nvidia <a href="https://www.forbes.com/sites/dereksaul/2025/01/27/biggest-market-loss-in-history-nvidia-stock-sheds-nearly-600-billion-as-deepseek-shakes-ai-darling/">saw a record</a> $600 billion wiped from its value on Monday, January 27th, <a href="https://www.investors.com/etfs-and-funds/sectors/sp500-deepseek-ai-sparks-trillion-in-u-s-tech-destruction/">driving</a> nearly $1 trillion in losses concentrated in American AI infrastructure stocks. 
The narrative quickly emerged: a small Chinese company had matched billion-dollar American models for mere millions, suggesting powerful AI might be far cheaper to develop than previously believed.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6bJC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60c4e3a4-e1d8-4ba2-97fc-fb34aef8e68a_1792x1024.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6bJC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60c4e3a4-e1d8-4ba2-97fc-fb34aef8e68a_1792x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!6bJC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60c4e3a4-e1d8-4ba2-97fc-fb34aef8e68a_1792x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!6bJC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60c4e3a4-e1d8-4ba2-97fc-fb34aef8e68a_1792x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!6bJC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60c4e3a4-e1d8-4ba2-97fc-fb34aef8e68a_1792x1024.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6bJC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60c4e3a4-e1d8-4ba2-97fc-fb34aef8e68a_1792x1024.webp" width="1456" height="832" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/60c4e3a4-e1d8-4ba2-97fc-fb34aef8e68a_1792x1024.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;A dramatic digital illustration depicting a massive whale labeled 'DeepSeek' smashing through business plans of 'OpenAI,' 'Anthropic,' and 'Google.' The whale forcefully breaks through documents and charts symbolizing strategic plans, leaving a wake of destruction. The background features a stormy ocean with financial graphs in turmoil, representing the chaotic impact of the AI competition. The scene conveys dominance, disruption, and intense rivalry in the AI industry.&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="A dramatic digital illustration depicting a massive whale labeled 'DeepSeek' smashing through business plans of 'OpenAI,' 'Anthropic,' and 'Google.' The whale forcefully breaks through documents and charts symbolizing strategic plans, leaving a wake of destruction. The background features a stormy ocean with financial graphs in turmoil, representing the chaotic impact of the AI competition. The scene conveys dominance, disruption, and intense rivalry in the AI industry." title="A dramatic digital illustration depicting a massive whale labeled 'DeepSeek' smashing through business plans of 'OpenAI,' 'Anthropic,' and 'Google.' The whale forcefully breaks through documents and charts symbolizing strategic plans, leaving a wake of destruction. The background features a stormy ocean with financial graphs in turmoil, representing the chaotic impact of the AI competition. 
The scene conveys dominance, disruption, and intense rivalry in the AI industry." srcset="https://substackcdn.com/image/fetch/$s_!6bJC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60c4e3a4-e1d8-4ba2-97fc-fb34aef8e68a_1792x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!6bJC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60c4e3a4-e1d8-4ba2-97fc-fb34aef8e68a_1792x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!6bJC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60c4e3a4-e1d8-4ba2-97fc-fb34aef8e68a_1792x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!6bJC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60c4e3a4-e1d8-4ba2-97fc-fb34aef8e68a_1792x1024.webp 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" 
width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">DallE-3 prompt: depict the DeepSeek whale smashing through business plans for OpenAI, Anthropic, and Google</figcaption></figure></div><p>In an <a href="https://darioamodei.com/on-deepseek-and-export-controls">essay</a> responding to the market panic, Amodei aimed to defend US export controls on advanced AI chips to China. But in doing so, he revealed something striking: DeepSeek's efficiency gains were exactly what we should expect from <a href="https://epoch.ai/blog/algorithmic-progress-in-language-models">historical algorithmic progress</a> &#8212; suggesting American AI companies have been enjoying healthy profit margins, at least until DeepSeek arrived to massively undercut them.</p><p>This assessment resonates with researchers at leading US AI companies. One told me DeepSeek's results "are within the improvement range that we'd expect from standard algorithmic improvement over time." Another was even more dismissive: "I don't think anyone cares very much, it doesn't seem very surprising&#8230; Obviously they're talented but nothing about it is unexpected." Notably, none of these reactions came from customer-facing employees.</p><p>By offering a comparable model at significantly lower prices, DeepSeek is likely to trigger an AI price war, just as <a href="https://www.chinatalk.media/p/deepseek-ceo-interview-with-chinas#:~:text=Before%20Deepseek%2C%20CEO,cop%20to%20publicly.">it did</a> in the Chinese market last summer. 
Lower prices should boost demand, as foreshadowed by DeepSeek&#8217;s <a href="https://www.cnbc.com/2025/01/27/chinas-deepseek-ai-tops-chatgpt-app-store-what-you-should-know.html">meteoric rise</a> to the top of the iPhone app store.</p><p>Microsoft CEO Satya Nadella recognized this dynamic, <a href="https://x.com/satyanadella/status/1883753899255046301">tweeting</a>: "Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of." The paradox he references: when technologies get more efficient, we often end up using more of them, not less. This commoditization would benefit companies using AI to enhance their existing services, like Microsoft, Meta, and Google.</p><p>However, for pure-AI companies like OpenAI, the implications are troubling. If US companies have already achieved similar efficiencies, then compute costs for these models won&#8217;t substantially decrease even if prices fall. This tightens unit economics for AI developers, lengthening their already uncertain path to profitability. OpenAI&#8217;s swift <a href="https://openai.com/index/openai-o3-mini/">release</a> of a &#8220;mini&#8221; version of its forthcoming o3 model, priced <a href="https://openai.com/api/pricing/">roughly double</a> DeepSeek&#8217;s <a href="https://api-docs.deepseek.com/quick_start/pricing">offering</a>, suggests it recognizes this threat.</p><p>And if Nadella is right about AI's coming status as a commodity, that's very bad news for OpenAI &#8212; commodity producers don't <a href="https://eqvista.com/revenue-multiples-by-industry/">typically</a> get valued at dozens of times their <a href="https://www.nytimes.com/2024/09/27/technology/openai-chatgpt-investors-funding.html">revenue</a>.</p><p>DeepSeek demonstrates that the &#8220;secret sauce&#8221; for cutting-edge AI won&#8217;t remain secret for long. 
Months after OpenAI <a href="https://openai.com/index/learning-to-reason-with-llms/">announced</a> its o1 "reasoning" model, <a href="https://deepmind.google/technologies/gemini/flash-thinking/">competitors</a> have largely replicated its approach and performance. Whether through building off open-sourced innovations or &#8220;<a href="https://www.wsj.com/tech/ai/why-distillation-has-become-the-scariest-wordfor-ai-companies-aa146ae3">distillations</a>&#8221; of closed models, &#8220;fast-followers&#8221; can match leaders' capabilities faster and cheaper.</p><p>While AI developers face margin pressure, the outlook for AI infrastructure providers like Nvidia is unclear.</p><p>Lower AI prices will likely drive up aggregate demand for computing power, but also reduce the profit per chip, which is currently <a href="https://www.tomshardware.com/news/nvidia-makes-1000-profit-on-h100-gpus-report">astronomically high</a>. The market initially bet on the downside. But early evidence suggests the demand effect may be winning out. Industry research group Semianalysis <a href="https://semianalysis.com/2025/01/31/deepseek-debates/#:~:text=The%20narrative%20now,and%20H200%20pricing.">reported</a> that prices to rent Nvidia's flagship H100 chip actually "exploded" after DeepSeek released its V3 model in December, with no slowdown after R1's introduction. "More intelligence for cheaper means more demand," they write.</p><p>Major tech companies seem to agree and are still planning massive AI and datacenter <a href="https://www.cnbc.com/2025/02/08/tech-megacaps-to-spend-more-than-300-billion-in-2025-to-win-in-ai.html">spending increases</a> this year.</p><p>Public and private markets diverged wildly in their response to DeepSeek. 
The same week that Nvidia lost nearly 20% of its market cap, SoftBank <a href="https://www.wsj.com/tech/softbank-in-talks-to-invest-up-to-25-billion-in-openai-03d653fc">reportedly</a> sought to invest up to $25 billion in OpenAI at a <a href="https://www.wsj.com/tech/ai/openaiin-talks-for-huge-investment-round-valuing-it-up-to-300-billion-2a2d4327?mod=series_chatgptai">valuation approaching</a> $300 billion &#8212; nearly double where the company was valued just months earlier. The day of the DeepSeek market panic, Nvidia closed at a lower share price than it had in early October, when OpenAI announced its last funding round at a $157 billion valuation.</p><p>These contradictory outlooks can't both be right. A world of commodity AI services is fundamentally incompatible with the soaring private market valuations of AI companies that have yet to turn a profit.</p><p>(The market seems to have mostly corrected &#8212; Nvidia's stock is now only down one percent for the month.)</p><p>Ultimately, adoption of AI will continue and likely accelerate, as price drops coincide with significant increases in the usefulness of the underlying technology. Pure-AI companies are in a long race to turn a profit before their products become commoditized. And DeepSeek just moved the finish line further away.</p>]]></content:encoded></item><item><title><![CDATA[Why Did Elon Musk Just Offer to Buy Control of OpenAI for $100 Billion?]]></title><description><![CDATA[OpenAI needs to fairly compensate its nonprofit board for giving up control. 
Musk just made that math a lot more complicated.]]></description><link>https://www.obsolete.pub/p/why-did-elon-musk-just-offer-to-buy</link><guid isPermaLink="false">https://www.obsolete.pub/p/why-did-elon-musk-just-offer-to-buy</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Mon, 10 Feb 2025 23:49:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mosJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b683525-6a10-401e-bc10-9c3abacd3b1d_2400x1600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Wow. The <em>Wall Street Journal</em> just <a href="https://www.wsj.com/tech/elon-musk-openai-bid-4af12827?st=KtKTuG&amp;reflink=desktopwebshare_permalink">reported</a> that, "a consortium of investors led by Elon Musk is offering $97.4 billion to buy the nonprofit that controls OpenAI."</p><p>Technically, they can't actually do that, so I'm going to assume that Musk is trying to buy all of the nonprofit's assets, which include governing control over OpenAI's for-profit, as well as all the profits above the company's profit caps.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p><p>OpenAI CEO Sam Altman already <a href="https://x.com/sama/status/1889059531625464090">tweeted</a>, "no thank you but we will buy twitter for $9.74 billion if you want." (Musk, for his part, <a href="https://x.com/elonmusk/status/1889062013109703009">replied</a> with just the word: "Swindler.")</p><p>Even if Altman were willing, it's not clear if this bid could even go through. 
It can probably best be understood as an attempt to throw a wrench in OpenAI's ongoing plan to restructure fully into a for-profit company. To complete the transition, OpenAI needs to compensate its nonprofit for the fair market value of what it is giving up.</p><p>In October, <em>The Information</em> <a href="https://www.theinformation.com/articles/openais-charity-could-soon-be-worth-40-billion">reported</a> that OpenAI was planning to give the nonprofit at least 25 percent of the new company &#8212; worth $37.5 billion at the time. But in late January, the <em>Financial Times</em> <a href="https://www.ft.com/content/7dcd4095-717e-49f8-8d12-6c8673eb73d7">reported</a> that the nonprofit might only receive around $30 billion, "but a final price is yet to be determined." That's still a lot of money, but many experts I've spoken with think it drastically undervalues what the nonprofit is giving up.</p><p>Musk has sued to block OpenAI's conversion, <a href="https://apnews.com/article/elon-musk-openai-lawsuit-sam-altman-3cd261b2a9b04630ec93582020c59ef7">arguing</a> that he would be irreparably harmed if it went through.</p><p>But while Musk's suit seems unlikely to succeed, his latest gambit might significantly drive up the price OpenAI has to pay.</p><p>(My guess is that Altman will still manage to make the restructuring happen, but every dollar given to the nonprofit is one that can't be offered to future funders, potentially dramatically limiting OpenAI's fundraising prospects. Given how quickly the company <a href="https://www.nytimes.com/2024/09/27/technology/openai-chatgpt-investors-funding.html?unlocked_article_code=1.v04.bdic.52TYF3WVkDGg&amp;smid=url-share">burns through cash</a>, this could be a real problem.)</p><p>The timing is also critical. 
As a <a href="https://www.businessinsider.com/openai-deadline-to-become-for-profit-or-return-investor-money-2024-10">condition</a> of taking $6.6 billion in new investment last October, OpenAI agreed to complete its for-profit transition within two years. If it doesn't hit that deadline, those investors can ask for their money back.</p><h3>The control premium</h3><p>Here's why this matters: OpenAI's nonprofit board technically controls the company and has a fiduciary duty to "humanity" rather than to investors or employees.</p><p>Putting a price on relinquishing control may be a non-starter, according to Michael Dorff, the executive director of the Lowell Milken Institute for Business, Law, and Policy at UCLA. Dorff told me in October:</p><blockquote><p>If [AGI&#8217;s] going to come in five years, it could be worth almost infinite amounts of money, conceivably, right? I mean, we'll all be sitting on the beach drinking pi&#241;a coladas, or we'll all be dead. I'm not sure which. So, this is very difficult, and it is likely to be litigated.</p></blockquote><p>Musk's bid seems to be trying to put a floor on that price &#8212; one that is much higher than numbers that were previously thrown around.</p><p>The "control premium" &#8212; how much extra you pay to get control of a company versus just buying shares &#8212; typically <a href="https://corporatefinanceinstitute.com/resources/valuation/control-premium/">ranges</a> from 20-30%, but can go as high as 70% of the company's value. With OpenAI <a href="https://www.wsj.com/tech/ai/openaiin-talks-for-huge-investment-round-valuing-it-up-to-300-billion-2a2d4327?st=iSLoUJ&amp;reflink=desktopwebshare_permalink">reportedly</a> in talks to raise more money at up to a $300 billion valuation, that would mean the nonprofit's control could be worth anywhere from $60-210 billion.</p><p>Musk's bid makes this math problem a lot more concrete. His group is offering $97.4 billion AND promising to match any higher bids. 
This means the nonprofit board now has to explain why it would accept less.</p><h3>Conversion significance</h3><p>In addition to transferring control from the nonprofit board to a new for-profit public benefit corporation (PBC), OpenAI is <a href="https://www.reuters.com/technology/artificial-intelligence/openai-remove-non-profit-control-give-sam-altman-equity-sources-say-2024-09-25/">reportedly</a> trying to remove existing caps on investor returns.</p><p>At a glance, this conversion may not seem that significant. OpenAI already behaves like a for-profit company in many ways. Almost all of its employees work for the for-profit arm, and the profit motive blew through the nonprofit&#8217;s guardrails over a year ago, when the board fired Altman only to reinstate him less than five days later in the face of an employee and investor revolt.</p><p>But OpenAI&#8217;s nonprofit board still technically governs the company and is currently set up to receive 100% of the profits once various investors&#8217; profit caps are hit &#8212; which are as high as <a href="https://www.vox.com/2019/3/11/18260434/sam-altman-open-ai-capped-profit-y-combinator">100 times</a> their initial investment, according to the <em><a href="https://www.wsj.com/tech/ai/the-14-billion-question-dividing-openai-and-microsoft-71cf7d37">Wall Street Journal</a></em>. 
The nonprofit also gets to decide when AGI is "achieved," which <a href="https://www.nytimes.com/2024/10/17/technology/microsoft-openai-partnership-deal.html">gets</a> the company out of its obligation to share its technology with Microsoft, OpenAI&#8217;s primary investor.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p><p>You may think these profit caps are a ridiculous marketing ploy that will never be reached. But Altman, OpenAI, and many of its employees think the company really could make trillions &#8212; or much, much more &#8212; in profits. And if those caps go away, it becomes even harder to imagine the profits will be shared with those who lose out from the development of AGI.</p><h3>Musk's suit</h3><p>A federal judge recently <a href="https://www.reuters.com/legal/elon-musk-openai-head-court-spar-over-nonprofit-conversion-2025-02-04/">announced</a> that parts of Musk's lawsuit against OpenAI will go to trial. The core of Musk's argument is that OpenAI betrayed its founding nonprofit mission by pursuing a for-profit structure. The judge <a href="https://apnews.com/article/elon-musk-openai-lawsuit-sam-altman-3cd261b2a9b04630ec93582020c59ef7">called</a> Musk's claims a "stretch," but decided to let the trial move forward anyway. &#8220;It is plausible that what Mr. Musk is saying is true. We&#8217;ll find out. 
He&#8217;ll sit on the stand,&#8221; she <a href="https://apnews.com/article/elon-musk-openai-lawsuit-sam-altman-3cd261b2a9b04630ec93582020c59ef7">said</a>.</p><p>Musk is a particularly unsympathetic plaintiff here, given that OpenAI has <a href="https://openai.com/index/openai-elon-musk/">published emails</a> showing that he knew about the for-profit plans years before they took place. He has since founded a different for-profit AI company, xAI. But that's not to say that he doesn't have a point.</p><p>Encode, a nonprofit advocacy group that co-sponsored California's AI safety bill SB 1047, also joined the fray in December, <a href="https://www.courtlistener.com/docket/69013420/72/musk-v-altman/">filing</a> an amicus brief in support of Musk's position, arguing that OpenAI's conversion would "undermine" its mission to develop transformative technology safely and for public benefit.</p><p>(Encode receives funding from the <a href="https://omidyar.com/">Omidyar Network</a>, where I am currently a <a href="https://omidyar.com/omidyar-network-announces-fifth-class-of-reporters-in-residence/">Reporter in Residence</a>.)</p><p>The brief <a href="https://www.courtlistener.com/docket/69013420/72/musk-v-altman/">argues</a> that if we truly are on the cusp of artificial general intelligence (AGI), "the public has a profound interest in having that technology controlled by a public charity legally bound to prioritize safety and the public benefit rather than an organization focused on generating financial returns for a few privileged investors." The brief authors point out that OpenAI's conversion would replace its "fiduciary duty to humanity" with a legal requirement to balance public benefit against "the pecuniary interests of stockholders." 
Most strikingly, they argue that control over AGI development is a "priceless" charitable asset that shouldn't be sold at any price, since OpenAI itself claims AGI will be "built exactly once" and could so transform society that "money itself ceases to have value."</p><p>As Encode's brief highlights, this is about more than just corporate drama. OpenAI is explicitly trying to build AGI, which the company <a href="https://openai.com/our-structure/">defines</a> as "a highly autonomous system that outperforms humans at most economically valuable work." Altman, <a href="https://www.safe.ai/work/statement-on-ai-risk">along with</a> hundreds of leading AI researchers, has warned that the technology could result in human extinction.</p><p>The legal context here is important. The California and Delaware Attorneys General (AGs) each have the ability to void the for-profit conversion. Experts tell me that the key question is whether the nonprofit is fairly compensated.</p><p>Also in late December, Delaware AG Kathleen Jennings <a href="https://lawprofessors.typepad.com/files/delawareagamicusbrief-musk-v.-altman.pdf">filed</a> her own amicus brief making clear her office would scrutinize any restructuring. 
The brief emphasized that she has both the authority and responsibility to ensure OpenAI's transaction protects the public interest.</p><p>Under Delaware law, she writes, the AG must review whether:</p><ul><li><p>The charitable purpose of OpenAI's assets would be lost or impaired</p></li><li><p>Any for-profit entity will adhere to the existing charitable purpose</p></li><li><p>OpenAI's directors are meeting their fiduciary duties</p></li><li><p>The transaction satisfies Delaware's "entire fairness" test</p></li></ul><p>Most notably, the brief warned that if the AG concludes the restructuring isn't consistent with OpenAI's mission or that board members aren't fulfilling their duties, "Delaware will not hesitate to take appropriate action to protect the public interest."</p><p>That's why the nonprofit board's control was supposed to matter in the first place. It was meant to ensure that if OpenAI succeeded in building AGI, the technology would benefit humanity as a whole rather than just enriching investors. Its fiduciary duty is literally to all of humanity.</p><h3>The stakes</h3><p>And humanity needs all the help it can get.</p><p>The first-ever <a href="https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf">International AI Safety Report</a> &#8212; AI's closest thing to an Intergovernmental Panel on Climate Change (IPCC) report &#8212; just came out. It's not a comforting read.</p><p>The report, backed by 30 countries and authored by over 100 AI experts, outlines several concerning pathways to "<a href="https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf#page=100">loss of control</a>" &#8212; scenarios where AI systems operate outside of meaningful human oversight. 
It warns that more capable AI systems are <a href="https://www.apolloresearch.ai/research/scheming-reasoning-evaluations">beginning</a> to display early versions of "control-undermining capabilities" like deception, strategic planning, and "theory of mind" (the ability to model human beliefs and intentions). The authors observe that in addition to getting better at scheming against us, AI models have <a href="https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf#page=79">rapidly improved</a> at tasks required for making biological and chemical weapons &#8212; a pairing that should give us all pause.</p><p>Perhaps most worryingly, the authors note that competitive pressures between companies and countries could lead them to accept larger risks to stay ahead, making proper safety measures less likely. While experts disagree on timing and likelihood, the report emphasizes that if extremely rapid progress occurs, it's impossible to rule out loss-of-control scenarios within the next several years.</p><p>And in the last nine months, OpenAI has seen an <a href="https://fortune.com/2024/08/26/openai-agi-safety-researchers-exodus/">exodus</a> of safety researchers &#8212; many of whom are warning that the world is <a href="https://garrisonlovely.substack.com/p/end-of-an-era-openais-agi-readiness">not ready</a> for what the company is trying to build. Late last month, another safety researcher announced his departure, <a href="https://x.com/sjgadler/status/1883928203800265023">tweeting</a> that &#8220;No lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time.&#8221;</p><p>The question at hand isn't just whether $97.4 billion is a fair price for the nonprofit's control.
It's whether any price is high enough to justify removing the guardrails designed to keep humanity's interests ahead of investor returns at this critical moment in AI's development.</p><div><hr></div><h3>Some other recent writing of mine</h3><p>Earlier in the month, I <a href="https://sfstandard.com/opinion/2025/02/01/marc-andreessen-just-wants-you-to-think-deepseek-is-a-sputnik-moment/">published</a> an op-ed in the SF Standard titled "Marc Andreessen just wants you to think DeepSeek is a Sputnik moment." Here&#8217;s the accompanying <a href="https://x.com/GarrisonLovely/status/1885810248017125677">thread</a>.</p><p>I also wrote a <a href="https://x.com/GarrisonLovely/status/1887284354533236827">thread</a> criticizing a NY <em>Times</em> opinion piece by Zeynep Tufekci on DeepSeek that began like this:</p><blockquote><p>I'm usually a fan of Zeynep's work, but this piece gets things exactly backwards. Her core argument is that DeepSeek makes "nonsense" of US efforts to contain the spread of AI, which have largely involved restricting the types of AI chips China can access. 
</p></blockquote><p>Zeynep <a href="https://x.com/zeynep/status/1887479700118700059">said</a> she will circle back, and I look forward to her reply!</p><p>I'll probably have more to say on DeepSeek in the future, but for now, I&#8217;ll leave you with this awesome illustration Kyle Victory did for my piece:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mosJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b683525-6a10-401e-bc10-9c3abacd3b1d_2400x1600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mosJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b683525-6a10-401e-bc10-9c3abacd3b1d_2400x1600.png 424w, https://substackcdn.com/image/fetch/$s_!mosJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b683525-6a10-401e-bc10-9c3abacd3b1d_2400x1600.png 848w, https://substackcdn.com/image/fetch/$s_!mosJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b683525-6a10-401e-bc10-9c3abacd3b1d_2400x1600.png 1272w, https://substackcdn.com/image/fetch/$s_!mosJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b683525-6a10-401e-bc10-9c3abacd3b1d_2400x1600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mosJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b683525-6a10-401e-bc10-9c3abacd3b1d_2400x1600.png" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b683525-6a10-401e-bc10-9c3abacd3b1d_2400x1600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;A photo illustration depicting a portrait and shapes in black and red.&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="A photo illustration depicting a portrait and shapes in black and red." title="A photo illustration depicting a portrait and shapes in black and red." srcset="https://substackcdn.com/image/fetch/$s_!mosJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b683525-6a10-401e-bc10-9c3abacd3b1d_2400x1600.png 424w, https://substackcdn.com/image/fetch/$s_!mosJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b683525-6a10-401e-bc10-9c3abacd3b1d_2400x1600.png 848w, https://substackcdn.com/image/fetch/$s_!mosJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b683525-6a10-401e-bc10-9c3abacd3b1d_2400x1600.png 1272w, https://substackcdn.com/image/fetch/$s_!mosJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b683525-6a10-401e-bc10-9c3abacd3b1d_2400x1600.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div>]]></content:encoded></item><item><title><![CDATA[Is AI Hitting a Wall or Moving Faster Than Ever?]]></title><description><![CDATA[My latest in TIME plus assessing media coverage of o3]]></description><link>https://www.obsolete.pub/p/is-ai-hitting-a-wall-or-moving-faster</link><guid isPermaLink="false">https://www.obsolete.pub/p/is-ai-hitting-a-wall-or-moving-faster</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Thu, 09 Jan 2025 22:15:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F529d2fa3-3fe0-4228-b0e3-19e22b3beada_1545x1703.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>[MAY 17, 2025 UPDATE: This piece cited a paper by Aidan
Toner-Rodgers that has been <a href="https://economics.mit.edu/news/assuring-accurate-research-record">retracted</a> by MIT after it &#8220;conducted an internal, confidential review and concluded that the paper should be withdrawn from public discourse.&#8221;]</p><p>Depending on who you follow, you might think AI is hitting a wall or that it's moving faster than ever.</p><p>I was open to the former and even <a href="https://x.com/GarrisonLovely/status/1801013522580611359">predicted</a> back in June that the jump to the next generation of language models, like GPT-5, would disappoint. But I now think the evidence points toward progress continuing and maybe even accelerating. </p><p>This is primarily thanks to the <a href="https://garrisonlovely.substack.com/i/151579244/economics-of-ai">advent</a> of new "<a href="https://openai.com/index/learning-to-reason-with-llms/">reasoning</a>" models like OpenAI's o-series and <a href="https://www.deepseek.com/">DeepSeek</a>, a Chinese open-weight model that is <a href="https://lmarena.ai/?leaderboard">nipping</a> at the heels of the American frontier. In essence, these models spend more time and compute on inference, "thinking" about harder prompts, instead of just spitting out an answer.</p><p>In my June prediction, I <a href="https://x.com/GarrisonLovely/status/1801013524371538283">wrote</a> that "We haven't seen anything more than marginal improvements in the year+ since GPT-4." But I now think I was wrong. </p><p>Instead, there's a widening gap between AI's public face and its true capabilities.
I wrote about this in a TIME Ideas <a href="https://time.com/7205359/why-ai-progress-is-increasingly-invisible/">essay</a> that published yesterday. I hit on many of its key points here, but there's a lot more in the full piece, and I encourage you to check it out!</p><h3>AI progress isn't stalling &#8212; it's becoming increasingly illegible</h3><p>I argued that while everyday users still encounter chatbots that can't count the "Rs" in "strawberry" and the media declares an AI slowdown, behind the scenes, AI is rapidly advancing in technical domains that may end up turbo-charging everything else.</p><p>For example, in ~1 year, AI <a href="https://epoch.ai/data/ai-benchmarking-dashboard">went from</a> barely beating random chance to surpassing experts on PhD-level science questions. OpenAI <a href="https://garrisonlovely.substack.com/p/we-are-in-a-new-paradigm-of-ai-progress#:~:text=In%20September%2C%20o1,domains%20of%20expertise.">says</a> that its latest model, o3, now beats human experts in their own field by <em>nearly 20%.</em></p><p>However, as I <a href="https://time.com/7205359/why-ai-progress-is-increasingly-invisible/">wrote</a> in TIME:</p><blockquote><p>the vast majority of people won't notice this kind of improvement because they aren't doing graduate-level science work. But it will be a huge deal if AI starts meaningfully accelerating research and development in scientific fields, and there is some evidence that such an acceleration is already happening. A groundbreaking <a href="https://aidantr.github.io/files/AI_innovation.pdf">paper</a> by Aidan Toner-Rodgers at MIT recently found that material scientists assisted by AI systems "discover 44% more materials, resulting in a 39% increase in patent filings and a 17% rise in downstream product innovation." 
Still, 82% of scientists report that the AI tools reduced their job satisfaction, mainly citing "skill underutilization and reduced creativity."</p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0D6F!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5de0cb66-8226-49b2-834c-1ae469d6eba8_1600x900.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0D6F!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5de0cb66-8226-49b2-834c-1ae469d6eba8_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!0D6F!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5de0cb66-8226-49b2-834c-1ae469d6eba8_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!0D6F!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5de0cb66-8226-49b2-834c-1ae469d6eba8_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!0D6F!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5de0cb66-8226-49b2-834c-1ae469d6eba8_1600x900.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0D6F!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5de0cb66-8226-49b2-834c-1ae469d6eba8_1600x900.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5de0cb66-8226-49b2-834c-1ae469d6eba8_1600x900.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0D6F!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5de0cb66-8226-49b2-834c-1ae469d6eba8_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!0D6F!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5de0cb66-8226-49b2-834c-1ae469d6eba8_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!0D6F!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5de0cb66-8226-49b2-834c-1ae469d6eba8_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!0D6F!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5de0cb66-8226-49b2-834c-1ae469d6eba8_1600x900.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Source: <a href="https://epoch.ai/data/ai-benchmarking-dashboard">Epoch AI</a></figcaption></figure></div><p>In just months, models <a href="https://garrisonlovely.substack.com/p/we-are-in-a-new-paradigm-of-ai-progress">went from</a> 2% to 25% on possibly the <a href="https://epoch.ai/frontiermath/the-benchmark">hardest AI math benchmark</a> in existence.</p><p>And perhaps most significantly, AI systems are getting way better at programming. From the <a href="https://time.com/7205359/why-ai-progress-is-increasingly-invisible/">TIME essay</a>:</p><blockquote><p>In an attempt to provide more realistic tests of AI programming capabilities, researchers <a href="https://arxiv.org/abs/2310.06770">developed</a> <a href="https://www.swebench.com/">SWE-Bench</a>, a benchmark that evaluates how well AI agents can fix actual open problems in popular open-source software. The <a href="https://x.com/GarrisonLovely/status/1866945540644274526">top score</a> on the verified benchmark a year ago was 4.4%.
The top score today is closer to <em><a href="https://garrisonlovely.substack.com/p/we-are-in-a-new-paradigm-of-ai-progress#:~:text=SWE%2DBench%20is%20a%20repository%20of%20real%2Dlife%2C%20unresolved%20issues%20in%20open%20source%20codebases.%20The%20top%20score%20a%20year%20ago%20was%204.4%25.%20The%20top%20score%20at%20the%20start%20of%20December%20was%2055%25.%20OpenAI%20says%20o3%20got%2072%25%20correct.">72%</a></em>, achieved by OpenAI's o3 model.</p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mf_e!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a08f11-6b40-4a8f-b0d6-5e36c859245a_1336x858.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mf_e!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a08f11-6b40-4a8f-b0d6-5e36c859245a_1336x858.png 424w, https://substackcdn.com/image/fetch/$s_!mf_e!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a08f11-6b40-4a8f-b0d6-5e36c859245a_1336x858.png 848w, https://substackcdn.com/image/fetch/$s_!mf_e!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a08f11-6b40-4a8f-b0d6-5e36c859245a_1336x858.png 1272w, https://substackcdn.com/image/fetch/$s_!mf_e!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a08f11-6b40-4a8f-b0d6-5e36c859245a_1336x858.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!mf_e!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a08f11-6b40-4a8f-b0d6-5e36c859245a_1336x858.png" width="1336" height="858" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/30a08f11-6b40-4a8f-b0d6-5e36c859245a_1336x858.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:858,&quot;width&quot;:1336,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:&quot;Chart&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="Chart" srcset="https://substackcdn.com/image/fetch/$s_!mf_e!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a08f11-6b40-4a8f-b0d6-5e36c859245a_1336x858.png 424w, https://substackcdn.com/image/fetch/$s_!mf_e!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a08f11-6b40-4a8f-b0d6-5e36c859245a_1336x858.png 848w, https://substackcdn.com/image/fetch/$s_!mf_e!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a08f11-6b40-4a8f-b0d6-5e36c859245a_1336x858.png 1272w, https://substackcdn.com/image/fetch/$s_!mf_e!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a08f11-6b40-4a8f-b0d6-5e36c859245a_1336x858.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Sources: <a href="https://www.swebench.com/">SWE-Bench</a>, OpenAI, <a href="https://docs.google.com/spreadsheets/d/1MZMTLL39riuUaEiLg537al3qLLbSP2lgUgfcFF2httc/edit?gid=0#gid=0">chart</a> by Garrison Lovely</figcaption></figure></div><p>There have been similarly <a href="https://garrisonlovely.substack.com/p/we-are-in-a-new-paradigm-of-ai-progress">dramatic improvements</a> on other benchmarks for programming, math, and <a href="https://x.com/GarrisonLovely/status/1866945570000212448">machine learning research</a>.
But unless you follow the industry closely, it's very hard to figure out what this actually means.</p><p>Here's an attempt to spell that out from the <a href="https://time.com/7205359/why-ai-progress-is-increasingly-invisible/">TIME essay</a>:</p><blockquote><p>Perhaps the best head-to-head matchup of elite engineers and AI agents was <a href="https://metr.org/AI_R_D_Evaluation_Report.pdf">published</a> in November by METR, a leading AI evaluations group. The researchers created novel, realistic, challenging, and unconventional machine learning tasks to compare human experts and AI agents. While the AI agents beat human experts at two hours of equivalent work, the median engineer won at longer time scales.</p><p>But even at eight hours, the best AI agents still managed to beat well over one-third of the human experts. The METR researchers <a href="https://metr.org/blog/2024-11-22-evaluating-r-d-capabilities-of-llms/">emphasized</a> that there was a "relatively limited effort to set up AI agents to succeed at the tasks, and we strongly expect better elicitation to result in much better performance on these tasks." 
They also highlighted how much cheaper the AI agents were than their human counterparts.</p></blockquote><p>(I'd expect OpenAI's latest model, o3, to do significantly better on the METR evaluation based on its other scores.)</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!z_s9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd377e5f-cdbd-4dc9-bfda-bf859c9a5ed6_1176x772.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!z_s9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd377e5f-cdbd-4dc9-bfda-bf859c9a5ed6_1176x772.png 424w, https://substackcdn.com/image/fetch/$s_!z_s9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd377e5f-cdbd-4dc9-bfda-bf859c9a5ed6_1176x772.png 848w, https://substackcdn.com/image/fetch/$s_!z_s9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd377e5f-cdbd-4dc9-bfda-bf859c9a5ed6_1176x772.png 1272w, https://substackcdn.com/image/fetch/$s_!z_s9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd377e5f-cdbd-4dc9-bfda-bf859c9a5ed6_1176x772.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!z_s9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd377e5f-cdbd-4dc9-bfda-bf859c9a5ed6_1176x772.png" width="1176" height="772" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dd377e5f-cdbd-4dc9-bfda-bf859c9a5ed6_1176x772.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:772,&quot;width&quot;:1176,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!z_s9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd377e5f-cdbd-4dc9-bfda-bf859c9a5ed6_1176x772.png 424w, https://substackcdn.com/image/fetch/$s_!z_s9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd377e5f-cdbd-4dc9-bfda-bf859c9a5ed6_1176x772.png 848w, https://substackcdn.com/image/fetch/$s_!z_s9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd377e5f-cdbd-4dc9-bfda-bf859c9a5ed6_1176x772.png 1272w, https://substackcdn.com/image/fetch/$s_!z_s9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd377e5f-cdbd-4dc9-bfda-bf859c9a5ed6_1176x772.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Source: <a href="https://metr.org/AI_R_D_Evaluation_Report.pdf">METR</a></figcaption></figure></div><p>But I can't put the significance of this research better than METR researcher Chris Painter <a href="https://x.com/ChrisPainterYup/status/1860062461828956580">did</a>. He asks us to imagine a dashboard monitoring various AI risks &#8212; from bioweapons to political manipulation. If any one capability starts rising, we can work to mitigate it. But what happens when the dashboard shows progress in AI's ability to improve itself? Chris writes:</p><blockquote><p>If AI systems are approaching the point where they can improve themselves, quickly teach themselves new capabilities, how can you trust any of the other panels on your dashboard? If this one capability starts to come online, who can say what comes next?</p><p>No one wakes up every morning excited to build an AI system that's explicitly excellent at causing massive harm, but the tech industry has automating AI R&amp;D squarely on its roadmap.
This capability could be a crucial inflection point for humanity, and potentially destabilizing.</p></blockquote><p>For more on how this kind of recursive self-improvement could occur and what might happen next, check out <a href="https://jacobin.com/2024/01/can-humanity-survive-ai#:~:text=Explosion%3A%20The%20Extinction%20Case">this section</a> of my <em>Jacobin</em> cover story.</p><h3>Smarter models can scheme better</h3><p>Meanwhile, there's a <a href="https://www.anthropic.com/research/alignment-faking">bunch</a> of <a href="https://time.com/7205359/why-ai-progress-is-increasingly-invisible/#:~:text=Last%20month%2C%20Apollo,sabotage%2C%20lying%2C%20manipulation.%E2%80%9D">new research</a> <a href="https://x.com/PalisadeAI/status/1872666169515389245">finding</a> that smarter models are more capable of scheming, deception, sabotage, etc.</p><p>In <a href="https://time.com/7205359/why-ai-progress-is-increasingly-invisible/">TIME</a>, I spelled out my fear that this mostly invisible progress will leave us dangerously unprepared for what's coming. I worried that politicians and the public will ignore this AI progress, because they can't see the improvements first-hand. All the while, AI companies will continue to advance toward their goal of automating AI research, bootstrapping the automation of everything else.</p><p>Right now, the industry is mostly <a href="https://www.thenation.com/article/society/california-ai-safety-bill/#:~:text=In%20the%20West,less%20than%20perfect.">self-regulating</a>, and, at least in the US, that looks unlikely to change anytime soon &#8212; unless there's some kind of "warning shot" that motivates action.</p><p>Of course, there may be no warning shots, or we may ignore them. Given that many of the leading figures in the field <a href="https://managing-ai-risks.com/managing_ai_risks.pdf">say</a> "no one currently knows how to reliably align AI behavior with complex values," this is cause for serious concern. 
And the stakes are high, as I wrote in <a href="https://time.com/7205359/why-ai-progress-is-increasingly-invisible/">TIME</a>:</p><blockquote><p>The worst-case scenario is that AI systems become scary powerful but no warning shots are fired (or heeded) before a system permanently escapes human control and <a href="https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks/">acts decisively</a> against us.</p></blockquote><h3>o3 and the media</h3><p>I had already written the first draft of the TIME essay when o3 was announced by OpenAI on December 20th. My timeline was freaking out about the <a href="https://garrisonlovely.substack.com/p/we-are-in-a-new-paradigm-of-ai-progress">shocking gains</a> made on many of these same extremely tough technical benchmarks.</p><p>I thought, 'oh man, I need to rework this essay because of how much o3 undermines the thesis that AI progress is stalling out!' But the mainstream media dramatically under-covered the announcement, with <a href="https://x.com/Kylec1215/status/1871291132594008302">most big news</a> sites making no mention of it at all.</p><p>In fact, the day after the o3 announcement you could find headlines in the <a href="https://www.nytimes.com/2024/12/19/technology/artificial-intelligence-data-openai-google.html">NYT</a>, <a href="https://www.wired.com/story/generative-ai-will-need-to-prove-its-usefulness/">WIRED</a>, <a href="https://www.wsj.com/tech/ai/openai-gpt5-orion-delays-639e7693">WSJ</a>, and <a href="https://www.bloomberg.com/news/videos/2024-12-19/why-ai-is-facing-diminishing-returns-video">Bloomberg</a> suggesting AI progress was slowing down!</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!G3fc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F529d2fa3-3fe0-4228-b0e3-19e22b3beada_1545x1703.png" 
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!G3fc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F529d2fa3-3fe0-4228-b0e3-19e22b3beada_1545x1703.png 424w, https://substackcdn.com/image/fetch/$s_!G3fc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F529d2fa3-3fe0-4228-b0e3-19e22b3beada_1545x1703.png 848w, https://substackcdn.com/image/fetch/$s_!G3fc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F529d2fa3-3fe0-4228-b0e3-19e22b3beada_1545x1703.png 1272w, https://substackcdn.com/image/fetch/$s_!G3fc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F529d2fa3-3fe0-4228-b0e3-19e22b3beada_1545x1703.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!G3fc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F529d2fa3-3fe0-4228-b0e3-19e22b3beada_1545x1703.png" width="1545" height="1703" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/529d2fa3-3fe0-4228-b0e3-19e22b3beada_1545x1703.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1703,&quot;width&quot;:1545,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2170238,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!G3fc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F529d2fa3-3fe0-4228-b0e3-19e22b3beada_1545x1703.png 424w, https://substackcdn.com/image/fetch/$s_!G3fc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F529d2fa3-3fe0-4228-b0e3-19e22b3beada_1545x1703.png 848w, https://substackcdn.com/image/fetch/$s_!G3fc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F529d2fa3-3fe0-4228-b0e3-19e22b3beada_1545x1703.png 1272w, https://substackcdn.com/image/fetch/$s_!G3fc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F529d2fa3-3fe0-4228-b0e3-19e22b3beada_1545x1703.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>This is not a knock on these individual stories, which contain important reporting, analysis, causes for skepticism, etc. But collectively, the mainstream media is painting a misleading picture of the state of AI that makes it more likely we'll be unprepared for what's coming. (Shakeel Hashim of <a href="https://www.transformernews.ai/">Transformer</a> had a great, relevant <a href="https://www.niemanlab.org/2024/12/the-media-reckons-with-agi/">piece</a> on journalism and AGI for Nieman Lab in December.)</p><p>Just as one deep learning paradigm <a href="https://garrisonlovely.substack.com/p/is-deep-learning-actually-hitting">might be stalling out</a>, a <a href="https://garrisonlovely.substack.com/p/we-are-in-a-new-paradigm-of-ai-progress">new one is emerging</a> and iterating faster than ever. There were almost 3 years between GPT-3 and 4, but o3 was announced just ~3.5 months after its predecessor, with huge benchmark gains. There are many reasons this pace might not continue, but to say AI is slowing down seems premature at best.</p><p>The gap between AI's public face and its true capabilities is widening by the month. 
The real question isn't whether AI is hitting a wall &#8212; it's whether we'll see what's coming before it's too late.</p>]]></content:encoded></item><item><title><![CDATA[We are in a New Paradigm of AI Progress]]></title><description><![CDATA[OpenAI's o3 model makes huge gains on the toughest AI benchmarks in the world]]></description><link>https://www.obsolete.pub/p/we-are-in-a-new-paradigm-of-ai-progress</link><guid isPermaLink="false">https://www.obsolete.pub/p/we-are-in-a-new-paradigm-of-ai-progress</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Fri, 20 Dec 2024 23:25:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kHV_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0ea2aa-99cf-47e2-a5ea-a67a02d9f839_1200x586.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Earlier this month, I <a href="https://x.com/GarrisonLovely/status/1866945509975638493">wrote</a>, "There's a vibe that AI progress has stalled out in the last ~year, but I think it's more accurate to say that progress has become increasingly illegible."</p><p>I argued that while AI performance on everyday tasks only got marginally better, systems made massive gains on difficult, technical benchmarks of math, science, and programming. 
If you weren't working in these fields, this progress was mostly invisible, but might end up accelerating R&amp;D in hard sciences and machine learning, which could have massive ripple effects on the rest of the world.</p><p>Today, OpenAI <a href="https://www.youtube.com/watch?v=SKBG1sqdyIU">announced</a> a new model called o3 that turbocharges this trend, obliterating benchmarks that the average person would have no idea how to parse (myself included).</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p><p>A bit over a month ago, Epoch AI <a href="https://epoch.ai/frontiermath/the-benchmark">introduced</a> FrontierMath, "a benchmark of hundreds of original, expert-crafted mathematics problems designed to evaluate advanced reasoning capabilities in AI systems."</p><p>These problems are really fucking hard, and the state-of-the-art (SOTA) performance of an AI model was ~2%. They were also novel and unpublished, to eliminate the risk of data contamination.</p><p>OpenAI says that o3 got <em>25%</em> of these problems correct.</p><p>Terence Tao, perhaps the greatest living mathematician, said that the hardest of these problems are "extremely challenging... I think they will resist AIs for several years at least.&#8221;</p><p>Jaime Sevilla, director of <a href="https://epoch.ai/">Epoch AI</a>, wrote that the results were "far better than our team expected so soon after release. AI has hit a wall, and smashed it through."</p><p>Buck Shlegeris, CEO of the AI safety nonprofit <a href="https://www.redwoodresearch.org/">Redwood Research</a>, wrote to me that, "the FrontierMath results were very surprising to me. 
I expected it to take more than a year to get this performance."</p><p>FrontierMath was created in part because models were so quickly "saturating" other benchmarks that those benchmarks stopped being useful differentiators.</p><h3>Other benchmarks</h3><p>o3 significantly improved upon the SOTA in a number of other challenging technical benchmarks of mathematics, hard science questions, and programming.</p><p>In September, o1 <a href="https://openai.com/index/learning-to-reason-with-llms/">first exceeded</a> human domain experts on the <a href="https://arxiv.org/abs/2311.12022">GPQA</a> Diamond benchmark of PhD-level hard science questions, scoring ~78% to the experts' ~70%. o3 now definitively beats them both, getting 88% right. In other words, a single AI system outperforms the average human expert in each of their respective domains of expertise.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kHV_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0ea2aa-99cf-47e2-a5ea-a67a02d9f839_1200x586.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kHV_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0ea2aa-99cf-47e2-a5ea-a67a02d9f839_1200x586.png 424w, https://substackcdn.com/image/fetch/$s_!kHV_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0ea2aa-99cf-47e2-a5ea-a67a02d9f839_1200x586.png 848w, https://substackcdn.com/image/fetch/$s_!kHV_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0ea2aa-99cf-47e2-a5ea-a67a02d9f839_1200x586.png 1272w, 
https://substackcdn.com/image/fetch/$s_!kHV_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0ea2aa-99cf-47e2-a5ea-a67a02d9f839_1200x586.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kHV_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0ea2aa-99cf-47e2-a5ea-a67a02d9f839_1200x586.png" width="1200" height="586" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4e0ea2aa-99cf-47e2-a5ea-a67a02d9f839_1200x586.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:586,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kHV_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0ea2aa-99cf-47e2-a5ea-a67a02d9f839_1200x586.png 424w, https://substackcdn.com/image/fetch/$s_!kHV_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0ea2aa-99cf-47e2-a5ea-a67a02d9f839_1200x586.png 848w, https://substackcdn.com/image/fetch/$s_!kHV_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0ea2aa-99cf-47e2-a5ea-a67a02d9f839_1200x586.png 1272w, 
https://substackcdn.com/image/fetch/$s_!kHV_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0ea2aa-99cf-47e2-a5ea-a67a02d9f839_1200x586.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>o3 also scores high enough on Codeforces programming competition problems to place it as the 175th <a href="https://codeforces.com/ratings">top-scoring</a> human in the world (out of the ~168k users active in the last six months).</p><p><a href="https://www.swebench.com/">SWE-Bench</a> is a repository of real-life, unresolved issues in open source codebases. 
The top score a <a href="https://x.com/GarrisonLovely/status/1866945540644274526">year ago</a> was 4.4%. The top score at the start of December was 55%. OpenAI says o3 got <em>72%</em> correct.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ukOQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90ac120a-1ec6-489b-8605-ba79ca641c1b_1200x603.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ukOQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90ac120a-1ec6-489b-8605-ba79ca641c1b_1200x603.png 424w, https://substackcdn.com/image/fetch/$s_!ukOQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90ac120a-1ec6-489b-8605-ba79ca641c1b_1200x603.png 848w, https://substackcdn.com/image/fetch/$s_!ukOQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90ac120a-1ec6-489b-8605-ba79ca641c1b_1200x603.png 1272w, https://substackcdn.com/image/fetch/$s_!ukOQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90ac120a-1ec6-489b-8605-ba79ca641c1b_1200x603.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ukOQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90ac120a-1ec6-489b-8605-ba79ca641c1b_1200x603.png" width="1200" height="603" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/90ac120a-1ec6-489b-8605-ba79ca641c1b_1200x603.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:603,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ukOQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90ac120a-1ec6-489b-8605-ba79ca641c1b_1200x603.png 424w, https://substackcdn.com/image/fetch/$s_!ukOQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90ac120a-1ec6-489b-8605-ba79ca641c1b_1200x603.png 848w, https://substackcdn.com/image/fetch/$s_!ukOQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90ac120a-1ec6-489b-8605-ba79ca641c1b_1200x603.png 1272w, https://substackcdn.com/image/fetch/$s_!ukOQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90ac120a-1ec6-489b-8605-ba79ca641c1b_1200x603.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>ARC-AGI</h3><p>o3 also <a href="https://arcprize.org/blog/oai-o3-pub-breakthrough">approaches</a> human performance on the ARC-AGI benchmark, which was designed to be hard for AI systems, but relatively doable for humans (i.e. you don't need a PhD to get decent scores). However, it's <em>expensive</em> to get those scores.</p><p>OpenAI researcher Nat McAleese published a <a href="https://x.com/__nmca__/status/1870170098989674833">thread</a> on the results, <a href="https://x.com/__nmca__/status/1870170117755343321">acknowledging</a> "o3 is also the most expensive model ever at test-time," i.e. when the model is being used on tasks. 
Running o3 on ARC-AGI tasks cost between $17 and thousands of dollars per problem &#8212; while humans can solve them for $5-10 each.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!lyfX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40a244bf-c566-4483-88b5-fee8619c5d6a_1544x844.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!lyfX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40a244bf-c566-4483-88b5-fee8619c5d6a_1544x844.png 424w, https://substackcdn.com/image/fetch/$s_!lyfX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40a244bf-c566-4483-88b5-fee8619c5d6a_1544x844.png 848w, https://substackcdn.com/image/fetch/$s_!lyfX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40a244bf-c566-4483-88b5-fee8619c5d6a_1544x844.png 1272w, https://substackcdn.com/image/fetch/$s_!lyfX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40a244bf-c566-4483-88b5-fee8619c5d6a_1544x844.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!lyfX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40a244bf-c566-4483-88b5-fee8619c5d6a_1544x844.png" width="1456" height="796" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/40a244bf-c566-4483-88b5-fee8619c5d6a_1544x844.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:796,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:157770,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!lyfX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40a244bf-c566-4483-88b5-fee8619c5d6a_1544x844.png 424w, https://substackcdn.com/image/fetch/$s_!lyfX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40a244bf-c566-4483-88b5-fee8619c5d6a_1544x844.png 848w, https://substackcdn.com/image/fetch/$s_!lyfX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40a244bf-c566-4483-88b5-fee8619c5d6a_1544x844.png 1272w, https://substackcdn.com/image/fetch/$s_!lyfX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40a244bf-c566-4483-88b5-fee8619c5d6a_1544x844.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This aligns with what I <a href="https://garrisonlovely.substack.com/i/151579244/maybe-agi-will-just-be-really-expensive">wrote</a> last month about how the first AI systems to achieve superhuman performance on certain tasks might actually cost more than humans working on those same tasks. 
However, that probably won't last long if historic cost trends hold.</p><p>McAleese <a href="https://x.com/__nmca__/status/1870170119470563545">agrees</a> (though note that it's bad news for OpenAI if this isn't the case):</p><blockquote><p>My personal expectation is that token prices will fall and that the most important news here is that we now have methods to turn test-time compute into improved performance up to a very large scale.</p></blockquote><p>I'm particularly interested in seeing how o3 performs on <a href="https://arxiv.org/abs/2411.15114">RE-Bench</a>, a set of machine learning problems that may offer the best insight into how well AI agents stack up against expert humans in doing the work that could theoretically lead to an <a href="https://jacobin.com/2024/01/can-humanity-survive-ai#:~:text=Explosion%3A%20The%20Extinction%20Case">explosion</a> in AI capabilities. I would guess that it will be significantly better than the current SOTA, but also significantly more expensive (though still cheaper than the human experts).</p><p>Mike Knoop, co-founder of the ARC-AGI Prize, <a href="https://x.com/mikeknoop/status/1870172132136931512">wrote</a> on X that "o3 is really special and everyone will need to update their intuition about what AI can/cannot do."</p><h3>So is deep learning actually hitting a wall?</h3><p>A bit over a month ago, I <a href="https://garrisonlovely.substack.com/p/is-deep-learning-actually-hitting">asked</a> "Is Deep Learning Actually Hitting a Wall?" following a wave of reports that scaling up models like GPT-4 was no longer resulting in proportional performance gains. There was hope for the industry in the form of OpenAI's o1 approach, which uses reinforcement learning to "think" longer on harder problems, resulting in better performance on some reasoning and technical benchmarks. However, it wasn't clear how the economics of that approach would pencil out or where the ceiling was. 
I concluded:</p><blockquote><p>all things considered, I would not bet against AI capabilities continuing to improve, albeit at a slower pace than the <a href="https://time.com/6300942/ai-progress-charts/">blistering one</a> that has marked the dozen years since <a href="https://en.wikipedia.org/wiki/AlexNet">AlexNet</a> inaugurated the deep learning revolution.</p></blockquote><p>It's hard to look at the results of <a href="https://openai.com/index/introducing-openai-o1-preview/">o1</a> and then the potentially even more impressive results of o3 published ~3 months later and say that AI progress is slowing down. We may even be entering a new world where progress on certain classes of problems happens faster than ever before, all while other domains stagnate.</p><p>Shlegeris alluded to this dynamic in his message to me:</p><blockquote><p>It's interesting that the model is so exceptional at FrontierMath while still only getting 72% on SWEBench-Verified. There are way more humans who are able to beat its SWEBench performance than who are able to get 25% at FrontierMath.</p></blockquote><p>Once more people get access to o3, there will inevitably be widely touted examples of it failing on common-sense tasks, and it may be worse than other models at many tasks (maybe even most of them). These examples will be used to dismiss the genuine capability gains demonstrated here. 
Meanwhile, AI researchers will use o3 and models like it to continue to accelerate their work, bringing us closer to a future where humanity is increasingly rendered obsolete.</p>]]></content:encoded></item><item><title><![CDATA[Is the AI Doomsday Narrative the Product of a Big Tech Conspiracy?]]></title><description><![CDATA[A close reading of what tech titans actually say on the matter]]></description><link>https://www.obsolete.pub/p/is-the-ai-doomsday-narrative-the</link><guid isPermaLink="false">https://www.obsolete.pub/p/is-the-ai-doomsday-narrative-the</guid><dc:creator><![CDATA[Garrison Lovely]]></dc:creator><pubDate>Wed, 04 Dec 2024 05:51:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!gtOx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b4ad91-39f9-4dec-878b-90d75e3603a4_1544x960.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>It&#8217;s still Giving Tuesday somewhere</h3><p><em>Before we dive in, a gentle request for money (but not for me).</em></p><p><em>This Giving Tuesday, I&#8217;m joining twelve other Substackers in <a href="https://www.givedirectly.org/substackers2024/?utm_campaign=obsolete">raising funds</a> for GiveDirectly, a nonprofit that sends no-strings-attached cash to people living in extreme poverty. It works &#8212; <a href="https://www.givedirectly.org/cash-evidence-explorer/">research</a> shows cash transfers have profound, lasting impacts. I worked there for nearly two years, and I&#8217;m thrilled to support their mission again. 
You can read more in the Appendix section.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.givedirectly.org/substackers2024/?utm_campaign=obsolete&quot;,&quot;text&quot;:&quot;Donate to GiveDirectly&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.givedirectly.org/substackers2024/?utm_campaign=obsolete"><span>Donate to GiveDirectly</span></a></p><p><em>It&#8217;s not too late to donate! This campaign is open through the end of the month.</em></p><p><em>Back to our irregularly scheduled programming. This one&#8217;s a bit longer than usual &#8212; it&#8217;s been on my mind for over a year. If you just want to see what tech execs have said on this topic, I made a compilation <a href="https://garrisonlovely.substack.com/p/a-compilation-of-tech-executives">here</a>.</em></p><div><hr></div><h3>The AI industry&#8217;s existential angst</h3><p>The leaders of the world&#8217;s most capable AI companies have done something unusual: they&#8217;ve all <a href="https://www.safe.ai/work/statement-on-ai-risk">stated publicly</a> that the technology they are building might end the world.</p><p>Executives at the top-three AI companies, OpenAI, Anthropic, and Google DeepMind, are all on the record saying that AI could lead to human extinction. 
Elon Musk, who runs his own leading AI company, <a href="https://www.wsj.com/tech/ai/elon-musk-x-open-ai-03ff1ead">xAI</a>, has been loudly warning about AI existential risk (x-risk) for <a href="https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat">over a decade</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.obsolete.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.obsolete.pub/subscribe?"><span>Subscribe now</span></a></p><p>(The typical <a href="https://jacobin.com/2024/01/can-humanity-survive-ai#:~:text=common%20x%2Drisk%20argument%20goes%3A%20once%20AI%20systems%20reach%20a%20certain%20threshold%2C%20they%E2%80%99ll%20be%20able%20to%20recursively%20self%2Dimprove%2C%20kicking%20off%20an%20%E2%80%9Cintelligence%20explosion.%E2%80%9D%20If%20a%20new%20AI%20system%20becomes%20smart%C2%A0%E2%80%94%C2%A0or%20just%20scaled%20up%C2%A0%E2%80%94%C2%A0enough%2C%20it%20will%20be%20able%20to%20permanently%20disempower%20humanity">thinking</a> is that the first system to rival humans across the board, often called artificial general intelligence (AGI), could <a href="https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks/#:~:text=If%20we%20are%20not,all%20over%20the%20world.">recursively self-improve</a> until it is capable or scaled up enough to be able to overpower all of humanity. Humans could effectively find themselves subject to the whims of a new dominant species that treats us the way we treat non-human animals, as a curiosity, an instrument, or an afterthought.)</p><p>This strange phenomenon has generated lots of attention and a heap of skepticism. 
One common argument is that these executives are cynically drumming up fears about AI-driven extinction to hype up their products and/or defer regulatory action to some future, unspecified date. Some go as far as to say that the prominence of the AI x-risk narrative is actually a result of a Big Tech conspiracy, designed to <a href="https://jacobin.com/2024/01/can-humanity-survive-ai#:~:text=However%2C%20the%20more,or%20less%20biased.">distract</a> from the technology&#8217;s immediate harms and embarrassing shortcomings. <a href="https://www.latimes.com/business/technology/story/2023-03-31/column-afraid-of-ai-the-startups-selling-it-want-you-to-be">Popularized</a> <a href="https://x.com/timnitGebru/status/1641290908363804673">by</a> <a href="https://x.com/mer__edith/status/1641426267118661638">critics</a> on the left, variants of this argument have recently been advanced by <a href="https://x.com/GarrisonLovely/status/1818683717142876626">Ted Cruz</a> and <a href="https://x.com/GarrisonLovely/status/1815490153240268924">JD Vance</a>.</p><p>It&#8217;s a compelling story and plays to a warranted skepticism of the people who are warning about AI&#8217;s risks, while seemingly doing everything they can to accelerate its progress.</p><p>The statements made by the executives at the top-three AI companies on x-risk are well-documented, so I won&#8217;t go through them exhaustively here. But far less attention has been paid to what other tech titans have said on the matter.</p><p>This might be because their statements undermine the narrative, but it might just be because they don&#8217;t say interesting things! 
Instead, the more typical Big Tech CEO says something like <em>AI promises to transform the world, overwhelmingly for the better, but we should still be careful to manage its risks (which actually aren&#8217;t that bad).</em></p><p>While the rise of AI has shuffled around the ranking of the most valuable tech companies, they&#8217;ve all done <em>very</em> well in the two years since ChatGPT was released. </p><p>At the time of this writing, nine of the world&#8217;s ten <a href="https://companiesmarketcap.com/">most valuable companies</a> are tech firms, and all are worth over one trillion dollars. Nvidia, which makes the vast majority of the GPUs used to train large AI models, <a href="https://docs.google.com/spreadsheets/d/1pvB6KLelbcEkw-Oks3-nKGiXFCw-rVfcwqogZLFOhD0/edit?gid=843273406#gid=843273406">grew</a> a staggering <em>701%</em>, from $421 billion to <em>$3.4 trillion</em>. It now trades places with Apple for the title of &#8216;most valuable company in the world.&#8217;</p><p>All of these tech giants outperformed the S&amp;P 500 in this period. 
In fact, <em><a href="https://docs.google.com/spreadsheets/d/1pvB6KLelbcEkw-Oks3-nKGiXFCw-rVfcwqogZLFOhD0/edit?gid=1024169749#gid=1024169749">half</a> </em>of all growth in the S&amp;P 500 over the last two years is attributable to just these nine companies.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gtOx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b4ad91-39f9-4dec-878b-90d75e3603a4_1544x960.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gtOx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b4ad91-39f9-4dec-878b-90d75e3603a4_1544x960.png 424w, https://substackcdn.com/image/fetch/$s_!gtOx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b4ad91-39f9-4dec-878b-90d75e3603a4_1544x960.png 848w, https://substackcdn.com/image/fetch/$s_!gtOx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b4ad91-39f9-4dec-878b-90d75e3603a4_1544x960.png 1272w, https://substackcdn.com/image/fetch/$s_!gtOx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b4ad91-39f9-4dec-878b-90d75e3603a4_1544x960.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gtOx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b4ad91-39f9-4dec-878b-90d75e3603a4_1544x960.png" width="1456" height="905" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/41b4ad91-39f9-4dec-878b-90d75e3603a4_1544x960.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:905,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!gtOx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b4ad91-39f9-4dec-878b-90d75e3603a4_1544x960.png 424w, https://substackcdn.com/image/fetch/$s_!gtOx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b4ad91-39f9-4dec-878b-90d75e3603a4_1544x960.png 848w, https://substackcdn.com/image/fetch/$s_!gtOx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b4ad91-39f9-4dec-878b-90d75e3603a4_1544x960.png 1272w, https://substackcdn.com/image/fetch/$s_!gtOx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b4ad91-39f9-4dec-878b-90d75e3603a4_1544x960.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Data from CompaniesMarketCap.com, chart by me</figcaption></figure></div><p>So if their interests are served by the AI x-risk narrative, we should expect tech executives to be some of its biggest proselytizers.</p><p>But in practice, Musk is the only current leader of a Big Tech firm to state explicitly that AI poses an x-risk.</p><p>Last year, Bill Gates signed onto an <a href="https://www.safe.ai/work/statement-on-ai-risk">open letter</a> stating that &#8220;Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.&#8221; But he <a href="https://www.forbes.com/sites/alexkonrad/2023/02/06/bill-gates-openai-microsoft-ai-hottest-topic-2023/">hasn&#8217;t held</a> a formal role at Microsoft since 2020.</p><h3>So what do they actually say in public?</h3><p>The other tech titans generally dismiss the possibility of extinction entirely while arguing that the benefits of AI will significantly outweigh any risks. 
As in, just what you&#8217;d expect from corporate executives. If you&#8217;re interested in the extended quotes, I compiled them <a href="https://garrisonlovely.substack.com/p/a-compilation-of-tech-executives">here</a>.</p><p>Microsoft CEO Satya Nadella essentially waves away x-risk concerns, <a href="https://www.wired.com/story/microsofts-satya-nadella-is-betting-everything-on-ai/">telling</a> WIRED in June 2023 he's &#8220;not at all worried about AGI showing up, or showing up fast.&#8221; Instead, he frames AI as potentially &#8220;bigger than the industrial revolution,&#8221; bringing abundance to all eight billion people on Earth.</p><p>Microsoft CTO Kevin Scott, however, was one of the few Big Tech executives to sign the <a href="https://www.safe.ai/work/statement-on-ai-risk">extinction letter</a>.</p><p>Meta's Mark Zuckerberg has been consistently dismissive. In a 2016 interview, he was <a href="https://www.businessinsider.com/mark-zuckerberg-doesnt-worry-about-ai-overtaking-humans-2016-2">asked</a> if fear of AI takeover was &#8220;valid&#8221; or &#8220;hysterical.&#8221; His reply: &#8220;more hysterical.&#8221;</p><p>And here&#8217;s Zuckerberg in <a href="https://www.theverge.com/2024/4/18/24134370/mark-zuckerberg-meta-interview-llama-3-ai-assistant-race">April</a>:</p><blockquote><p>In terms of all of the concerns around the more existential risks, I don't think that anything at the level of what we or others in the field are working on in the next year is really in the ballpark of those types of risks.</p></blockquote><p>This is a far more precise answer, implying some movement toward taking the possibility more seriously &#8212; but it&#8217;s still a far cry from hyping up x-risk.</p><p>Amazon's leadership is similarly optimistic.
Back in 2018, company chairman Jeff Bezos <a href="https://www.cnbc.com/2018/05/11/jeff-bezos-on-ai-robots-wont-take-all-our-jobs.html">said</a>, &#8220;The idea that there is going to be a general AI overlord that subjugates us or kills us all, I think, is not something to worry about. I think that is overhyped.&#8221; He doubled down on Lex Fridman&#8217;s podcast last December, <a href="https://lexfridman.com/jeff-bezos-transcript">saying</a>, &#8220;These powerful tools are much more likely to help us and save us even than they are to, on balance, hurt us and destroy us.&#8221;</p><p>Amazon CEO Andy Jassy appears to stick to pure optimism. In an April 2024 <a href="https://www.aboutamazon.com/news/company-news/amazon-ceo-andy-jassy-2023-letter-to-shareholders#:~:text=Unlike%20the%20mass%20modernization%20of%20on%2Dpremises%20infrastructure%20to%20the%20cloud%2C%20where%20there%E2%80%99s%20work%20required%20to%20migrate%2C%20this%20GenAI%20revolution%20will%20be%20built%20from%20the%20start%20on%20top%20of%20the%20cloud.%20The%20amount%20of%20societal%20and%20business%20benefit%20from%20the%20solutions%20that%20will%20be%20possible%20will%20astound%20us%20all.">letter</a> to shareholders, he gushed about the &#8220;GenAI revolution,&#8221; writing that &#8220;The amount of societal and business benefit from the solutions that will be possible will astound us all&#8221; &#8212; while remaining silent on extinction risks.</p><p>When Apple CEO Tim Cook was <a href="https://forum.effectivealtruism.org/posts/gZhrqihqSEvbtTBpi/tim-cook-was-asked-about-extinction-risks-from-ai">asked</a> about AI extinction on <em>Good Morning America</em> in June 2023, he called for regulation but stayed mum on the actual question.</p><p>Nvidia CEO Jensen Huang, who has perhaps benefited more than anyone from the AI boom, has been remarkably cavalier about AI risk.
Take this passage from a November 2023 <em>New Yorker</em> <a href="https://www.newyorker.com/magazine/2023/12/04/how-jensen-huangs-nvidia-is-powering-the-ai-revolution">profile</a>, after the author voices his dread at the seemingly imminent obsolescence of humanity:</p><blockquote><p>Huang, rolling a pancake around a sausage with his fingers, dismissed my concerns. &#8220;I know how it works, so there&#8217;s nothing there,&#8221; he said. &#8220;It&#8217;s no different than how microwaves work.&#8221; I pressed Huang &#8212; an autonomous robot surely presents risks that a microwave oven does not. He responded that he has never worried about the technology, not once. &#8220;All it&#8217;s doing is processing data,&#8221; he said. &#8220;There are so many other things to worry about.&#8221;</p></blockquote><p>But Huang's public comments in the <a href="https://www.newyorker.com/magazine/2023/12/04/how-jensen-huangs-nvidia-is-powering-the-ai-revolution">profile</a> reveal some tension. At a speaking engagement, he acknowledged concerns about &#8220;doomsday AIs&#8221; that could learn and make decisions autonomously, insisting that &#8220;No AI should be able to learn without a human in the loop.&#8221; In response to an audience question, he predicted that AI &#8220;reasoning capability is two to three years out&#8221; &#8212; a timeline that sent murmurs through the crowd.</p><p>(It&#8217;s not clear to me if Huang is claiming that AI will never be able to learn without human involvement, or if he&#8217;s saying it should never be allowed to happen. The former is at odds with the <a href="https://managing-ai-risks.com/managing_ai_risks.pdf">position</a> of the most cited researchers in the field.
The latter will not happen without regulation, as the <a href="https://jacobin.com/2024/01/can-humanity-survive-ai">competitive pressures</a> will prompt us to cede more and more decision-making power to autonomous systems.)</p><p>I couldn&#8217;t find any public statements on this topic from CC Wei, chief executive of TSMC, which manufactures <a href="https://finance.yahoo.com/news/why-taiwan-semiconductor-stock-moving-190828264.html">almost all</a> the semiconductors used in advanced AI development.</p><p>Google is a bit more complicated. In an interview largely buried in the paywalled &#8220;Overtime&#8221; <a href="https://www.paramountplus.com/shows/video/OhGyPf1sEEvxo_VWlRJS8RaHGFnVR97Z/?ftag=CNM-00-10abb6c">section</a> of an April 2023 episode of <em>60 Minutes</em>, CBS News&#8217; Scott Pelley speaks with Google CEO Sundar Pichai, who says:</p><blockquote><p>I&#8217;ve always thought of AI as the most profound technology humanity is working on. More profound than fire or electricity or anything that we&#8217;ve done in the past&#8230; We are developing technology which, for sure, one day, will be far more capable than anything we&#8217;ve ever seen before.</p></blockquote><p>Later on, there&#8217;s this notable exchange:</p><blockquote><p>Pelley: What are the downsides?</p><p>Pichai: I mean the downside is, at some point, that humanity loses control of the technology it&#8217;s developing</p><p>Pelley (voice over): Control, when it comes to disinformation and generating fake images.</p></blockquote><p>I&#8217;m not sure if Pelley is just paraphrasing what Pichai says next, or speculating as to what he meant, but it&#8217;s hard to reconcile the claims that AI will inevitably be far more capable than any past technology, that we might lose control of it, and that the biggest downside will be <em>disinformation</em>.</p><p>(Props to Jason Aten for finding and <a 
href="https://www.inc.com/jason-aten/with-1-sentence-googles-ceo-just-explained-biggest-downside-of-ai-its-a-warning-for-all-of-us.html">writing up</a> this interview.)</p><p>Former Google CEO Eric Schmidt has publicly <a href="https://www.cnbc.com/2023/05/24/ai-poses-existential-risk-former-google-ceo-eric-schmidt-says.html">expressed concern</a> that AI poses an &#8220;existential risk,&#8221; which he defined as &#8220;many, many, many, many people harmed or killed,&#8221; at a <em>Wall Street Journal</em> <a href="https://www.wsj.com/video/events/life-with-ai/F13F8BDE-2AB5-4BA1-9191-997FC338C1AF.html">event</a> in May 2023. However, he has consistently emphasized &#8220;misuse&#8221; risk, i.e. a bad actor uses AI to cause harm, rather than &#8220;misalignment&#8221; risk, i.e. humanity loses control of a powerful AI.</p><p>Where you come down on the relative risk of <a href="https://aiimpacts.org/misalignment-and-misuse-whose-values-are-manifest/">misuse vs. misalignment</a> tends to have significant implications for where you fall on the spectrum from &#8216;<a href="https://situational-awareness.ai/the-free-world-must-prevail/">race China to build AGI first&#8217;</a> on one side to &#8216;<a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">shut it all down</a> because AGI will kill everyone by default&#8217; on the other (this continuum likely warrants its own future post).</p><p>Schmidt advocates for racing China to build AGI, <a href="https://www.thecrimson.com/article/2024/11/19/eric-schmidt-china-ai-iop-forum/">arguing</a> at Harvard in November that even a few months' lead could provide &#8220;a very, very profound advantage.&#8221; He also claims that the US is now falling behind China in the AI race &#8212; an abrupt departure from his <a href="https://www.businessinsider.com/eric-schmidt-comments-china-behind-united-states-ai-2024-5">assessment</a> to <a 
href="https://www.bloomberg.com/news/videos/2024-05-07/schmidt-says-he-considered-buying-tiktok">Bloomberg</a> only six months earlier that the US was &#8220;two or three years&#8221; ahead of China on AI.</p><p>Despite acknowledging the need for some international guardrails, like restrictions on autonomous weapons, Schmidt <a href="https://www.thecrimson.com/article/2024/11/19/eric-schmidt-china-ai-iop-forum/">tells</a> the Harvard audience that he remains fundamentally optimistic about AI. And in a November <a href="https://www.economist.com/by-invitation/2024/11/19/middle-powers-can-thrive-in-the-age-of-ai-says-eric-schmidt">essay</a> in the <em>Economist</em>, he writes that the technology could reset the &#8220;baseline of human wealth and well-being,&#8221; concluding that &#8220;Just that possibility itself demands that we pursue it.&#8221;</p><p>This seemingly contradictory position &#8212; emphasizing AI's destructive potential while advocating for its accelerated development &#8212; may be both a cause and a consequence of Schmidt&#8217;s <a href="https://defensescoop.com/2023/09/08/eric-schmidt-led-panel-pushing-for-new-defense-experimentation-unit-to-drive-military-adoption-of-generative-ai/">deep ties</a> to the national security establishment, like his past leadership of both the National Security Commission on Artificial Intelligence and the Pentagon's Defense Innovation Board.</p><p>Google co-founder Sergey Brin does not appear to have made any public statements on AI x-risk.</p><h3>What do they (reportedly) say in private?</h3><p>It&#8217;s possible that these deca-billionaires are singing a different tune in private, but privacy doesn&#8217;t mean as much when you&#8217;re that famous.</p><p>According to <a href="https://www.vanityfair.com/news/2023/09/artificial-intelligence-industry-future#:~:text=While%20Page%20stays,soon%20as%20possible.%E2%80%9D">multiple</a> <a 
href="https://time.com/6310076/elon-musk-ai-walter-isaacson-biography/">independent</a> <a href="https://www.nytimes.com/2023/12/03/technology/ai-openai-musk-page-altman.html?searchResultPosition=8">sources</a>, Google&#8217;s other co-founder, Larry Page, thinks AI could kill us all &#8212; he just doesn&#8217;t seem to care. In private settings, he&#8217;s reportedly dismissed efforts to prevent AI-driven extinction as &#8220;speciesist&#8221; and &#8220;sentimental nonsense,&#8221; viewing superintelligent AI as &#8220;just the next step in evolution.&#8221;</p><p>Zuckerberg has long had strong feelings about the idea that AI might drive humanity extinct. He wants people to <em>stop talking about it.</em> At least that&#8217;s what he asked of Elon Musk when they first met back in 2014, according to Cade Metz&#8217;s book <em><a href="https://www.amazon.com/Genius-Makers-Mavericks-Brought-Facebook/dp/1524742678">Genius Makers</a></em>. Zuckerberg invited Musk with the intention of getting him to tone it down. He even invited backup from some prominent Facebook researchers, including Yann LeCun (who has himself <a href="https://x.com/ylecun/status/1718670073391378694">become</a> one of the loudest promoters of the idea that AI industrialists are fear-mongering to capture regulators).
Metz writes that during his meeting with Musk, Zuckerberg &#8220;didn&#8217;t want lawmakers and policy makers getting the impression that companies like Facebook would do the world harm with their sudden push into artificial intelligence.&#8221;</p><p>This pattern is clear &#8212; the executives at the biggest tech companies publicly downplay or dismiss existential risks while emphasizing AI's benefits.</p><p>If hyping up x-risk actually serves Big Tech's interests, shouldn&#8217;t we expect to see more titans joining that chorus?</p><h3>And what of the true believers?</h3><p>You can even see a transformation underway in the founders who kicked off the generative AI revolution.</p><p>OpenAI CEO Sam Altman has a long track record of frankly (sometimes glibly) discussing x-risk from AI. However, as OpenAI has gotten closer to profitability and Altman closer to power, the CEO has actually begun <em>downplaying</em> the idea of AI-driven extinction.</p><p>In February 2015, nearly a full year before OpenAI was <a href="https://www.britannica.com/money/OpenAI">formally founded</a>, Altman opened a personal <a href="https://web.archive.org/web/20240215072900/https://blog.samaltman.com/machine-intelligence-part-1">blog post</a> with this stark sentence: &#8220;Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.&#8221; In June 2015, as he was co-founding OpenAI, Altman <a href="https://x.com/liron/status/1760338580562657719">said</a> publicly, &#8220;AI will probably, most likely lead to the end of the world, but in the meantime, there&#8217;ll be great companies.&#8221;</p><p>However, in an interview just before he was briefly dethroned in November 2023, Altman <a 
href="https://www.nytimes.com/2023/11/20/podcasts/hard-fork-sam-altman-transcript.html#:~:text=I%20actually%20don%E2%80%99t%20think%20we%E2%80%99re%20all%20going%20to%20go%20extinct.%20I%20think%20it%E2%80%99s%20going%20to%20be%20great.%20I%20think%20we%E2%80%99re%20heading%20towards%20the%20best%20world%20ever.">said</a>, &#8220;I actually don&#8217;t think we&#8217;re all going to go extinct. I think it&#8217;s going to be great. I think we&#8217;re heading towards the best world ever.&#8221;</p><p>And in a September blog <a href="https://ia.samaltman.com/">post</a> called &#8220;The Intelligence Age,&#8221; Altman doesn&#8217;t mention extinction at all. The only downside he specifies is the &#8220;significant change in labor markets (good and bad)&#8221; to come, but even here, his outlook is rosy:</p><blockquote><p>most jobs will change more slowly than most people think, and I have no fear that we&#8217;ll run out of things to do (even if they don&#8217;t look like &#8220;real jobs&#8221; to us today).</p></blockquote><p>(I documented Altman&#8217;s journey on this topic in greater detail in my <em>Jacobin </em>cover story <a href="https://jacobin.com/2024/01/can-humanity-survive-ai#:~:text=One%20understandable%20source,couple%20of%20generations.%E2%80%9D">here</a>.)</p><p>Last summer, The <em>New York Times</em> <a href="https://www.nytimes.com/2023/07/11/technology/anthropic-ai-claude-chatbot.html">called</a> Anthropic &#8220;the White-Hot Center of AI Doomerism,&#8221; but even its CEO, Dario Amodei, has presented a sunnier perspective as his company has grown. 
In October, he published a 14,000-word <a href="https://darioamodei.com/machines-of-loving-grace">essay</a> called &#8220;Machines of Loving Grace&#8221; that outlined how incredible the world could be if things go well with AI, partly in a conscious attempt to respond to his reputation as a &#8220;doomer.&#8221;</p><p>Of everyone mentioned so far, excluding Musk, I think Amodei is the most genuinely worried about the risks from AI. So it&#8217;s notable that even he is changing his public emphasis.</p><h3>Under pressure</h3><p>I think what&#8217;s happening here is pretty straightforward. The people who are leading the charge on developing general AI systems have long histories of caring a lot about the technology. They thought it would be a huge deal, with nearly boundless upside and downside. As their companies grew, they started dealing with new pressures: investors, lawyers, regulators, and higher-ups (at least in the case of DeepMind once it was acquired by Google).</p><p>Of these pressures, the fear of regulatory action might be the most significant. Most of the AI industry <a href="https://garrisonlovely.substack.com/p/the-tech-industry-is-the-biggest">does not</a> want to be regulated, no matter what they <a href="https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html">may have said</a> to the contrary. 
<a href="https://www.documentcloud.org/documents/25034111-the-honorable-senator-umberg-senate-bill-1047/">Google</a>, <a href="https://www.documentcloud.org/documents/25054090-openai-formal-letter-of-opposition">OpenAI</a>, and <a href="https://www.documentcloud.org/documents/25036015-sb-1047-letter-62524">Meta</a> all came out hard against California <a href="https://garrisonlovely.substack.com/p/all-my-coverage-of-california-ai">Senate Bill 1047</a>, the <a href="https://www.context.news/ai/battle-rages-over-uss-first-binding-ai-safety-bill-in-california">first real attempt</a> to implement binding AI safety guardrails in the US. Few of <a href="https://www.thenation.com/article/society/california-ai-safety-bill/">their arguments</a> against the bill hold up under the slightest scrutiny, but governor Gavin Newsom caved to political and industry pressure to <a href="https://jacobin.com/2024/09/gavin-newsom-ai-tech-bill-sb-1047">kill</a> it anyway.</p><p>If AI companies ever needed to rely on doomsday fears to lure investors and engineers, they definitely don&#8217;t anymore. As governments slowly turn their attention to the industry, most executives seem far more interested in maintaining the <a href="https://jacobin.com/2024/09/gavin-newsom-ai-tech-bill-sb-1047#:~:text=It%20would%20have%20been%20the%20first%20law%20in%20the%20United%20States%20to%20mandate%20that%20these%20companies%20implement%20safeguards%20to%20mitigate%20catastrophic%20risks%2C%20breaking%20from%20the%20tradition%20of%20using%20the%20voluntary%20AI%20safety%20commitments%20preferred%20by%20the%20industry%20and%20national%20lawmakers.">status quo</a> of self-regulation than in promoting the idea that their products pose a risk to the whole world.</p><p>It&#8217;s also possible, of course, that their views actually updated in light of how AI was developing. 
(Though I think it would be a grave mistake to conclude from the fact that ChatGPT mostly complies with developer and user intent that we have any reliable way of controlling an actual machine superintelligence. The top researchers in the field say <a href="https://managing-ai-risks.com/managing_ai_risks.pdf">we don&#8217;t</a>.)</p><h3>Should we just ignore CEOs?</h3><p>Some will see this exhaustive cataloguing as a big waste of time, arguing that these executives are so conflicted that their pronouncements contain no real information. Instead of reading tainted tea leaves, we should just ignore them and focus on what is said by others who have special insight into the technology, but no vested interest in playing its risks up or down. (This might include people like <a href="https://garrisonlovely.substack.com/p/35-yoshua-bengio-on-why-ai-labs-are">Yoshua Bengio</a>, a deep learning pioneer who resisted the siren song of Big Tech and, last year, <a href="https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/">became</a> one of the most prominent voices to warn that AI poses an extinction risk.)</p><p>I think this is a good instinct, but you can still learn things from what these executives do and don&#8217;t say, as well as how their statements have changed over time. And I&#8217;d wager that the things said a <a href="https://www.lesswrong.com/posts/No5JpRCHzBrWA4jmS/q-and-a-with-shane-legg-on-risks-from-ai">decade</a> ago by the founders of the leading AI companies hew pretty closely to their actual views at the time.</p><p>About a year ago, I asked an employee at one of these companies whether the major AI founders took x-risk seriously, and they wrote back:</p><blockquote><p>I don&#8217;t actually think the people talking about x risk are that motivated by hype or regulatory capture. I&#8217;ve discussed this at length with basically everyone relevant in very private settings. 
Usually it&#8217;s quite a genuine concern.</p></blockquote><p>This matches my experience reporting on and interacting with the AI industry. Some of the people most worried about AI are conflicted, but many of them aren&#8217;t. And conflicts alone aren&#8217;t discrediting. Arguments and evidence should be evaluated on their merits.</p><p>And while AI doomsday narratives may not be the result of a Big Tech conspiracy, the real story is far more unsettling: some of the people closest to the technology are genuinely <a href="https://www.vox.com/the-highlight/23447596/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction">terrified</a> of what they're building, while others can't wait to build it faster.</p><div><hr></div><h2>Appendix: More info on GiveDirectly</h2><p>This <a href="https://www.givedirectly.org/substackers2024/?utm_campaign=obsolete">fundraiser</a> is near and dear to me because I spent nearly two years working at GiveDirectly! Unconditional cash transfers work really well &#8212; even better than expected. GiveWell, the rigorous charity evaluator, recently <a href="https://www.givedirectly.org/givewell-2024/">estimated</a> that cash transfers were 3-4 times more effective than they previously thought thanks to <a href="https://www.vox.com/future-perfect/2019/11/25/20973151/givedirectly-basic-income-kenya-study-stimulus">positive</a> <a href="https://www.givedirectly.org/wp-content/uploads/2019/11/General-Equilibrium-Effects-of-Cash-Transfers.pdf">spillover effects</a> (in short, because a dollar spent is a dollar earned).</p><p>I&#8217;m particularly excited about the policy implications of GiveDirectly&#8217;s work and research. 
In addition to helping people who really need it, the nonprofit has been part of a <a href="https://www.givedirectly.org/cash-evidence-explorer/">larger effort</a> to <a href="https://www.usaid.gov/news-information/speeches/dec-08-2022-administrator-samantha-power-at-the-white-house-co-hosted-evidence-forum-us-policy-impact-in-foreign-aid-and-beyond">benchmark</a> social policies against unconditional cash, which, in my view, is a welcome trend in development economics and public policy. (It might be even better to treat cash as the <a href="https://www.givedirectly.org/default/">default</a>.)</p>]]></content:encoded></item></channel></rss>