Discussion about this post

Earl Boebert:

What I see here is similar to what my co-author and I discovered when we wrote our well-received account of the Deepwater Horizon disaster. In the late 1990s, the head of BP, to the applause of McKinsey and Stanford Business School (two institutions that should never be let near a high-consequence engineering project), made BP the most efficient producer of oil in the Gulf of Mexico. He did this by stripping all of the redundancy out of the organization. All that was left were forward-looking, "get 'er done, son" elements, and there was nobody left who could say "no" or even alert upper management to the potential for disaster. And disasters followed: two million barrels of oil spilled on the Alaska tundra, 15 dead in the Texas City refinery explosion, and 11 dead and the largest man-made ecological disaster in the history of the U.S. when the Macondo well blew out.

I see OpenAI reconfiguring itself in the same way, applauded by the same finance-first culture for the same reasons, and running the risk of the same kind of multiple catastrophe. Fasten your seat belts, folks, it's going to be a bumpy ride.

Jonathan Grudin:

“OpenAI, as we knew it, is dead.”

I'll suggest instead that OpenAI, as we thought we knew it, was never alive. It was a dream. The dream, shared by employees, investors, and the public, was that software more capable than existing search engines, conversational agents, and generative tools would quickly find revenue streams sufficient to support an ongoing commitment to safe and secure accomplishment rather than to profitability. Those revenue streams have not materialized, the dream dematerialized, and here we are, with no need to assume bad intentions anywhere.
