Human-level AI is Not Inevitable. We Have the Power to Change Course - My Latest in The Guardian
Technology happens because people make it happen. We can choose otherwise.
I was in The Guardian last week arguing that artificial general intelligence (AGI) is not inevitable. Here’s the start of the piece, which is freely available here. Accompanying threads: X (formerly Twitter), LinkedIn, Bluesky, Threads. This is part of Breakthrough, a new series on technology and the left, launched by Guardian US opinion editor Amana Fontanella-Khan.
“Technology happens because it is possible,” OpenAI CEO Sam Altman told the New York Times in 2019, consciously paraphrasing Robert Oppenheimer, the father of the atomic bomb.
Altman captures a Silicon Valley mantra: technology marches forward inexorably.
Another widespread techie conviction is that the first human-level AI – also known as artificial general intelligence (AGI) – will lead to one of two futures: a post-scarcity techno-utopia or the annihilation of humanity.
For countless other species, the arrival of humans spelled doom. We weren’t tougher, faster or stronger – just smarter and better coordinated. In many cases, extinction was an accidental byproduct of some other goal we had. A true AGI would amount to creating a new species, which might quickly outsmart or outnumber us. It could see humanity as a minor obstacle, like an anthill in the way of a planned hydroelectric dam, or a resource to exploit, like the billions of animals confined in factory farms.
Altman, along with the heads of the other top AI labs, believes that AI-driven extinction is a real possibility (joining hundreds of leading AI researchers and prominent figures).
Given all this, it’s natural to ask: should we really try to build a technology that may kill us all if it goes wrong?
Perhaps the most common reply says: AGI is inevitable. It’s just too useful not to build. After all, AGI would be the ultimate technology – what a colleague of Alan Turing called “the last invention that man need ever make”. Besides, so the reasoning goes within AI labs, if we don’t build it, someone else will – less responsibly, of course.
A new ideology out of Silicon Valley, effective accelerationism (e/acc), claims that AGI’s inevitability is a consequence of the second law of thermodynamics and that its engine is “technocapital”. The e/acc manifesto asserts: “This engine cannot be stopped. The ratchet of progress only ever turns in one direction. Going back is not an option.”
For Altman and e/accs, technology takes on a mystical quality – the march of invention is treated as a fact of nature. But it’s not. Technology is the product of deliberate human choices, motivated by myriad powerful forces. We have the agency to shape those forces, and history shows that we’ve done it before.
No technology is inevitable, not even something as tempting as AGI…
Great article!
Garrison,
I'm a volunteer at Pause AI. I quit my Amazon developer job on an ML team to go full-time on AI safety. I decided that activism and public awareness of extinction risk were the biggest levers, despite being far from my own nerd wheelhouse.
I value your writing and thought this was a great article.
I'd really appreciate talking with you in more depth about the psychology of the public AI x-risk (AIXR) conversation.
http://antb.me/meet if interested.