AI Workers’ Power Is Near Its Peak. They’re Finally Starting to Use It.
Google DeepMind UK employees voted to unionize, but not for higher pay.
Note to readers
I wrote a book! It’s called Obsolete: The AI Industry’s Trillion-Dollar Race to Replace You—and How to Stop It. Preorders through my publisher will go out in June. Wide release is September 15.
It got some great new blurbs this week!
“With admirable clarity, Lovely’s analysis cuts straight through the contentious and dizzying discourse surrounding AI...an invaluable and urgent warning about the threat it poses to democracy—and, as he convincingly argues, to the future of our civilization itself.” — Luke Savage, author of The Dead Center
“A deeply researched exposition of the grim situation humanity finds itself in with respect to AI, and what we can do about it...even-handed while expressing a strong and correct view of what is happening.” — Anthony Aguirre, theoretical physicist; executive director, Future of Life Institute
“Garrison Lovely is exactly what the left needs at this moment in technological history.” — Robert Wright, author of Nonzero and Why Buddhism Is True
And now, a significant development from earlier this week you might have missed.
If you’re a researcher at a leading AI company, you occupy a strange position. Should the technology keep advancing, you — as one of the people most able to shape its direction — are among the most powerful people in the world. But you spend your time frantically trying to render yourself obsolete (on the path to doing the same to the rest of us). Should you succeed, your leverage goes away first, then ours.
At the moment, however, you are an irreplaceable part of the most economically significant process in the world: training successively more capable AI models. This gives you the power to command billion-dollar compensation packages. And in the absence of legislation, your labor power is one of the key levers shaping what the industry does and doesn’t do.
One of the biggest signs that AI employees are recognizing their power and corresponding responsibility came in this week’s news that UK Google DeepMind employees voted to unionize — a first for a frontier AI company. The Guardian broke the story and reported:
One of the workers said they were particularly driven by reports that Google was close to reaching a deal with the defense department and pointed to the US’s “capricious Iran war” and the Trump administration’s feud with Anthropic as indications that the department is “not a responsible partner”.
The aforementioned deal was indeed reached, granting the Pentagon the ability to use Google’s AI for classified work and “any lawful use,” alongside offerings from SpaceX, OpenAI, Nvidia, Reflection, Microsoft, and Amazon Web Services.
Earlier this year, Anthropic famously refused to sign a similar deal without guarantees its technology wouldn’t be used in domestic mass surveillance or lethal autonomous weapons (i.e. killer robots). For this, the Trump administration labeled the company a supply chain risk — a designation that had only ever been applied to companies tied to governments deemed U.S. adversaries. (However, the scary cyber capabilities of Anthropic’s unreleased Mythos model have made the feds more dependent than ever on the company, and both sides are reportedly seeking peace.)
In January, Anthropic CEO Dario Amodei speculated that future systems capable enough to compromise any computer system:
could also use the access obtained in this way to read and make sense of all the world’s electronic communications (or even all the world’s in-person communications, if recording devices can be built or commandeered). It might be frighteningly plausible to simply generate a complete list of anyone who disagrees with the government on any number of issues, even if such disagreement isn’t explicit in anything they say or do.
And, weeks later, the Department of War demanded Anthropic allow its models to be used more or less exactly like that, as Amodei explained to employees in a leaked internal memo:
it is legal for DoW to buy a bunch of private data on US citizens from vendors who have obtained that data in some legal way (often involving hidden consents to sell to third parties) and then analyze it at scale with AI to build profiles of citizens, their loyalties, movement patterns in physical space (the data they can get includes GPS data, etc), and much more.
Notably, near the end of the negotiation the DoW offered to accept our current terms if we deleted a specific phrase about ‘analysis of bulk acquired data,’ which was the single line in the contract that exactly matched this scenario we were most worried about. We found that very suspicious.
In sum, the legal and technical groundwork is there to build a truly dystopian digital surveillance state. The major AI companies, besides Anthropic, are going along with it, reassuring employees and the public with — at best — easily subvertible guardrails. And the Trump administration’s fixation on preserving its right to deploy killer robots and spy on everyone tells us plenty about what they intend to do with their newly terrifying levels of technical capability.
While Anthropic’s Pentagon fight and OpenAI’s willingness to fill the void became huge news, Google’s delayed assent happened in relative obscurity. But amidst the contemptible — if entirely unsurprising — capitulation by AI executives, AI workers are emerging as a rare bulwark against a surging authoritarian wave. And they’re getting organized. The Guardian reported on some of the motivations behind the new DeepMind UK union:
Another worker, who also requested anonymity, said that many at the company had struggled with what they had come to view as their complicity in Israel’s war in Gaza. The company provided the Israeli military with increased access to its AI tools from the early days of the war in Gaza, the Washington Post reported last year, and in 2021, it signed, along with Amazon, a $1.2bn cloud-computing contract with the Israeli government.
Historically, Big Tech has largely staved off unionization efforts with generous pay and benefits. But as company leadership grows increasingly out of step with the rank and file, organizing begins to look more appealing. And tactics that worked in the past, like protests and open letters, are proving ineffective.
Following widespread employee opposition to a Pentagon contract called Project Maven, Google declined to renew the contract in 2019. (Palantir took it over. Incidentally, Maven is how the Pentagon has been using Anthropic’s AI in classified settings.)
Flash forward to 2024, when Google fired 50 employees who had protested a contract with the Israeli government. And earlier this year, nearly 1,000 Google employees joined over 100 OpenAI employees in signing a letter opposing the Pentagon’s use of their models “for domestic mass surveillance and autonomously killing people without human oversight.” More recently, over 600 Google employees signed a letter asking CEO Sundar Pichai to refuse to make the company’s AI systems available for classified work.
But these efforts obviously weren’t enough to stop the latest deals from going through.
Empty promises
The AI industry has been built atop a foundation of voluntary commitments — to governments, to recruits, to employees. With precious few exceptions, like Anthropic’s refusal to bow to the Pentagon, these promises are abandoned as soon as they conflict with material interests. And as the stakes rise, they inevitably do.
You can tell a version of this story using any AI company. For instance, when DeepMind sold to Google in 2014, the founders passed on a competing offer from Facebook that would have left them with twice the payout. Why? They didn’t feel that Mark Zuckerberg shared their ethical concerns around AI, according to Cade Metz’s book Genius Makers. In particular, the Facebook CEO “refused to accept a contractual clause that guaranteed DeepMind’s technology would be overseen by an independent ethics board.”
To land the deal, Google agreed to clauses guaranteeing that DeepMind’s tech would never be used by militaries and that any hypothetical artificial general intelligence (AGI) the company built would be governed by an independent ethics board. That ethics board, which included Google cofounder Larry Page and prominent existential risk scholar Toby Ord, had its first meeting in 2015 — but never had a second one. The New York Times reported that fast AI progress made DeepMind’s founders “increasingly worried about what Google would do with their inventions.” Then:
In 2017, they tried to break away from the company. Google responded by increasing the salaries and stock award packages of the DeepMind founders and their staff. They stayed put.
(One DeepMind researcher wrote to me in late 2023 that they “don’t talk about long-term risk… in the office,” explaining that, “Google is more focused on building the tech and on safety in the sense of legality and offensiveness.”)
In 2018, Google DeepMind and its three founders were the top signatories of an open letter from the Future of Life Institute affirming “that the decision to take a human life should never be delegated to a machine.” (Elon Musk’s name appeared next.)
It took about a decade for DeepMind to abandon its other key commitment. In April 2024, Billy Perrigo reported in Time that Google provides cloud computing services to Israel’s military. In early 2025, DeepMind CEO Demis Hassabis and another Google executive announced changes to the company’s policy on AI, quietly dropping a section that prohibited applications “likely to cause harm.” When Perrigo asked Hassabis if the change was a compromise made in order to keep pursuing AGI, Hassabis said no and pointed to “the much bigger geopolitical uncertainties we have around the world.” He added:
We can’t take for granted anymore democratic values are going to win out — I don’t think that’s clear at all. There are serious threats. So I think we need to work with governments.
Maybe Hassabis had a genuine change of heart. Or maybe he just traded away his principles to stay in the room where it happens — the trade that keeps the AI race going.
Google’s executives are predictable; they’ll do whatever they think will maximize shareholder value (or they’ll eventually find themselves out of a job). Consider chief scientist Jeff Dean, who has been an outspoken critic of ICE’s assaults in Minneapolis and even tweeted this during the Anthropic-Pentagon showdown: “Mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression. Surveillance systems are prone to misuse for political or discriminatory purposes.” But he’s remained silent as technology he played an instrumental role in building is positioned to entrench that same administration.
Promises without anything to back them up should be understood for what they are: marketing.
We should never have been relying on the conscience of individuals to make all this go well — a fact that is becoming increasingly difficult to ignore. We instead need to change the equation, to make capitulation cost more than resistance. The DeepMind union organizers seem to realize this. Here’s The Guardian again:
Workers who voted to join the union said they did so to raise pressure on Google to meet demands already made by other employees at the company, including that it commit not to develop technology “whose primary purpose is to cause harm or injury to people”, establish an independent ethics oversight body, and grant workers the individual right to refuse to contribute to projects on moral grounds. Should the company refuse, they said, they are considering protests and “research strikes”, during which staff abstain from work expected to significantly improve core products such as Gemini, Google’s AI bot, while avoiding detection by continuing to perform less significant updates.
You might ask: why bother with all this? Even if Google backs out of the deal, won’t OpenAI and SpaceX (which now includes xAI) just pick up the slack?
Well, their workers still have plenty of leverage, and they might not be thrilled about their work being used to build a techno-dystopia (one they’ll have to live in, by the way). And there’s immense power in examples and precedents.
When I tell people the title of my book, the most common question I get is: so how do we actually stop it? The short answer is a society-wide mobilization, in which as many people as possible get activated and organized, lobbying for an end to the effort to render us obsolete.
Exactly what that looks like depends largely on the individual. AI workers, for instance, have a lot of influence over whether their bosses sign a contract with the Pentagon, but I don’t expect them to be able to steer their employers away from their core pursuit — inventing our collective obsolescence.
That begins with the industry’s Holy Grail: AI systems that can fully automate AI R&D. There’s growing talk that this milestone could be reached this year. I think it’ll take longer, but if and when it is reached, AI companies will be able to substitute computer chips for the (presently) most expensive and irreplaceable workers in the world.
So they would be wise to think carefully about what they can do with their leverage while they still have it.
One more thing
On a lighter note, the book gave me a chance to hang out with one of my favorite musicians, José González, when he was in town for a show, where I learned that his latest album, Against the Dying of the Light, is actually about existential risk. José was incredibly kind and well-informed about AI. And he wasn’t the only one — his sound guy brought up METR’s research on AI’s counterproductive impact on open source developer productivity (I let him know that follow-up work found AI tools were now significantly speeding up developers). Anyway, the show was fantastic too, and he has lots of tour dates coming up.