Inside Tech's Risky Gamble to Kill State AI Regulations for a Decade
Republicans slipped a controversial provision into the “One Big Beautiful Bill” — now facing bipartisan backlash and internal party rebellion
Update 2: The Senate voted 99-1 to remove the moratorium from the reconciliation bill early Tuesday morning. This stunning defeat "represents a major turning point in U.S. technology policy," according to an originator of the idea. Preemption with no replacement was unprecedented and got further than many of its supporters expected, but I agree with this take. I wrote up some more quick thoughts here.
Update: The moratorium has been modified again, likely in response to concerns from Republican Senator Marsha Blackburn. Here’s a summary of the changes from the Institute for Law & AI:
Shortens the “temporary pause” from 10 to 5 years;
Attempts to exempt laws addressing CSAM, children’s online safety, and rights to name/likeness/voice/image—although the amendment seemingly fails to protect the laws its drafters intend to exempt; and
Creates a new requirement that laws do not create an “undue or disproportionate burden,” which is likely to generate significant litigation.
In other words, the changes don’t actually seem to do what Blackburn wants them to do. The other parts of the following analysis remain true.
Voting on amendments to the reconciliation bill began this morning, with a final overall vote expected late Monday or early Tuesday.
The Republican budget reconciliation bill — better known as "One Big Beautiful Bill" — currently includes a provision attempting to ban state-level AI regulations for ten years.
This moratorium, if passed, would be perhaps the most sweeping attempt to deregulate an emerging technology in US history.
The AI lobby, likely emboldened by the national political climate, is making a big gamble. Following in the footsteps of the most aggressive actors, like Andreessen Horowitz and Meta, much of the tech industry has learned to stop worrying and love the moratorium.
States that don't comply with the provision would lose access to $500 million in new federal funding from the Broadband Equity, Access, and Deployment (BEAD) program, which provides federal grants to expand internet access in underserved communities. Any state that accepts part of the new funding risks forfeiting its entire share of a $42.5 billion pot of BEAD funding and having its AI laws invalidated.
In May, the reconciliation bill passed the House by a single-vote margin and now only needs a majority vote to pass the Senate, after which any differences from the House version would send the bill back to the lower chamber for another vote.
Do they have the votes?
The moratorium, initially slipped into the Big Beautiful Bill with little public discussion, has become a lightning rod within the GOP. Dozens of Republican governors and state attorneys general have publicly opposed the provision, joining the far-right House Freedom Caucus and the advocacy arm of the Heritage Foundation, the think tank behind Project 2025.
The moratorium has also received opposition from dozens of Democratic representatives and senators, joining over 140 civil society groups organized by Demand Progress, a tech policy advocacy nonprofit.
Thursday morning, Punchbowl News reported that Republican Senator Marsha Blackburn delivered a letter to Majority Leader John Thune asking him to remove the moratorium from the reconciliation bill. The letter was reportedly also signed by Republican Senators Rand Paul and Josh Hawley. Republican Senators Kevin Cramer and Rick Scott have also expressed concerns about the provision.
With likely unanimous support from Senate Democrats — something advocates tell Obsolete they are working hard to secure — four Republicans would provide enough votes to pass an amendment removing the moratorium. But even with the votes, supporters tell Obsolete they worry that Thune will add the provision back in via what's known as a "wraparound amendment," a catch-all chock-full of the majority's priorities that typically gets a party-line vote.
The vote on the amendment is expected overnight Sunday, followed sometime later by the vote on the full bill.
Unprecedented
In the U.S., state governments have the power to regulate companies that do business within their borders. Those regulations are sometimes inconsistent, creating a large compliance burden for companies. To resolve this, Congress can preempt state-level laws and replace them with a unifying national framework, like when it established uniform trucking regulations in 1994 or created national food labeling standards in 1990.
Something Congress can also do — but has never really done before — is preempt state laws without passing a federal regulation to fill the gap.
Overreach?
By failing to offer a federal framework for AI regulation, even a non-binding one, the provision goes further than what even many of the key players in the industry asked for.
In their submissions to President Trump's AI Action Plan, OpenAI, Meta, Google, and Andreessen Horowitz (known as a16z) asked for federal nullification of state AI laws. OpenAI called for preemption with a national framework that is "purely voluntary." Meta requested "federal preemption of state laws that conflict with the Administration’s pro-innovation agenda." Google asked for preemption and "a unified national framework for frontier AI models focused on protecting national security while fostering an environment where American AI innovation can thrive." a16z, the venture capital giant, called for a federal law that "preempts state-specific restrictions on model development."
The trade group TechNet specifically suggested "the federal government should look to impose a moratorium on state legislation related specifically to the development of frontier AI models until national standards are adopted." The influential industry association includes OpenAI, Google, Meta, Amazon, and Anthropic.
Amazon's submission frets about the "growing patchwork of approaches to AI regulation," but stops short of explicitly asking for preemption. Microsoft and Anthropic didn't mention preemption in their submissions.
OpenAI, Meta, Google, a16z, Amazon, and TechNet did not reply to a request for comment.
American Edge, a dark money lobbying group backed by tens of millions of dollars from Meta, told Punchbowl News it was doing a seven-figure cable and digital ad buy to support the moratorium. Recent ads from the group focus on how AI is empowering American manufacturers, one of whom is quoted saying, "we can't let China get the upper hand."
Opponents of AI regulations often invoke competition with China as justification — Ted Cruz's Senate committee factsheet is titled "Investing In AI and Beating China in the AI Race." Yet China already imposes far stricter regulations on AI than the US. Earlier this month, Chinese AI chatbots temporarily disabled image recognition tools during national college entrance exams, yet these types of restrictions apparently haven't prevented China from rapidly closing the gap with US AI capabilities, with the lag reportedly shrinking from years to months.
An analysis from Public Citizen found that over 200 state laws would likely be preempted by the provision. A bipartisan coalition of 40 state attorneys general warned that the moratorium would nullify enacted and proposed state laws safeguarding against AI-generated explicit material, deceptive deepfakes, discriminatory rent-setting algorithms, and invasive automated phone scams.
Red states in the crosshairs
The moratorium was almost much stronger. As recently as Thursday, the language of the provision might have allowed the Commerce Department to de-obligate the $42.5 billion in BEAD funding and condition access to it on complying with the moratorium, even if states did not take new money.
However, the latest text makes clear that states would only risk their portion of the larger pot if they took any of the new $500 million.
One of the arguments conservatives make against letting states lead the charge on AI regulations is that doing so allows 'woke' states, like California and New York, to impose their values on the rest of the country.
But the structure of the moratorium makes red states, which have smaller budgets, more rural populations, and a lower propensity to regulate, more tempted to take the deal on offer. In doing so, they would give up a substantial amount of their legislative power in exchange for funding designed to help low-income communities get access to broadband internet.
For instance, Montana and New York are set to receive similar total amounts of BEAD funding, but Montana's allocation amounts to 22 percent of its overall state budget, compared with 0.3 percent for New York.
And as law professor Gabe Weil noted, a temporary governing coalition — in many states, the governor alone — could take the deal, binding them to the moratorium for years after their terms end.
There's also a risk that states accept this new funding without fully understanding that doing so endangers their share of a much bigger pot of money. This confusion has been compounded by moratorium supporters' incomplete and misleading explanations. Neither Cruz's Senate committee factsheet nor the Koch-backed Abundance Institute’s response to skeptical Republican governors acknowledges the potential loss of $42.5 billion.
The libertarian Abundance Institute doesn't disclose its funders, but head of AI policy Neil Chilson told Politico it received donations from "Silicon Valley and Austin types.”
The exact state-by-state allocation of the new $500 million hasn’t been determined, but it's likely to mirror the dynamic described above — with red states more inclined to accept the tradeoff — though the financial stakes are far lower.
Byrd law
The reconciliation process allows budget-related bills to pass the Senate with a simple majority, rather than the 60 votes needed to overcome the filibuster. As a result, both parties try to cram as many of their priorities into these bills as possible.
These measures have to comply with the Byrd Rule, which the Senate parliamentarian uses to assess each provision to determine if it affects the budget enough to stay in. This rule is meant to ensure that reconciliation bills stick to budgetary matters rather than becoming vehicles for unrelated policy changes.
Anticipating that the moratorium would not survive the so-called "Byrd bath," Republican Senator Ted Cruz earlier this month made it conditional on the allocation of $500 million in new BEAD funding.
Even after this change, many insiders, including Republican Senator John Cornyn and Vice President JD Vance, predicted the AI moratorium would get stripped. But in a surprising move last Saturday, the Senate parliamentarian determined the moratorium was Byrd-compliant, allowing it to pass the Senate with a simple majority. "Shocking would be an understatement," a tech lobbyist opposed to the moratorium said of the ruling to Obsolete.
Jason Van Beek, who spent two decades as a senior Senate aide following his work on Thune's successful 2004 Senate campaign, told Obsolete he'd "never seen that before" in his entire Hill career. "I was shocked by the initial ruling," he said, noting that even senators themselves were caught off guard by the parliamentarian's decision. Van Beek is now Chief Government Affairs Officer for the Future of Life Institute, an AI safety nonprofit lobbying against the moratorium.
In another shocking move, the parliamentarian reopened her decision on Thursday, which led to a further revision to the bill text. The changes to the moratorium language clarified that states would only risk their portion of the $42.5 billion if they took any of the new $500 million, closing the door on the potential of a Commerce clawback.
"Conflicting" regulations
Advocates of the moratorium often cite a problematic "patchwork" of "conflicting" state laws as justification. However, there's little evidence these laws genuinely conflict — requiring businesses in one state to perform actions explicitly prohibited elsewhere. Instead, regulations simply vary in their definitions, scope, and enforcement. In practice, companies typically comply with the strictest regulations applicable to their operations.
When I tweeted asking proponents of the moratorium to point to AI regulations that require states to do mutually exclusive things, the Abundance Institute's Neil Chilson cited the usage of 57 different definitions of AI in state legislation. Chilson was previously chief technologist of the Federal Trade Commission during the first Trump administration and has been one of the loudest supporters of the moratorium. When I pointed out that inconsistency is not the same as contradiction, Chilson did not produce any examples of mutually exclusive legal requirements.
Laws often include exemptions to resolve apparent conflicts. For example, Colorado’s AI consumer protection law, which mandates retaining hiring data to prove nondiscrimination, seems at first glance to conflict with California’s privacy law granting users data deletion rights. Yet California’s law itself contains a carve-out, allowing companies to retain data when required by other legal obligations.
Origins
The idea to preempt state-level AI regulations appears to trace back to policy analyst Adam Thierer, who proposed a "learning period moratorium" in a May 2024 blog post for the free-market R Street Institute. The think tank, funded partly by Google and Amazon, does not disclose its full list of donors.
R Street has not replied to a request for information on its funders.
According to a political consultant advising groups opposed to the moratorium, the provision began as a coordinated effort by tech companies already contending with AI bills in several states. “I think it’s gone a lot further than they imagined,” the consultant told Obsolete.
The initial idea was to discourage other states from following suit by making lawmakers second-guess whether their efforts would hold up, the consultant said. But on this front, the preemption push has failed — it hasn’t stopped new bills from advancing. Key among them is New York's RAISE Act, which passed both chambers with strong, bipartisan majorities earlier in June. The bill would require developers of powerful AI models to conduct safety testing and risk assessments, using liability law as an enforcement mechanism.
Van Beek says preemption may have first started to gain traction in December when a bipartisan House Task Force on AI floated the idea paired with a strong federal standard. Republican Representative Jay Obernolte co-chaired the Task Force and has been a key proponent of the moratorium in the House. At a tech conference in February, he discussed the need for both preemption and congressional action, saying, "We can't preempt something with nothing, so we need to give states that confidence."
But according to Van Beek, the real momentum came shortly after, with the Trump administration’s January AI Action Plan. The plan’s call for public comments became a wishlist for deep-pocketed industry players, many explicitly requesting preemption.
Van Beek described preemption as an "industry priority," telling Obsolete that a16z and other tech lobbyists were "certainly pushing" the idea.
But even as industry enthusiasm grew, Van Beek was skeptical it could actually work. He thought preemption was "so very clearly a policy proposal" that it wouldn't be able to survive the Byrd rule and be included in a reconciliation bill. As a result, he and others were "caught off guard" by how far the provision has gotten. He did not expect to be "having to fight this battle this early in the game."
AI voices
On Monday, Chris Lehane, a Clinton-era political "dark arts" operative and now OpenAI’s chief lobbyist, posted in support of preemption on LinkedIn.
OpenAI CEO Sam Altman, who famously urged Congress to regulate AI in May 2023, was asked about preemption at a recent live taping of the Hard Fork podcast. He said he still believes some regulation is needed, but warned that "a patchwork across the states would probably be a real mess and very difficult to offer services under."
Altman went on to describe his growing disillusionment with the ability of policymakers to keep up with AI’s rapid pace. A detailed, multi-year rulemaking process, he suggested, could be overtaken by the speed of technological change. At the same time, he acknowledged the need for guardrails as systems grow more powerful — ideally something adaptive and narrowly targeted at risky capabilities, rather than a rigid law designed to last a century.
But as I've asked before: is the rapid pace of AI progress really a good reason to defer regulation?
Breaking with his peers in a New York Times op-ed, Anthropic CEO Dario Amodei expressed sympathy for the desire to simplify the regulatory landscape, but called the moratorium "far too blunt an instrument." "Without a clear plan for a federal response, a moratorium would give us the worst of both worlds — no ability for states to act, and no national policy as a backstop," he wrote.
Strange bedfellows
AI policy tends to scramble the usual factions, and the ten-year regulation ban is no exception.
The provision has exposed a split in the GOP between corporatists like Cruz and national conservatives like Hawley, who have opposed it.
Representative Marjorie Taylor Greene, who has identified as a "Christian nationalist," has become the provision's most outspoken critic in the House, tweeting:
I’m not voting for the development of skynet and the rise of the machines by destroying federalism for 10 years by taking away state rights to regulate and make laws on all AI.
Greene said Monday she would oppose the reconciliation bill if it returns to the House with the moratorium intact.
If Greene is the sole obstacle to passing President Trump's signature bill, she'll face "unbelievable pressure," says Van Beek. "You're the center of attention for the president of the United States, leadership of your party, calling you, getting your name out there to the friendly factions within the party to put pressure," he explained.
There are two groups of Republican lawmakers who oppose the provision, according to the consultant: those like Hawley who are uneasy about Big Tech getting special treatment, and those who care about states' rights. Florida Governor Ron DeSantis, for instance, has criticized the measure, highlighting child-safety AI regulations he's signed into law.
Mad Libs
Regulation arguments often resemble a game of Mad Libs — swap out the industry or policy fight and the sentences largely still work.
California's bitterly resisted AI safety bill, SB 1047, was likely a major inspiration for the preemption push. Authored by state senator Scott Wiener, the bill would have mainly required the largest AI developers to implement safeguards to mitigate catastrophic risks. Governor Gavin Newsom vetoed the bill, following intense pressure from the industry and prominent Democrats like Nancy Pelosi.
In an August interview, Wiener pushed back on the idea that only a federal standard could work, pointing to his state's 2018 data privacy law, which passed despite similar industry warnings about a patchwork of state laws. Six years later, he noted, Congress still hasn’t passed a national privacy law.
Wiener also called out the hypocrisy of industry leaders who claim to prefer federal rules while lobbying to block them. “A lot of times there are corporate actors who will say, ‘hey, don't go to the state level, do it at the federal level,’ but those are the same corporate actors that are making it impossible for Congress,” he said, citing his 2018 net neutrality bill as another example. That law passed after telecom and cable companies successfully killed a similar federal effort.
Lobbying
Wiener told me last September that "the tech industry does not want to be regulated by and large. And so, it's always a huge fight and we've had some success in California and in Congress, it's been even harder."
While Big Tech has come to largely embrace preemption, often preferring to work through proxies like dark money groups or industry associations, the idea wasn't immediately adopted.
In May, Politico reported on the lobbying effort, noting that “nobody quite knows what to ask for,” and that there’s disagreement within the industry over how strong federal AI rules should be. One AI company representative described to Politico the spectrum of opinion, from those who want “no regulation at the state level” to others who are “more comfortable with some regulation.”
The provision was a "fluke" that didn't really come from Big Tech, but rather from Andreessen Horowitz, Marc Andreessen's venture capital giant, the lobbyist told Obsolete. They described the firm as "less experienced" and going for the "hey, let's just ask for exactly what we want" approach.
The consultant echoed the lobbyist, describing the moratorium as a "tactic" that unexpectedly got legs, but now there are a lot of tradeoffs to getting it passed. While in their view it would still be a huge win for the industry, it would come with unintended consequences, including killing a raft of state-level AI deepfake child pornography regulations. (These conversations took place before the moratorium was substantially weakened by the parliamentarian's revision.)
According to the lobbyist, the broader tech industry feared that blanket deregulation might appear too greedy an ask. And so those companies took a more cautious approach, avoiding such a direct request themselves. Though, the lobbyist noted, they all wanted it.
The lobbyist described a tension between tech behemoths and VCs like Andreessen. "It's this big brother, younger brother kind of thing, right? Where the companies are like, ‘whoa, whoa, whoa, we're making incremental progress here,'… Whereas you have some of the venture capitalist type folks coming in and being like, 'hey, let's just ban this and ban that,'" they said.
The consultant noted that Meta and a16z have emerged as the most aggressive opponents of state AI safety regulations, compared to the "pretty tame" lobbying efforts from Microsoft, OpenAI, and Anthropic.
In recent history, Microsoft has positioned itself as more pro-regulation than its competitors. The $3.7-trillion company didn't formally oppose California's AI safety bill, SB 1047 (though it did express a preference for a national law), and is the only one of the above companies that doesn't belong to TechNet.
Now, a lobbyist working on behalf of Microsoft — along with Amazon, Google, and Meta — is pushing for the moratorium, per reporting in the Financial Times.
The consultant said that Elon Musk's falling out with the administration has not been helpful on this particular policy front. Musk supported SB 1047 and has warned of the dangers of AI for over a decade. They said that people were working on getting Musk to weigh in on the moratorium, but noted that his advocacy could cut both ways.
Blowback
The tech industry may have overplayed its hand. By trying to push the moratorium through reconciliation — with Cruz as its public face and Trump’s priorities baked into the broader bill — companies have made it harder for national Democrats to weigh in against any AI regulations, according to the consultant — a miscalculation they say the companies didn’t fully anticipate.
The opposition of national Democrats like Pelosi likely played a key role in killing California's SB 1047 last year. But in swinging for a ten-year veto on all state-level AI regulations, the industry may have alienated some would-be allies in the fight against specific state regulations, like New York's RAISE Act. The consultant says supporters feared that New York Senators Chuck Schumer and Kirsten Gillibrand might pressure Governor Kathy Hochul to veto it, as Gavin Newsom did in California. But neither senator has taken a position on the bill, even privately, the consultant told Obsolete — a silence the consultant attributed to the issue’s newfound political toxicity.
What's next
As the Senate prepares to vote, the tech industry finds itself in uncharted territory. Companies have spent years warning about the dangers of a regulatory patchwork while simultaneously blocking federal action. Concerns about AI's harms and risks are growing, especially within elite circles. The public has been clear about its support for regulation on the technology, though the issue remains low salience. If AI continues to become more capable and ubiquitous — as the industry hopes it will — the technology's salience will grow too, and with it, the appetite for regulation. Should AI enable a disaster or large-scale job loss, the industry may find itself wistful for the days of politicians pitching their bills as innovation-friendly and light-touch.
The moratorium has been substantially weakened and may not even survive. But it's remarkable that the AI industry got within spitting distance of a ten-year vacation from all state-level regulation. The unexpected success of the effort signals just how much Silicon Valley can override the preferences of the public and key parts of the MAGA coalition.
But win or lose, the push may have already damaged the industry's ability to whip Democrats — a miscalculation companies may come to regret.
With editing by Sid Mahanta. All mistakes are mine.