House Proposal Would Block State AI Laws for 10 Years
The measure would override existing laws in California and Colorado
States could be banned from enacting or enforcing artificial intelligence regulations for a decade if a provision tucked into the One Big Beautiful Bill Act becomes law. The clause is one of many contained in the budget reconciliation package passed May 22 by the U.S. House of Representatives.
It would not only prevent new state regulations of AI systems, models, and automated decision systems, but would also block states from enforcing existing AI regulations.
Currently, there is no comprehensive federal legislation that regulates the development of AI or specifically prohibits or restricts its use. The Trump administration replaced President Joe Biden’s AI Bill of Rights with a new framework prioritizing AI innovation.
There is some doubt whether the moratorium will survive in the Senate, both because of parliamentary rules restricting reconciliation bills to budget-related provisions and because of objections from some Republican senators. With Democrats widely opposed, the provision would need near-unanimous support from Senate Republicans to gain approval.
A bipartisan group of 40 state attorneys general also sent a letter to Congress opposing the provision.
“If enacted, the moratorium would pre-empt existing state AI laws in California, Colorado, New York, Illinois, and Utah, as well as more than 1,000 pending AI bills across state legislatures,” said Omer Tene, a partner in the Boston office of Goodwin. “The bill’s broad definition of automated decision systems would likely impact regulatory oversight across finance, insurance, education, health care, and other sectors, fundamentally reshaping the AI regulatory landscape at a time when AI technology is surging with dazzling speed.”
The ban would be a major boon to the AI industry, which has lobbied for uniform, light-touch regulation. Supporters of the moratorium say it would undo a growing patchwork of state laws and give Congress the space to craft its own AI legislation. Opponents say it would leave people unprotected from risks ranging from deepfakes to employment discrimination.
It would also create confusion and uncertainty for employers, said Niloy Ray, a shareholder in the Minneapolis office of Littler.
“If it were to pass and go into effect, implementing regulations would have to be created,” he said. “That would create more turbulence. Will the moratorium only apply to AI-specific laws, or laws that regulate AI along with a number of other technologies or related activities? Most laws are specific to larger activities but include AI. Does this mean Title VII cannot be enforced in the AI context? That will create confusion.”
The moratorium would also be challenged, Ray added, leading to more uncertainty. He noted that the language around the exemptions in the provision also needs clarification.
Tene agreed, saying that the moratorium’s exemptions “would obviously open a wide front of interpretative discussions — and likely litigation.”
Arguments for and Against
The moratorium measure is supported by many in the tech industry and by business groups such as the U.S. Chamber of Commerce.
“A moratorium on state-level regulation would provide the necessary breathing room for federal agencies — in consultation with experts, industry, and civil society — to develop a comprehensive framework,” said Kevin Frazier, the AI innovation and law fellow at the University of Texas at Austin’s School of Law. “Conflicting mandates chill innovation and create a compliance nightmare while putting national security at risk. Regulations supposedly intended to address local concerns over AI offer far too few benefits in light of the potential burdens on the AI economic ecosystem. State legislation also risks entrenching significant inconsistencies in the law.”
In many cases, those in favor of a moratorium argue that state proposals address harms already covered by existing laws.
“AI-related harms can already be addressed under many existing laws, regulations, and court-based standards,” said Adam Thierer, a senior fellow at the R Street Institute in Washington, D.C. “Even if one sympathizes with some of these bills, put yourself in the shoes of an entrepreneur who is sitting in a dorm room or garage right now pondering how to build the next great algorithmic application — only to face hundreds of different regulatory definitions, compliance requirements, bureaucratic hurdles, and liability threats. Costly, contradictory regulation is a surefire recipe for destroying a technological revolution.”
In addition to opposition from state policymaker groups such as the National Conference of State Legislatures and the National Association of Attorneys General, the proposal is generating concerns from civil rights and consumer protection organizations.
Marc Rotenberg, the executive director and founder of the Center for AI and Digital Policy in Washington, D.C., said states have established critical new safeguards in the absence of meaningful federal legislation.
The state AI regulatory moratorium is “short-sighted and ill-conceived,” Rotenberg said. “To be clear, there is a legitimate role for federal legislation in the AI space. National standards can help ensure baseline protections, promote international coordination, and reduce some regulatory uncertainty. But those standards should complement, not override, state efforts. A cooperative federalism model, in which baseline national standards are paired with room for state innovation, would better reflect the urgency and complexity of the moment.”
Employers have increasingly started using data and algorithms in ways that stand to have profound consequences for work, including setting workers' benchmarks, pay, and productivity quotas, and generating AI-enabled recommendations for hiring, promotions, and layoffs, said Amba Kak, the co-executive director of the AI Now Institute in New York City.
“Multiple states have already passed laws that create task forces to better understand these multiplying impacts of AI on workers,” she said. “There is growing momentum across a range of specific threats, as well — with pending bills that require transparency around the use of AI systems in the workplace, notice for AI-driven layoffs, and rules that prevent the misuse of AI management software or those that prevent especially invasive modes of surveillance, such as so-called ‘emotion-recognition’ systems in the workplace.”
What Employers Should Do Now
Employment attorneys said that companies should continue to assess their current AI systems against existing state requirements, as well as requirements that are coming into effect.
“At the same time, monitor potential federal pre-emption,” Tene said. “Businesses may need to evaluate whether to maintain existing compliance frameworks or scale back state-specific AI governance measures pending final resolution.”

“Stay the course,” Ray said. “Continue implementing AI to the standards that you are already meeting. Assume that laws related to anti-discrimination, privacy, products liability, personal injury, and torts will continue to apply to AI. My advice is to use AI in a reasonable and sensible way, knowing that existing laws already regulate about 85% of AI uses.”