Clearing the Path for AI: Federal Tools to Address State Overreach

Department of Justice, Docket No. OLP182

Request for Information on State Laws Having Significant Adverse Effects on the National Economy or Interstate Commerce

Introduction

The Abundance Institute welcomes the opportunity to comment on the Department of Justice’s request for information on state laws that significantly and adversely affect the national economy or interstate commerce.

The Abundance Institute is a mission-driven nonprofit organization dedicated to creating the cultural and policy environment necessary for emerging technologies to germinate, thrive, and perpetually expand human prosperity and abundance. Our scholars and policy experts have testified before Congress, submitted comments to federal agencies, and published widely on the legal and policy barriers to artificial intelligence innovation and use.

Our federalist system divides regulatory power between state and federal governments, but not equally. The Constitution limits each. The federal government must halt state laws that violate constitutional rights and may preempt state laws that burden interstate commerce.

Yet state-level regulations with interstate effects are increasingly common, driven by today’s highly interconnected economy. This is especially true for information technologies, which flow seamlessly across borders.

As a result, a state’s regulations often reach far beyond its borders, dictating the choices of businesses and consumers who never set foot there. When repeated across multiple jurisdictions, a single business or party might face the challenge of complying with fifty or more different regulatory regimes.

However well-intentioned, overreaching regulation stifles growth, chills innovation, and even threatens health and safety. Imagine the potential effects of dozens of overlapping and conflicting regulations.

This is precisely the threat facing artificial intelligence. A growing patchwork of state AI regulations threatens both America’s global technology leadership and the strength of our national economy. The executive branch of the federal government has existing constitutional and statutory tools it should deploy to slow and remove this patchwork. Congress also has a vitally important role.

Below, we describe the nature of the patchwork, outline the most problematic state regulations that meet the RFI’s criteria, and list key tools the federal government could deploy. Finally, we summarize which tools are best suited to address each kind of threat.

The Patchwork Problem: How State AI Laws Threaten Economic Growth and Interstate Commerce

The RFI seeks comment on “[w]hich State laws significantly burden commerce in other States and between States…” There are specific state AI laws and proposed laws that, by themselves, significantly burden commerce; we discuss these in more detail below. But the sheer volume of AI legislation is also a threat.

In 2024 alone, state lawmakers introduced 635 AI-related bills and enacted 99 of them. In the 2025 legislative sessions, that number swelled to more than 1,000 AI-related bills. The National Conference of State Legislatures estimates that “thirty-eight states adopted or enacted around 100 measures” in the first six months of 2025.

Some of these proposals focus narrowly on local harms and needs, such as laws that study the effects of AI or govern how the state government itself uses AI.

But other state AI bills attempt sweeping regulation of AI development and deployment. These comprehensive state AI laws will have national effects for at least two practical reasons. The first is how AI is built. Modern AI systems operate on cloud infrastructure, serve many geographically diverse clients simultaneously, and integrate shared models and datasets across jurisdictions. User requests, logging, evaluation, and monitoring data all flow across state lines, often without anyone involved realizing it. As a result, designing systems to comply with different state rules adds needless complexity, customer friction, security risks, and costs. Providers therefore avoid state-specific variants that could break products used everywhere.

Second, AI’s interstate character triggers a compliance “ratchet”: when a large state imposes strict rules, firms often apply those standards nationwide to reduce engineering and legal burdens. This dynamic is sometimes called the Brussels effect or the “California effect.” Companies find it cheaper to build once to the strictest standard than to maintain separate systems for each jurisdiction, and large, risk-averse enterprise customers will often demand the stricter baseline anyway.
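
A minimal sketch (in Python, with entirely hypothetical rule names and values) of how this ratchet plays out in practice: a provider serving several states computes the strictest applicable limit and applies it to every user, so one state’s rule becomes the national default.

```python
# Minimal sketch of the compliance "ratchet" described above.
# The rule names and values below are hypothetical, for illustration only.

STATE_RULES = {
    # state: maximum days a provider may retain chat logs (hypothetical)
    "CA": 30,
    "CO": 90,
    "NY": 60,
    "TX": 365,
}

def nationwide_retention_days(states_served: list[str]) -> int:
    """Return the retention limit a single nationwide system must obey.

    Rather than maintain one variant per state, a provider typically adopts
    the strictest (smallest) limit among every state it serves, so the
    strictest state's rule becomes the de facto national standard.
    """
    return min(STATE_RULES[s] for s in states_served)

# A provider serving all four states ends up on California's 30-day rule
# for every customer, including those in Texas.
print(nationwide_retention_days(["CA", "CO", "NY", "TX"]))  # -> 30
```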

Indeed, some legislators in large markets openly aim to shape national practice—“if you want access to our residents, comply with our rules”—knowing that vendors will spread those controls to all users rather than carve out bespoke state treatments. The result is that a single state’s AI governance design effectively becomes the default for users, developers, and downstream integrators across the country.

These overlapping, conflicting, and sometimes directly contradictory laws impose especially high costs on small and mid-sized innovators that lack sophisticated and well-resourced compliance teams. Merely tracking this volume of developing legislation is beyond the capacity of most startups and many mid-sized companies. Large incumbents may be able to absorb these costs (by diverting resources to compliance from building products and services), but startups and open-source developers cannot. This tsunami threatens the very dynamism that drives American AI leadership.

These individual AI bills threaten to impose significant economic costs.

  • Florida Example: A macroeconomic simulation found that restrictive AI rules could cut Florida’s GDP by $38 billion a year and cost more than 54,000 jobs. While Florida is only one state, the analysis suggests that similar measures replicated across the country could create massive nationwide losses.

  • California Example: California projects that its proposed AI-related privacy rules will impose $3.5 billion in first-year compliance costs and cause up to 126,000 job losses by 2030. We believe even this sizable estimate is low. And that is just one regulatory proceeding; California’s legislature considered at least forty-two AI bills this session and passed at least seventeen.

These examples are early warnings. If all fifty states continue to “go their own way” on AI, businesses will face overlapping, inconsistent, and costly obligations that stifle investment and innovation. The result will be slower productivity growth, fewer high-quality jobs, and diminished U.S. competitiveness relative to global rivals such as China.

Top State Threats

The sheer number of state regulations affecting AI requires us to categorize them to make sense of what tools might best apply in which situations. 

To categorize state threats appropriately, it is critical to understand the breadth of AI itself and how modern AI is developed. AI is a general-purpose technology—perhaps the most general-purpose technology humans have ever created. AI’s applications span every industry. Current AI approaches, sometimes referred to as machine learning or generative AI, can be crudely divided into two categories of activities: training and inference. Training is the up-front development of the AI model. Once trained, that model can be deployed and used; this is the inference phase. 
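
For readers less familiar with the two phases, the sketch below illustrates them using the open-source scikit-learn library and toy data. This is our illustration, not drawn from any statute; real frontier training differs in scale, not in kind.

```python
# Minimal sketch of the training and inference phases described above.
# Toy data; real frontier training runs for months on large clusters.
from sklearn.linear_model import LogisticRegression

# Training phase: the expensive, up-front development of the model.
X_train, y_train = [[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1]
model = LogisticRegression().fit(X_train, y_train)

# Inference phase: the finished model is deployed and queried repeatedly,
# potentially by users in every state at once.
print(model.predict([[2.5]]))  # -> [1]
```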

AI model training resists state-specific regulation. The training phase raises no legitimate local or state concerns. In fact, state regulation of the training phase would have an extraterritorial cost that far outweighs any local benefits. Training the most advanced models is extraordinarily expensive and time-consuming, costing tens or hundreds of millions of dollars and taking months of training time. These constraints make jurisdiction-specific models economically infeasible. Companies will not train a frontier model under California rules and then train a different frontier model under Utah rules. Instead, developers are likely to choose a single set of laws to train under, and will likely attempt to comply with the strictest requirements in their intended market. The strictest state will, practically speaking, set the rules for the models everyone uses.

Other factors make training inherently interstate. Training typically relies on data collected from many states and countries. The computation itself may take place in multiple states, depending on the location of the data centers used. Importantly, the final product, the trained model, is a file or collection of files that is not anchored to a single location and is almost trivial to move. Indeed, there are millions of trained models available for download by anyone with an internet connection.
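
This portability is easy to demonstrate. The sketch below uses the huggingface_hub Python client (assuming it is installed) to fetch one file of a publicly hosted model; the identical call works from any jurisdiction.

```python
# Sketch: a trained model is a downloadable artifact, not a fixed installation.
# Assumes the client is installed: pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Fetch one file from a publicly hosted model repository. The identical call
# succeeds from any state or country; nothing anchors the artifact to the
# jurisdiction where it was trained.
path = hf_hub_download(repo_id="gpt2", filename="config.json")
print(f"Model file saved locally at: {path}")
```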

Regulation of AI deployment and use is more complicated. Deciding the proper regulatory authority for deploying and using AI models in the abstract is complex because it implicates such a wide swath of economic activity. AI is or will be deployed in industries that already have significant state regulation. Undoubtedly some of these will be intrastate activities. Given the expansive scope of the Commerce power, Congress could likely preempt most state regulations of AI use. However, prudent policy suggests that states should and do retain some authority to govern certain kinds of AI use and deployment. 

This analysis helps to determine which types of state AI regulation are both the most threatening to innovation and the least well-suited to state regulation. These are also the areas where federal preemption of state regulation is most likely to withstand judicial review. 

Given this background, we divide state regulation into three categories based on what tools are best suited to address these concerns. 

For the reasons outlined above, the most apt immediate targets for federal scrutiny and preemption are state laws that directly regulate AI model development and the model development process. Next in priority are state laws that hinder model development. Also important but more legally complex are state laws that directly regulate various deployments or uses of AI. 

State laws that directly regulate AI model development

Colorado’s AI Act imposes vague “reasonable care” standards on model developers, requiring them to protect against both “known” and “reasonably foreseeable” algorithmic discrimination—an impossibly broad mandate. The law's implementation has proved so challenging that Colorado pushed its effective date from January to June 2026.

New York’s pending RAISE Act (A 6453), now on Governor Hochul’s desk, also targets model development. Among its many requirements, it requires developers to “implement appropriate safeguards to prevent unreasonable risk of critical harm” and would punish a developer for making such a model available if the state attorney general subsequently found that model “create[d] an unreasonable risk of critical harm.” The Act would also hold developers liable for harms caused by third parties who build tools with the model—an unmanageable burden, especially for open-source projects.

California’s SB 53 recently passed the California legislature. It imposes transparency and incident-reporting requirements on certain model developers. The bill is California Senator Scott Wiener’s renewed effort after Governor Newsom vetoed last session’s highest-profile state AI bill, SB 1047.

These three bills are the most significant and most likely to take effect soon. But many other state proposals would directly regulate AI model development. One analyst identifies fourteen states that have considered or adopted such bills, including Connecticut, Hawaii, Maryland, and Virginia. Even though most of these states’ sessions have ended for this year, experts expect many of these bills will be reintroduced during the next session. 

State laws that hinder AI model development

There is undoubtedly a much wider collection of state laws that do not directly regulate AI model development but do block, slow, or otherwise hinder it. Such laws can affect AI development by complicating or blocking model development inputs, such as energy generation or a supply of useful information to train on.

The most obvious and problematic category here is state comprehensive privacy laws. Nineteen or twenty states now have “comprehensive” state privacy laws, depending on how the term is defined. Other states continue to consider such legislation as well. This existing patchwork of state privacy laws imposes unnecessary costs not just on AI but across the economy, and should be resolved with a federal privacy framework.

The effect of this patchwork on AI development turns on each law’s specific requirements and how they interact. Privacy protections are not necessarily hindrances to AI model development. Most general-purpose AI model developers do not need or want to train on sensitive personally identifiable information, which is the most restricted category of protected information in such laws. However, there are important areas of AI research, such as health care, where such information may be important to creating new and effective tools.

The state privacy law provisions most likely to hinder AI model development include data minimization requirements, purpose limitations, and processing limitations. Generally speaking, these provisions restrict companies from reusing data collected for one purpose for a different purpose. States have been expanding these kinds of regulations. Such rules can prevent companies from using data (including nonsensitive data) they have already collected to train an AI model, even when the privacy risks of that use are minimal. Complying with these laws can therefore eliminate entire data sets from AI training.
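
A stylized sketch (Python, with hypothetical dataset and field names) of how a strict purpose-limitation rule operates in practice: any dataset whose collection notice did not name AI training is excluded wholesale, regardless of sensitivity.

```python
# Sketch of how purpose-limitation provisions can eliminate entire datasets
# from AI training. Dataset names and fields are hypothetical.

datasets = [
    {"name": "support_tickets", "collected_for": ["customer_service"]},
    {"name": "public_reviews",  "collected_for": ["analytics", "ai_training"]},
    {"name": "session_logs",    "collected_for": ["debugging"]},
]

def usable_for_training(ds: dict) -> bool:
    # Under a strict purpose-limitation rule, data may only be reused for
    # purposes disclosed at collection time; "ai_training" must be listed.
    return "ai_training" in ds["collected_for"]

trainable = [ds["name"] for ds in datasets if usable_for_training(ds)]
print(trainable)  # -> ['public_reviews']: two of three datasets excluded,
                  # however low the actual privacy risk of reuse may be.
```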

State laws that regulate AI deployment or use

Laws in this category regulate uses of AI technology. Because AI is a general-purpose technology, nearly any overreaching state rule will affect its use and fall into this category. However, cataloging all such state laws would be a monumental effort and not particularly useful for the DOJ’s purposes. 

We therefore focus on laws that directly target AI use or closely related practices. Below we identify some use-specific AI regulations that have been adopted. As AI is deployed into more industries, we expect this category of state regulation to grow significantly over the next several years. Thus, in addition to addressing the specific laws below, the administration should establish a process for addressing new state AI use regulations.

(Note that many of the laws discussed above that affect AI model development also apply to various AI use cases. For example, state privacy laws also affect various AI deployments.)

Automated Decision-Making Laws and Regulations

California Privacy Protection Agency rulemaking on Automated Decision-Making Technology. The Agency has proposed an entire AI regulatory regime with minimal demonstrated consumer benefit. The proposal would expand the CCPA’s role from privacy oversight into de facto AI regulation. It adopts an unusually broad definition of automated decision-making technology (ADMT). As we’ve described elsewhere, the rulemaking in effect regulates “any automated system or algorithmic process involving personal data … from advanced AI models down to basic data sorting, if it influences an outcome.” It also applies to a wide range of uses and industries, including finance and lending, housing, insurance, education enrollment, criminal justice, employment or contracting, compensation, healthcare services, targeted advertising, and other “essential goods or services.”
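
To illustrate the breadth of that definition, consider the minimal sketch below (hypothetical records): a one-line sort of applicants by a score derived from personal data involves no AI at all, yet it “influences an outcome” and so would arguably qualify as ADMT.

```python
# Sketch: even trivial data sorting can satisfy the proposed ADMT definition,
# because it is an "algorithmic process involving personal data" that
# "influences an outcome." Records below are hypothetical.

applicants = [
    {"name": "A", "credit_score": 640},
    {"name": "B", "credit_score": 720},
    {"name": "C", "credit_score": 580},
]

# A one-line sort -- no machine learning anywhere -- determines the order in
# which applicants are reviewed, and thereby influences a lending outcome.
review_queue = sorted(applicants, key=lambda a: a["credit_score"], reverse=True)
print([a["name"] for a in review_queue])  # -> ['B', 'A', 'C']
```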

Many state privacy laws also include provisions that give a similar right against automated decision-making. While the CCPA rulemaking perhaps marks the most expansive type of this provision, many of these state laws also apply to a wide swath of industries and different technologies. The International Association of Privacy Professionals counts seventeen state laws and thirty-five proposed laws that have some variation of this type of provision.  

Biometric Privacy Laws

Biometric privacy laws prohibit or restrict the collection of certain types of personal information associated with the physical characteristics of a person’s body. This can include facial data, fingerprints, and a wide range of health-related data. These laws often focus on “biometric identifiers” such as retina scans, fingerprints, and face geometry. Some exclude photographs, for example, from the definition of biometric information. However, plaintiffs could argue that these laws apply if an AI trained on photographs can reconstruct images or identify individuals.

Penalties for violating these laws can be ruinously expensive, even for technical oversights. In fact, the first of these laws, Illinois’ Biometric Information Privacy Act, was amended to substantially reduce potential penalties given the enormous sums plaintiff firms were extracting through litigation and settlements over what were essentially paperwork failures.

Chatbot Laws

A growing number of states are regulating chatbots; six currently have such laws. Some of these laws are targeted and well-formulated. But each state defines “chatbot” differently, resulting in a patchwork of compliance obligations. Companies developing and deploying chatbot technology must therefore build and maintain separate product features and compliance frameworks for each jurisdiction.
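
A simplified sketch (Python, with entirely hypothetical rules) of what per-jurisdiction chatbot compliance looks like in code: every divergent state definition or disclosure requirement becomes another branch the product must build, maintain, and test.

```python
# Sketch of the per-state branching that divergent chatbot laws force on a
# single product. All rules below are hypothetical, for illustration only.

STATE_CHATBOT_RULES = {
    "CA": {"disclose_bot": True, "minor_checks": True,  "crisis_referral": True},
    "UT": {"disclose_bot": True, "minor_checks": True,  "crisis_referral": False},
    "ME": {"disclose_bot": True, "minor_checks": False, "crisis_referral": False},
    # ...one more entry, and one more test matrix, per enacting state.
}

def session_preamble(state: str) -> str:
    """Assemble the state-specific behavior for one chat session."""
    rules = STATE_CHATBOT_RULES.get(state, {})
    parts = []
    if rules.get("disclose_bot"):
        parts.append("You are chatting with an automated system.")
    if rules.get("minor_checks"):
        parts.append("[age-verification flow enabled]")
    if rules.get("crisis_referral"):
        parts.append("[crisis-resource referral enabled]")
    return " ".join(parts)

print(session_preamble("CA"))
```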

This state-by-state approach imposes significant burdens on interstate commerce. Some laws contain blanket prohibitions on certain use cases, while others prescribe detailed requirements for user experience and accessible content. Divergent state obligations make national operation prohibitively expensive, especially for smaller developers. Moreover, many of these statutes presume to apply extraterritorially, regulating chatbot innovation in other jurisdictions.

The cumulative result is that U.S. developers face heightened compliance costs, overlapping enforcement risks, and increased litigation exposure. These regulatory barriers slow product launches, raise the cost of innovation, and provide a competitive advantage to foreign developers who can scale without navigating this fractured regulatory environment.

Private Rights of Action

Several state statutes (mostly in the privacy space) create private rights of action that enable individuals to sue companies to enforce the statute. These rights of action often yield judgments disproportionate to the actual harm. Such provisions spur plaintiff lawsuits, but the benefits accrue primarily to private attorneys, not to the public at large.

Key Tools

The RFI asks not just for a list of problematic state laws, but also for solutions, including the application of existing authority. Specifically, the RFI asks: 

  • whether problematic state laws are preempted by existing federal authority,

  • whether there are federal legislative or regulatory means for addressing those laws and their burdens, and

  • which federal agency has the subject-matter expertise to address concerns lawfully within the federal government's authority.

Having described the wide range of laws and three categories for classifying them, we now turn to specific tools the DOJ, other executive branch agencies, and Congress can apply to ensure such state laws do not interfere with economic growth or interstate commerce.

Existing Statutory Preemption Authority

Federal agencies wield significant statutory authority across the economy, often including express or implied preemption powers. Because AI will be implemented and used across the economy, every such agency can and should consider how it can use its preemption authority to clear the state thicket of AI regulation. 

Examples of agencies with relevant, preemptive statutory authority in specific industries include:

  • National Highway Traffic Safety Administration (NHTSA) – motor vehicle safety and transportation

  • Federal Aviation Administration (FAA) – aviation and aerospace

  • Federal Communications Commission (FCC) – telecommunications and broadcasting

  • Federal Motor Carrier Safety Administration (FMCSA) – commercial trucking and bus transportation

  • Federal Trade Commission (FTC) – economy-wide consumer protection, including privacy and data security

  • Department of Energy – energy efficiency standards and conservation

  • Food and Drug Administration (FDA) – food, drugs, and medical devices 

  • Department of Health and Human Services (HHS) – health care industry

Such authority is best suited to remove industry-specific state barriers to AI innovation. For example, the FDA and HHS should explore how their authority might preempt applications of state biometric privacy laws to medical devices or health provider services. However, there is not a clear path to use such industry-specific authority to preempt “comprehensive” state AI laws regulating AI model training. 

The federal government should establish a capability, perhaps housed within the DOJ, for identifying and mitigating the extraterritorial effects of state AI laws. That capability should monitor industry-specific AI regulation as the technology becomes integrated across the economy. Each state law identified as regulating an AI use should be reviewed by a federal agency with relevant jurisdiction. That agency should determine whether it has authority to preempt those state restrictions. Agencies could exercise preemption through rulemaking, declaratory rulings, or adjudications. Other measures could include guidance and consultations with state lawmakers. In some cases, an agency may identify existing rules that already preempt certain state laws, and should then challenge the state law in court.

Spending Conditions and Procurement Tools

Federal Spending Conditions in Existing Agency Programs. The federal government has broad power to condition its expenditures to states on certain requirements. As we have previously recommended, federal agencies, including the Departments of Commerce, Agriculture, Education, Energy, Transportation, and Health and Human Services, should offer guidance on when state-level AI regulations would exclude a state from eligibility for federal spending programs, including the CHIPS Act and BEAD. The administration’s AI Action Plan supports this approach.

Procurement. As a major purchaser of AI services, the federal government can help shape industry practices through the rules and standards it sets for the products and services it procures. The executive branch could establish procurement standards that preempt state requirements that would undermine the performance of federal contracts. While the required nexus to federal contracts may limit which products and services can be cleared from the state patchwork, procurement standards can still provide a large counterweight to more restrictive state standards and could set up a conflict that could seed a Dormant Commerce Clause case.

Litigation Tools

Dormant Commerce Clause Case Support

The Constitution grants Congress, not the states, the authority to regulate interstate commerce. Even when Congress has not acted, the Dormant Commerce Clause limits state laws that impose excessive burdens on interstate commerce.

Courts apply three principles: whether the law discriminates against other states, whether its burdens on interstate commerce outweigh local benefits, and whether it regulates conduct wholly outside the state. Recent state AI laws risk violating at least the latter two principles by imposing complex requirements on developers of AI models, even in situations where the model was developed entirely in another state.

The Supreme Court has long warned against such “economic Balkanization” that occurs when states project their regulatory preferences beyond their borders. Unless checked, expansive state AI laws risk exactly that outcome.

The DOJ should develop and publish a memorandum identifying the types of innovation-chilling state laws that violate the Commerce Clause. This memorandum should list characteristics of state AI regulation that create an impermissible interstate effect. The DOJ also should develop a litigation strategy and earmark agency resources to support private Dormant Commerce Clause cases. Finally, the DOJ should publicize its intent to support such cases against any state AI legislation that meets the memorandum’s criteria and therefore threatens to undermine U.S. AI leadership and national security.

Unleash Safe Nuclear Energy by Settling the Lawsuit Against the NRC Brought by Utah, Texas, and Last Energy

States and commercial nuclear startups have sued the Nuclear Regulatory Commission (NRC) to open up deployment of small modular nuclear reactors. The lawsuit argues that the NRC is overstepping its authority by asserting licensing jurisdiction over the construction and operation of all nuclear reactors, regardless of how small and safe they are. The Atomic Energy Act of 1954 explicitly excludes small and safe reactors from the statutory definition of “utilization facility,” and therefore the NRC lacks legal authority to license or restrict construction and operation of those reactors. Licensing authority for these classes of reactors properly belongs to the states.

The DOJ should settle the case immediately in favor of the plaintiffs’ claims. This would open the door for rapid deployment of test reactors, microreactors, and small modular reactors and usher in a new age of nuclear energy abundance. Among many other benefits, there would be ample energy to power the data centers needed for AI dominance.

Legislative Tools

Congress can preempt any state law affecting interstate commerce. It also has other powers, such as the spending power, that it can use. Congress should deploy these powers to clear the path through the state patchwork for important AI innovation. The administration should encourage and support such efforts.

Enact a national privacy framework. Naturally, a national privacy law should avoid replicating provisions from state privacy laws that unnecessarily restrict AI model development and use, such as data minimization requirements, strict restrictions on automated decision-making, or overreaching biometric privacy protections.

Moratoria or Preemption. Congress has the authority to directly preempt a wide swath of state AI regulation. This preemption could take many forms. Key design choices include:

  • What is covered? AI model development, AI model use, or both? If preempting forms of AI model use regulation, in what sectors or industries? 

  • How long would the preemption apply? This could range from a few months to permanently.

  • What kinds of interventions are preempted?

  • Is the preemption accompanied by substantive regulatory requirements?

In addition to direct preemption, there are other useful types of laws that Congress could enact:

  • A negative liability law could shield developers of general-purpose AI models from liability for third-party misuse of a model.

  • A safe harbor law could establish a set of light-touch practices that, if performed, would exempt developers of general-purpose AI models from other state and federal regulatory requirements.

  • A right to compute law could establish that any government actions that restrict the ability to privately own or make use of computational resources for lawful purposes must be limited to those demonstrably necessary and narrowly tailored to fulfill a compelling government interest.

Of course, for any of the recommended actions the executive branch could take now under existing authority, Congress could enact laws to clarify and expand that authority.

Other Tools

FTC Competition Advocacy

The FTC can use its long-standing competition advocacy program, run by the Office of Policy Planning (OPP), to advise state lawmakers weighing AI bills or state regulators considering AI regulation. As former acting FTC Chair Maureen K. Ohlhausen has explained, competition advocacy leverages the agency’s legal and economic expertise to persuade other government actors to adopt policies that promote competition and innovation rather than unnecessarily restricting new business models. FTC staff have a well-developed process for evaluating the likely competitive effects of proposed state laws and sharing that analysis with policymakers upon request or during public comment windows. Such analysis often addresses how proposed requirements might raise entry barriers, entrench incumbents, or chill innovation, and frequently recommends less-restrictive, pro-competitive alternatives. This advocacy takes many forms, including filings submitted to state legislatures, regulatory boards, and officials. The Commission has also produced targeted staff policy papers for state lawmakers. It could use this tool to synthesize research on how specific state AI proposals would affect market structure, growth, and dynamic entry.

Substantive FTC advocacy on the competitive effects of state AI policy could help calibrate state interventions and avoid the worst anticompetitive effects while promoting continued AI innovation.

Innovator Defense Division

The DOJ should establish a Division, Office, or Task Force of lawyers and staff to support the defense of private parties from state enforcement of overreaching and unconstitutional state AI laws. The actions such an Innovator Defense Division (IDD) could take include:

  • Issuing a guidance memo identifying the characteristics of state AI legislation that interfere with interstate commerce.

  • Publishing detailed legal critiques of individual state AI legislation to identify vulnerabilities. 

  • Establishing an amicus practice to support American innovators against overly aggressive state prosecution.

  • Offering litigation support and resources to private plaintiffs.

  • Intervening in private litigation to promote liability rules focused on proximate causes of harm, while opposing theories that unduly burden foundational AI research or the creation of general-purpose tools.

Summary: Picking the Right Tools for the Job

There are many threats and many potential tools, but some tools are better suited to certain threats than others. Below we summarize the key categories of threats and identify which tools best apply to each.

State laws that directly regulate AI model development

Summary: AI model development is interstate activity and should only be regulated by the federal government. There are strong legal and policy reasons for preempting state regulation of AI model development. The most permanent solution is federal legislation, but the DOJ should explore all possible options to halt this threat.

Top tools:

  • Executive: Impose spending conditions on federal funds to states

  • Litigation: Support Dormant Commerce Clause cases; establish an Innovator Defense Division

  • Legislative: Enact federal preemption, with or without a substitute framework

State laws that affect AI model development

Summary: State privacy laws are the most prominent state laws that affect AI model development. The best solution here is a federal privacy law.

Top tools:

  • Executive: Impose spending conditions on federal funds to states

  • Litigation: Support Dormant Commerce Clause cases

  • Legislative: Enact a federal privacy framework expressly preempting state laws

State laws that regulate AI use and deployment

Summary: States have passed and will pass a variety of industry-specific laws that regulate particular uses of AI. There are no comprehensive solutions here; instead, the DOJ should establish a process that helps identify and activate federal agencies that have preemption authority in the relevant industry.

Top tools:

  • Executive: Establish a process for reviewing state AI laws for agency preemption

  • Litigation: Support Dormant Commerce Clause cases

  • Legislative: Enact a federal AI framework expressly preempting specific types of state AI use laws

Conclusion

Artificial intelligence is a transformative, general-purpose technology with the power to drive productivity, improve lives, and secure America’s global leadership. Yet a patchwork of conflicting state laws threatens to choke U.S. innovation at the very start.

The Constitution points to a clear solution: states may govern harmful local uses of AI, but only Congress can regulate the national AI market. The Department of Justice should enforce that boundary, back litigation that protects interstate commerce, and press Congress to act quickly to preempt harmful state laws.

By taking these steps, the Department will preserve the integrity of our federal system and ensure that America leads the world in using AI to advance human flourishing and prosperity.

Respectfully submitted,

Neil Chilson, Head of AI Policy

Abundance Institute

Department of Justice, Request for Information on State Laws Having Significant Adverse Effects on the National Economy or Significant Adverse Effects on Interstate Commerce, 90 Fed. Reg. 39427 (Aug. 15, 2025), https://www.federalregister.gov/documents/2025/08/15/2025-15604/request-for-information-on-state-laws-having-significant-adverse-effects-on-the-national-economy-or (“RFI”).
Patrick A. McLaughlin and John T.H. Wong, The Causal Effect of Regulations on Economic Growth: Evidence from the US States (Dec. 2024), Mercatus Center at George Mason University, https://www.mercatus.org/research/working-papers/causal-effect-regulations-economic-growth-evidence-us-states; James Broughel and Kip W. Viscusi, Death by Regulation: How Regulations Can Increase Mortality Risk (Nov. 20, 2017), Vanderbilt Law Research Paper No. 18-31, https://ssrn.com/abstract=3169605 or http://dx.doi.org/10.2139/ssrn.3169605.
RFI, 90 Fed. Reg. at 39428.
Multistate.ai, Artificial Intelligence (AI) Legislation (last visited Sept. 15, 2025), https://www.multistate.ai/artificial-intelligence-ai-legislation.
Id. (counting 1,092 AI-related laws introduced in the 2025 state legislative sessions).
NCSL, Summary of Artificial Intelligence 2025 Legislation (July 2025), https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation.
See, e.g., Montana House Joint Resolution No. 4, Requesting an Interim Study on Artificial Intelligence, 69th Legislature, HJ 4.1 (2025), https://archive.legmt.gov/content/Sessions/69th/Contractor_index/HJ0004.pdf.
There are two kinds of “California effect.” One is that a company might comply with the strictest standard nationally, for efficiency reasons. The second is that companies subject to a California regulation have an incentive to lobby for the creation of that same standard in other jurisdictions as well. See David Vogel, Trading Up: Consumer and Environmental Regulation in a Global Economy (1995), Harvard University Press. The second kind undermines the “laboratories of democracy” argument that is often made in favor of state experimentation.
Edward Longe, The $38 Billion Mistake: Why AI Regulation Could Crush Florida’s Economy (June 26, 2025), James Madison Institute, https://jamesmadison.org/the-38-billion-mistake-why-ai-regulation-could-crush-floridas-economy/.
California Privacy Protection Agency, Standardized Regulatory Impact Assessment at 9, 11, 63, 103 (Oct. 2024), https://cppa.ca.gov/regulations/pdf/ccpa_updates_cyber_risk_admt_ins_impact.pdf.
See Neil Chilson, Public Comment on CCPA Updates, Cyber, Risk, ADMT, and Insurance Regulations at 3-6 (Feb. 2025), Abundance Institute, https://abundance.institute/articles/ccpa-cyber-risk-admt.
The only safety concerns that could imaginably arise from the mere training of a model are self-sentience or other loss of control of a model that becomes agentic. While these concerns are highly speculative to the point of science fiction, to the extent these risks do exist they are most certainly matters for national attention.
See Models – Hugging Face (last visited Sept. 15, 2025), https://huggingface.co/models (listing 2,091,115 models available for download).
Colorado SB24-205, Consumer Protections for Artificial Intelligence (2024 Regular Session), https://leg.colorado.gov/bills/sb24-205.
Jesse Paul and Taylor Dolven, Colorado lawmakers abandon special session effort to tweak AI law, will push back start date to June 2026 (Aug. 25, 2025), The Colorado Sun, https://coloradosun.com/2025/08/25/colorado-ai-law-tweak-dies/.
NY 6453-B, Responsible AI Safety and Education Act, https://www.nysenate.gov/legislation/bills/2025/A6453/amendment/B.
Adam Thierer, et al., Coalition Urges New York Lawmakers to Avoid Heavy-Handed AI Mandates (May 12, 2025), https://www.rstreet.org/outreach/coalition-urges-new-york-lawmakers-to-avoid-heavy-handed-ai-mandates/.
California SB 53, Artificial intelligence models: large developers, https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53.
See, e.g., Press Release, Pelosi Statement in Opposition to California Senate Bill 1047 (Aug. 16, 2024), https://pelosi.house.gov/news/press-releases/pelosi-statement-opposition-california-senate-bill-1047.
Adam Thierer, Updated Compendium of Bills Pushed by the Multistate AI Policymaker Working Group (last updated Aug. 29, 2025), https://medium.com/@AdamThierer/updated-compendium-of-bills-pushed-by-the-future-of-privacy-forum-fpf-multistate-ai-policymaker-40cb0566cb2f. See also Nebraska Legislative Bill 642, Artificial Intelligence Consumer Protection Act (2025), 109th Leg., 1st Sess.; Connecticut Senate Bill No. 2, An Act Concerning Artificial Intelligence (2024), Gen. Assemb., Feb. Sess.; Texas House Bill 1709, Texas Responsible Artificial Intelligence Governance Act (2025), 89th Leg., Reg. Sess.
Compare International Association of Privacy Professionals, US State Privacy Legislation Tracker 2025 (last updated July 7, 2025), https://iapp.org/resources/article/us-state-privacy-legislation-tracker/ with Bloomberg Law, Which States Have Consumer Data Privacy Laws? (Apr. 7, 2025), https://pro.bloomberglaw.com/insights/privacy/state-privacy-legislation-tracker/#states-with-comprehensive-data-privacy-laws.
Jordan Francis, Unpacking the shift toward substantive data minimization rules in proposed legislation (May 22, 2024), https://iapp.org/news/a/unpacking-the-shift-towards-substantive-data-minimization-rules-in-proposed-legislation.
Neil Chilson, Public Comment on CCPA Updates, Cyber, Risk, ADMT, and Insurance Regulations (February 2025), https://abundance.institute/articles/ccpa-cyber-risk-admt (“Chilson CCPA Comments”).
Id. at 8.
California Privacy Protection Agency, Notice of Proposed Rulemaking at 16 (Nov. 22, 2024), https://cppa.ca.gov/regulations/pdf/ccpa_updates_cyber_risk_admt_ins_notice.pdf.
See International Association of Privacy Professionals, US State Privacy Legislation Tracker 2025 State Privacy Law Chart (last updated July 7, 2025), https://iapp.org/media/pdf/resource_center/State_Comp_Privacy_Law_Chart.pdf (count from “Right against automated decision-making” column).
Apurva Dharia, et al., Illinois Revises Biometrics Law To Reduce the Prospect of "Ruinous" Damage Awards (Aug. 15, 2024), https://www.dwt.com/blogs/privacy--security-law-blog/2024/08/illinois-bipa-biometrics-law-amended-for-damages.
See Illinois Biometric Information Privacy Act, https://www.ilga.gov/Legislation/ILCS/Articles?ActID=3004&ChapterID=57, and supra n.27 and discussion in text.
Neil Chilson and Josh Smith, Comment on Request for Information on the Development of an Artificial Intelligence Action Plan at 4 (Mar. 2025), https://abundance.institute/articles/development-of-an-AI-action-plan.
Office of Science, Technology, and Policy, Winning the Race: America’s AI Action Plan at 3 (July 2025), https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf (“Led by OMB, work with federal agencies that have AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.”).
U.S. Const. art. I, § 8, cl. 3.
Okla. Tax Comm. v. Jefferson Lines, Inc., 514 U.S. 175, 180 (1995) (citations omitted).
See Matt Perault and Jai Ramaswamy, The Commerce Clause in the Age of AI: Guardrails and Opportunities for State Legislatures (Sept. 2, 2025), Andreessen Horowitz, https://a16z.com/the-commerce-clause-in-the-age-of-ai-guardrails-and-opportunities-for-state-legislatures.
Okla. Tax Comm. v. Jefferson Lines, Inc., 514 U.S. 175, 180 (1995) (citations omitted).
State of Texas v. U.S. Nuclear Regulatory Commission, 6:24-cv-00507, (E.D.Tex.). The complaint is available at https://storage.courtlistener.com/recap/gov.uscourts.txed.235070/gov.uscourts.txed.235070.1.0.pdf.
Maureen K. Ohlhausen, Remarks Before the Connecticut Bar Association at 5 (Feb. 26, 2014), https://www.ftc.gov/system/files/documents/public_statements/203081/140226healthcaretechnology_0.pdf.
