Comments of the Abundance Institute
Office of Science and Technology Policy's Notice of Request for Information; Regulatory Reform on Artificial Intelligence
Regulations.gov Docket ID Number OSTP-TECH-2025-0067
Introduction
Thank you for the opportunity to assist the Office of Science and Technology Policy with “identifying existing Federal Statutes, regulations, agency rules, guidance, forms, and administrative processes that unnecessarily hinder the development, deployment, and adoption of artificial intelligence (AI) technologies within the United States.”
The Abundance Institute is a mission-driven nonprofit organization focused on creating the cultural and policy environment necessary for emerging technologies to grow, thrive, and reach their full potential. We strongly support Executive Order 14179’s establishment as the policy of the United States “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”
America can maintain its lead in the AI race, provided we do not trip over red-tape obstacles of our own design.
Surveying the federal landscape, some of these obstacles are clear. Below we identify three specific categories of federal barriers in two federal agencies and discuss the tools to remove or mitigate those barriers.
However, because artificial intelligence is a general purpose technology with applications across the entire economy, it is likely that every federal agency has regulations, procedures, guidance documents, and other instruments that could hinder AI innovation and adoption. Some of these opportunities will be immediately obvious to parties who have engaged closely with the relevant agencies. We trust that other commenters in this proceeding will surface such opportunities.
Yet even experienced parties cannot anticipate the obstacles and opportunities that will continue to arise as governments, companies, and individuals deploy the technology across the economy. The lack of a coordinating structure in the federal government is itself an overarching organizational barrier that will hinder future AI innovation and adoption. Therefore, to fulfill the directive of Executive Order 14179 and the AI Action Plan to remove barriers, we strongly recommend the creation of an ongoing capability to coordinate and direct the removal of such barriers as innovators encounter them.
Eliminate Specific Federal Barriers
The federal government is massive, with “2.1 million employees, 15 cabinet agencies, 50 independent federal commissions and 434 federal departments.” The amount of regulation it imposes is likewise voluminous. The total number of pages in the Code of Federal Regulations grew 38 percent to 190,260 pages at the end of 2023.
Determining which of these regulations hinder the development and adoption of AI is a monumental and ongoing task.
After the release of ChatGPT in 2022, the Biden administration added to this task, pushing the entire federal bureaucracy into a surge of regulatory activity. Arguably, some of this activity sought to remove barriers to AI innovation. But much of it certainly sought to impose new regulations that could create barriers to AI innovation and adoption.
Given the scope of this task, our comment focuses on specific, targeted areas where our past experience and advocacy give us a unique perspective on and understanding of the potential barriers. Undoubtedly there are many other existing barriers in various agencies; the limited set of barriers we identify should not be read as minimizing the broader regulatory challenges elsewhere.
Federal Trade Commission
As the federal general consumer protection and competition agency, the FTC has an important role in promoting AI innovation and development.
The FTC's traditional case-by-case enforcement against unfair methods of competition and unfair and deceptive acts and practices has proven flexible. It has adapted to a wide range of new technologies, allowing the FTC to bring cases over the years involving email, cellphones, the internet, blockchain and cryptocurrencies, and AI. The FTC's approach is adaptable because it focuses on outcomes, meaning actual harms to consumers and to competition, regardless of the technologies used to cause those harms.
This flexibility is most evident in the FTC’s privacy and data security enforcement, where the agency has brought hundreds of cases under its general consumer protection authority. This agile, outcomes-based, case-by-case method is an excellent template for how to protect consumers as AI is integrated across the economy. Most importantly, this approach acknowledges that data has important commercial uses that benefit consumers, including greater customization, improved service experiences, enhanced protection against fraud and cyber threats, networks that are more responsive to real-time changes in traffic and usage patterns, and consumer education about new or discounted products or services of interest to them.
However, the Biden-era FTC adopted an acute skepticism toward data uses that resulted in several decisions deviating from the FTC’s long-standing focus on tangible harms. If left unaddressed, these decisions pose significant risks to AI innovation and adoption. Training effective AI models requires significant amounts of data, including publicly available data and data created by user interactions with online services. A blanket hostility toward data use without any consideration of its considerable benefits threatens to reduce a critical input to AI development.
Here are some steps OSTP can take to correct the past administration’s misjudgments.
Review FTC Investigations, Existing Orders, and the Current Order Templates
Problem: As the AI Action Plan recognizes, existing Biden-era FTC investigations, final orders, consent decrees, and injunctions could "unduly burden AI innovation," and any such work products should be modified or set aside.
Expansive Use of Unfairness. In particular, the Biden administration frequently stretched the bounds of its Section 5 unfairness authority, unnecessarily inserting unfairness counts into settled complaints where the challenged practices were adequately addressed by the agency's deception authority, and giving only cursory consideration to the benefit/cost prong of unfairness. The ultimate goal of such moves is to lay the groundwork for expanded unfairness privacy cases. If the FTC's privacy enforcement becomes untethered from real product claims or meaningful cost–benefit analysis, it risks becoming a roving mandate that threatens AI innovation and adoption.
A good first pool of potential targets for review is the set of Biden-era privacy and data security settlements and consent orders that drew concurrences, dissents, or partial dissents from Republican commissioners. See Appendix A for a table of these matters.
Privacy and Data Security Template Orders. One type of FTC work product not mentioned in the AI Action Plan, but influential on how FTC judgments will affect AI innovation, is the set of templates staff use when negotiating privacy and data security settlements. These templates form the agency's starting position in negotiations. They also exhibit a ratchet effect: they tend to carry forward the most burdensome provisions the agency has previously secured. The FTC ought to review these templates to ensure they are consistent with the law and are unlikely to restrict beneficial uses of information, including in AI research, development, and deployment.
Overreach in the Rytr Case. In a 3-2 decision, the Commission alleged that a neutral AI writing tool supplied the "means and instrumentalities" for users to create false reviews. The complaint offered no evidence that users actually posted false reviews. It concluded that generating draft reviews was not a valid use case for an AI tool, seeming to view the ease with which the tool enabled people to draft content "as if facilitating prolific writing were unlawful per se." The settlement banned Rytr from offering any tools to support user drafting of reviews.
This decision was controversial, for good reason. As venture capital firm Andreessen Horowitz observed in its detailed comments, “The Commission appears to be inventing a special rule for AI” whereby “a company violates Section 5 by offering an AI-enabled service if the Commission can imagine a hypothetical use case that would deceive or harm consumers.” Dissenting Commissioners Holyoak and Ferguson summarized, “Today’s complaint suggests to all cutting-edge technology developers that an otherwise neutral product used inappropriately can lead to liability—even where, like here, the developer neither deceived nor caused injury to a consumer.”
That standard, were it adopted widely, would discourage a wide swath of AI services. It would chill innovation, harm competition, and especially impede open source AI development. The Commission should clarify that it will not impose liability based solely on speculative misuse unmoored from material deception or substantial consumer injury, and should terminate the Rytr order accordingly.
Recommended Actions: Work with FTC Chairman Andrew Ferguson to identify problematic precedent-distorting cases issued under the previous administration. For each such case, the Commission should choose from the following list of options:
Terminate the order under 15 U.S.C. § 45(b);
Modify the order under 15 U.S.C. § 45(b); or
Issue guidance rejecting the future imposition of similarly problematic counts or order provisions. Such guidance could also remind parties who believe their orders are affected by any changed guidance that they can request order modification under 16 C.F.R. § 2.51 or 16 C.F.R. § 3.72.
OSTP should also work with the FTC to revise the privacy and data security order templates.
Terminate the Commercial Surveillance Proceeding
Problem: In 2022, by a 3-2 partisan vote, the FTC opened a highly problematic and controversial rulemaking proceeding on "Commercial Surveillance and Data Security" that has never been officially terminated. The Commission issued an Advance Notice of Proposed Rulemaking (ANPR) with ninety-five wide-ranging questions about various business practices, many of which are fully legal under existing law.
The ANPR "demonstrates hostility to a wide but indeterminate range of clearly legal business practices." To begin with, the term "surveillance" is itself pejorative and typically describes a large power disparity between the parties involved. Yet the ANPR's broad definition of "commercial surveillance" includes even innocuous and mutually beneficial exchanges, such as providing payment or shipping information when ordering online. In fact, "nearly all practices that fit the FTC's broad definition of 'commercial surveillance,' are not only perfectly legal, but they are also commonplace, beneficial, and necessary for commerce to occur."
The problem with sweeping such a wide range of commonplace practices under a pejorative term is that a future FTC could use the open ANPR as an opportunity to move the proceeding ahead in almost any regulatory direction, including to regulate AI practices.
In fact, the ANPR expresses deep skepticism that there are any benefits at all to the collection of data by businesses. It claims that "[b]usinesses reportedly use this information to target services …. [and] in theory, these personalization practices have the potential to benefit consumers…" (emphasis added). As we explained in a coauthored comment on the ANPR, "To characterize the benefits of personalized services as 'theoretical' borders on willful blindness to the hundreds of billions of dollars in value such services have created for users, businesses, and shareholders."
Based on this deep skepticism of existing business uses of data, the ANPR contemplates heavy-handed regulation that would raise significant barriers for AI developers. Among the wide range of issues on which the ANPR sought comment are entire sections on automated decision-making systems and algorithmic discrimination. These questions contemplate the establishment of "economy-wide" rules to "forbid or limit the development, design, and use of" certain automated decision-making systems.
Even more concerning, the ANPR contemplates "new trade regulation rules that bar or somehow limit the deployment of any system that produces discrimination, irrespective of the data or processes on which those outcomes are based," including against groups "that current law does not recognize as protected from discrimination." In other words, the ANPR contemplates banning the deployment of any system that has the effect of treating any identifiable group of people differently from others.
This combination of data skepticism and unworkable algorithmic-discrimination proposals casts a pall over AI’s future. AI innovation depends on the collection and use of data, sometimes including personal and sensitive data, because some of the most important problems we have to solve (such as in health care) involve such data.
As dissenting Commissioner Noah Phillips explained, the ANPR “recast[s] the Commission as a legislature, with virtually limitless rulemaking authority where personal data are concerned. It contemplates banning or regulating conduct the Commission has never once identified as unfair or deceptive” including many that are “outside its bailiwick… a cavalcade of regulations may be on the way, but their number and substance are a mystery.”
If a future FTC were to revive this proceeding, it could more quickly move to full, heavy-handed regulatory rules across a wide range of legal business practices, including in ways that raise barriers to AI innovation and adoption.
Recommended Action: For the reasons above, the FTC should terminate the 2022 Commercial Surveillance proceeding with an explanatory statement. That statement should clarify that data collection and automated decisionmaking practices, provided they are not done deceptively or unfairly, are useful and necessary components of many business services and can bring wide benefits to consumers, companies, and the economy.
Direct the Nuclear Regulatory Commission to Replace the Linear No-Threshold Model
Problem: The Nuclear Regulatory Commission’s (NRC’s) “As Low as Reasonably Achievable” (ALARA) standard and the linear no-threshold (LNT) model have stifled nuclear expansion since the mid-to-late 1980s, creating skyrocketing costs and halting additional capacity. These regulations lack well-defined limits, ignore radiation dosage and timing, and force excessive mitigation efforts.
Recommended Actions: Direct the NRC to replace linear no-threshold modeling with standards that account for dosage and timing. Specifically, the NRC must:
Clarify ALARA guidelines to establish reasonable, cost-conscious thresholds rather than open-ended requirements.
Replace the LNT model with a more risk-informed framework that considers dosage and timing for radiation exposure.
Refine and streamline the NRC's rules created under the direction of the 2019 Nuclear Energy Innovation and Modernization Act (NEIMA). NEIMA required the NRC to establish new licensing processes for nuclear reactors and advanced reactors, but the NRC's resulting proposals have been unwieldy and have failed to produce the surge of small modular reactor (SMR) companies and deployments of SMRs and advanced reactors that NEIMA's authors envisioned.
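To make the contrast between the current and recommended approaches concrete, the two dose-response models can be sketched in simplified form (an illustration only; the symbols and functional forms below are ours, not the NRC's):

```latex
% Linear no-threshold (LNT): excess risk is assumed proportional to
% cumulative dose D, with no dose below which risk is zero and no
% account taken of the rate at which the dose is received.
R_{\mathrm{LNT}}(D) = k\,D, \qquad D \ge 0

% A threshold, dose-rate-informed alternative: no excess risk is
% attributed below a threshold dose D_0, and the risk coefficient may
% depend on the dose rate \dot{D} (the same total dose spread over
% time is treated differently from an acute exposure).
R_{\mathrm{thr}}(D, \dot{D}) = k(\dot{D})\,\max\bigl(0,\; D - D_0\bigr)
```

Under the first model, every marginal unit of exposure generates regulatory risk to be mitigated, which is what drives open-ended ALARA compliance; under the second, mitigation effort can be concentrated where dose and dose rate actually matter.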
Looking Ahead: The Need to Coordinate Future Federal Action
Problem: As a general purpose technology, AI touches every industry and sector. This means that nearly every government agency has or will have a jurisdictional hook to regulate AI in some way. Even with this RFI, neither OSTP nor the public will be able to anticipate every barrier. Many will appear only as the technology is deployed over the coming years. What OSTP can anticipate is continued friction between AI developers and deployers and the federal regulatory frameworks they encounter, friction that will hinder AI innovation and adoption.
Under the RFI’s framework, this is a form of organizational barrier: a lack of coordination and alignment across the Federal government threatens to subvert the U.S.’s AI leadership goals.
To achieve the AI Action Plan's goals, therefore, the Administration needs to proactively create organizational structures that strengthen the White House's ability to constrain, channel, and coordinate disparate regulatory efforts to preserve and improve the U.S. innovation environment. The AI Action Plan directs OSTP to "work with relevant Federal agencies to take appropriate action." These interactions should not be one-time or ad hoc. OSTP and the White House should build lasting institutional capacity to manage them.
OSTP should establish a pathway for streamlined resolution of these conflicts. Such a pathway can both help companies obtain certainty quickly and provide agencies a centralized set of resources, along with an incentive, to reach that certainty.
Recommended Actions: Three federal efforts could help address the barriers that AI development, deployment, and adoption will continue to face as the technology is incorporated across the economy and the country.
Executive Order on AI Regulatory Planning and Review. On the model of EO 12866, the White House should issue an Executive Order governing federal AI regulation. The EO should:
Reassert that (consistent with EO 12866) the Vice President is the Executive Branch lead on AI policy.
Direct the Office of Information and Regulatory Affairs (OIRA) to develop a benefit-cost rubric to ensure that any AI regulation is justified, proportionate, and does not stifle progress while addressing any real risks. The rubric should include the following factors:
Economic and Innovation Impact. What are the expected productivity gains, compliance costs, and effects on market competition and investment incentives?
Risk Management and Safety. Does the regulation meaningfully reduce demonstrated, significant risks? What beneficial AI applications might be delayed, deterred, or prevented? Does the rule account for the varying risk profiles of different AI applications (e.g., healthcare vs. entertainment)?
National Security and Geopolitical Positioning. Will this regulation make the U.S. a leader or laggard in global AI development? Will it make U.S. technology the choice of countries and companies around the globe? How will the regulation affect supply chain robustness?
Regulatory Simplicity and Feasibility. Does the rule use flexible regulatory approaches, such as iterative innovation, sandboxing, and adaptive governance, or will it quickly be rendered obsolete by new developments? Is the rule straightforward, or does it create regulatory uncertainty? Can it be enforced efficiently without excessive bureaucracy? Are there non-regulatory approaches (e.g., industry standards, liability frameworks) that achieve similar or better outcomes?
Require all proposed federal AI regulations with more than a de minimis impact to be submitted to OIRA and the Vice President's office for coordinated review.
Refocus the Chief AI Officers established under the Biden Administration on deploying and using AI to further their agencies' missions.
Require agency heads to identify in writing, within 90 days, their plans for streamlining and accelerating U.S. AI development.
Federal Interagency Regulatory Sandbox. Establish a federal interagency AI sandbox to accelerate AI innovation in federally regulated industries. The Senate is considering a federal AI sandbox law, and the White House should support that effort. But there is much that can be done while Congress deliberates. Many agencies already have various types of discretionary authority in applying statutory requirements, including waivers, enforcement discretion, declaratory rulings, and guidance. The sandbox should have five components:
Coordinate agency action. Identify participating agencies and their various authorities to waive or modify requirements, and develop clear processes, including standardized regulatory mitigation agreements for sandbox participants.
Admit and monitor participants. Govern the participation of parties in the sandbox, including by setting and enforcing the criteria for participation, establishing the processes for entering and leaving the sandbox, and monitoring participation.
Maintain shared resources. Maintain key resources for agency and private participants, including a repository of template agreements. Access to certain federal data sets could also be conditional on sandbox participation (see “Unleash the Potential of Unstructured Federal Data for AI Training,” below).
Catalyze regulatory mitigation. Match participants with involved agencies. Coordinate and support the development of time-limited regulatory mitigation agreements between participants and federal regulators. Serve as a pro-innovation advocate to the agencies as participants develop such agreements.
Recommend legal changes. Based on knowledge built through sandbox work, offer recommendations to agencies and to Congress on regulatory or legislative changes needed to streamline development or fill gaps in legal protections.
Federal Extraterritorial Effects Task Force. Establish a capability, perhaps housed within the DOJ, for identifying and mitigating the extraterritorial effect of state AI laws and regulation. Because AI touches every sector, each federal agency should use its preemption authority where possible to clear away conflicting state regulations.
Examples of agencies with relevant, preemptive statutory authority in specific industries include:
→ National Highway Traffic Safety Administration (NHTSA) – motor vehicle safety and transportation
→ Federal Aviation Administration (FAA) – aviation and aerospace
→ Federal Communications Commission (FCC) – telecommunications and broadcasting
→ Federal Motor Carrier Safety Administration (FMCSA) – commercial trucking and bus transportation
Such authority is best suited to remove industry-specific state laws with extraterritorial effects that create barriers to AI innovation. For example, the FDA and HHS should explore how their authority might preempt applications of state biometric privacy laws to medical devices or health provider services. However, there is not a clear path to use such industry-specific authority to preempt “comprehensive” state AI laws regulating AI model training.
The task force should monitor industry-specific AI regulation as the technology becomes integrated across the economy. Each state law identified as regulating an AI use should be reviewed for extraterritorial effect by a federal agency with relevant jurisdiction. As part of that analysis, the agency should determine if it has authority to preempt those state restrictions.
Agencies could exercise preemption through rulemaking, declaratory rulings, or adjudications. Other measures could include guidance and consultations with state lawmakers. In some cases, an agency may identify existing rules that already preempt certain state laws; in such cases, it should challenge the state law in court.
Conclusion
For the reasons detailed above, OSTP should work with the appropriate agencies to accomplish the following goals:
Review and rescind or modify relevant existing FTC orders, including the Rytr LLC Order
Revise the FTC current order templates
Terminate the FTC’s Commercial Surveillance proceeding and issue an explanatory statement
Direct the NRC to replace the Linear No-Threshold Model
Issue an executive order on AI regulatory planning and review
Establish a Federal Interagency Regulatory Sandbox
Establish a Federal Extraterritorial Effects Task Force
Thank you for considering these recommendations. We look forward to contributing to efforts to ensure continued U.S. AI leadership.
Appendix A
Table of AI-related FTC Matters with Concurrences, Partial Dissents, or Dissents
Matter | Commissioner(s) | Type | Date
 | Holyoak; Ferguson | Dissent | 01-17-25
 | Ferguson | Concurrence | 01-16-25
 | Holyoak; Ferguson | Concurrence | 01-03-25
 | Ferguson | Concurrence | 12-03-24
 | Ferguson | Dissent | 12-03-24
 | Holyoak | Dissent | 12-03-24
 | Holyoak | Concurrence | 12-03-24
 | Ferguson | Concurrence | 11-26-24
 | Holyoak | Partial Dissent | 11-26-24
 | Ferguson | Concurrence | 11-06-24
 | Holyoak | Concurrence | 11-06-24
 | Holyoak | Dissent | 10-15-24
 | Holyoak | Concurrence | 09-25-24
 | Holyoak; Ferguson | Dissent | 09-25-24
 | Ferguson | Concurrence | 09-25-24
 | Holyoak; Ferguson | Dissent | 09-25-24
 | Ferguson | Partial Dissent | 09-19-24
 | Holyoak | Partial Dissent | 09-19-24
 | Ferguson | Concurrence | 08-16-24
 | Ferguson | Partial Dissent | 08-15-24
 | Holyoak | Partial Dissent | 08-15-24
 | Holyoak | Concurrence | 07-29-24
 | Ferguson | Concurrence | 07-23-24
 | Holyoak | Concurrence | 07-23-24
 | Holyoak | Concurrence | 07-15-24
 | Holyoak | Concurrence | 07-09-24
 | Holyoak; Ferguson | Concurrence | 07-09-24
 | Wilson | Concurrence | 03-02-23
 | Wilson | Concurrence | 12-19-22
 | Wilson | Partial Dissent | 10-24-22
Advance Notice of Proposed Rulemaking on Fake and Deceptive Reviews and Endorsements | Wilson | Dissent | 10-20-22
 | Wilson; Phillips | Dissent | 09-20-22
Combatting Online Harms Through Innovation Report to Congress | Phillips | Dissent | 06-16-22