
Comment on Request for Information on the Development of an Artificial Intelligence Action Plan


Introduction

We appreciate the opportunity to comment on the NSF and NITRD NCO’s request for information (RFI) on the Development of an Artificial Intelligence Action Plan. We applaud the Office of Science and Technology Policy’s (OSTP) desire to identify policy actions necessary to enhance America’s AI dominance.

The Abundance Institute is a mission-driven nonprofit organization focused on creating the cultural and policy environment necessary for emerging technologies to grow, thrive, and reach their full potential. We strongly support Executive Order 14179’s establishment as the policy of the United States “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”

AI is set to improve productivity, strengthen our national security, and “secure a brighter future for all Americans.” However, several threats could undermine those benefits. Well-intentioned state legislatures scramble to fill perceived regulatory gaps with hundreds of new hurdles for innovators. Meanwhile, wrapped in red tape, U.S. companies struggle to build the necessary domestic infrastructure, including energy infrastructure, to train and deploy powerful AI technology. If we stumble or slow, China is poised to take the lead.

The U.S. must meet these threats with muscular executive action. A robust AI Action Plan focused on removing barriers is a first important step.

As such, we are pleased to offer the following concrete ideas for the AI Action Plan. Our recommendations are grouped into direct AI-related actions and those that facilitate the energy infrastructure necessary to continue AI innovation in the U.S.

Key AI Policy Recommendations

Stem the Flow of Conflicting State Laws

Problem: State legislatures were actively considering more than 700 AI bills as of the end of February 2025, a number that will probably top 1,000 by the end of state legislative sessions. Colorado has passed a ‘comprehensive’ AI regulation that no one knows how to implement. Many other states, including some led by Republicans, are considering similar legislation. This patchwork of conflicting overregulation is a significant threat to the Trump administration’s goal of preserving and strengthening U.S. AI dominance.

Actions:

Anti-Patchwork Executive Order: Issue an executive order declaring that it is the position of the federal government that there be a unified regulatory regime for AI in the U.S., and that regime will be light touch, permissionless, and innovator-led. The executive order should:

  • Establish a use-based approach to AI regulation, recognizing that AI models are general-purpose software that pose no inherent consumer risks until applied to specific purposes.

  • Direct agencies, including the Departments of Commerce, Agriculture, Education, Energy, Transportation, and Health and Human Services, to offer guidance on when state-level AI regulations would exclude a state from eligibility to receive funds from federal spending programs, including the CHIPS Act and the BEAD program.

  • Direct relevant agencies to prioritize federal permitting and other regulatory processes for projects in states with AI-friendly regulatory environments, as determined by the Administration.

  • Declare federal contractors offering AI products and services exempt from state AI regulation that would hinder contract performance.

  • Direct the Federal Trade Commission’s Competition Advocacy Program to offer comments to state legislatures on proposed AI regulation that highlight the likely effects of such laws on competition.

Advance Pro-Innovation Federal Law: Propose and advocate for Congressional legislation that preempts restrictive state regulations or explicitly prohibits certain forms of state-level AI legislation. For example:

  • A negative liability law could limit liability of developers of general-purpose AI models for damages created by third parties using the model.

  • A safe harbor law could establish a set of light-touch practices that, if performed, would exempt developers of general-purpose AI models from other state and federal regulatory requirements.

  • A right to compute law could establish that any government actions that restrict the ability to privately own or make use of computational resources for lawful purposes must be limited to those demonstrably necessary and narrowly tailored to fulfill a compelling government interest.

Dormant Commerce Clause Case: Direct the Attorney General to prepare a brief and litigation strategy to support a private dormant commerce clause case. Publicize the Administration’s intent to support such cases against any state legislation that threatens to undermine U.S. AI leadership.

Empower the Executive to Direct AI Policy

Problem: As a general-purpose technology, AI touches every industry and sector. This means that nearly every government agency has a jurisdictional hook to regulate AI. These disparate regulatory efforts must be clearly constrained, channeled, and coordinated by the White House to preserve and improve the U.S. innovation environment.

Actions:

Executive Order on AI Regulatory Planning and Review. On the model of EO 12866, adopt an Executive Order governing federal AI regulation. The EO should:

  • Reassert that (consistent with EO 12866) the Vice President is the Executive Branch lead on AI policy.

  • Direct the Office of Information and Regulatory Affairs (OIRA) to develop a benefit-cost rubric to ensure that any AI regulation is justified, proportionate, and does not stifle progress while addressing any real risks. The rubric should include the following factors:

    • Economic and Innovation Impact. What are the expected productivity gains, compliance costs, and effects on market competition and investment incentives?

    • Risk Management and Safety. Does the regulation meaningfully reduce real, demonstrated risks? What beneficial AI applications might be delayed, deterred, or prevented? Does the rule account for varying risk profiles across AI applications (e.g., healthcare vs. entertainment)?

    • National Security and Geopolitical Positioning. Will this regulation make the U.S. a leader or laggard in global AI development? Will it make U.S. technology the choice of countries and companies around the globe? How will the regulation affect supply chain robustness?

    • Regulatory Simplicity and Feasibility. Does the rule use flexible regulatory approaches for iterative innovation, sandboxing, and adaptive governance, or will new developments quickly render it obsolete? Is the rule straightforward, or does it create regulatory uncertainty? Can it be enforced efficiently without excessive bureaucracy? Are there non-regulatory approaches (e.g., industry standards, liability frameworks) that would achieve similar or better outcomes?

  • Require all proposed federal AI regulation to be reviewed by OIRA and the Vice President’s office.

  • Refocus Chief AI Officers established under the Biden Administration on how to deploy and use AI to further their agencies’ missions.

  • Require agency heads, within 90 days, to identify in writing their plans for streamlining and accelerating U.S. AI development.

Federal Interagency Regulatory Sandbox. Establish a federal interagency AI sandbox to accelerate AI innovation in federally regulated industries. The sandbox should perform five functions:

  • Coordinate agency action. Identify participating agencies and develop clear processes, including standardized regulatory mitigation agreements for sandbox participants.

  • Admit and monitor participants. Govern the participation of parties in the sandbox, including by setting and enforcing the criteria for participation, establishing the processes for entering and leaving the sandbox, and monitoring participation.

  • Maintain shared resources. Maintain key resources for agency and private participants, including a repository of template agreements. Access to certain federal data sets could also be conditional on sandbox participation (see “Unleash the Potential of Unstructured Federal Data for AI Training,” below).

  • Catalyze regulatory mitigation. Match participants with involved agencies. Coordinate and support the development of time-limited regulatory mitigation agreements between participants and federal regulators. Serve as a pro-innovation advocate to the agencies as participants develop such agreements.

  • Recommend legal changes. Based on knowledge built through sandbox work, offer recommendations to agencies and to Congress on regulatory or legislative changes needed to streamline development or fill gaps in legal protections.

Rename and Restructure AISI. Consistent with the growth and innovation emphasis of this administration, the AI Safety Institute should be renamed the AI Standards Institute, depoliticized, and integrated back into NIST’s general structure. Important standard-setting institutions are undermined if they are positioned to make controversial policy decisions better suited to elected officials.

Reorienting the federal bureaucracy toward accelerating U.S. AI innovation requires strong leadership from the White House. A muscular EO, decisively implemented by OIRA, sets the right tone across government. And a vibrant sandbox will help federal agencies and innovators alike to learn more about how to collaborate to advance American AI leadership.

Unleash the Potential of Unstructured Federal Data for AI Training

Problem: The U.S. federal government holds vast troves of economic, demographic, scientific, historical, and other data that it has assembled or funded. While some of this data is publicly available, much remains inaccessible or stored in formats that hinder usability.

A significant portion of this inaccessible data is unstructured—text, images, audio, and other formats that lack a predefined model. Many of these datasets have few legal constraints and pose no privacy or security risks, yet they remain underutilized. Unlocking this data could greatly enhance AI training, benefiting both general AI models and specialized applications such as scientific research.

Further complicating access, multiple federal initiatives—such as the National AI Research Resource (NAIRR), the Standard Application Process (SAP), and the National Secure Data Service (NSDS)—address data access challenges but focus primarily on structured, high-quality statistical data, which is costly to curate. Meanwhile, unstructured data remains largely overlooked.

Action: The federal government should identify and make available unstructured datasets from federal programs. To achieve this:

  • Survey Federal Data Holdings. Direct the Office of Management and Budget (OMB), through the Federal Chief Data Officers Council, to conduct a government-wide inventory of unstructured data. This survey should assess data types, volume, and accessibility to prioritize high-value datasets for release.

  • Redirect the NAIRR to Unstructured Data. Task NAIRR with creating an AI training repository specifically for federally sourced, unstructured data. Agencies should be directed to contribute datasets to this repository.

  • Mandate Open Research Access. Require all government-funded research—including clinical studies—to be published openly, with associated publications and data submitted to the new repository.

  • Ensure Automated Access. Prohibit federal websites from blocking data collection by automated tools (‘bots’), allowing researchers to retrieve publicly available information more efficiently (see the illustrative sketch below).

  • Digitize National Archives. Launch a large-scale initiative to digitize historical records and create a publicly accessible National Archives repository.

Many of these efforts could be implemented through public-private partnerships to promote data use while minimizing costs to taxpayers.
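
To make the automated-access recommendation concrete, below is a minimal Python sketch, using only the standard library, of how a well-behaved research crawler consults a site’s robots.txt file before collecting public records; the federal data portal URL and bot name are hypothetical. A single blanket “Disallow” directive in that file can wall off an entire agency site from lawful, automated research use.

```python
# Minimal sketch: a polite research crawler checking robots.txt before
# collecting publicly available records. (URL and bot name hypothetical.)
from urllib import robotparser

AGENCY_SITE = "https://data.example.gov"  # hypothetical federal data portal

rp = robotparser.RobotFileParser()
rp.set_url(f"{AGENCY_SITE}/robots.txt")
rp.read()  # fetch and parse the site's robots.txt directives

target = f"{AGENCY_SITE}/records/1968/annual-report.txt"
if rp.can_fetch("research-bot", target):
    print("Automated retrieval permitted; collect the record.")
else:
    # A blanket "Disallow: /" directive blocks bots entirely, leaving
    # one-at-a-time manual downloads as researchers' only option.
    print("Blocked by robots.txt; automated collection unavailable.")
```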

Ensure AI Liability Frameworks Promote Innovation

Problem: AI model developers create general-purpose tools, but they generally do not control how those tools are fine-tuned, deployed, or applied by the model deployers who build user applications. Yet some regulatory proposals and litigation efforts seek to impose liability on AI model developers. Trial lawyers and plaintiffs currently have strong financial incentives to sue model developers. Imposing broad liability on model developers would have severe consequences, including:

  • Overcautious AI Development. Model creators cannot predict every possible use by third parties, forcing them to adopt excessive restrictions that would limit the functionality of or access to the most effective AI tools.

  • Self-Censorship. Heightened litigation risk triggers one specific form of overcaution: unnecessarily constraining the range of topics and viewpoints that AI models and AI systems will engage with or reflect.

  • Anticompetitive Effects. Litigation burdens would disproportionately harm smaller or open-source AI model developers, entrenching the dominance of large technology firms that can afford litigation or compliance costs.

Legislators already recognize that harms are a result of AI deployment choices. New York’s anti-deepfake law targets those who create and distribute harmful AI-generated content, not the developers of image-generation models. This approach should guide federal AI policy. A liability framework that properly focuses on bad actors—rather than on tool-builders—better ensures accountability without stifling innovation.

Action: The executive branch should take immediate steps to influence the development of AI liability regimes that focus on deployers rather than model developers. The administration should:

  • Distinguish AI Liability Roles: Direct enforcement agencies such as the FTC and DOJ to distinguish between AI developers and deployers in enforcement actions and policy guidance.

  • Advocate in Litigation: Direct the DOJ to intervene in relevant cases to support the development of liability law that focuses on the most proximate cause of harm. Oppose liability and compliance frameworks that impose undue burdens on foundational AI research and the building of general-purpose tools.

  • Codify Deployers’ Responsibility Where Appropriate: Direct sector-specific agencies (e.g., FDA for AI-powered medical devices, NHTSA for AI-driven vehicles) to clarify that their jurisdiction is limited to industry-specific applications, with the goal of not subjecting developers of general-purpose tools to overlapping agency jurisdiction.

Key Energy Policy Recommendations

Direct FERC to Speed Interconnection Processes Within the Regional Transmission Operators It Oversees

Problem: It takes too long to plug new generators into the grid. The Federal Energy Regulatory Commission (FERC) oversees the operations of every major grid operator except Texas’s. Over the last few years, it has become clear that the Texas grid builds much faster than other grids. From 2021 to 2023, the Electric Reliability Council of Texas (ERCOT) added 25 GW of capacity, compared to 15 GW for the next highest, PJM in the eastern U.S. That is, Texas added roughly two-thirds more capacity than its closest competitor, and multiples of what other grids added.

Texas can build fast and power new growth for two interrelated reasons. First, Texas has no capacity market. Capacity markets require extensive studies, complicated modeling, and often expensive upgrades before new generation capacity can be added to the grid. Texas instead operates an energy-only system and relies on a “connect and manage” approach to grid operation: generators connect quickly, knowing that the grid operator can curtail them to maintain the safe operation of the system.

Second, Texas operates more efficiently than other grid operators. In the PJM regional transmission organization, by contrast, an energy-only resource takes just as long to add to the grid as a networked resource eligible for additional capacity payments, so generators largely ignore the energy-only option.

A swelling body of academic research suggests that capacity markets and interconnection processes need reform to enable faster building and market-driven responses to rising electricity and capacity costs.

Action: Direct the Federal Energy Regulatory Commission to speed interconnection processes within the regional transmission operators they oversee. Set a goal that new generation and new loads should be approved and connected to the grid within one year.

Specifically, direct FERC to host a technical conference and begin a related rulemaking on capacity markets and large load interconnection processes that clarifies and accounts for: (1) the economic foundations of capacity markets and simplifies the conflicting processes between regional transmission operators, (2) load flexibility from on-site energy assets that includes co-location, (3) the ability to move load from sites where the grid is stressed, (4) the possibility of energy parks that incorporate multiple energy assets, and (5) the emergence of new energy technologies like batteries. The goal is to enable innovation and replace bloated planning with market-driven investments that encourage efficiency, adaptation, and innovation.

Embrace “Build, Baby, Build”: Enable All Energy Sources Via Permitting Reform

Problem: Permitting reviews are out of control. Two out of three reviews take longer than the two-year timeline required by statute. Final Environmental Impact Statements issued in 2024 took a median of 2.2 years and an average of 3.8 years to complete. In addition to consuming years, the resulting review documents often run thousands of pages.

Action: Work with agencies managing federal land to speed permitting and interconnection of energy infrastructure of all kinds and to open opportunities for mining and energy development of all kinds on federal lands.

In concrete terms, OSTP can work with agencies as they rewrite their NEPA-implementing regulations in response to Executive Order 14154 and CEQ’s subsequent rescission of its regulations. Under the NEPA statute and CEQ guidance, agencies have significant authority to improve their implementation of NEPA. The President should require agencies implementing NEPA to:

  • Consider only the direct, significant, and reasonably foreseeable environmental effects. Speculative, indirect, and cumulative impacts must not be considered.

  • Enforce prompt NEPA review timelines.

  • Improve coordination between agencies when multiple agencies are involved, by 1) designating a single lead agency for each NEPA review, responsible for issuing a single, consolidated Record of Decision; 2) requiring concurrent agency reviews and prohibiting sequential reviews; and 3) eliminating duplicative permitting requirements among agencies unless explicitly required by law.

  • Immediately adopt new categorical exclusions to exempt:

    • Routine infrastructure maintenance and upgrades,

    • Grid expansion and modernization projects,

    • Domestic energy extraction and production activities,

    • Nuclear plant modifications and upgrades that do not significantly alter environmental impact,

    • Nuclear plant uprates and capacity expansions,

    • License renewals and life extensions for existing plants,

    • Deployment of next-generation reactors on pre-approved sites, and

    • Co-locating SMRs on existing energy infrastructure sites.

  • Unify exemptions by directing agencies to adopt all existing categorical exclusions at other agencies so that the same categorical exclusions are available to project sponsors regardless of the agency overseeing the permitting process.

  • Prioritize permitting for projects designated by the National Energy Council as critical for energy dominance. These include:

    • Nuclear energy projects, including SMRs and advanced reactors,

    • Domestic oil, natural gas, and hydrogen infrastructure,

    • Liquefied Natural Gas (LNG) terminals,

    • Electrical transmission lines,

    • Mining and mineral extraction for critical energy materials, and

    • Energy production from any fuel source.

  • Enable the use of federal lands for energy development and production of all kinds.

Direct the NRC to Replace Linear No-Threshold Modeling

Problem: The U.S. builds nuclear power generation at a glacial pace. The Connecticut Yankee nuclear plant, which began operating on January 1, 1968, took five years and $1 billion (in today’s dollars) to permit and build. In contrast, the new Vogtle reactors in Georgia took 14 years and over $30 billion to fully come online in 2024. Poor regulation is the problem—particularly the “As Low as Reasonably Achievable” (ALARA) standard and the linear no-threshold (LNT) model. These lack well-defined limits, ignore radiation dosage and timing, and force excessive mitigation efforts. These rules have stifled nuclear expansion since the mid-to-late 1980s, driving up costs and halting capacity additions. ALARA deters cost reductions because regulators can interpret cost savings and profitability as evidence that additional radiation-reduction measures could have been undertaken.
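
To make the modeling dispute concrete, here is a simplified sketch of the two dose-response frameworks, in our own illustrative notation rather than the NRC’s: LNT treats estimated risk as proportional to dose at every exposure level, while a threshold model assigns no incremental risk below some dose d_0.

```latex
% Simplified dose-response models (illustrative notation, not the NRC's)
\[
  R_{\mathrm{LNT}}(d) = \alpha\, d
  \qquad \text{vs.} \qquad
  R_{\mathrm{threshold}}(d) =
  \begin{cases}
    0 & d \le d_0 \\
    \alpha\,(d - d_0) & d > d_0
  \end{cases}
\]
```

Because the LNT curve assigns some estimated benefit to every marginal reduction in dose, however small, an ALARA regime built on it has no natural stopping point; a threshold-based standard would give regulators and operators a defined target instead.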

Action: Direct the Nuclear Regulatory Commission (NRC) to replace linear no-threshold modeling with standards that account for dosage and timing. Specifically, the NRC must:

  • Clarify ALARA guidelines to establish reasonable, cost-conscious thresholds rather than open-ended requirements.

  • Replace the LNT model with a more risk-informed framework that considers dosage and timing for radiation exposure.

  • Refine and streamline the NRC’s rules created under the direction of the 2019 Nuclear Energy Innovation and Modernization Act (NEIMA). NEIMA required the NRC to establish new licensing processes for nuclear reactors and advanced reactors, but the NRC’s proposals have been unwieldy and have failed to produce the surge of small modular reactor (SMR) companies and deployments of SMRs and advanced reactors that NEIMA’s authors envisioned.

Supercharge Small Modular Nuclear in the U.S. by Settling the Lawsuit Against the NRC Brought by Utah, Texas, and Last Energy

Problem: The Nuclear Regulatory Commission (NRC) is overstepping its authority by asserting licensing jurisdiction over the construction and operation of all nuclear reactors, regardless of how small and safe they are. The Atomic Energy Act of 1954 explicitly excludes small and safe reactors from the statutory definition of Utilization Facility, and therefore, the NRC lacks legal authority to license or restrict construction and operation of those reactors. Licensing authority for these classes of reactors properly belongs to the states.

States and commercial nuclear startups have now sued the NRC to overturn this aspect of NRC’s Utilization Facility Rule, opening the door for rapid deployment of test reactors, microreactors, and small modular reactors. If this lawsuit is successful, America could experience a nuclear, energy, and economic renaissance. Nuclear costs would plummet as repeatable and manufacturable commercial reactors are deployed and rapid iteration of test reactors proceeds.

Action: The Trump Administration should welcome the Texas v. NRC case on the Utilization Facility Rule and direct the DOJ to settle the case immediately in favor of the plaintiffs’ claims. This would usher in a new age of nuclear energy abundance. Among many other benefits, there would be ample energy to power the data centers needed for AI supremacy.

Conclusion

Thank you for considering the above recommendations. Maintaining the U.S. lead in AI development and deployment is a critical priority, and there is much that the President and his administration can do to clear the path for American innovators and entrepreneurs. We look forward to supporting such efforts.

Key AI Resources

Neil Chilson, Red Flags in AI Legislation, Getting Out of Control (March 6, 2025).

Matt Perault, Setting the Agenda for Global AI Leadership: Assessing the Roles of Congress and the States, Andreessen Horowitz (February 4, 2025).

Taylor Barkley, Logan Whitehair, Ahmad Nazeri, and Neil Chilson, Resetting AI Regulation: Key Takeaways from EO 14110’s Repeal, Now + Next (January 24, 2025).

Jan Zilinsky and Thomas Zeitzoff, Working Paper: Artificial Intelligence, Social Media, and the Politics of Anti-Technology, Abundance Institute (September 2024).

Neil Chilson, Red Teaming AI Legislation: Lessons from SB 1047, Getting Out of Control (August 25, 2024).

Neil Chilson, AI is no reason to limit political speech, Now + Next (May 14, 2024).

James Ostrowski, Regulating Machine Learning Open-Source Software, Abundance Institute (May 2024).

Nirit Weiss-Blatt, Adam Thierer, and Taylor Barkley, The AI Technopanic and Its Effects, Abundance Institute (May 2024).

Jim Harper, Grading the Government’s Data Publication Practices, Cato Institute (November 5, 2012).

Jim Harper, Publication Practices for Transparent Government, Cato Institute (September 23, 2011).

Key Energy Resources

Lynne Kiesling, Innovating Future Power Systems: From Vision to Action, Knowledge Problem (February 27, 2025).

Josh Smith, Unlock the potential of federal lands, Powering Spaceship Earth (January 25, 2025).

Josh Smith, Answering the call to build, Powering Spaceship Earth (January 5, 2025).

Josh Smith, A crisis of our own making?, Powering Spaceship Earth (August 16, 2024).

Austin Vernon, Policies to Take Advantage of Falling Solar Hardware Costs, Abundance Institute (July 2024).

Lynne Kiesling, Data Center Electricity Use 1: Framing the Problem, Knowledge Problem (June 19, 2024).
