An FDA for AI Would Make America Less Safe

The quest for perfect safety has a high body count.

Consider beta blockers, today a common treatment for high blood pressure. The Food and Drug Administration (FDA) first approved propranolol in 1968, three years after it became available in Europe, and then waited until 1978 to approve it for hypertension and angina. Experts estimate that these and related beta-blocker delays cost thousands of lives each year. Broader estimates of FDA delays put the toll in the tens of thousands per decade, and the cumulative toll plausibly runs far higher.

That history should be front of mind after National Economic Council Director Kevin Hassett said the Trump administration is considering releasing advanced AI models only after they have been “proven safe,” “just like an FDA drug.”

This idea of an FDA for AI gained traction during the Biden-Harris administration, which treated the transformative technology of AI as a risk to be contained. We wrote in 2023 about The Problem with AI Licensing & an “FDA for Algorithms,” criticizing proposals to create a new agency to regulate AI.

But that era ended. From day one, the Trump administration rejected the Biden-Harris approach to AI. President Trump viewed AI not as a risk but as an opportunity to be seized. He made clear that U.S. policy should sustain and enhance America’s global AI dominance for human flourishing, economic competitiveness, and national security through a “try-first” policy framework. And in a major February 2025 address in Paris, Vice President JD Vance firmly rejected top-down European-style regulation of AI, arguing that “to restrict its development now will not only unfairly benefit incumbents in this space, it would mean paralyzing one of the most promising technologies we have seen in generations.”

That’s why Hassett’s comments have caused a stir. Adopting an FDA-style regulatory regime for AI would represent a shocking policy reversal by the Trump administration, and a major about-face in how America has approached software, online speech, and digital commerce. Hassett’s comments even prompted a White House response: Chief of Staff Susie Wiles, subtly walking back Hassett’s statement, called President Trump “the most forward leaning president on innovation in American history” and pledged that Trump will continue to “empower[] America’s great innovators, not bureaucracy.”

Her message is welcome, because the FDA is a particularly poor template for AI governance. President Trump himself has argued that the FDA’s regulatory approval process is “slow and burdensome” and it “keeps too many advances … from reaching those in need.” He’s right. Beta blockers are just one tragedy of the FDA’s making. In 1988, during the AIDS crisis, more than 1,000 patients and activists protested outside FDA headquarters, begging for access to experimental treatments. They weren’t asking for perfect safety, only for a chance to survive. The FDA denied them that chance for too long.

The list goes on. In the early days of COVID, FDA processes delayed testing when speed mattered most. And for years the FDA shut down personal genetic health testing that could have helped patients make better choices. Anyone could spit into a tube and learn about their ancestry, but the FDA blocked them from receiving health information from that same test, information that might have prompted earlier conversations with doctors, better screening or wiser life choices.

The FDA is overly cautious because it gets blamed if a drug has an unexpected effect. But no one blames it for the “invisible graveyard” of lost lives when thousands die while effective treatments are delayed. And these delays can be lengthy: according to the U.S. National Academies of Sciences, “[i]t takes 10–15 years for a typical drug to be developed successfully from discovery to registration with the FDA.”

Imagine applying this red-tape nightmare to AI models, which often come and go in just 10–15 months. These aren’t pills designed and tested for a specific use. AI models are general-purpose tools with countless perfectly safe and often very valuable uses (and some dangerous ones as well). Still, the opportunity is immense. AI can help read scans, flag rare diseases, accelerate drug discovery, match patients to clinical trials, translate medical records, reduce administrative waste, detect fraud, identify cyber vulnerabilities and make expert assistance available to people who lack it. And much more.

A pre-approval regime would put all of that behind a bureaucratic gate. A delayed approval for a frontier model could carry a cost even greater than the horror of a delayed disease treatment: a delayed model could slow or stop progress across the entire economy.

Pre-approval regimes also predictably reward the largest incumbents. Big firms can hire lawyers, lobbyists and compliance teams. They can survive delayed deployments. Startups, university labs and open-source developers cannot. We see this exact phenomenon in drug approvals, where only large companies can endure the FDA approval process. In sum, “paperwork favors the powerful,” and an approval regime would mean less investment and less new entry for smaller AI players.

Finally, an FDA approach is incompatible with a tool so intimately intertwined with speech. AI technologies are information technologies, which means they earn some First Amendment protection. A drug-like approval process for speech-generating tools sounds an awful lot like an unconstitutional prior restraint on speech.

Rejecting an FDA-style approach does not mean resigning ourselves to unsafe AI tools. The government can support voluntary testing, standards development, information-sharing and rapid response to concrete threats. Government must cooperate with the private sector to use these powerful AI models to address cybersecurity threats. And it is reasonable for cybersecurity defenders to get early access to powerful new tools.

The demand that a model be “proven safe” before release sounds comforting until one asks what safety means, who defines it, and how long the proof takes. Any “safety” evaluation must also account for the benefits lost to delay: lives saved by faster diagnosis, hospitals with better cybersecurity, drugs discovered sooner, productivity gains that raise living standards and breakthroughs that would never happen if innovators had to wait for permission.

The FDA’s history teaches a hard lesson: excessive caution isn’t safe—it can kill. It kills quietly, by delaying tools that would have helped. It kills by keeping officials safe from scrutiny while the public bears the harm of inaction.

When it comes to AI safety, the FDA isn’t a blueprint; it’s a warning label. Washington should heed it.