The years 2023 and 2024 (so far) have been banner years for technopanics related to artificial intelligence (AI). The term technopanic refers to “intense public, political, and academic responses to the emergence or use of media or technologies.” This paper documents the most recent extreme rhetoric around AI, identifies the incentives that motivate it, and explores the effects it has on public policy.
Granted, there are many serious issues related to the development of AI and the use of algorithmic and computational systems. Labeling the response to these innovations “technopanic” does not mean that the underlying technologies are harmless. Algorithmic systems can pose real threats. Many AI critics have raised concerns in a levelheaded fashion and have engaged in reasoned debate without resorting to rhetorical tactics such as fear appeals and threat inflation. Those tactics are meant to terrify the public and policymakers into taking extreme steps to curb or halt technological progress. Unfortunately, such scare tactics are becoming increasingly common today, and they often crowd out reasoned deliberation about the future of AI.
AI panic is often fueled by news media coverage. Looking at this coverage in light of the “technopanic” phenomenon prompts observers to notice what is being amplified and what is being ignored. These telling emphases and gaps in coverage are apparent in the case of the “existential risks” discourse that gives rise to some of the most extreme rhetoric and proposals in debates over the future of AI: Existential risk, or x-risk, gets most of the attention, while other risks are downplayed.
Our argument is that framing AI in such extremely negative terms can motivate policymakers to propose and adopt stringent rules that could chill or cancel beneficial innovation. Rhetoric has a cost. Thus, we outline a suite of proposals and recommendations for policymakers to adopt in order to provide the sober analysis required in their leadership positions. We also provide recommendations for those in civil society and the media who wish humanity to reap as many benefits as possible from AI tools in the future.
Extreme Rhetoric on the Rise
In June 2022, a unique news story prompted the general public to recognize that large language models (LLMs) had dramatically improved. Google engineer Blake Lemoine argued that Google’s LaMDA (Language Models for Dialogue Applications) is “sentient.” Among other claims, Lemoine said LaMDA resembles “an 8-year-old kid that happens to know physics.” The intense news cycle that followed this story prompted a worldwide discussion about the possibility of AI chatbots having self-awareness and feelings. The idea was received with skepticism. One New York Times article, for example, claimed that “robots can’t think or feel, despite what the researchers who build them want to believe. A.I. is not sentient. Why do people say it is?”
In August 2022, OpenAI gave one million people access to DALL-E 2. In November 2022, the company launched a user-friendly chatbot named ChatGPT. People started interacting with more advanced AI systems—“generative AI” tools—with Blake Lemoine’s story in the background.
At first, news articles debated issues such as copyright and consent regarding AI-generated images (e.g., “AI Creating ‘Art’ Is an Ethical and Copyright Nightmare”) and how students will use ChatGPT to cheat on their assignments (e.g., “New York City Blocks Use of the ChatGPT Bot in Its Schools,” “The College Essay Is Dead”). A turning point came after the release of New York Times columnist Kevin Roose’s story on his disturbing conversation with Microsoft’s new Bing chatbot. It has since become known as the “Sydney tried to break up my marriage” story. The New York Times cover page included parts of Roose’s correspondence with the chatbot, headlined as “Bing’s Chatbot Drew Me In and Creeped Me Out.” “The normal way that you deal with software that has a user interface bug is you just go fix the bug and apologize to the customer that triggered it,” responded Kevin Scott, Microsoft’s chief technology officer. “This one just happened to be one of the most-read stories in The New York Times history.”
After that, things escalated quickly. The “existential risk” open letters appeared in spring 2023 (more on this later), and the “AI could kill everyone” scenario became a mainstream talking point. If it had found a platform only in the realm of British tabloids, it could have been dismissed as fringe sensationalism: “Humans ‘Could Go Extinct’ When Evil ‘Superhuman’ AI Robots Rise Up Like The Terminator.” But similar headlines spread across mass media and could soon be found even in prestigious news outlets (e.g., The New York Times: “If we don’t master A.I., it will master us.”).
The demand for AI coverage produced full-blown exaggerations and clickbait metaphors. It snowballed into a competition of headlines. Patrick Grady and Daniel Castro from the Center for Data Innovation explain, “Once news media first get wind of a panic, it becomes a game of one-upmanship: the more outlandish the claims, the better.” This process reached its apogee with Time magazine’s June 12, 2023, cover story on AI, teased with “THE END OF HUMANITY.”
The fact that grandiose claims such as the assertion that AI will cause human extinction have gained so much momentum is likely having distorting effects on public understanding, AI research funding, corporate priorities, and government regulation. This is why we present some of the more notable examples of essays and op-eds that promote the “existential risk” ideology and reflect the growing AI technopanic.
Here are some telltale signs that the AI technopanic mentality is coloring a specific piece of writing:
→ Quasi-religious rhetoric expressing fear of godlike powers of technology or suggesting that apocalyptic “end times” scenarios are approaching
→ The repeated use of dystopian pop culture allusions to frame discussions, such as references to The Terminator, The Matrix, or Black Mirror, followed by implicit or explicit sympathy for violent actions or social uprisings to “stop the machine” or slow progress in some fashion
→ Calls for sweeping regulatory interventions to control technological progress, which may include widespread surveillance of research and development efforts or even militaristic interventions by governments, and possibly global government control
→ A tendency to ignore any trade-offs or downsides associated with these rhetorical ploys or the extreme recommendations set forth.
Overall, these articles have two commonalities: their focus on “inventing a monster and demanding that world leaders be as afraid of it as you are” and their promotion of dangerous ideas about how to tame it.
Notable Examples of Extreme Rhetoric
Eliezer Yudkowsky, Cofounder of MIRI (the Machine Intelligence Research Institute)
Repeatedly insisting that the world must “shut it all down,” Yudkowsky says that stopping AI and computational science requires extreme interventions. In his preferred world, “allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs,” he wrote in a Time essay. He advocates several sweeping prohibitions:
“Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms—no exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.”
Michael Cuenco, Associate Editor at American Affairs
Cuenco calls for “putting the AI revolution in a deep freeze” and stopping almost all digital innovation and computational progress. He advocates a literal “Butlerian Jihad,” inspired by the Dune prequel of the same name, whose plot involves a conflict in which almost all computers, robots, and forms of AI are intentionally destroyed. Cuenco’s call for policy action includes “a broader indefinite AI ban, accompanied by a social compact premised on the permanent prohibition on the use of advanced AI across multiple industries as a means of obtaining and preserving economic security.” He says that any disruptive automation technology should be “subject to nationwide codes governing what is permissible or not in every industry. Any changes to these codes would have to be enacted at the national level, and the codes should, in practice, be politically difficult to loosen.”
Max Tegmark, President of the Future of Life Institute
Tegmark describes how scary it would be to lose control “to alien digital minds that don’t care about humans”:
“If superintelligence drives humanity extinct, it probably won’t be because it turned evil or conscious, but because it turned competent, with goals misaligned with ours.”
He concludes that in this scenario, “We get extincted as a banal side effect that we can’t predict.”
Dan Hendrycks, Executive and Research Director at the Center for AI Safety
Hendrycks argues that:
“[E]volution tends to produce selfish behavior. Amoral competition among AIs may select for undesirable traits. Evolutionary pressure will likely ingrain AIs with behaviors that promote self-preservation. Humans are incentivized to cede more and more power to AI systems that cannot be reliably controlled, putting us on a pathway toward being supplanted as the earth’s dominant species.”
Yuval Harari, Professor at the Hebrew University of Jerusalem; Tristan Harris and Aza Raskin, Founders of the Center for Humane Technology
“We have summoned an alien intelligence,” these authors argue. “We don’t know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization.” They worry that “A.I. could rapidly eat the whole of human culture” and that “soon we will also find ourselves living inside the hallucinations of nonhuman intelligence”:
“We will finally come face to face with Descartes’s demon, with Plato’s cave, with the Buddhist Maya. A curtain of illusions could descend over the whole of humanity, and we might never again be able to tear that curtain away—or even realize it is there.”
Harari has called for stiff sanctions or even prison sentences for anyone who creates “fake people,” although he has not defined what that means.
Peggy Noonan, Opinion Columnist for The Wall Street Journal
In her first essay regarding AI, Noonan asks society to “pause it for a few years. Call in the world’s counsel, get everyone in. Heck, hold a World Congress.” In a follow-up essay replete with religious metaphors, she warns that “developing AI is biting the apple. Something bad is going to happen. I believe those creating, fueling, and funding it want, possibly unconsciously, to be God and on some level think they are God.” She also favorably cites Yudkowsky’s Time essay.
Erik Hoel, Assistant Professor at Tufts University
Hoel, who is a neuroscientist, writes that the time has come for panic and radical action against AI innovators.
“Panic is necessary because humans simply cannot address a species-level concern without getting worked up about it and catastrophizing,” he claims. “We need to panic about AI and imagine the worst-case scenarios while, at the same time, occasionally admitting that we can pursue a politically-realistic AI safety agenda.”
He fantasizes about “a civilization that pre-emptively stops progress on the technologies that threaten its survival” and rounds out his call to action by suggesting that anti-AI activists vandalize the Microsoft and OpenAI headquarters “because only panic, outrage, and attention lead to global collective action.”
Steve Rose, Assistant Features Editor at the Guardian
Rose has collected five essays on “the ways AI might destroy the world.” Max Tegmark’s essay compares future human extinction with recent extinctions, such as that of the West African black rhinoceros and orangutans in Borneo. The essay by Ajeya Cotra, who oversees Open Philanthropy’s “Potential risks from advanced artificial intelligence” program, compares GPT-4’s “brain” to a squirrel’s brain and recommends that the technology ratchet up to a hedgehog brain and not advance to the equivalent of a human brain. (That’s a lot of animals in one article about AI!) Yoshua Bengio discusses the survival instinct:
“When we create an entity that has survival instinct, it’s like we have created a new species. Once these AI systems have a survival instinct, they might do things that can be dangerous for us.”
Eliezer Yudkowsky suggests what such a superintelligence would do: (1) It is “probably going to want to do things that kill us as a side-effect, such as building so many power plants that run off nuclear fusion—because there is plenty of hydrogen in the oceans—that the oceans boil.” (2) “It could build itself a tiny molecular laboratory and manufacture and release lethal bacteria. What that looks like is everybody on Earth falling over dead inside the same second.”
Open Letters and the Escalation of the Technopanic
Extreme rhetoric can also be found in open letters, which draw considerable media coverage. See table 1 for details about the two open letters regarding x-risk released in 2023.
TABLE 1 | Basic Details about the Two Existential Risk Open Letters
The first notable open letter, initiated by the Future of Life Institute (FLI), was released on March 22, 2023. FLI, as described on the Effective Altruism Forum, is “a non-profit that works to reduce existential risk from powerful technologies, particularly artificial intelligence.” In this widely discussed letter, various individuals called for AI labs “to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” The letter argued that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium.” It warned of “an out-of-control race to develop and deploy ever more powerful digital minds” and “catastrophic effects on society.” The reasoning behind the immediate pause was expressed in the form of a rhetorical question: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”
Apparently, these speculative assumptions weren’t enough, because the cofounder of the Future of Life Institute, Max Tegmark, added in an interview,
We just had a little baby, and I keep asking myself . . . How old is he even gonna get? There’s a pretty large chance we’re not gonna make it as humans. There won’t be any humans on the planet in the not-too-distant future. This is the kind of cancer which kills all of humanity.
The second x-risk open letter, initiated by the Center for AI Safety, was released on May 30, 2023. It raised the rhetorical panic level by publishing a single statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This letter was launched in The New York Times with the headline, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.” Consequently, Robert Wiblin, the former executive director of the Centre for Effective Altruism and the current director of research at 80,000 Hours, declared that “AI extinction fears have largely won the public debate.” Max Tegmark celebrated how the “AI extinction threat is going mainstream.”
In the aftermath of these dramatic warnings, some AI pioneers have escalated their rhetoric. Geoffrey Hinton (one of the signers of the second letter) said, “I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.” “The alarm bell I’m ringing has to do with the existential threat of [powerful AI systems] taking control.” “It’s not just science fiction. It’s not just fear-mongering. It is a real risk we need to think about.” Another pioneer, Yoshua Bengio (who signed both letters), shared that he believes that the technologies have become so capable that they risk triggering a world-ending catastrophe, whether as rogue sentient entities or in the hands of a human. “If it’s an existential risk, we may have one chance, and that’s it.” Jaan Tallinn, cofounder of the Future of Life Institute and the Centre for the Study of Existential Risk and biggest donor to the Survival and Flourishing Fund, said in an interview, “I’ve not met anyone in AI labs who says the risk is less than 1% of blowing up the planet. It’s important that people know lives are being risked.”
CEOs of AI start-ups have begun emphasizing similar existential risk scenarios. Emad Mostaque, CEO of Stability AI, signed both x-risk open letters. He said, “The worst case scenario is that [AI] proliferates and basically it controls humanity.” He also explained on Twitter, “There’s so many ways to wipe out humanity for something that can be more persuasive than anyone & replicate itself & gather any resources.” (It is actually “AI apocalypse scenarios” that replicate and gather resources.) When Sam Altman, OpenAI’s CEO, shared his worst-case scenario of AI, it was “lights out for all of us.” In his interview tour, he frequently emphasized that he is “super-nervous,” that he empathizes “with people who are a lot afraid,” and that “there is a legitimate existential risk here.” Altman also signed the second open letter, which compared the risk from AI to the risk from nuclear war and pandemics.
Potential Incentives for Extreme Rhetoric
Why would someone frame their own company the way Sam Altman has? Making one’s products “the most important—and hopeful, and scary—project in human history” is part of the marketing strategy: “The paranoia is the marketing.” “If you want people to think what you’re working on is powerful, it’s a good idea to make them fear it,” explains François Chollet, an AI researcher at Google.
“AI doomsaying is absolutely everywhere right now,” wrote Brian Merchant, the Los Angeles Times tech columnist. “Which is exactly the way that OpenAI, the company that stands to benefit the most from everyone believing its product has the power to remake—or unmake—the world, wants it.” Merchant explains Altman’s science-fiction-infused marketing frenzy: “Scaring off customers isn’t a concern when what you’re selling is the fearsome power that your service promises.”
One of us (Nirit Weiss-Blatt) has published a guide to the “AI existential risk” ecosystem. Weiss-Blatt classifies the AI panic facilitators as adopting either a “Panic-as-a-Business” attitude or an “AI Panic Marketing” attitude:
→ In “Panic-as-a-Business,” the panic promoters are basically saying, “We believe humans will be wiped out by a Godlike, superintelligent AI. All resources should be focused on that!”
→ In “AI Panic Marketing,” the panic promoters are basically saying, “We’re building a powerful, godlike, superintelligent AI. See how much is invested in taming it!”
A New York Times article profiling the AI company Anthropic, titled “Inside the White-Hot Center of A.I. Doomerism,” demonstrates the “AI Panic Marketing” attitude. Because of the company’s “effective altruism” culture, its employees shared a prediction that there was a “20 percent chance of imminent doom.” “They worry, obsessively, about what will happen if A.I. alignment isn’t solved by the time more powerful A.I. systems arrive,” observes the article’s author, Kevin Roose. However, thanks to Anthropic’s unique “Constitutional A.I.” technique, “you get an A.I. system that largely polices itself and misbehaves less frequently than chatbots trained using other methods,” the company claimed. The New York Times published Anthropic’s profile the day the company launched its new chatbot, “Claude 2.”
In July 2023, OpenAI launched a “Superalignment” team to control “superintelligence.” The team’s opening statement is another example of extreme rhetoric and industry motivation: “Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.” The solution? The company pledged to dedicate 20 percent of its computational power to “solving the problem.” (One Superalignment team member called his team the “notkilleveryoneism” team.)
AI ethicist Rumman Chowdhury characterized this attitude as a “disempowerment narrative”:
The general premise of all of this language is, “We have not yet built but will build a technology that is so horrible that it can kill us. But clearly, the only people skilled to address this work are us, the very people who have built it, or who will build it.” That is insane.
The tech sector has a tendency to “overstate the capabilities of their products,” according to Will Douglas Heaven, MIT Technology Review’s senior editor for AI. Heaven suggested that “if something sounds like bad science fiction, maybe it is.” Emily Bender, a University of Washington professor, would probably agree: “The whole thing looks to me like a media stunt, to try to grab the attention of the media, the public, and policymakers and focus everyone on the distraction of sci-fi scenarios,” she said. “This would seem to serve two purposes: it paints their tech as way more powerful and effective than it is, and it takes the focus away from the actual harms being done, now.”
Kyunghyun Cho, a prominent AI researcher from New York University, explained the “AI Panic Marketing” attitude as a “savior complex”: “They all want to save us from the inevitable doom that only they see and think only they can solve. These people are loud, but they’re still a fringe group within the whole society, not to mention the whole machine learning community.”
This brings us to the “Panic-as-a-Business” attitude, which has been adopted by effective altruism. This movement created a network comprising hundreds of organizations that are led by a relatively small group of influential leaders and a handful of key organizations (e.g., Open Philanthropy). In recent years, these effective altruism institutes have promoted the “AGI [Artificial General Intelligence] apocalypse” ideology through field-building of “AI safety” and “AI alignment” research (which aim to align future AI systems with human values).
Ever since Eliezer Yudkowsky, one of the founders of the field of “AI alignment,” published his influential Time op-ed, he has been on a media blitz. “I expected to be a tiny voice shouting into the void, and people listened instead,” Yudkowsky admitted. “So, I doubled down on that,” referring to his AI doom scenarios.
Among the effective altruism movement’s leading donors are Jaan Tallinn (cofounder of the Future of Life Institute), Vitalik Buterin (cofounder of Ethereum), Sam Bankman-Fried (disgraced founder of FTX), and Elon Musk (CEO of Tesla and xAI). A notable institution in this realm is Open Philanthropy (cofounded by Dustin Moskovitz), which has funneled nearly half a billion dollars into developing a pipeline of talent to fight “rogue AI.” According to The Washington Post, this effort included building a scaffolding of think tanks, YouTube channels, prize competitions, grants, research funding, and scholarships. The interest in working on AI x-risk did not arise organically, as Princeton computer science PhD candidate Sayash Kapoor points out: “It has been very strategically funded by organizations that make x-risk a top area of focus.”
A major component of effective altruism’s philosophy is “longtermism”—a focus on the distant future’s potential catastrophes. Elon Musk called longtermism a “close match” to his own ideology. After attending the “AI Forum” called by the US Congress in September 2023, Musk told reporters that, though the chances are low, there’s a nonzero possibility that “AI will kill us all.” According to Reid Hoffman (who founded OpenAI with him), Musk’s “whole approach to AI is: AI can only be saved if I deliver, if I build it.” Five years after Musk left OpenAI, Sam Altman made a similar observation: “Elon desperately wants the world to be saved. But only if he can be the one to save it.”
The two x-risk open letters cited above spurred the AI panic as they gathered tens of thousands of signatories. These signatories provided credibility that drove the letters to fame. While people like Max Tegmark were interviewed about why they signed the letters, most signatories were “faceless,” and their motivations were unclear. That is why the paper “Why They’re Worried” is an interesting attempt to understand the views of those who signed the first open letter. The researchers interviewed early signatories about “how their beliefs relate to the letter’s stated goals.” The signatories’ answers revealed that their concerns were not centered on “human extinction” at all.
Most of the interviewed signatories indicated that they did not “envision the apocalyptic scenario that some parts of the document warn about.” For example, Moshe Vardi “disagreed with almost every line”; Ricardo Baeza-Yates “thought that the request was not the right one and also that the reasons were the wrong ones”; an anonymous signatory “didn’t read it all and [doesn’t] buy into it all.” The researchers concluded that “while a few aligned with the letter’s existential focus, many . . . were far more preoccupied with problems relevant to today.” Nonetheless, as Nirit Weiss-Blatt has observed elsewhere, “They lent their name to the extreme AI doomers.”
What this “six-month pause” letter did was to normalize “expressing deep AI fears.” Looking back on its impact, Max Tegmark shared, “I was overwhelmed by the success of the letter in bringing about this sorely needed conversation. It was amazing how it exploded into the public sphere.” Part of this “success” is that now “all sides realize that if anyone builds out of control superintelligence, we all go extinct.”
The result is that, unlike the “techlash” against social media, the current “AI techlash” amplifies the x-risk angle. “This is historically quite abnormal,” said Kevin Roose, tech columnist at The New York Times. Amid all the previous criticism of Facebook regarding political polarization, disinformation, and kids’ mental health, creator Mark Zuckerberg wasn’t blamed for wiping out humanity (nor did he warn that his products might do so). The AI techlash feels overwhelming and unprecedented—because it is.
The Media’s Incentives and Role in Fueling Doomsaying
There are “Top 10 AI Frames” that encapsulate the media’s “know-how” for covering AI. These AI descriptions are organized from the most positive to the most negative. Since the media is drawn to extreme depictions, the AI coverage includes mainly exaggerated utopian scenarios (on how AI will save humanity) alongside exaggerated dystopian scenarios (on how AI will destroy humanity). Recently, the most negative frame, the “existential threat” theme, has been getting the most attention.
Ian Hogarth, author of the column “We Must Slow Down the Race to God-Like AI,” shared that this column was “the most read story” in the Financial Times the day it was published. Similarly, Steve Rose, assistant features editor for The Guardian, shared this simple truth: “So far, ‘AI worst case scenarios’ has had 5 x as many readers as ‘AI best case scenarios.’” Hogarth’s Financial Times op-ed stated that “God-like AI . . . could usher in the obsolescence or destruction of the human race.” The Guardian declared in a headline that “Everyone on Earth Could Fall Over Dead in the Same Second.”
It’s not surprising that these articles were successful. Tragedy and catastrophe garner attention. After all, according to the journalistic marketing truism, “If it bleeds, it leads.”
According to Paris Martineau, a tech reporter at The Information, who was interviewed by Columbia Journalism Review, we need to consider the structural headwinds buffeting journalism—the collapse of advertising revenue, shrinking editorial budgets, smaller newsrooms, and the demand for SEO traffic. In a perfect world, all reporters would have the time and resources to write ethically framed, non-science-fiction-like stories about AI. But they do not. “It is systemic,” Martineau said.
Since the media plays a crucial role in the self-reinforcing cycle of AI doomerism, Nirit Weiss-Blatt has outlined seven ways AI media coverage fails us, using the acronym “AI PANIC”:
AI Hype and Criti-Hype
→ AI hype describes when overconfident techies brag about their AI systems (also termed AI boosterism).
→ AI criti-hype describes when overconfident doomsayers accuse those AI systems of atrocities (also termed AI doomerism).
→ Both overpromise the technology’s capabilities.
Inducing Simplistic, Binary Thinking
→ Discussion is either simplistically optimistic or simplistically pessimistic.
→ When companies’ founders are referred to as “charismatic leaders,” AI ethics experts as “critics” or “skeptics,” and doomsayers (without expertise in AI) as “AI experts,” this distorts how the public perceives, understands, and participates in these discussions.
Pack Journalism
→ Pack journalism encourages copycat behavior: different news outlets report the same story from the same perspective.
→ It leads to media storms.
→ In the current media storm, AI doomers’ fearmongering overshadows the real consequences of AI. The resulting conversation is not productive, yet the press runs with it.
Anthropomorphizing AI
→ Attributing human characteristics to AI misleads people.
→ Anthropomorphizing begins with words like intelligence and learning and moves on to consciousness and sentience, as if the machine has experiences, emotions, opinions, or motivations. AI is not a human being.
Narrow Focus on the Edges of the Debate
→ The selection of topics for attention and the framing of those topics are powerful agenda-setting functions.
→ This is why it’s unfortunate that the loudest shouters lead the AI discussion’s framing.
Interchanging Question Marks and Exclamation Points
→ Sensational, deterministic headlines prevail over nuanced discussions.
→ “Artificial General Intelligence Will Destroy Us!” and “Artificial General Intelligence Will Save Us!” make for good headlines, not good journalism.
Covering Sci-Fi Scenarios as Credible Predictions
→ “AI will get out of control and kill everyone.” This scenario doesn’t need any proof or factual explanation.
→ We saw it in Hollywood movies! So it must be true . . . right?
“The proliferation of sensationalist narratives surrounding artificial intelligence—fueled by interest, ignorance, and opportunism—threatens to derail essential discussions on AI governance and responsible implementation,” warn Divyansh Kaushik and Matt Korda from the Federation of American Scientists. The next section will show how it has already derailed the AI governance discussion.
Effect on Politicians
Politicians are paying attention to the AI panic. According to the National Conference of State Legislatures, over 90 AI-related bills had been introduced by midsummer 2023, many of them pushing for extensive regulation. As of April 2024 that number had increased to nearly 600 bills in the states and nearly 100 bills in Congress. The attention is likely to increase even more. “Regulators around the world are now scrambling to decide how to regulate the technology, while respected researchers are warning of longer-term harms, including that the tech might one day surpass human intelligence,” wrote Gerrit De Vynck in The Washington Post. “There’s an AI-focused hearing on Capitol Hill nearly every week.”
At the federal level, Congress, regulatory agencies, and the White House are all reacting to the public discourse by releasing guidance documents, memos, and op-eds. In May 2023, the White House hosted a summit of many of the leading generative AI CEOs. At one Senate Judiciary Committee hearing in May 2023, Sen. John Kennedy suggested that political deliberations about these issues should begin with the assumption that AI wants to kill us. The State Department spent $250,000 in November 2022 to commission a report, released in February 2024, that compared advanced AI models to weapons of mass destruction. The report recommended creating a new federal regulatory agency and an international AI agency, and called for Congress to outlaw “AI models using more than a certain level of computing power.”
The more that tech panic discourse permeates the media, the more pressure politicians feel to act. Of course, such public pressure is not a bad thing in and of itself. Politicians and policymakers should listen and respond to those who have elected them. Thus, it becomes the responsibility of more sober-minded experts to ensure that their voices are heard.
As politicians react, however, they will react with regulatory proposals that aim to curb the harm perceived as the most prominent. Basing public policies on peak fears has driven some of the worst laws and measures in United States history. For instance, fear of Japanese people living in the US during World War II led to internment camps, and fear of terrorism in the immediate aftermath of 9/11 led to domestic spying programs authorized under laws such as the Patriot Act. A precautionary approach has costs of its own, including forgone innovation and other curtailments of commerce, creativity, or even free speech. The small size of the European digital technology industry serves as a prime example of the results of such precaution when it is translated into widespread restrictions on innovative activities.
Lately, academics and organizations focused on x-risk have been escalating their calls for extreme political and regulatory interventions, and some of their ideas now serve as the baseline in public policy debates about artificial intelligence. The work of many of these individuals and groups can be traced to proposals set forth by Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford. Bostrom has done influential writing and speaking on existential risk (or what he calls “superintelligence”) and potential global regulatory responses to it. He has outlined a variety of specific regulatory options for addressing existential concerns, most notably in his widely cited essay developing what he refers to as his “vulnerable world hypothesis.”
Bostrom’s approach to x-risk basically suggests that it is worth pursuing one sort of existential risk (global authoritarian control of science, innovation, and individuals) to address what he regards as a far greater existential risk (the development of dangerous autonomous systems). During a 2019 TED talk, Bostrom said that ubiquitous mass surveillance might need to be accomplished through global government solutions that could possibly include a “freedom tag” or some sort of “necklace with multi-dimensional cameras” that allows real-time monitoring of citizens to ensure they are not engaged in risky activities. He admitted that there are “huge problems and risks” associated with the idea of mass surveillance and global governance, but suggested that “we seemed to be doomed anyway” and that extreme solutions are acceptable in that light.
Most other AI x-risk theorists do not go quite as far as Bostrom, but many of them also call for fairly sweeping regulatory solutions, some of which entail some sort of global government-imposed regulations. While these proposed solutions are often highly aspirational and lack details, there have been calls for governments to engage in chip-level surveillance using some sort of tracking technology embedded in semiconductors that power large-scale computing systems. This would require some government or organization to track chip distribution and usage in real time across the globe in order to determine how chips are being used and ensure compliance with whatever restrictions on use are devised. Extensive software export controls would probably accompany such regulations.
Other academics and organizations have proposed mandating “know your customer” regulations or other supply-chain regulations that would require companies to report their customers (or their customers’ activities) to government officials. Another type of proposed regulation would impose hard caps on the aggregate amount of computing power of AI models, enforced through government approvals. Such approvals would be obtained from a new licensing regime that would place limits on who could develop high-powered computing systems.
When Dan Hendrycks, the initiator of the second x-risk open letter, was asked about this letter, he explained that it may take a warning shot—a near disaster—to get the attention of a broad audience and to help the world understand the danger as he does. Hendrycks hopes for a multinational regulation that would include China: “We might be able to jointly agree to slow down.” He imagines something similar to the European Organization for Nuclear Research, or CERN. According to Hendrycks, if the private sector remains in the lead, the governments of the United States, England, and China could build an “off-switch.” (At the same time, Hendrycks is an adviser to Elon Musk’s new AI start-up, xAI.)
Moreover, some analysts suggest that “there’s only one way to control AI: Nationalization.” In their view, governments should consider nationalizing supercomputing facilities, perhaps through a “Manhattan Project for AI Safety,” which would be a government-controlled lab that has exclusive authority to coordinate and conduct “high-risk R&D.” Others have floated the idea of accomplishing this at a global scale through a new super-regulator that would “remove research on powerful, autonomous AI systems away from private firms and into a highly-secure facility with multinational backing and supervision.”
The most radical proposal along these lines comes from Ian Hogarth, who has called for “governments to take control by regulating access to frontier hardware” to limit what he calls “God-like AI.” He advocates that such systems be contained on a hypothetical “island,” where “experts trying to build God-like [artificial general intelligence] systems do so in a highly secure facility: an air-gapped enclosure with the best security humans can build. All other attempts to build God-like AI would become illegal; only when such AI were provably safe could they be commercialized ‘off island.’” As mentioned, his proposal gained a lot of attention.
To be clear, under this and other nationalization schemes, private commercial development of advanced supercomputing systems and models would be illegal. Incredibly, none of the authors of these proposals have anything to say about how they plan to convince China or other nations to abandon all their supercomputing facilities and research. Such cooperation would be required for “island” schemes to have any serious global limiting effect.
Still other analysts speak of the need for a “public option for superintelligence” that would have governments exert far greater control over large-scale generative AI systems, perhaps by creating their own publicly funded systems or models.
While most governments have yet to act on these calls, some lawmakers are threatening far-reaching controls on computation and algorithmic innovations for risks more mundane than “superintelligence.” For example, Italy banned ChatGPT for a month in April 2023 over privacy concerns before finally allowing OpenAI to restore service to the country. In the US, greatly expanded legal liability is being proposed as a solution to hypothetical harms that have not yet developed. For reasons of “privacy . . . the harms of unchecked AI development, insulate kids from harmful impacts, and keep[ing] this valuable technology out of the hands of our adversaries,” Sen. Josh Hawley set forth objectives for AI legislation that would expand lawsuits against AI models, and he also proposed a new federal regulatory licensing regime for generative AI. Along with Sen. Richard Blumenthal, Senator Hawley also sent a letter to Meta in June 2023, citing “spam, fraud, malware, privacy violations, harassment, and other wrongdoing and harms” and warning the company about how it released its open-sourced “LLaMA” model. The letter even suggested that closed-source models were preferable.
Meanwhile, in the UK, Prime Minister Rishi Sunak commented on the widely circulated second open letter, saying, “The government is looking very carefully at this.” To prove as much, in June the British government appointed Ian Hogarth, author of the “AI island” proposal discussed earlier, to lead its new AI Foundation Model Taskforce.
The danger with extreme political solutions to hypothetical AI risks is not only that these reactions could derail many beneficial forms of innovation (especially open-source AI innovation), but—more importantly—that they require profoundly dangerous trade-offs in the realms of human rights and global stability. Bostrom and many other advocates of global regulatory interventions to address what they perceive as serious risks should consider what we can learn from the past and especially from previous calls for sweeping global controls to address new innovations.
At the outset of the Cold War, for example, the real danger of nuclear escalation among superpowers led some well-meaning intellectuals to call for extreme steps to address the existential risk associated with global thermonuclear conflict. In 1951, the eminent philosopher Bertrand Russell predicted “the end of human life, perhaps of all life on our planet,” before the end of the century unless the world unified under “a single government, possessing a monopoly of all the major weapons of war.” Fortunately, Russell’s recommendations were not heeded. Instead, the risks from nuclear weapons are managed in a multistakeholder, voluntary, and (mostly) peaceful manner. Despite the vast difference between nuclear weapons and AI, many AI x-risk theorists today similarly imagine that only sweeping global governance solutions can save humanity from near-certain catastrophe. If the risks from weapons of mass destruction have been managed successfully (to date), then the likelihood of successful management is much greater for a nonweapon technology like AI. To be clear, this paper is not drawing a comparison between the risk of nuclear weapons and the risk of AI, because such a comparison would be inaccurate and unhelpful. The point is that past powerful technologies have also prompted calls for draconian regulations.
What scholars then and now have failed to address fully are the remarkable and quite conceivable dangers associated with their proposals. “Global totalitarianism is its own existential risk,” notes researcher Maxwell Tabarrok regarding Bostrom’s approach. Indeed, the threat to liberties and lives from totalitarian government was a sad historical legacy of the past century. An effort to create a single global government authority to oversee all algorithmic risks could lead to serious conflict among nations vying for that power, as well as to mass disobedience by nation-states, companies, organizations, and individuals who want no part of such a scheme.
Extreme rhetoric has undermined life-enriching innovation before. For example, fears of nuclear weapons have contributed to generalized fears of nuclear power and radiation. This has resulted in the multi-decade stifling of cheap, abundant energy derived from nuclear fission. These fears still drive misguided efforts to take existing nuclear plants offline, which actually costs more lives than it saves.
AI has been referred to as a “general purpose technology,” which means that it is a technology applicable in many use cases. Electricity is also a general-purpose technology. When electric power was first deployed, it too provoked a massive panic. Had policymakers acted on those fears and restricted the development and use of electricity, their actions would have had widespread and quite deleterious consequences for society. Instead, people quickly came to understand and experience the benefits of electric power. If AI panic drives extreme political responses to AI, these responses could also have negative ramifications.
Finally, practically speaking, any attempt to create global government solutions to mitigate AI risks must contend with the fact that most experts cannot even agree on how to define artificial intelligence. Nor is there any clear consensus about what are the most serious algorithmic dangers that should be addressed through global accords or even domestic regulations. There is a major divide currently between those in the “AI ethics” camp and those in the “AI safety” camp, and it has led to heated arguments about what issues deserve the most attention. Elevating AI policy battles to a global scale would multiply the range of issues in play and of actors who want some control over decision-making, creating the potential for even more conflict.
Around the globe, media hype influences politicians and their proposals to regulate AI. The rhetoric is not without consequence, and the results will continue to unfold. It is up to voters and the policymakers themselves to avoid the hype and focus on the issues that are actively harming consumers. This approach will lead to the creation of more sober and innovation-friendly policy while allowing governments at all levels to step in and correct harms.
Effect on the Public
The public also responds to negative AI hype, dire predictions, and extreme proposals.
Existential risk, once a niche discussion, has gained popular prominence. For example, in May 2023, a Reuters poll revealed that 61 percent of Americans think that AI poses an existential risk to humanity. A May 2023 Quinnipiac University poll showed that “a majority of Americans (54 percent) think artificial intelligence poses a danger to humanity, while 31 percent think it will benefit humanity.” According to an April 2023 survey by Morning Consult, roughly three in five (61 percent) adults in the United States now perceive AI tools to be an existential threat to humanity.
Before ChatGPT, AI was at the bottom of Americans’ list of risk concerns. A survey by the Centre for the Governance of AI measured the public’s perception of the global risk of AI within the context of other global risks by asking respondents questions about cyberattacks, terrorist attacks, global recession, and the spread of infectious diseases. The centre defined “global risk” as “an uncertain event or condition that, if it happens, could cause significant negative impact for at least 10 percent of the world’s population.” The findings from the survey are reported in the scatterplot shown in figure 1.
FIGURE 1 | AI in the Context of Other Global Risks
The data reported in figure 1 clearly show that respondents ranked AI risk lowest overall in 2019 along both axes. On the horizontal “Likelihood” axis, the “harmful consequences of AI” was the least probable event to occur over the next 10 years. Along the vertical “Impact” axis it ranked toward the bottom of potential impacts, above only “failure to address climate change.”
A survey conducted by researchers from Monmouth University is perhaps the most insightful. Monmouth conducted this survey in 2015 and again in 2023, asking respondents various questions about their opinions regarding the increased adoption of AI systems and the potential consequences of this increase. In both years, the survey asked, “How worried are you that machines with artificial intelligence could eventually pose a threat to the existence of the human race?” The 2015 poll found that 44 percent of respondents were either “very worried” or “somewhat worried,” while 56 percent were either “not at all worried” or “not too worried.” The 2023 poll saw the “very worried” and “somewhat worried” categories jump to 56 percent and the “not at all worried” and “not too worried” categories fall to 44 percent. Similarly, a Pew Research Center survey from August 2023 found that 52 percent of Americans say they feel more concerned than excited about the increased use of AI. This was up 14 percentage points since December 2022, when 38 percent expressed this concern.
Evidently, optimism surrounding AI technologies used to be higher. We should look at the fruits of AI technological development with awe, but allowing the doomsday conversation to dominate the public consciousness may generate more negative externalities than the actual probability of an AI overlord warrants.
Recommendations
Current media coverage is amplifying the potential existential risk from AI. This is unsurprising, because the media thrives on fear-based content. However, we can expect that the doomsaying will not stay this dominant. Panic cycles, as their name implies, are cyclical. At some point, the hysteria calms down (see figure 2). Here are some suggestions for ways to reach that point:
For the media:
→ The media needs to stop spreading unrealistic expectations (both good and bad). The focus should be on how AI systems actually work (and don’t work). When we discuss what AI is, we also need to discuss what it isn’t.
→ Media attention should not be paid to the fringes of the debate. The focus should return to the actual challenges and the guardrails they require.
→ There are plenty of AI researchers who would love to inform the public in a nuanced way. It’s time to highlight more diverse voices that can offer different perspectives.
FIGURE 2 | The AI Panic Cycle: Fears Increase, Peak, Then Decline over Time as the Public Becomes Familiar with the Technology and Its Benefits
For media audiences:
→ Whenever people make sweeping predictions with absolute certainty in a state of uncertainty, it is important to raise questions about what motivates such extreme forecasts.
→ We need to keep reminding ourselves that the promoters of hype and criti-hype have much to gain from spreading the impression that AI is much more powerful than it actually is. Rather than getting caught up in these hype cycles, we should be skeptical and evaluate in a more nuanced way how AI affects our daily lives.
→ We need to look at the complex reality and see humans at the helm, not machines. It’s humans making decisions about the design, training, and applications. Many social forces are at play here: researchers, policymakers, industry leaders, journalists, and users all have a hand in shaping this technology.
For policymakers:
→ First, policymakers should be aware of current technopanics and respond accordingly. Patrick Grady and Daniel Castro urge, “It would behoove policymakers to recognize when they are in the midst of a tech panic and use caution when digesting hypothetical or exaggerated concerns about generative AI that crowd out discussion of more immediate and valid ones.”
→ Second, policymakers should base their policy recommendations and decisions on actual harms, not hypothetical ones. As noted earlier, panic-fueled public policy decisions have a history of negative effects.
→ Third, policymakers should use their public platforms to educate the public about the actual technology in play. Policymakers are uniquely situated in that they are able to solicit and hear from a wide array of experts. Although they face a range of incentives that might run counter to this recommendation, they have a responsibility to provide a sober analysis. Sen. Chuck Schumer and Sen. Bill Cassidy are excellent examples of prominent politicians using their platform to slowly assess the issue and educate their colleagues and the public.
Conclusion
Extreme rhetoric about AI is ubiquitous and has a real influence on politicians and public opinion. The danger is that this rhetoric will result in policy decisions that come at the cost of potentially lifesaving technologies. When a technology is as important to the economy as AI, the incentives of the x-risk institutions and the effects of their rhetoric are worth further examination. Moving forward, the solutions to dealing with this media hype should be multifaceted and should involve the whole of civil society.