How AI could increase the risk of nuclear war

Stunning advances in AI are provoking a new arms race among the world's major nuclear powers

This article is part of the World Economic Forum's Geostrategy platform

Could artificial intelligence upend the concepts of nuclear deterrence that have helped spare the world from nuclear war since 1945? Rapid advances in AI, coupled with a proliferation of drones, satellites, and other sensors, raise the possibility that countries could find and threaten each other's nuclear forces, escalating tensions.

Lt. Col. Stanislav Petrov settled into the commander's chair in a secret bunker outside Moscow. His job that night was simple: Monitor the computers that were sifting through satellite data, watching the United States for any sign of a missile launch. It was just after midnight, Sept. 26, 1983.

A siren clanged off the bunker walls. A single word flashed on the screen in front of him.

"Launch."

The fear that computers, by mistake or malice, might lead humanity to the brink of nuclear annihilation has haunted imaginations since the earliest days of the Cold War.

The danger might soon be more science than fiction. Stunning advances in AI have created machines that can learn and think, provoking a new arms race among the world's major nuclear powers. It's not the killer robots of Hollywood blockbusters that we need to worry about; it's how computers might challenge the basic rules of nuclear deterrence and lead humans into making devastating decisions.

That's the premise behind a new paper from the RAND Corporation, How Might Artificial Intelligence Affect the Risk of Nuclear War? It's part of Security 2040, a special RAND project to look over the horizon and anticipate coming threats.

"This isn't just a movie scenario," said Andrew Lohn, an engineer at RAND who coauthored the paper and whose experience with AI includes using it to route drones, identify whale calls, and predict the outcomes of NBA games. "Things that are relatively simple can raise tensions and lead us to some dangerous places if we are not careful."

Glitch, or Armageddon?

Petrov would say later that his chair felt like a frying pan. He knew the computer system had glitches. The Soviets, worried that they were falling behind in the arms race with the United States, had rushed it into service only months earlier. Its screen now read “high reliability,” but Petrov's gut said otherwise.

He picked up the phone to his duty officer. “False alarm,” he said. Suddenly, the system flashed with new warnings: another launch, and then another, and then another. The words on the screen glowed red:

"Missile attack."

To understand how intelligent computers could raise the risk of nuclear war, you have to understand a little about why the Cold War never turned hot. There are many theories, but “assured retaliation” has always been one of the cornerstones. In the simplest terms, it means: If you punch me, I'll punch you back. With nuclear weapons in play, that counterpunch could wipe out whole cities, a loss neither side was ever willing to risk.

That theory leads to some seemingly counterintuitive conclusions. If both sides have weapons that can survive a first strike and hit back, then the situation is stable. Neither side will risk throwing that first punch. The situation gets more dangerous and uncertain if one side loses its ability to strike back or even just thinks it might lose that ability. It might respond by creating new weapons to regain its edge. Or it might decide it needs to throw its punches early, before it gets hit first.
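
That logic can be made concrete with a toy expected-payoff calculation. The sketch below is purely illustrative, written for this article rather than taken from the RAND paper, and every payoff value and probability in it is an invented assumption:

```python
# A toy expected-payoff model of the "counterpunch" logic above.
# Every number is an invented assumption for illustration only.

PEACE = 0.0           # status quo holds
MUTUAL_RUIN = -100.0  # both sides absorb a nuclear strike
DISARMED = -120.0     # struck first, unable to hit back
FIRST_STRIKE = -90.0  # strike first: ruinous, but on your own terms

def value_of_waiting(p_survive: float, p_attacked: float) -> float:
    """Expected payoff of holding fire.

    p_survive  = chance your retaliatory force survives an enemy first
                 strike (the belief that better sensing and AI erode).
    p_attacked = chance the adversary strikes first while you wait.
    """
    if_attacked = p_survive * MUTUAL_RUIN + (1 - p_survive) * DISARMED
    return (1 - p_attacked) * PEACE + p_attacked * if_attacked

def prefers_first_strike(p_survive: float, p_attacked: float) -> bool:
    """True when striking first looks better than waiting."""
    return FIRST_STRIKE > value_of_waiting(p_survive, p_attacked)

# A state that fears attack (p_attacked = 0.8) stays its hand while it
# trusts its deterrent, but the calculus flips as that trust erodes:
for p_survive in (0.9, 0.5, 0.1):
    print(p_survive, prefers_first_strike(p_survive, p_attacked=0.8))
# -> 0.9 False, 0.5 False, 0.1 True
```

The toy's only point is the direction of the effect: once a state stops trusting its counterpunch, holding fire starts to look like the worse bet.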

That's where the real danger of AI might lie. Computers can already scan thousands of surveillance photos, looking for patterns that a human eye would never see. It doesn't take much imagination to envision a more advanced system taking in drone feeds, satellite data, and even social media posts to develop a complete picture of an adversary's weapons and defenses.

A system that can be everywhere and see everything might convince an adversary that it is vulnerable to a disarming first strike—that it might lose its counterpunch. That adversary would scramble to find new ways to level the field again, by whatever means necessary. That road leads closer to nuclear war.

"Autonomous systems don't need to kill people to undermine stability and make catastrophic war more likely," said Edward Geist, an associate policy researcher at RAND, a specialist in nuclear security, and co-author of the new paper. "New AI capabilities might make people think they're going to lose if they hesitate. That could give them itchier trigger fingers. At that point, AI will be making war more likely even though the humans are still quote-unquote in control."

A Gut Feeling

Petrov's computer screen now showed five missiles rocketing toward the Soviet Union. Sirens wailed. Petrov held the phone to the duty officer in one hand, an intercom to the computer room in the other. The technicians there were telling him they could not find the missiles on their radar screens or telescopes.

It didn't make any sense. Why would the United States start a nuclear war with only five missiles? Petrov raised the phone and said again:

"False alarm."

Computers can now teach themselves to walk—stumbling, falling, but learning until they get it right. Their neural networks mimic the architecture of the brain. A computer recently beat one of the world's best players at the ancient strategy game of Go with a move that was so alien, yet so effective, that the human player stood up, left the room, and then needed 15 minutes to make his next move.

The military potential of such machine intelligence has not gone unnoticed by the world's major nuclear powers. The United States has experimented with autonomous boats that could track an enemy submarine for thousands of miles. China has demonstrated “swarm intelligence” algorithms that can enable drones to hunt in packs. And Russia recently announced plans for an underwater doomsday drone that could guide itself across oceans to deliver a nuclear warhead powerful enough to vaporize a major city.

Whoever wins the race for AI superiority, Russian President Vladimir Putin has said, "will become the ruler of the world." Tesla CEO Elon Musk had a different take: The race for AI superiority, he warned, is the most likely cause of World War III.

The Moment of Truth

For a few terrifying moments, Stanislav Petrov stood at the precipice of nuclear war. By mid-1983, the Soviet Union was convinced that the United States was preparing a nuclear attack. The computer system flashing red in front of him was its insurance policy, an effort to make sure that if the United States struck, the Soviet Union would have time to strike back.

But on that night, it had misread sunlight glinting off cloud tops.

"False alarm." The duty officer didn't ask for an explanation. He relayed Petrov's message up the chain of command.

The next generation of AI will have "significant potential" to undermine the foundations of nuclear security, the researchers concluded. The time for international dialogue is now.

Keeping the nuclear peace in a time of such technological advances will require the cooperation of every nuclear power. It will require new global institutions and agreements; new understandings among rival states; and new technological, diplomatic, and military safeguards.

It's possible that a future AI system could prove so reliable, so coldly rational, that it winds back the hands of the nuclear doomsday clock. To err is human, after all. A machine that makes no mistakes, feels no pressure, and has no personal bias could provide a level of stability that the Atomic Age has never known.

That moment is still far in the future, the researchers concluded, but the years between now and then will be especially dangerous. More nuclear-armed nations and an increased reliance on AI, especially before it is technologically mature, could lead to catastrophic miscalculations. And at that point, it might be too late for a lieutenant colonel working the night shift to stop the machinery of war.

The story of Stanislav Petrov's brush with nuclear disaster puts a new generation on notice about the responsibilities of ushering in profound, and potentially destabilizing, technological change. Petrov, who died in 2017, put it simply: "We are wiser than the computers," he said. "We created them."

RAND researchers brought together some of the top experts in AI and nuclear strategy for a series of workshops. They asked the experts to imagine the state of nuclear weapon systems in 2040 and to explore ways that AI might be a stabilizing—or destabilizing—force by that time.

PERSPECTIVE ONE – Skepticism About the Technology

Many of the AI experts were skeptical that the technology would have advanced far enough by then to play a significant role in nuclear decisions. It would have to overcome its vulnerability to hacking, as well as adversarial efforts to poison its training data, for example by behaving in unusual ways to set false precedents.
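
To see how such poisoning could work in principle, consider a deliberately simple sketch, invented for illustration and not drawn from the RAND workshops. A learner labels activity by comparing it with past, labeled observations, and an adversary that repeatedly behaves in unusual ways during peacetime can shift what the system counts as normal:

```python
# A toy illustration of poisoning training data (invented example).
# A learner decides whether observed activity is "routine" or a
# "threat" by comparing it with past, labeled history.

def centroid(values):
    return sum(values) / len(values)

def classify(x, routine_history, threat_history):
    """Label x by whichever class centroid it lies closer to."""
    d_routine = abs(x - centroid(routine_history))
    d_threat = abs(x - centroid(threat_history))
    return "threat" if d_threat < d_routine else "routine"

# Honest history: routine activity clusters near 1.0, attacks near 5.0.
routine = [0.8, 1.0, 1.2, 1.1]
threats = [4.8, 5.0, 5.2]

# Poisoning: in peacetime, the adversary repeatedly operates at an odd
# tempo (3.5) that gets logged as routine, setting a false precedent.
poisoned_routine = routine + [3.5, 3.5, 3.5, 3.5]

print(classify(3.5, routine, threats))           # threat  (honest history)
print(classify(3.5, poisoned_routine, threats))  # routine (poisoned history)
```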

PERSPECTIVE TWO – Nuclear Tensions Will Rise

But an AI system wouldn't need to work perfectly to raise nuclear tensions, the nuclear strategists responded. An adversary would only need to think it does and respond accordingly. The result would be a new era of competition and distrust among nuclear-armed rivals.

PERSPECTIVE THREE – AI Learns the Winning Move Is Not to Play

Some of the experts held out hope that AI could someday, far in the future, become so reliable that it averts the threat of nuclear war. It could be used to track nuclear development and make sure that countries are abiding by nonproliferation agreements, for example. Or it could rescue humans from mistakes and bad decisions made under the pressure of a nuclear standoff. As one expert said, a future AI might conclude, like the computer in the 1983 movie "WarGames," that the only winning move in nuclear war is not to play.
