What if: machines outsmart us all?

Stuart Russell
Professor of Computer Science and Director of the Center for Human-Compatible AI, University of California, Berkeley

Background:

This blog post is part of a series based on the ‘What If?’ sessions at the Annual Meeting of the New Champions 2015. Stuart Russell, Professor of Computer Science, University of California, Berkeley, addresses the question – what if machine intelligence advances beyond the control of its creators?

Lethal autonomous weapons – robots that can select, attack and destroy targets without human intervention – have been called the third revolution in warfare, after gunpowder and nuclear arms. While some commentators ridicule the notion of killer robots as science fiction, more knowledgeable sources, such as Britain’s Ministry of Defence, say they are now “probably feasible”. We are not talking about cruise missiles or remotely piloted drones, but flying robots that search for human beings in a city and eliminate targets who appear to meet specified criteria.

For decades, scientists in artificial intelligence (AI) have pursued fundamental research into the nature of intelligence, with many benefits for humanity. At the same time, the potential for military applications has been obvious. Two programmes of the United States Department of Defense – named FLA and CODE – provide hints as to what the major powers have in mind. The FLA project will programme tiny quadcopters to explore and manoeuvre unaided at high speed in urban areas and inside buildings. CODE aims to develop teams of autonomous aerial vehicles carrying out “all steps of a strike mission – find, fix, track, target, engage, assess”, in situations where enemy signal-jamming makes communication with a human commander impossible. The manager of the CODE programme described the goal as building systems that behave “as wolves hunt in coordinated packs”.

The United Nations has held a series of meetings on autonomous weapons under the auspices of the Convention on Certain Conventional Weapons (CCW) in Geneva. Within a few years, the process could result in an international treaty limiting or banning autonomous weapons, as happened with blinding laser weapons in 1995. If the participating nations agree in principle that a treaty is needed, the next step is to appoint a group of government experts to work through the technical details.

Principles of humanity

Up to now, the primary technical issue in these meetings has been whether autonomous weapons can meet the requirements of international humanitarian law (IHL), which governs attacks on humans in times of war. The 1949 Geneva Convention requires any attack to satisfy three criteria: military necessity, discrimination between combatants and non-combatants, and proportionality between the value of the military objective and the potential for collateral damage. In addition, the Martens Clause, added in 1977, bans weapons that violate the “principles of humanity and the dictates of public conscience”. One can see its effect in some of the national positions taken at the third UN meeting in April 2015: Germany said that it “will not accept that the decision over life and death is taken solely by an autonomous system”, while Japan stated that it “has no plan to develop robots with humans out of the loop, which may be capable of committing murder”. Meanwhile, the US, despite a current official policy prohibiting fully autonomous weapons, argues that a treaty is unnecessary.

On the question of whether machines can judge military necessity, combatant status and proportionality, the current answer is certainly no: artificial intelligence is incapable of exercising the required judgement. Many proponents of autonomous weapons argue, however, that as the technology improves, it will eventually reach a point where the superior effectiveness and selectivity of autonomous weapons means they cause fewer civilian casualties than human soldiers would.

Weapons of mass destruction

This argument is based on an assumption that, after the advent of autonomous weapons, the specific killing opportunities – numbers, times, locations, circumstances, victims – will be exactly those that would have occurred with human soldiers. This is rather like assuming that cruise missiles will only be used in exactly those settings where spears would have been used in the past. Obviously, the assumption is false. Autonomous weapons are completely different from human soldiers and would be used in completely different ways – for example, as weapons of mass destruction. Moreover, even if ethically adequate robots were to become available, there is no guarantee they would be used ethically. One cannot consistently claim that the well-trained soldiers of civilized nations are so bad at following the rules of war that robots can do better, while at the same time claiming that rogue nations, dictators and terrorist groups are so good at following the rules of war that they will never use robots in ways that violate these rules.

Another line of reasoning used by proponents of autonomous weapons appeals to the importance of retaining “our” freedom of action – where “our” usually refers to the United States. Of course, the consequence of the US retaining its freedom to develop autonomous weapons is that all other nations will develop those weapons too. Insisting on unfettered freedom of action in international relations is like insisting on the freedom to drive on both sides of the road: if everyone insists that they should have such freedom, the roads will be useless to everyone. When, in 1969, the United States took the unprecedented, unilateral decision to renounce biological weapons – a decision that was pivotal in bringing about the biological weapons treaty – the motivation was self-defence. A report commissioned by the then president, Richard Nixon, had argued persuasively that an arms race in biological weapons would lead many other nations to develop capabilities that would eventually threaten US security. I believe similar arguments apply to autonomous weapons; a treaty is the only known mechanism to prevent an arms race and the emergence of large-scale weapons-manufacturing capabilities.

The practical difficulties of arms control are often cited as reasons not to proceed with a treaty banning lethal autonomous weapons. These difficulties include the complexities involved in verifying compliance and the existence of so-called “dual-use” technologies that can easily be diverted from civilian to military use. We are warned, moreover, that banning autonomous weapons would put a stop to research on AI and robotics. These arguments are not specific to autonomous weapons; they apply also to blinding laser weapons, chemical weapons and biological weapons. Yet we have treaties banning them. The treaties have been largely successful in halting arms races in these areas and eliminating the corresponding military-industrial capabilities for large-scale development and production. Meanwhile, civilian research in laser technology, chemistry and biology continues.

Pizza-delivery drones

Of course, treaties are not foolproof. Violations may occur, and some argue that a treaty that prevents “us” (again, usually the United States) from developing a full-scale lethal autonomous weapons capability will expose “us” to the risk of defeat by those who violate the treaty. All countries need to protect their national security, but this argues for rather than against an arms control treaty. Yes, there will be non-state actors who modify pizza-delivery drones to drop bombs. The concern that a military superpower such as the US could be defeated by small numbers of home-made, weaponized civilian drones is absurd. Some advanced future military technology, produced in huge numbers, might present a threat; preventing such developments is the purpose of a treaty. It is worth noting, also, that the treaty under discussion at the UN deals with lethal weapons; a defensive autonomous weapon that targets robots is not lethal, so the treaty has no prohibition on the development of anti-robot countermeasures.

In late July, over 2,800 scientists and engineers from the AI and robotics community, including many leading figures in these fields, signed an open letter calling for a ban on lethal autonomous weapons. They were joined a few days later by the Financial Times in a remarkable editorial titled “A nightmare the world has no cause to invent”. Their primary argument is that, in the absence of a treaty, there will be an arms race in autonomous weaponry whose outcome cannot be other than catastrophic.

Where, exactly, will this arms race lead us? In my view, current and future developments in robotics and artificial intelligence will be more than adequate to support superior tactical and strategic capabilities for autonomous weapons. They will be constrained only by the laws of physics. For instance, as flying robots become smaller, they become cheaper, more manoeuvrable and much harder to shoot down, but their range and endurance also decrease and they cannot carry heavy missiles.

How can a tiny flying robot, perhaps the size of an insect, kill or incapacitate a human being? Here, human ingenuity, our unique talent for death, will play a role. The two most obvious solutions – injecting with neurotoxin and blinding with a laser beam – are banned under existing treaties. It is legal, however, to deliver a one-gram shaped charge that suffices to puncture the human cranium and project a hypersonic stream of molten metal through the brain. Alternatively, the robot can easily shoot tiny projectiles through the eyeballs of a human from 30 metres. Larger vehicles can deliver micro-robots to the combat zone by the million, providing lethality comparable to that of nuclear weapons. Terrorists will be able to inflict catastrophic damage on civilian populations while dictators can maintain a constant and visible threat of immediate death. In short, humans will be utterly defenceless. This is not a desirable future.

Author: Stuart Russell, Professor of Computer Science, University of California, Berkeley

Image: An IRIS+ drone from 3D Robotics, referred to as “Hawkeye,” stands on the runway during “Black Dart”, a live-fly, live-fire demonstration of 55 unmanned aerial vehicles, or drones, at Naval Base Ventura County Sea Range, Point Mugu, near Oxnard, California, July 31, 2015. REUTERS/Patrick T. Fallon

The views expressed in this article are those of the author alone and not the World Economic Forum.
