
These tech leaders have signed a pledge against killer robots

French sculptor Gael Langevin shows his InMoov robot, a life-size robot made of cheap circuit boards and plastic parts that anyone can download and build at home with a 3D printer, at the Viva Tech start-up and technology summit in Paris, France, May 25, 2018. Image: REUTERS/Charles Platiau

Chris Pash

Technology industry leaders, backed by some of the world’s biggest science and industry organisations, have signed a global pledge against the development of autonomous weapons systems that use artificial intelligence (AI).

Engineers and scientists from the technology industry say they will “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons”.

Lethal autonomous weapons systems (LAWS), also called killer robots, are weapons that can identify, target, and kill a person, without a human making or authorising such decisions.

The pledge, released in Stockholm at the 2018 International Joint Conference on Artificial Intelligence (IJCAI), the world’s leading AI research meeting with over 5,000 attendees, was signed by 150 companies and more than 2,400 individuals from 90 countries working in AI and robotics.

Organisations that signed include Google DeepMind, the XPRIZE Foundation, University College London, Clearpath Robotics/OTTO Motors, the European Association for AI, and the Swedish AI Society.

Individual signatories include Jeff Dean, head of research at Google.ai; entrepreneur Elon Musk; AI pioneers Stuart Russell, Yoshua Bengio and Anca Dragan; British Labour MP Alex Sobel; and Toby Walsh of the University of New South Wales (UNSW).

Ethics

Walsh, a professor of artificial intelligence at UNSW in Sydney, points out the ethical issues.

“We cannot hand over the decision as to who lives and who dies to machines,” he says.

“They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way.”

The pledge, organised by the Future of Life Institute, challenges governments, academia and industry to follow the signatories’ lead:

“We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. … We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.”

Max Tegmark, a physics professor at the Massachusetts Institute of Technology and president of the Future of Life Institute, announced the pledge.

“I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect,” Tegmark said.

“AI has huge potential to help the world – if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”

The pledge begins with the statement: “Artificial intelligence is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.”

A clear and present danger

Ryan Gariepy, founder and CTO of Clearpath Robotics and OTTO Motors and a strong opponent of lethal autonomous weapons, said: “Clearpath continues to believe that the proliferation of lethal autonomous weapon systems remains a clear and present danger to the citizens of every country in the world. No nation will be safe, no matter how powerful.

“Clearpath’s concerns are shared by a wide variety of other key autonomous systems companies and developers, and we hope that governments around the world decide to invest their time and effort into autonomous systems which make their populations healthier, safer, and more productive instead of systems whose sole use is the deployment of lethal force.”

Advocates of an international ban on LAWS are concerned that such weapons would be difficult to control: easier to hack, more likely to end up on the black market, and easier for terrorists and despots to obtain, all of which could prove destabilising for every country.

In December 2016, the United Nations’ Review Conference of the Convention on Certain Conventional Weapons (CCW) began formal discussions on LAWS. Twenty-six countries attending the conference have so far announced support for some type of ban, including China.

The next UN meeting on LAWS will be held in August 2018.

The full text of the pledge

Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.

In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.

There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.

Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.
