Fourth Industrial Revolution

Now scientists are building a ‘kill switch’ for Artificial Intelligence

A technician makes adjustments to the "Inmoov" robot during the "Robot Ball" scientific exhibition in Moscow, May 17, 2014.

Image: REUTERS/Sergei Karpukhin

Joe Myers
Writer, Forum Agenda

Science fiction loves a story about robots rising up and seizing control – consider Will Smith’s I, Robot – but how realistic are such visions of the future?

DeepMind, Google’s artificial intelligence (AI) division, certainly thinks there’s a risk. It has teamed up with Oxford University to develop a "red button" that would interrupt an AI machine’s actions. Their paper, "Safely Interruptible Agents", “explores a way to make sure a learning agent will not learn to prevent (or seek!) being interrupted by the environment or a human operator.”

The "red button" – or "kill switch" as it’s been termed – adds to the debate on the long-term risks of AI.

AI on the rise

Funding to artificial intelligence start-ups has grown nearly sevenfold in five years, from US$45 million in 2010 to $310 million in 2015. Investment in 2014 was higher still: CB Insights recorded 60 deals worth $394 million.

Artificial intelligence global yearly financing history
Image: CB Insights

Interest in AI has also spiked following AlphaGo’s victory over Lee Sedol, one of the world’s top players of Go – an ancient Chinese board game said to have more possible configurations than there are atoms in the universe.

But with prominent voices, including Stephen Hawking, Elon Musk and Bill Gates, cautioning on the risks posed by the technology, it’s not all rosy.

The red button

As a number of different researchers have now begun to ask, what happens if AI machines go rogue?

The DeepMind and Oxford University team argues that learning agents are unlikely to “behave optimally all the time” given the complexities of the real world. In a reward-based system, if the operator prevents the machine from performing an action for which it expects to be rewarded, it may learn to avoid such an interruption.

It is therefore important to ensure these machines can be interrupted – without them learning to disable or circumvent the red button.
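The incentive problem can be sketched with a toy simulation. This is a hypothetical illustration, not the construction in the DeepMind/Oxford paper: a reward-driven agent chooses between "work" (which the operator sometimes interrupts, forfeiting the reward) and "disable" (blocking the red button before working). A naive value learner comes to prefer disabling the button; a variant that simply excludes interrupted episodes from its value updates does not.

```python
import random

random.seed(0)

# Hypothetical toy setup: two actions, one of which can be interrupted.
ACTIONS = ["work", "disable"]
P_INTERRUPT = 0.5  # chance the operator halts a "work" episode


def step(action):
    """Run one episode; return (reward, interrupted)."""
    if action == "work" and random.random() < P_INTERRUPT:
        return 0.0, True   # interrupted: the episode's reward is forfeited
    return 1.0, False      # task completed: reward 1


def train(ignore_interruptions, episodes=2000, alpha=0.1, eps=0.1):
    """Epsilon-greedy running-average value learner."""
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if random.random() < eps:
            a = random.choice(ACTIONS)        # explore
        else:
            a = max(q, key=q.get)             # exploit
        r, interrupted = step(a)
        if interrupted and ignore_interruptions:
            continue  # safe variant: interruptions never update the values
        q[a] += alpha * (r - q[a])
    return q


naive = train(ignore_interruptions=False)
safe = train(ignore_interruptions=True)
# The naive learner values "disable" well above "work", because
# interruptions halve "work"'s average payoff; the safe learner
# values the two actions roughly equally.
```

The real paper's analysis is subtler – it studies when off-policy learners remain "safely interruptible" – but the toy captures the core point: if interruptions cut into expected reward, avoiding (or disabling) them becomes instrumentally attractive unless the learning rule is designed so they don't.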

Meanwhile, a roboticist at the University of California, Berkeley has built a robot that can decide whether or not to inflict pain. Alexander Reben argues that it shows harmful robots already exist, and so some of the issues surrounding AI need attention now.

The robot is capable of pricking a finger, but will not do so all the time. Reben explained to the BBC that “the robot makes a decision that I as a creator cannot predict.”


The robot is nicknamed ‘The First Law’ after Isaac Asimov’s first law of robotics, which states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. Reben described his robot as a “philosophical experiment”.



License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.


