Emerging Technologies

Self-driving cars or predictive policing: what scares you most about AI?

The notion of a robot animated by AI is known as “embodiment.” Image: REUTERS/Wolfgang Rattay

Ashley Rodriguez

People in Britain are more scared of the artificial intelligence embedded in household devices and self-driving cars than in systems used for predictive policing or diagnosing diseases. That’s according to a survey commissioned by the Royal Society, which is billed as the first in-depth look at how the public perceives the risks and benefits associated with machine learning, a key AI technique.

Robots at home are scarier than AI-infused systems used for policing and healthcare. Image: Royal Society

Participants in the survey were most worried by the notion that a robot, acting on conclusions derived by machine learning, would cause them physical harm. Accordingly, machines in close proximity to their users, such as household devices and self-driving cars, were viewed as particularly risky. The notion of a robot animated by AI is known as “embodiment.” “The applications that involved embodiment… tended to appear as having more risk associated with them, due to concerns about physical harm,” the authors write.

Yet the risks posed by sprawling machine-learning systems are real, and they aren’t just about being run over by a self-driving car gone rogue. As the data scientist Cathy O’Neil has written, algorithms are dangerous when they operate at scale, their workings are kept secret, and their effects are destructive. Predictive policing is one example she offers of dangerous algorithms at work, with its pernicious effects compounded by biased data sources.

Another area with potentially far-reaching implications is machine learning in healthcare. Cornell researcher Julia Powles pointed out that survey participants were shown a particularly striking example of machine learning in breast cancer diagnosis, a 2011 Stanford study, to illustrate the technology at work. As a result, participants reported being “confident that misdiagnosis would not occur” to a degree that would put society at risk. “This is as yet unproven for the general scenario of [computer-driven diagnoses],” Powles said.

This mismatch between perceived and potential risk is common with new technologies, said Alison Powell, an assistant professor at the London School of Economics who is studying the ethics of connected devices. “This is part of the overall problem of the communication of technological promise: new technologies are so often positioned as ‘personal’ that perception of systemic risk is impeded,” she said.

The Royal Society doesn’t have a quick fix. It recommends that machine-learning students take a class in ethics alongside their technical studies. It suggests the UK government fund public engagement with researchers. It rejects the idea of regulation specifically for machine learning, in favor of oversight by the existing regulators for each industry. What’s beyond doubt is that machine learning is already all around us, and its influence will grow. Peter Donnelly, who chaired the group that put the report together, told journalists assembled for the launch of the research: “It can and probably will impact on many, many areas of our personal and leisure activities.”


