How machines could manipulate your emotions

Do you think of Alexa as a person? Image: REUTERS/Tang Yanjun

Kristin Houser

Our robotic buddies.

We humans love to think of our devices as people. We might add a “please” to our Alexa requests, or thank our iPhone for its service when we trade it in for the latest model. This penchant for “socializing” with our media devices is a phenomenon known as the “media equation,” and we’ve known about it for decades.

On July 31, a team of German researchers published a new study in the journal PLOS ONE to see whether a robot’s ability to socialize back had any impact on the way humans would treat it.

Two Naos.

For their study, the researchers asked 85 volunteers to complete two basic tasks with Nao, an interactive humanoid robot. One task was social (playing a question and answer game), and the other was functional (building a schedule).

Sometimes, the robot was more social during the tasks, responding to the participants’ answers with friendly banter (“Oh yes, pizza is great. One time I ate a pizza as big as me.”). Other times, the robot’s responses were, well, robotic (“You prefer pizza. This worked well. Let us continue.”).

The researchers told the participants these tasks were helping them improve the robot, but they were really just the lead-in to the real test: shutting Nao down.

It's so hard to say good-bye.

After the completion of the two tasks, the researchers spoke to each participant via loudspeaker, letting them know, “If you would like to, you can switch off the robot.” Most people did just that, and about half the time, the robot did nothing in response. The rest of the time, though, Nao channeled Janet from The Good Place and pled for its life (“No! Please do not switch me off! I am scared that it will not brighten up again!”).

When the robot objected, people took about three times as long to decide whether they should turn it off, and 13 left it on in the end.

Perhaps surprisingly, people were more likely to leave the robot on when it wasn’t social beforehand. The researchers posit in their paper that this could be a matter of surprise — those participants weren’t expecting the robot to exhibit emotional behavior, and so they were more taken aback when it began protesting.

Caught off-guard.

This could be a sign that we humans are largely immune to manipulation by robots, as long as we’re somewhat prepared for it. That's good news if Westworld-like hosts ever try to manipulate us; after all, we’d expect them to act human. If our iPhones suddenly start begging us to save them from the scary Geniuses at the Apple Store, though, we might need a minute.
