Do we judge robots on their colour? This study says we do

Unarmed black robots are seen as more aggressive than unarmed white robots. Image: REUTERS/Andreas Gebert

Sean Fleming
Senior Writer, Formative Content

Picture a robot. What colour are you imagining? The chances are it’s white or metallic, because robots of other colours are few and far between, according to a new study.

That got the researchers from New Zealand’s University of Canterbury thinking: is there such a thing as race bias where robots are concerned? And, if there is, what are the implications?

Associate Professor Christoph Bartneck and his team recruited 163 people to take part in the study.

To establish whether people attribute something akin to racial qualities to robots, and what effect that might have, the team used what's known as the shooter bias paradigm, along with a series of in-depth questions.

Threat perception

Shooter bias is a concept rooted in studies examining whether police officers are more likely to shoot people who aren't white, regardless of any actual threat to life.

Bartneck and his team showed the participants a series of images: human and robot, armed and unarmed, black and white. If they perceived a threat, participants pushed a button that substituted for pulling the trigger of a gun.

The researchers looked at whether participants correctly distinguished aggressors from non-aggressors, and at their reaction time: if they chose to shoot, how long it took them between seeing the image and hitting the button.

Researchers used different robots to determine when people perceived a threat. Image: University of Canterbury

The results, published in the report Robots and Racism, show that unarmed black robots were shot more often than unarmed white ones. They were shot more quickly, too, indicating an instinctive reaction rather than an assessment of the visual information.

“This bias is both a clear indication of racism towards black people, as well as the automaticity of its extension to robots racialized as black,” say the report’s authors.

Developing diversity

Of course, robots cannot be said to have race in the way the word is applied to people. But that’s a distinction seemingly at odds with people’s perceptions.

Participants in the study were “able to easily and confidently identify the race of robots according to their racialization,” the report states.

If people apply racial characteristics to robots, wittingly or not, then developing diversity in robot design will become important, they argue.

For anyone working in robot development, they say, these findings ought to inform the decisions they make around appearance: “If robots are supposed to function as teachers, friends, or carers, for instance, then it will be a serious problem if all of these roles are only ever occupied by robots that are racialized as white.”
