Fourth Industrial Revolution

Trusting machines versus humans. We must understand the difference

Research has found people judge humans by their intentions and machines by their outcomes. Image: REUTERS/Denis Balibouse

César A. Hidalgo
Director, Center for Collective Learning, Artificial and Natural Intelligence Institute (ANITI), University of Toulouse, and Corvinus Institute of Advanced Studies (CIAS), Corvinus University

  • Humans historically take a long time to trust the latest wave of machine technology.
  • In scenarios involving physical harm, people tend to see machines as more harmful than humans performing the same actions.
  • It's important that we combine our interest in how machines should behave with an understanding of how we judge them.

Recently, voting machines have been on the receiving end of controversy. And yet people’s aversion to machines is nothing new.

Some 500 years ago, the printing press was demonised as a satanic device. Today's equivalent, artificial intelligence, is routinely criticised as a source of unemployment and bias.

But is every bit of anger justified?

Scholars studying people’s reactions to machines are beginning to learn when and why we judge humans and machines differently.

Imagine a car that swerves to avoid a falling tree, and in doing so runs over a pedestrian. Do people judge this action differently if they believe it was the action of a self-driving car as opposed to that of a human?

In my latest book, How Humans Judge Machines, my co-authors and I asked over 6,000 Americans to react to scenarios just like this one, using the setup of a clinical trial.

Half of our subjects saw only scenarios involving human actions, while the other half evaluated only scenarios involving the actions of machines. This allowed us to explore when and why people judge humans and machines differently.
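
To make the design concrete, here is a minimal sketch of how such a between-subjects comparison can be analysed. The ratings and group sizes below are made-up illustrative values, not data from the study.

```python
# Hypothetical between-subjects comparison (illustrative numbers only).
from scipy.stats import ttest_ind

# Harm ratings (e.g. on a 1-7 scale) from two randomly assigned groups:
# one group judged the human driver, the other judged the self-driving car.
human_condition = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
machine_condition = [5, 6, 4, 6, 5, 5, 6, 4, 5, 6]

# An independent-samples t-test asks whether the two groups' mean harm
# ratings differ by more than chance alone would explain.
t_stat, p_value = ttest_ind(machine_condition, human_condition)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Because each participant saw either the human or the machine version of a scenario, but never both, any difference in average ratings can be attributed to who performed the action rather than to the action itself.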

Bad machine, good human

In the aforementioned car accident, people judged the action of the self-driving car as more harmful and immoral, even though the action performed by the human was exactly the same.

In another scenario, we considered an emergency response system reacting to a tsunami. Some people were told that the town was successfully evacuated. Others were told that the evacuation effort failed.

Our results showed that in this case machines also got the short end of the stick. In fact, if the rescue effort failed, people evaluated the action of the machine negatively and that of the human positively.

The data showed that people rated the action of the machine as significantly more harmful and less moral, and also reported wanting to hire the human, but not the machine.

Do machines always draw the short straw?

For a long time, scholars have known that people have an aversion to algorithms. Even when algorithms are better at forecasting than humans, people tend to choose human forecasters. This phenomenon is known as algorithm aversion, and it can be costly in a world in which small differences in predictive accuracy matter.
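
As a rough illustration of that cost, consider a back-of-the-envelope calculation. All the numbers below (accuracy rates, decision volume, payoff per correct call) are hypothetical assumptions, chosen only to show how a small accuracy gap compounds at scale.

```python
# Illustrative cost of algorithm aversion (all figures are hypothetical).
human_accuracy = 0.70          # assumed hit rate of the human forecaster
model_accuracy = 0.75          # assumed hit rate of the algorithm
decisions_per_year = 10_000    # assumed number of forecasts acted upon
value_per_correct_call = 100   # assumed payoff (e.g. dollars) per correct forecast

human_value = human_accuracy * decisions_per_year * value_per_correct_call
model_value = model_accuracy * decisions_per_year * value_per_correct_call

# Preferring the human forecaster out of algorithm aversion forgoes:
print(f"Forgone value: ${model_value - human_value:,.0f} per year")
```

Under these assumptions, a five-percentage-point accuracy gap translates into $50,000 of forgone value each year, which is why a reflexive preference for human forecasters can be expensive.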

In a recent paper, Berkeley Dietvorst, Joseph Simmons and Cade Massey explored algorithm aversion using five experiments in which individuals could tie a monetary reward to predictions made by themselves, another person or a model.

In some experiments, people knew only the aggregate performance of the predictions, which tended to favour the machines. In others, people could also observe the individual predictions.

The upshot? People tended to avoid algorithms more when they witnessed them err. That is, people’s preference for machines decreased when they saw the errors in addition to the aggregate result.

This finding is interesting in a world in which people often demand transparency as a fundamental pillar of ethical AI.

While there is a need for machines to be transparent, it must be complemented by an understanding that transparency may ultimately bias people against machines. If we fail to account for this nuance, transparency may push us to reject machines when they are actually a source of improvement.

Unfair machines

But there are cases in which people rate machines higher than humans, albeit only slightly. These are moral scenarios involving violations of fairness and loyalty, which are also perceived to be highly intentional when performed by a human.

Consider a robot versus a human, both writing lyrics for a record label. Imagine an investigation discovers that these lyrics plagiarise the work of lesser-known artists. When we presented people with this scenario, we found that they judged the action of the human as more harmful and less moral than that of the machine.

We obtained similar results for other scenarios involving fairness, such as biased human resource screenings and university admission systems.

People certainly do not like biased humans or machines, but when we test this disapproval experimentally, people rate human biases as slightly more harmful and less moral than those of machines.

We are shifting from an era of imposing norms on machine behaviour to one of discovering laws which do not tell us how machines should behave, but how we judge them. And the first rule is powerful and simple: people judge humans by their intentions and machines by their outcomes.

So, can we trust machines? Do we even want to? A blanket answer to such bold questions may not be possible, but current research is starting to give us some guidance.

César A. Hidalgo is the author of How Humans Judge Machines, a peer-reviewed book by MIT Press that is free to read at judgingmachines.com. He holds a Chair at the Artificial and Natural Intelligence Institute (ANITI) at the University of Toulouse, and appointments at the University of Manchester and Harvard University.
