
To understand what makes us human, we should look to machines

SoftBank's human-like 'Pepper' robots, unveiled at a news conference near Tokyo in June 2014. Psychologists have explored how ordinary people make sense of the mental capacities that make up mental life. Image: REUTERS/Issei Kato

Milenko Martinovich

Asking people to think about the sensations and emotions of inanimate or non-human entities offers a glimpse into how they think about mental life.

The responses show that Americans break mental life into three parts—body, heart, and mind—a finding that challenges earlier research on this topic and could have important implications for understanding people’s social interactions and moral judgments.

Viewing a robot as having a “mind” or even a “heart” may allow people to humanize robots.

Deep, philosophical questions about mental life, like “What is consciousness?” or “What does it mean to be alive?” are difficult for most people to answer, according to Kara Weisman, a PhD student in psychology at Stanford University and the study’s lead author.

Rather than looking at broad, philosophical questions, Weisman, along with Stanford psychologists and study coauthors Carol Dweck and Ellen Markman, explored how ordinary people make sense of the sensations, emotions, thoughts, and other mental capacities that make up mental life.

The group asked 1,400 US adults simple questions about the mental capacities of different beings. For example, in the first study, half the participants saw a picture of a robot and the other half a picture of a beetle. They then answered questions such as, “Is a beetle capable of experiencing joy?” and “Is a robot capable of experiencing guilt?” In total, they asked each participant 40 similar questions, then analyzed how all the responses related to each other.

“Our primary interest was really in the patterns of people’s answers to these questions,” Weisman says. “So, when a certain person thought a robot could think or remember things, what else did they think it was capable of doing? By looking at the patterns in people’s responses to these questions, we could infer the underlying, conceptual structure.”

Those patterns resulted in three main clusters of mental capacities: body (physiological sensations, like hunger and pain), heart (social-emotional abilities, like guilt and pride), and mind (perceptual and cognitive abilities, like memory and vision). These clusters were prominent whether participants judged beings individually or compared them directly against each other, and they persisted when the researchers expanded the cast of characters to include entities like a fetus, a chimpanzee, or a stapler.
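The article does not spell out the statistical method behind "looking at the patterns in people's responses," so the following is only a minimal sketch of the general idea: simulate yes/no answers to hypothetical capacity questions and fit a three-factor model with scikit-learn's FactorAnalysis to see which items group together. The item names, group sizes, and the choice of factor analysis are illustrative assumptions, not details reported by the researchers.

```python
# Illustrative sketch only: recover latent clusters from simulated yes/no
# mental-capacity ratings. Items, sample construction, and the use of
# factor analysis are assumptions, not the study's actual pipeline.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical items, grouped by the clusters the study reports.
items = {
    "body":  ["hunger", "pain", "fatigue", "nausea"],
    "heart": ["guilt", "pride", "joy", "embarrassment"],
    "mind":  ["memory", "vision", "planning", "reasoning"],
}
item_names = [q for group in items.values() for q in group]

# Simulate respondents: each has a latent tendency per cluster and answers
# "yes" (1) / "no" (0) to the items in that cluster accordingly.
n_respondents = 1400
latent = rng.normal(size=(n_respondents, 3))            # one score per cluster
loadings = np.zeros((3, len(item_names)))
for k, group in enumerate(items.values()):
    for q in group:
        loadings[k, item_names.index(q)] = 1.0
noise = rng.normal(scale=0.5, size=(n_respondents, len(item_names)))
responses = (latent @ loadings + noise > 0).astype(float)

# Fit a three-factor model and list the items that load most strongly
# on each factor.
fa = FactorAnalysis(n_components=3, random_state=0).fit(responses)
for k, factor in enumerate(fa.components_):
    top = [item_names[i] for i in np.argsort(-np.abs(factor))[:4]]
    print(f"factor {k}: {top}")
```

In this toy setup, the items seeded as "body," "heart," and "mind" questions tend to load on separate factors, mirroring the kind of response-pattern analysis the quote describes; the real study's items and modelling choices may well differ.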

Two components or three?

A 2007 study by Harvard psychologists has largely served as the standard in mind-perception research. That study produced a framework with two components: experience (the ability to feel hunger and joy) and agency (the ability to plan or exercise self-control).

The Stanford scholars call that study "pioneering work" but say it does not address how people parse mental life itself. The Harvard study, Weisman says, addressed differences among beings, between a beetle and a dog, for example, but did not identify the categories or parts of the mind.

“If the question is, ‘What are the parts of the mind?’ then I think our studies indicate the answer is more like this body-heart-mind than the agency-experience framework. I think these two frameworks can work together to inform our social reasoning more broadly, and it would be fascinating to explore this in future research,” Weisman says.

Humanizing robots and each other

The findings, the researchers say, may play a role in improving people’s relationships with technology and with fellow humans. For example, within the body-heart-mind framework, viewing a robot as having a “mind” or even a “heart” may lead people to humanize robots, thereby increasing the likelihood of a smooth interaction.

The framework could also shed light on how to reduce dehumanization between people, the researchers say. For example, objectification might take the form of emphasizing a person’s body over the mind and heart, while other forms of prejudice and stereotyping might take the form of focusing only on people’s “minds” and neglecting their emotional life, or focusing only on people’s “hearts” and underestimating their intellectual abilities.

The body-heart-mind model may provide a useful perspective for understanding how and why people emphasize or downplay mental capacities within those three major clusters.

“This is an exciting new framework, but it’s just the beginning,” Dweck says. “We hope it can serve as a takeoff point for theory and research on how ordinary people think about age-old questions about the mind.”

Support for the work came from the National Science Foundation and a William R. and Sara Hart Kimball Stanford Graduate Fellowship.
