Emerging Technologies

Empathic AI could be the next stage in human evolution - if we get it right

"Empathy is seeing with the eyes of another." Image: REUTERS/Arnd Wiegmann

Jesus Mantas
Global Head of Strategy and Offerings, IBM Global Business Services

This article is part of: Annual Meeting of the New Champions

Human progress has been fueled by the development of tools, machines and technology that augment our natural capabilities. Yet our emotional brain – the part that controls our empathy – has had little help from technology to date.

Artificial intelligence (AI) has the power to change that. Designing human-centric AI interactions, optimized to develop trusted relationships between AI and humans, presents the largest opportunity for human and societal advancement in the modern era.

Despite the generally held belief that society runs on rational decisions and articulated rules, research shows that most of what we do as individuals, organizations and society is governed by subconscious, emotional decisions. The potential of human-centered AI design is to augment human empathy, improving the roughly 95% of decisions that are made subconsciously. In the process, we can make foresight a common superpower.

The road to this next stage of progress begins with designing human-AI interactions that prioritize enhancing people’s humanity, not replacing it. A passionless, automaton-like future would weaken what has allowed humans to survive and thrive for millennia. The biggest benefits of AI will be achieved by ‘chemistry-matching’ humans and AI - and in teaching AI to be more human, we will find opportunities to learn how to be more human ourselves.

Empathy, our species’ survival trick, is the key to our next evolution

Austrian doctor and psychotherapist Alfred Adler said: “Empathy is seeing with the eyes of another, listening with the ears of another and feeling with the heart of another.” He wrote this in 1928, so it’s certain he wasn’t thinking about computers. Indeed, empathy is one of those traits that has been difficult to teach or augment with tools and technology.

Until recently, technology has largely helped augment the rational side of our brain, as well as our physical prowess. Rudimentary interfaces like levers and pedals have given way to the button, keyboard, mouse and screen. Throughout, the goal has been to enhance the mechanical and computational capabilities of humans.

Yet the rational side of the human brain, while amazing, actually controls only a small fraction of our behaviour. The subconscious side - essential for survival - rules many more aspects of our lives. Beyond instincts like fight-or-flight, it hosts our empathy and emotion, which drive the vast majority of our day-to-day decisions. And this part of our brain has not had much help from tools or technology.

Over the last five millennia, machines of increasing sophistication have expanded our natural physical abilities, exemplified by the cars and airplanes that move us at hundreds of times the speed and distance that our legs might manage. More recently, machines have been devised to supplement our cognitive abilities, expanding the near-term storage, retrieval and computational parameters of our brains. (We can store and retrieve the equivalent of more than 60 million written pages in real time on our mobile devices.)
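
That figure holds up to a quick back-of-envelope check. The sketch below is purely illustrative; the page size and phone capacity are my own assumptions, not numbers from the article:

```python
# Back-of-envelope check of the "60 million pages" figure.
# All parameters are illustrative assumptions, not from the article.
WORDS_PER_PAGE = 500          # a typical printed page
BYTES_PER_WORD = 6            # ~5 characters plus a space, plain text
PHONE_STORAGE = 256 * 10**9   # a 256 GB phone

bytes_per_page = WORDS_PER_PAGE * BYTES_PER_WORD   # ~3 KB per page
pages = PHONE_STORAGE // bytes_per_page
print(f"{pages / 1e6:.0f} million pages")          # ~85 million pages
```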

What AI offers for the future - and what is routinely overlooked in both the excitement and trepidation about its impact - is not just additional augmentation to the rational mind, but the enhancement of the emotional mind. By learning and presenting human-like interactions, the machines of tomorrow can be far richer tools. If properly designed, AI might augment our human empathy at the same accelerated scale at which earlier technology has improved our physical and computational abilities. What can we become when our ability to understand, and relate to others, is enhanced a hundred-fold? What society might we build if we can ‘reverse-engineer’ our unconscious biases? Could we improve each other’s understanding of situations and, in doing so, actually make ‘common sense’ a common sense?

This may seem far-fetched to some, yet so was the idea of walking on the moon or sharing all the world’s information in a frictionless communications network. The opportunity that AI presents for impactful human change, at scale, could arguably be the most significant single step for humanity since the evolution of Homo sapiens.

Is this enhancement of empathy really possible? In fact, it is already being worked on - and it doesn’t require electrodes connected to our brains. The first step in this journey is to approach AI through a lens of human-centered design; it is to define the technology’s purpose and place it in a deeply human context. It is to design human-machine interactions built around trusted relationships, to understand the subconscious interfaces that our brains already expose in our natural senses, and to tap into the natural application programming interfaces (APIs) that govern how we interpret the world around us.

And so this journey toward the future begins by understanding ourselves: examining the social science that already exists about humanity. The many documented human biases must be approached as ‘features’ - to be programmed to emulate or to counteract - so we can build brain-natural interfaces that enhance our collective progress.

Do we make rational or emotional decisions?

Why should human-AI interactions be ‘tuned’ to the subconscious brain? Why does it offer such enhancement potential? The answer is remarkably simple: because people behave and act emotionally far more than they do rationally. Most of what we actually decide and do is driven by the subconscious side of our brain, even if our rational side controls what we say about those decisions and actions.

There are many proof points. Here is one example: While we would like to think that our purchasing decisions are based on rational comparison of prices and brands, Harvard Business School professor emeritus Gerald Zaltman has shown that 95% of those decisions occur in the subconscious mind. Another: We commonly accept that ‘emotional intelligence’ is a key leadership skill in driving outcomes from organizations. The deep ‘circuits’ in the subconscious brain influence decisions from hiring to investing.

Neuroscience-supported clues for ‘hacking’ the emotional brain started to be articulated more than a century ago and developed significantly in the last 20 years. For instance, studies show that humans do what’s easy more than they do what’s right (what is called ‘the principle of least effort’).

Essentially, we often make suboptimal decisions because they are the easier choices. So one straightforward way to help people make better decisions for themselves is to make the right choices the easier ones.

Look at a simple form of choice architecture: opt-in versus opt-out. The first requires an action to make a decision, while the second assumes a decision by default, which can be overridden; in both cases, the set of options and the information are identical. Yet our behaviour toward the two is radically different. When 401(k) participants in the US are required to make an explicit decision to enroll, only 18% of those making up to $30,000 choose to make retirement planning a priority. Yet when enrolled automatically and offered the ability to opt out, 88% participate. The net effect of this ‘opt-out’ choice architecture drove an increase of $30 billion in personal savings.
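
The asymmetry can be expressed in a few lines of code. This is a minimal sketch; the action rates (0.18 and 0.12) are chosen purely to reproduce the participation figures quoted above, not drawn from the underlying study:

```python
import random

random.seed(0)

def participation(default_enrolled: bool, p_act: float, n: int = 100_000) -> float:
    """Share of people enrolled under a given default.

    Everyone sees identical options; only the default differs.
    p_act is the chance a person takes the explicit action
    (enrolling under opt-in, opting out under opt-out).
    """
    enrolled = 0
    for _ in range(n):
        acts = random.random() < p_act
        # Opt-in: enrolled only if you act. Opt-out: enrolled unless you act.
        if acts != default_enrolled:
            enrolled += 1
    return enrolled / n

print(f"opt-in:  {participation(False, 0.18):.0%}")  # ~18% enrolled
print(f"opt-out: {participation(True, 0.12):.0%}")   # ~88% enrolled
```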

These concepts may not be new to those familiar with behavioural economics. But they underscore the power of our subconscious brain. It is precisely the design of the interaction (in the 401(k) case, the choice paradigm) that drives significantly different outcomes. With AI, we can become obsessed with the power of algorithms, the rational ‘facts’. But we cannot overstress the importance of human-centered design. To enrich and enhance the workflows between humans and the technology - to unlock the potential of AI algorithms for positive impact rather than manipulative exploitation, and to protect against unintended consequences - we need to focus on human-centered interaction design.

Learning from the masters of human-machine interaction

If we are to effectively partner with technology to enhance ourselves – rationally and emotionally – we must design interactions that promote and develop trust between AI and people.

The most intense, and successful, human-tech relationships to date have been built as much through art as science. Machines designed by the likes of Pininfarina with Ferrari, Sir David Brown with Aston Martin, and Sir Jony Ive with Apple captured human emotions. These great masters appreciated that connecting technology to the emotional side of our minds is as important as the rational functionality of their machines’ purpose. Steve Jobs famously chose to include a handle on the translucent shell of the iMac G3, increasing the cost by $60, even though the computer was not meant to be moved. “People were afraid of technology and computers,” Ive later explained. “If this computer has a handle, it makes a relationship possible, it’s approachable, intuitive, it gives you permission to touch.”

So, how do we apply these design lessons to AI-human interactions? How do we build trust in these new technologies? As humans, we come to trust others when we perceive intentions aligned with our expectations in an authentic, consistent, predictable and synchronized manner. AI systems have the underlying capabilities to understand, reason, learn and interact with us, and are therefore capable of creating and maintaining that sort of relationship with a human.

Mark Knapp, a teaching professor at the University of Texas, developed a relationship model that describes 10 stages through which we initiate, form, maintain and end relationships. Adam Cutler in our IBM Design program office has been working on translating this relationship model into the design principles of human-centered AI systems optimized to create trust between people and AI – effectively tapping into our emotional brains and activating the circuits of trust. I encourage you to read about his work here.
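
As a purely hypothetical sketch of how such a model might be made machine-readable, the ten stages could be encoded explicitly. The stage names below follow Knapp’s published model; the code structure and the helper function are my own illustration and do not represent Cutler’s actual design principles:

```python
from enum import Enum

class Stage(Enum):
    """Knapp's ten relationship stages."""
    # "Coming together" stages
    INITIATING = 1
    EXPERIMENTING = 2
    INTENSIFYING = 3
    INTEGRATING = 4
    BONDING = 5
    # "Coming apart" stages
    DIFFERENTIATING = 6
    CIRCUMSCRIBING = 7
    STAGNATING = 8
    AVOIDING = 9
    TERMINATING = 10

def is_coming_together(stage: Stage) -> bool:
    # A trust-aware AI agent could adapt its tone, disclosure and
    # initiative to whichever phase the human-AI relationship is in.
    return stage.value <= Stage.BONDING.value

print(is_coming_together(Stage.EXPERIMENTING))  # True
```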

The biggest step or the last step?

We must exercise great care and responsibility as we develop AI, and ethical AI must become a global priority. When we do so, I am optimistic that we can steward it to enhance society and, in the process, help solve many of our most pressing problems. As we invest in artificial intelligence, we must not forget to invest even more in ‘human intelligence’ – in its most diverse and inclusive form.

In a way, the more effort we put into teaching AI to be human, the more we learn about being human ourselves - and that is a purpose worth investing in.
