How do we build trust between humans and AI?

A girl with a robot in Kuromon Ichiba Market, Ōsaka-shi, Japan.

The key to building AI that truly “gets” us is to focus not on cognition alone, but on emotional intelligence. Image: Andy Kelly/Unsplash

Rana el Kaliouby
Co-Founder and Chief Executive Officer, Affectiva

My mother lives 5,000 miles away in Cairo, Egypt. When she calls me, she can immediately tell if something is wrong, simply from the way I say “Hello” or “I’m fine”. As in many relationships with those we hold close, my mom and I have built a level of trust such that she knows how I’m feeling from a single word or phrase.

But unfortunately, I, like many others in the world today, spend as much (if not more) time interacting with technology as I do with the people close to me. Yet unlike talking to my mom or a friend, the way we interact with devices is completely transactional. My cell phone can’t read between the lines and understand what’s really going on with me.

This issue is becoming more noticeable as our interactions with technology make increasing use of artificial intelligence (AI). As AI takes on new roles in society – from working alongside us, to driving our cars, assisting with our healthcare and more – we’re forging a new kind of partnership with technology. And with that partnership comes a new social contract: one that’s built on mutual trust, empathy and ethics.

Can AI trust us?

It starts with mutual trust. After all, we cannot effectively work or live with people that we don’t trust, and who don’t trust us back. AI is no different.

Examples of why mutual trust matters between people and AI are far-reaching. Take semi-autonomous vehicles, for instance. Because these cars are still in development, they require a human driver who is prepared to take back control if the car’s AI can no longer navigate safely.

But how can the AI trust that the person is ready to take over? That the person who’s meant to drive is alert and engaged, and not drowsy or otherwise distracted? AI systems need to be able to truly understand our emotional and cognitive states – in this case, recognizing whether someone is showing signs of distraction or potentially dangerous impairment – before entrusting them with control.

So the question becomes: how can we foster a feeling as personal as trust with machines – and how can we make it mutual?

AI as an empathetic companion

According to Harvard Business School technology professor Frances Frei, empathy is one of the most important elements in establishing trust between people. So perhaps empathy is also the key to creating an understanding and trust between AI and humans.

The key to building AI that truly “gets” us isn’t to focus solely on cognition, but to develop algorithms with emotional intelligence. Much like in partnerships between people, giving AI the ability to understand how someone is feeling is the only way that a semi-autonomous car will know if its driver is fit to take the wheel, or a co-bot will understand if its human colleagues are feeling up to the job on a given day.
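To make the handover scenario concrete, here is a minimal, entirely hypothetical sketch of how such a gate might look in code. The `DriverState` type, the score names and the thresholds are all illustrative assumptions, not anything from a real driver-monitoring system; in practice the scores would come from an emotion-AI model analysing the driver’s face and behaviour.

```python
# Hypothetical sketch: gating a semi-autonomous handover on driver state.
# All names and thresholds here are illustrative assumptions, not a real API.

from dataclasses import dataclass


@dataclass
class DriverState:
    drowsiness: float   # 0.0 = fully alert, 1.0 = asleep
    distraction: float  # 0.0 = eyes on road, 1.0 = fully distracted


def can_hand_over(state: DriverState,
                  max_drowsiness: float = 0.3,
                  max_distraction: float = 0.4) -> bool:
    """Return True only if the driver appears fit to take the wheel."""
    return (state.drowsiness <= max_drowsiness
            and state.distraction <= max_distraction)


# An alert driver passes the gate; a drowsy one does not.
alert = DriverState(drowsiness=0.1, distraction=0.2)
drowsy = DriverState(drowsiness=0.8, distraction=0.1)
```

The point of the sketch is simply that the car’s decision to trust the human is conditioned on an estimate of the human’s state – which is exactly the mutual-trust loop described above.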

This may seem too personal or unnatural to some. But to me, continuing to advance AI research and development is not what will pose a threat to our jobs or mankind. The real question is, what will people do with the technology, and how will our choices for AI change our world?

Outlining the ethical development and deployment of AI

AI systems that are designed to engage with humans will have a lot of data, and will know a lot about the people they interact with. This raises concerns: while there’s a lot of potential for AI to improve our lives, there’s just as much potential for it to aggravate inequality or cause harm.

Image: Franck V/Unsplash

As we devise this new social contract, we need to set standards for the ethical development and deployment of AI. This means making sure that AI is built by diverse teams, and with diverse data, to ensure that the technology does not replicate biases that are inherent in society. It means considering the need for opt-in and consent when people interact with AI, and prioritizing data privacy.

Recognizing this, some of the brightest minds in technology, academia and beyond are already partnering up to set standards that will ensure people use AI ethically and for good – from MIT and Harvard’s Ethics and Governance of AI Initiative, to the Partnership on AI, which is a collaboration between large tech companies like Microsoft, Amazon, Google, Facebook and Apple, and start-ups like Affectiva. This is part of my charge as a Young Global Leader with the World Economic Forum, too. The Forum gives us an opportunity to discuss the implications of AI with leaders who bring perspectives from all different arenas and backgrounds.

There’s still so much to figure out as we navigate our changing relationship with AI. But there’s an immediate need to start framing the conversation in this way – as a partnership rooted in trust, empathy and understanding – rather than continuing to discuss AI in fear, and develop the technology without enabling it to really relate to us.

We’ll be exploring the new social contract between people and AI, and all of the associated implications, at Affectiva’s third annual Emotion AI Summit in Boston on 15 October. Register here if you’d like to join and help advance the conversation – we’d love to hear your thoughts.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
