Can we ever build a robot with empathy?

Pascale Fung
Director of the Centre for Artificial Intelligence Research (CAiRE) and Professor of Electronic and Computer Engineering, The Hong Kong University of Science and Technology

At this year’s International Consumer Electronics Show, we started to see consumer-grade robots – machines that can fly up to take your picture, clean your floor and even wink or hold a conversation. Japanese company SoftBank is unveiling a robot that can tell a joke and converse in four languages. By now we are all used to talking to our smartphones; the message of the electronics show seemed to be that the market is finally ready for intelligent machines.

Then came ominous warnings from two famous thinkers. Stephen Hawking warned that “the development of full artificial intelligence could spell the end of the human race”, while AI investor Elon Musk proclaimed: “Artificial intelligence is our biggest existential threat.”

In the same month, the killing of 12 people at the offices of the satirical magazine Charlie Hebdo in Paris, alongside a deadly attack on a kosher supermarket, sparked widespread debate about religious extremism, freedom of speech and the importance of tolerance. Empathy, so vital in relationships between different cultures and religions, also holds the key to the future of human-machine interactions.

The rise of intelligent machines

The ancestors of intelligent machines date back to the years after WWII, with early examples including W. Grey Walter’s mechanical turtles and the Johns Hopkins “Beast”, which would wander the laboratory’s corridors looking for outlets to recharge itself. These were the first robots, but they were built on analogue circuits rather than digital computers. In 1950, Alan Turing published a paper in which he proposed the Turing Test: if a machine can fool human judges into thinking it is human while holding a conversation, then the machine can be said to be truly “thinking”.

Ever since, we have become accustomed to the help of such appliances, whether it’s a calculator, washing machine, rice cooker, assembly-line robot or aeroplane autopilot system. They are not incapable of causing harm: industrial accidents can be caused by software bugs, security holes, and unintended actions or decisions. But can machines ever be intentionally malicious? Not unless we build them that way.

How do we build an intelligent machine? At its core, it is a software system made up of modules: one responsible for natural-language dialogue between human and machine, another allowing the machine to recognize what its video camera is capturing. Further modules might make decisions based on the user’s intentions, or generate the machine’s responses as speech, motion or facial expressions. Each is a software program that uses machine-learning algorithms to mimic human speech recognition, natural dialogue, vision, motion and response.
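
As a rough illustration of this modular design, here is a minimal sketch in Python; the module names, interfaces and stub behaviours are hypothetical, for illustration only, and each stub would be backed by a machine-learned model in a real system:

# Minimal sketch of the modular design described above (hypothetical names).
class SpeechRecognizer:
    def transcribe(self, audio):
        return "please dim the lights"      # stub for a learned speech model

class VisionModule:
    def describe(self, frame):
        return "user seated on the sofa"    # stub for camera-based recognition

class DialogueManager:
    def decide(self, utterance, scene):
        # Infer the user's intention from speech and scene, choose an action.
        return "dim_lights" if "dim the lights" in utterance else "ask_clarification"

class ResponseGenerator:
    def render(self, action):
        return f"executing: {action}"       # stub for speech, motion, expression

class Robot:
    """Wires the modules into one perception-decision-response pipeline."""
    def __init__(self):
        self.ears, self.eyes = SpeechRecognizer(), VisionModule()
        self.brain, self.body = DialogueManager(), ResponseGenerator()

    def step(self, audio, frame):
        utterance = self.ears.transcribe(audio)
        scene = self.eyes.describe(frame)
        action = self.brain.decide(utterance, scene)
        return self.body.render(action)

print(Robot().step(audio=b"", frame=b""))   # -> executing: dim_lights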

For the most part, each module is a “savant”: super-human in the sense that it can be trained to translate between hundreds of languages, talk to you non-stop, or assemble parts for thousands of cars. IBM’s Deep Blue beat the world chess champion in 1997; IBM’s Watson won the US quiz show Jeopardy! in 2011. Machine intelligence seems poised to surpass human intelligence in many areas. However, like cars that travel faster than humans, planes that fly to great heights or internet search engines that can locate any piece of information, machine intelligence is constrained to specific tasks.

More than a helping hand

Increasingly, we also want intelligent machines to be companions rather than just assistants. We can envision a future in which our household chores are carried out by robot servants who anticipate what needs to be done, in which our elderly and sick are taken care of by robot nurses, our children learn from robot teachers, and our drinks are served by a robot bartender who strikes up a friendly conversation. Stressed employees might even benefit from a session with the sympathetic robot psychoanalyst.

To achieve this, we need to incorporate an “empathy module” into the machines. They could be programmed to recognize your speech and understand your queries, and also to detect stress in your voice and ask: “Did you have a bad day? How about a hot bath and some cool music?” Machines that transcribe and take minutes at meetings can be programmed to detect tension in the room and tell a joke. Household robots can lull children to sleep with a lullaby or story.
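
To make this concrete, here is a minimal sketch of how such an “empathy module” might sit on top of an ordinary task-oriented dialogue system; the emotion labels and canned replies are invented for illustration, and the sketch assumes the emotional state has already been detected:

# Sketch: a reply policy that adapts to a detected emotional state.
# The emotion labels and replies below are illustrative placeholders.
EMPATHETIC_OVERLAYS = {
    "stressed": "Did you have a bad day? How about a hot bath and some cool music?",
    "sad": "I'm sorry to hear that. Would you like to talk about it?",
    "neutral": None,  # no emotional overlay needed
}

def respond(task_reply, detected_emotion):
    """Prepend an empathetic remark to the task-oriented reply when warranted."""
    overlay = EMPATHETIC_OVERLAYS.get(detected_emotion)
    return f"{overlay} {task_reply}" if overlay else task_reply

print(respond("Your meeting notes have been saved.", "stressed"))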

Researchers are working on machines that can understand not only the content of human speech, but also the emotion behind it. Much like a lie detector, but using advanced machine-learning algorithms, software can be designed to identify your emotional state from the melody and tone of your voice, as well as from your facial expressions and body language. Systems that read emotions from such perceptual signals are still at a preliminary stage; however, studies have shown that they can match human performance at detecting lying, stress and even flirting, and in some cases they outperform some humans. One day they may become a less invasive alternative to a brain scan.
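
At its simplest, this kind of emotion recognition is a classification problem over perceptual features. The sketch below trains a standard classifier on synthetic prosodic feature vectors (pitch, energy, speaking rate); the feature values and labels are made up for illustration, whereas a real system would extract them from recorded speech:

# Sketch: classifying emotional state from prosodic features.
# All data here is synthetic; a real system would extract pitch, energy and
# speaking rate from audio recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical feature vectors: [mean pitch (Hz), energy, words per second]
calm = rng.normal([120.0, 0.40, 2.5], [12.0, 0.04, 0.25], size=(50, 3))
stressed = rng.normal([180.0, 0.70, 3.5], [18.0, 0.07, 0.35], size=(50, 3))

X = np.vstack([calm, stressed])
y = ["calm"] * 50 + ["stressed"] * 50

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[175.0, 0.68, 3.4]]))    # most likely "stressed"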

In the meantime, sentiment analysis has already been applied commercially to recommend books, movies and merchandise. Every time you like or dislike something online, you create an explicit emotion label that machines use to push content and products to you.
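
As a toy example of how those explicit labels can drive recommendations (the users, items and ratings below are made up, and production recommenders are far more sophisticated):

# Sketch: treating likes (+1) and dislikes (-1) as labels and recommending
# unseen items whose rating pattern resembles items the user already liked.
import numpy as np

items = ["book_a", "book_b", "movie_c", "movie_d"]
# Rows are users, columns are items: +1 like, -1 dislike, 0 unseen.
ratings = np.array([
    [ 1,  0, -1,  0],
    [ 1,  1, -1, -1],
    [-1, -1,  1,  1],
])

def similarity(a, b):
    """Cosine similarity between two items' rating columns."""
    u, v = ratings[:, a], ratings[:, b]
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def recommend(user):
    """Score each unseen item by its similarity to the items this user liked."""
    liked = np.where(ratings[user] == 1)[0]
    unseen = np.where(ratings[user] == 0)[0]
    scores = {i: sum(similarity(i, j) for j in liked) for i in unseen}
    return items[max(scores, key=scores.get)]

print(recommend(user=0))    # recommends book_b: its ratings resemble book_a's, which this user liked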

Will robots turn against us?

One day machines will recognize emotions as efficiently as they do speech. Whether they use that power for us or against us is up to the people who program them. For the time being, intelligent systems are being built to provide information and enable humans to make better decisions. In medicine, for example, a system can synthesize the information in a patient’s medical history with their symptoms and other, similar cases to create a detailed profile that can help doctors make a diagnosis. It could certainly alert the doctor to the level of pain felt by the patient.

To return to the possibility of industrial accidents caused by malfunctioning machines: as with other mechanical and electronic devices, safety and security checks will need to be built in, and machines will need to be programmed to avoid causing unintended harm.

If we ensure that empathetic, emotionally intelligent machines evolve at the hands of scientists and engineers whose sole purpose is to help humans, they are more likely to improve lives and make jobs more efficient. They will understand and empathize with you. They will apologize for their mistakes and ask permission before proceeding. They might even sacrifice themselves to save your life – the ultimate act of empathy.

Author: Pascale Fung is a Professor of Electronic and Computer Engineering at the Hong Kong University of Science and Technology. She was elected a Fellow of the Institute of Electrical and Electronics Engineers for her contributions to human-machine interactions.

Image: Children touch the hands of the humanoid robot Roboy at the exhibition Robots on Tour in Zurich, March 9, 2013. REUTERS/Michael Buholzer
