This AI can tell when you're lying


Researchers have developed a system that uses artificial intelligence to autonomously detect deception in courtroom trial videos. Image: REUTERS/Kim Kyung-Hoon

Dom Galeon
Writer, Futurism

An AI That Detects Deception

Being able to tell when a person is lying is an important part of everyday life, but it’s even more crucial in a courtroom. People may vow under oath to tell the truth, yet they don’t always keep that promise, and the ability to spot those lies can literally mean the difference between a guilty verdict and an acquittal.

To address this issue, researchers at the University of Maryland (UMD) developed the Deception Analysis and Reasoning Engine (DARE), a system that uses artificial intelligence (AI) to autonomously detect deception in courtroom trial videos. The team of UMD computer science researchers, led by Center for Automation Research (CfAR) chair Larry Davis, describes the system in a study that has yet to be peer-reviewed.

DARE was taught to look for and classify human micro-expressions, such as “lips protruded” or “eyebrows frown,” and to analyze audio frequencies for vocal patterns that indicate whether or not a person is lying. It was then tested on a set of videos in which actors were instructed to either lie or tell the truth.
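
As a concrete illustration of this kind of multimodal setup, here is a minimal sketch of a classifier that fuses visual micro-expression indicators with audio features. Everything in it is an assumption for illustration: the feature names, dimensions, and synthetic data are placeholders, not DARE’s actual pipeline or representation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical per-clip features (placeholders, not DARE's representation):
# 10 binary micro-expression indicators (e.g. "lips protruded" present/absent)
# plus 12 summary statistics of the audio spectrum, for 200 trial clips.
n_clips = 200
visual = rng.integers(0, 2, size=(n_clips, 10)).astype(float)
audio = rng.normal(size=(n_clips, 12))
labels = rng.integers(0, 2, size=n_clips)  # 1 = deceptive, 0 = truthful

# Fuse the two modalities by simple concatenation, then fit one classifier.
features = np.hstack([visual, audio])
train, test = slice(0, 150), slice(150, None)
clf = LogisticRegression(max_iter=1000).fit(features[train], labels[train])

# Score the held-out clips; AUC is the metric the study reports.
scores = clf.predict_proba(features[test])[:, 1]
print(f"AUC: {roc_auc_score(labels[test], scores):.3f}")  # ~0.5 on random data
```

On random data the score lands near 0.5, the chance baseline; the sketch shows only the fuse-then-classify structure, not real performance.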

So, just how accurate is DARE?

According to UMD researcher Bharat Singh, “accurate” might not be the best word to describe the system. “Some news articles misunderstood [Area Under the Curve to mean] accuracy,” he told Futurism. AUC is the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one.

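Singh’s distinction can be made concrete by computing AUC straight from that definition: check every (deceptive, truthful) pair of classifier scores and count how often the deceptive clip is ranked higher. The scores below are made up for illustration and are not from the DARE study.

```python
from itertools import product

def auc(pos_scores, neg_scores):
    # Probability that a randomly chosen positive (deceptive) clip is
    # scored higher than a randomly chosen negative (truthful) one.
    # Ties count as half, the standard convention.
    pairs = list(product(pos_scores, neg_scores))
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Made-up classifier scores, purely illustrative (not DARE's outputs):
deceptive_scores = [0.9, 0.8, 0.55]  # clips where the speaker actually lied
truthful_scores = [0.6, 0.3, 0.2]    # clips where the speaker told the truth
print(auc(deceptive_scores, truthful_scores))  # 0.888...: 8 of 9 pairs correct
```

A perfect ranker scores 1.0 and random guessing scores 0.5, which is why a high AUC is strong evidence of ranking ability without being the same thing as percent accuracy.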

Ultimately, DARE did perform better than the average person at the task of spotting lies. “An interesting finding was the feature representation which we used for our vision module,” said Singh. “A remarkable observation was that the visual AI system was significantly better than common people at predicting deception.”

DARE scored an AUC of 0.877, which improved to 0.922 when combined with human annotations of micro-expressions. Ordinary people achieve an AUC of just 0.58, Singh pointed out, barely above the 0.5 of random guessing.

The researchers will present their study at the Association for the Advancement of Artificial Intelligence (AAAI) conference this February.

Bringing Out the Truth

While some existing lie-detecting technologies can produce fairly reliable results, they aren’t particularly useful in a courtroom setting. Truth serums, for example, are usually illegal, and polygraph results are generally inadmissible in court. DARE could prove to be the exception to the rule, but the researchers don’t see its applications as limited to the courtroom.

“The goal of this project is not to just focus on courtroom videos but predict deception in an overt setting,” said Singh, noting that DARE could be used by intelligence agencies in the future.

“We are performing controlled experiments in social games, such as Mafia, where it is easier to collect more data and evaluate algorithms extensively,” he told Futurism. “We expect that algorithms developed in these controlled settings could generalize to other scenarios, also.”


According to Raja Chatila, executive committee chair of the Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems at the Institute of Electrical and Electronics Engineers (IEEE), DARE should be used with caution.

“If this is going to be used for deciding […] the fate of humans, it should be considered within its limitations and in context, to help a human — the judge — to make a decision,” Chatila told Futurism, pointing out that “high probability is not certainty” and that not everyone behaves the same way. There is also a risk of bias rooted in the data used to train the AI.

Chatila did note that image and facial expression recognition systems are improving. According to Singh, we could be just three to four years away from an AI that detects deception flawlessly by reading the emotions behind human expressions.
