This AI can tell when you're lying
Researchers have developed a system that uses artificial intelligence to autonomously detect deception in courtroom trial videos. Image: REUTERS/Kim Kyung
An AI That Detects Deception
Being able to tell when a person is lying is an important part of everyday life, but it’s even more crucial in a courtroom. People may vow under oath to tell the truth, but they don’t always keep that promise, and the ability to spot their lies can be the difference between a guilty and a not-guilty verdict.
To address this issue, researchers from the University of Maryland (UMD) developed the Deception Analysis and Reasoning Engine (DARE), a system that uses artificial intelligence (AI) to autonomously detect deception in courtroom trial videos. The team of UMD computer science researchers, led by Center for Automation Research (CfAR) chair Larry Davis, describes the system in a study that has yet to be peer-reviewed.
DARE was taught to detect and classify human micro-expressions, such as “lips protruded” or “eyebrows frown,” and to analyze audio frequencies for vocal patterns that indicate whether or not a person is lying. It was then evaluated on a set of videos in which actors were instructed to either lie or tell the truth.
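The study’s exact pipeline isn’t reproduced here, but the general recipe described above (binary micro-expression indicators fused with audio features and fed to a classifier) can be sketched in a few lines of Python. Everything below, including the feature names, dimensions, random data, and the choice of logistic regression, is an illustrative assumption rather than DARE’s actual implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative only: binary micro-expression indicators per video clip
# (e.g. [lips_protruded, eyebrows_frown, head_turn]) concatenated with a
# few summary audio features (e.g. mean pitch, energy). The real DARE
# features and classifier are described in the paper, not here.
rng = np.random.default_rng(0)
micro_expr = rng.integers(0, 2, size=(200, 3))   # per-clip indicators
audio = rng.normal(size=(200, 2))                # per-clip audio stats
X = np.hstack([micro_expr, audio])               # fused feature vector
y = rng.integers(0, 2, size=200)                 # 1 = deceptive clip (toy labels)

clf = LogisticRegression().fit(X, y)             # stand-in classifier
print(clf.predict_proba(X[:5])[:, 1])            # per-clip deception scores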
So, just how accurate is DARE?
According to UMD researcher Bharat Singh, “accurate” might not be the best word to describe the system. “Some news articles misunderstood [Area Under the Curve to mean] accuracy,” he told Futurism. AUC is the probability that a classifier ranks a randomly chosen positive instance higher than a randomly chosen negative one.
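In other words, AUC measures ranking, not accuracy. A minimal Python sketch (with made-up scores and labels, not figures from the study) shows the pairwise-ranking definition agreeing with a standard library implementation:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical classifier scores: higher = "more likely lying".
# 1 = deceptive clip, 0 = truthful clip. Numbers are illustrative only.
labels = np.array([1, 1, 1, 0, 0, 0, 0, 1])
scores = np.array([0.9, 0.7, 0.4, 0.3, 0.2, 0.6, 0.1, 0.8])

# AUC as a pairwise ranking probability: over all (deceptive, truthful)
# pairs, how often does the deceptive clip get the higher score?
# Ties count as half a win.
pos = scores[labels == 1]
neg = scores[labels == 0]
wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
manual_auc = wins / (len(pos) * len(neg))

print(manual_auc)                     # pairwise ranking estimate
print(roc_auc_score(labels, scores))  # the library computation agrees
```

A classifier with an AUC of 0.5 ranks no better than chance; 1.0 means every deceptive clip is scored above every truthful one.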
Ultimately, DARE did perform better than the average person at the task of spotting lies. “An interesting finding was the feature representation which we used for our vision module,” said Singh. “A remarkable observation was that the visual AI system was significantly better than common people at predicting deception.”
DARE scored an AUC of 0.877; when combined with human annotations of micro-expressions, that score improved to 0.922. Ordinary people, Singh pointed out, achieve an AUC of only 0.58.
The researchers will present their study on this AI that detects deception at the Association for the Advancement of Artificial Intelligence (AAAI) 2018 conference this February.
Bringing out the Truth
While some existing lie-detecting technologies can produce fairly reliable results, they aren’t particularly useful in a courtroom setting. Truth serums, for example, are usually illegal, while polygraph results are generally inadmissible in court. DARE could prove to be the exception to the rule, but the researchers don’t see its applications as limited to the courtroom.
“The goal of this project is not to just focus on courtroom videos but predict deception in an overt setting,” said Singh, noting that DARE could be used by intelligence agencies in the future.
“We are performing controlled experiments in social games, such as Mafia, where it is easier to collect more data and evaluate algorithms extensively,” he told Futurism. “We expect that algorithms developed in these controlled settings could generalize to other scenarios, also.”
According to Raja Chatila, executive committee chair for the Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems at the Institute of Electrical and Electronics Engineers (IEEE), DARE should be used with caution.
“If this is going to be used for deciding […] the fate of humans, it should be considered within its limitations and in context, to help a human — the judge — to make a decision,” Chatila told Futurism, pointing out that “high probability is not certainty” and that not everyone behaves the same way. There is also a chance of bias based on the data used to train the AI.
Chatila did note that image and facial expression recognition systems are improving. According to Singh, we could be just three to four years away from an AI that detects deception flawlessly by reading the emotions behind human expressions.