This AI can spot fake robbery reports


No hiding place for liars. Image: REUTERS/Max Rossi

Olivia Goldhill
Weekend Writer, Quartz

There’s no foolproof way to know whether someone is lying out loud, but scientists have developed a tool that seems remarkably accurate at judging written falsehoods. Using machine learning and text analysis, they have been able to identify false robbery reports so accurately that the tool is now being rolled out to police stations across Spain.

Computer scientists from Cardiff University and Charles III University of Madrid developed the tool, called VeriPol, specifically to focus on robbery reports. In their paper, published in the journal Knowledge-Based Systems earlier this year, they describe how they trained a machine-learning model on more than 1,000 robbery reports from the Spanish National Police, including reports known to be false. In a pilot study in Murcia and Malaga in June 2017, 83% of the cases VeriPol flagged as having a high probability of being false were closed after the claimants faced further questioning. In total, 64 false reports were detected in one week.

VeriPol works by using algorithms to identify various features in a statement, including all adjectives, verbs, and punctuation marks, and then picking up on patterns in false reports. According to a Cardiff University statement, false robbery reports are more likely to be shorter, focused on the stolen property rather than the robbery itself, to contain few details about the attacker or the robbery, and to lack witnesses.
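The article doesn’t publish VeriPol’s actual features or weights, but the cues it describes (short reports, no witnesses, few attacker details) can be sketched as a toy scoring function. Everything below — the feature names, keyword lists, and thresholds — is illustrative and not VeriPol’s real model, which was trained on police data:

```python
import re

def extract_features(report: str) -> dict:
    """Count a few surface cues from a written report.

    The article says VeriPol looks at adjectives, verbs, and punctuation
    marks; this toy version uses cruder proxies (word count, punctuation
    count, witness mentions, attacker-description keywords).
    """
    words = re.findall(r"[a-zA-Z']+", report.lower())
    return {
        "length": len(words),
        "punctuation": len(re.findall(r"[.,;:!?]", report)),
        "mentions_witness": int(any(w in ("witness", "witnesses", "saw") for w in words)),
        "attacker_detail": sum(w in ("tall", "short", "beard", "tattoo", "scar", "accent")
                               for w in words),
    }

def suspicion_score(report: str) -> float:
    """Toy linear score: higher means more likely false (illustrative weights)."""
    f = extract_features(report)
    score = 0.0
    if f["length"] < 20:            # unusually short report
        score += 0.4
    if not f["mentions_witness"]:   # no witnesses mentioned
        score += 0.3
    if f["attacker_detail"] == 0:   # no description of the attacker
        score += 0.3
    return score

# A terse, detail-free report scores as more suspicious than a detailed one.
terse = "My phone was stolen. It was an expensive phone."
detailed = ("A tall man with a beard grabbed my bag outside the station at 6pm; "
            "a witness saw him run towards the park and I chased him for a block "
            "before losing sight of him near the market stalls.")
print(suspicion_score(terse) > suspicion_score(detailed))  # True
```

A real system would learn such weights from labeled reports rather than hand-coding them, which is why VeriPol could surface non-obvious patterns like the “robbed from behind” cue mentioned below.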


Taken together, these sound like common-sense characteristics that humans could recognize. But the AI proved more effective at unemotionally scanning reports and identifying patterns, at least compared with historical data: in a typical June week, police detect just 12.14 false reports in Malaga and 3.33 in Murcia.

Of course, that doesn’t mean the tool is perfect. “[O]ur model began to identify false statements where it was reported that incidents happened from behind or where the aggressors were wearing helmets,” co-author of the study Dr Jose Camacho-Collados, from Cardiff University’s School of Computer Science and Informatics, said in a statement. Bad luck for those who really were robbed from behind, or by someone wearing a helmet.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.



© 2024 World Economic Forum