Is AI the only antidote to disinformation?

People around the world face threats to life and personal safety because of disinformation

Image: REUTERS/Thomas Peter

Arijit Goswami
Innovation Program Manager, Capgemini India

  • AI-based programmes are being used to create deep fakes that sow the seeds of discord in society and create chaos in markets.
  • Algorithms will soon produce content that is indistinguishable from that produced by humans.
  • Human intervention is required to enhance AI detection of disinformation, but the top priority is educating people to evaluate online content objectively.

Few threats to the stability of our society loom larger than disinformation. It is a pandemic that has engulfed small and large economies alike. People around the world face threats to life and personal safety because of the sheer volume of emotionally charged and socially divisive disinformation, much of it fuelled by emerging technology. This content either manipulates people’s perceptions or propagates outright falsehoods.


AI-based programmes are being used to create deep fakes of political leaders by manipulating video, audio and pictures. Such deep fakes can be used to sow the seeds of discord in society and create chaos in markets. AI is also getting better at generating human-like content using language models such as GPT-3, which can author articles, poems and essays from a single-line prompt. AI has made the doctoring of all types of content so seamless that open-source software like FaceSwap and DeepFaceLab enables even amateurs, working discreetly, to become epicentres of social disharmony. At a time when humans can no longer tell where to place their trust, “technology for good” looks to be the only saviour.

Russia and Iran are Meta's top sources of disinformation. Image: Statista

Semantic analytics for basic filtering of disinformation

The first idea that comes to mind for combating disinformation with technology is content analytics. AI-based tools can perform linguistic analysis of textual content, detecting cues such as word patterns, syntax construction and readability to differentiate computer-generated content from human-produced text. Such algorithms can take any piece of text and check word vectors, word positioning and connotation to identify traces of hate speech. Moreover, AI algorithms can reverse-engineer manipulated images and videos to detect deep fakes and highlight content that needs to be flagged.
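
To make this concrete, here is a minimal sketch of such a linguistic filter in Python, assuming a small labelled corpus of human-written and machine-generated text. The training examples, labels and scikit-learn pipeline are illustrative stand-ins, not a production detector:

    # A toy text classifier: TF-IDF word/bigram features feed a logistic
    # regression that separates human-written from machine-generated text.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled corpus (0 = human-written, 1 = machine-generated).
    texts = [
        "Markets closed higher today after a volatile session.",
        "Analysts expect modest growth in the coming quarter.",
        "The market market is is going going up up up today today.",
        "Growth growth expected expected in in quarter quarter next.",
    ]
    labels = [0, 0, 1, 1]

    # Word unigrams and bigrams stand in for the word-pattern and
    # syntax cues a real system would extract.
    detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    detector.fit(texts, labels)

    # Score a new, repetitive, machine-like sentence.
    print(detector.predict(["The market is going going up up today today."]))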

But that’s not enough: generative adversarial networks are becoming so sophisticated that algorithms will soon produce content that is indistinguishable from that produced by humans. To add to these woes, such semantic analysis algorithms cannot interpret hateful images that have not been manipulated but are instead shared with a misleading or malicious context or caption. Nor can they check whether the claims made by a piece of content are false. Linguistic barriers add further challenges. In short, the sentiment of an online post can be assessed, but not its veracity. This is where human intervention is required alongside AI.

Image: Semantic analytics mechanism for detecting disinformation.

Root tracing: the next-level cop

Fake news items have often been found to share the same root – the place of origin before the news spreads. The Fandango project, for example, takes stories that human fact-checkers have flagged as fake and then searches for social media posts or online pages with similar words or claims. This allows journalists and experts to trace fake stories to their roots and weed out potential threats before they can spread out of control.
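
A minimal sketch of this matching step, assuming TF-IDF cosine similarity as the text-matching method (the Fandango project's actual pipeline is more elaborate). The flagged stories, candidate posts and threshold below are invented:

    # Find new posts that closely echo stories already flagged as fake.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    flagged_fakes = [
        "Miracle cure discovered, doctors hate it",
        "Secret memo proves the election was rigged",
    ]
    new_posts = [
        "Doctors stunned by this miracle cure",
        "Local bakery wins national award",
    ]

    vectorizer = TfidfVectorizer().fit(flagged_fakes + new_posts)
    scores = cosine_similarity(
        vectorizer.transform(new_posts),
        vectorizer.transform(flagged_fakes),
    )

    THRESHOLD = 0.5  # illustrative cut-off
    for post, row in zip(new_posts, scores):
        if row.max() >= THRESHOLD:
            print("Possible copy of a flagged story:", post)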

Services such as PolitiFact, Snopes and FactCheck employ human editors to perform the primary research required to verify the authenticity of a report or an image. Once a fake is found, AI algorithms can crawl the web and counter similar pieces of content that could foment social discord. If the content is found to be genuine, a reputation score can be assigned to the article and its website. The Trust Project uses parameters such as sources, references, ethical standards and corrections to assess the credibility of news outlets.
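
As a rough illustration of reputation scoring, the sketch below takes a weighted average over the four Trust Project-style parameters named above. The weights and the 0-to-1 ratings are assumptions for illustration, not the Trust Project's actual formula:

    # Toy reputation score: a weighted average of credibility signals,
    # each rated between 0.0 (worst) and 1.0 (best).
    WEIGHTS = {
        "sources": 0.3,
        "references": 0.3,
        "ethical_standards": 0.2,
        "corrections": 0.2,
    }

    def reputation_score(ratings):
        """Combine per-parameter ratings into a single 0-1 score."""
        return sum(WEIGHTS[name] * ratings[name] for name in WEIGHTS)

    outlet = {"sources": 0.9, "references": 0.8, "ethical_standards": 1.0, "corrections": 0.7}
    print(round(reputation_score(outlet), 2))  # 0.3*0.9 + 0.3*0.8 + 0.2*1.0 + 0.2*0.7 = 0.85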

With the mushrooming volume of fake news and hate speech, and the speed at which such content spreads on social networks, relying on human fact-checkers alone is not sufficient. The process is also biased, since what counts as offensive or socially divisive depends on the subjective opinion of the fact-checker. For example, an allegedly acrimonious news article may describe a genuine incident in emotionally charged language, which may suit the views of one checker but not another. Such a filtering method can therefore help establish the veracity of content, but not its sentiment.

Spread analysis to arrest propagation

There is a marked difference between the way fake news and genuine news travel over social networks. Researchers from MIT found that fake news reaches 1,500 people on Twitter six times faster than genuine news. Moreover, the chain length of genuine news (the number of people who have propagated a social media post) was never above 10, but rose to 19 for fake news. This is partly because of swarms of bots deployed by malicious actors to make fake stories go viral.
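
Chain length here is the depth of the reshare cascade. The sketch below computes it over a hypothetical reshare graph; the edge data is invented for illustration:

    # Hypothetical reshare graph: each key maps a user to the users
    # who reshared the post from them.
    reshares = {
        "origin": ["a", "d"],
        "a": ["b"],
        "b": ["c"],
    }

    def chain_length(user):
        """Length of the longest reshare chain starting at `user`."""
        children = reshares.get(user, [])
        if not children:
            return 0
        return 1 + max(chain_length(child) for child in children)

    print(chain_length("origin"))  # -> 3 (origin -> a -> b -> c)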

Humans are equally responsible, as people often share fake news without much critical thinking or judgment. GoodNews uses an AI engine to identify fake news from engagement metrics: fake news tends to attract more shares than likes, while the reverse holds for genuine news. Techniques like these, which flag suspicious content by how it spreads, can help prevent radicalization.
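
The share-to-like signal can be expressed as a simple heuristic, sketched below. The feed data is invented, and this is a caricature of the idea rather than GoodNews's actual model:

    # Invented engagement data for two posts.
    posts = [
        {"id": 1, "likes": 120, "shares": 30},  # likes dominate: typical of genuine news
        {"id": 2, "likes": 15, "shares": 400},  # shares dominate: suspicious pattern
    ]

    def looks_suspicious(post):
        """Flag posts shared far more often than they are liked."""
        return post["shares"] > post["likes"]

    for post in posts:
        if looks_suspicious(post):
            print(f"Flag post {post['id']} for human fact-checking")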

Humans at the core

Deploying technology is a reactive step, when the world needs a proactive approach to combating disinformation. AI alone won’t succeed unless we educate the masses – especially the young – to be vigilant about disinformation. Some schools in India are teaching critical thinking methods and inculcating fact-checking habits in secondary school students. Fake news is not a matter of mere algorithms, but of the philosophy behind how we deal with knowledge – good or bad. Communities of informed users can contribute to ethical monitoring, while crowdsourcing collaborative knowledge among professional organizations is crucial to verifying raw information.

Humanizing the approach to combating disinformation must be the highest priority if we are to build a well-informed society of critical thinkers. A lack of proactive measures involving all stakeholders can lead to a rapid erosion of trust in media and institutions, which is a precursor to anarchy. Until humans learn to evaluate online content objectively, AI-based technologies have to be our allies in combating disinformation online.
