
Stopping AI disinformation: Protecting truth in the digital world


Though AI can be used to create deepfakes, it can also be used to combat misinformation and disinformation. Image: Gilles Lambert/Unsplash

Cathy Li
Head, AI, Data and Metaverse; Member of the Executive Committee, World Economic Forum
Agustina Callegari
Project Lead, Global Coalition for Digital Safety, World Economic Forum
  • The proliferation of artificial intelligence in the digital age has ushered in both innovations and challenges, particularly in information integrity.
  • AI technologies that can generate 'deepfakes' can be used to produce both misinformation and disinformation.
  • However, AI can also help combat false information through analysing patterns, language and context to aid content moderation.

The proliferation of artificial intelligence (AI) in the digital age has ushered in both remarkable innovations and unique challenges, particularly in the realm of information integrity.

AI technologies, with their capability to generate convincing fake texts, images, audio and videos (often referred to as 'deepfakes'), make it significantly harder to distinguish authentic content from synthetic creations. This capability lets malicious actors automate and scale disinformation campaigns, greatly increasing their reach and impact.


However, AI is not a villain in this story. It also plays a crucial role in combating disinformation and misinformation. Advanced AI-driven systems can analyse patterns, language use and context to aid in content moderation, fact-checking and the detection of false information.
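Even a simple pattern-based scorer illustrates the shape of such a moderation pipeline. The sketch below is a toy illustration, not a production system: the patterns, weights and threshold are invented for this example, and real platforms use trained language models rather than hand-written rules.

```python
import re

# Linguistic cues often associated with misleading content.
# These patterns and weights are illustrative assumptions, not a validated model.
SUSPICIOUS_PATTERNS = {
    r"\bshocking\b|\bexposed\b|they don't want you to know": 2.0,
    r"!{2,}": 1.0,  # repeated exclamation marks
    r"\b100% (proof|proven|true)\b": 2.0,
    r"\b(share|forward) (this )?before (it's|it is) deleted\b": 3.0,
}

def suspicion_score(text: str) -> float:
    """Sum the weights of every suspicious pattern found in the text."""
    lowered = text.lower()
    return sum(
        weight
        for pattern, weight in SUSPICIOUS_PATTERNS.items()
        if re.search(pattern, lowered)
    )

def flag_for_review(text: str, threshold: float = 2.0) -> bool:
    """Route high-scoring posts to human fact-checkers."""
    return suspicion_score(text) >= threshold
```

A trained classifier would replace the hand-written rules, but the pipeline shape is the same: score the content, apply a threshold, and route borderline or high-risk items to human review rather than deciding automatically.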

Understanding the distinction between misinformation (the unintentional spread of falsehoods) and disinformation (its deliberate spread) is crucial for effective countermeasures, and AI analysis of content can also help draw that distinction.

The social cost of disinformation

The consequences of unchecked AI-powered disinformation are profound and can erode the very fabric of society.

The World Economic Forum’s Global Risks Report 2024 identifies misinformation and disinformation as severe threats in the coming years, highlighting the potential rise of domestic propaganda and censorship.

The political misuse of AI poses severe risks. The rapid spread of deepfakes and AI-generated content makes it increasingly difficult for voters to discern truth from falsehood, potentially influencing voter behaviour and undermining the democratic process. Elections can be swayed, public trust in institutions can diminish, social unrest can be ignited, and violence can even erupt.


Moreover, disinformation campaigns can target specific demographics with AI-generated harmful content. Gendered disinformation, for example, perpetuates stereotypes and misogyny, further marginalizing vulnerable groups.

Such campaigns manipulate public perception, leading to widespread societal harm and deepening existing social divides.

A multi-pronged approach to tackle fake content

The rapid development of AI technologies often outpaces governmental oversight, leading to potential social harms if not carefully managed.

Industry initiatives like content authenticity and watermarking address key concerns about disinformation and content ownership. These tools require careful design and input from multiple stakeholders to prevent misuse, such as eroding privacy or persecuting journalists in conflict zones.

For example, the Coalition for Content Provenance and Authenticity (C2PA) – founded by Adobe, Arm, Intel, Microsoft and Truepic – addresses the prevalence of misleading information online through the development of technical standards for certifying the source and history, or provenance, of media content.
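The core idea behind provenance standards like C2PA is to cryptographically bind a claim about a piece of media to its exact bytes, so that any later alteration is detectable. The sketch below is a heavily simplified stand-in: real C2PA uses signed manifests embedded in the media file and public-key infrastructure, whereas this example uses a bare HMAC with an illustrative shared key.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # illustrative only; real systems use PKI

def issue_claim(media_bytes: bytes, source: str) -> dict:
    """Bind a provenance claim to the exact bytes of a media file."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    claim = {"source": source, "sha256": digest}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_claim(media_bytes: bytes, claim: dict) -> bool:
    """Check the signature and that the media bytes are unaltered."""
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, claim["signature"])
        and hashlib.sha256(media_bytes).hexdigest() == claim["sha256"]
    )
```

Verification fails if either the claim itself is tampered with or the media bytes change by even one bit, which is what lets a consumer trust an asserted source.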

To further mitigate the risks associated with AI, developers and organizations must implement robust safeguards, transparency measures and accountability frameworks.

By establishing comprehensive systems, developers can ensure that AI is deployed ethically and responsibly, thereby fostering trust and promoting the beneficial use of AI in various domains.

In addition to technical measures, public education on media literacy and critical thinking is essential to empower individuals to navigate the complex landscape of digital information.

Schools, libraries and community organizations play a vital role in promoting these skills, providing resources and training programmes to help individuals develop the ability to critically evaluate information sources, discern misinformation from factual content, and make informed decisions.

Collaboration is key to tackling misinformation

Moreover, collaboration among stakeholders, including policy-makers, tech companies, researchers and civil society organizations, is vital to effectively address the multifaceted challenges posed by AI-enabled misinformation and disinformation.

This situation highlights the importance of fostering global understanding and cooperation to tackle the spread of false information driven by the rise of AI-generated content.

The AI Governance Alliance, a flagship initiative by the World Economic Forum and part of the Centre for the Fourth Industrial Revolution, unites experts and organizations worldwide to address the complex challenges of AI, including the generation of misleading or harmful content and the violation of intellectual property rights.

Through collaborative efforts, the Alliance develops pragmatic recommendations to ensure that AI is developed and deployed responsibly, ethically and for the greatest benefit of humanity.

Another Forum initiative is the Global Coalition for Digital Safety, which is spearheading efforts to combat disinformation by promoting a whole-of-society approach to enhancing media literacy. This includes understanding how false information is produced, distributed and consumed, and identifying the necessary skills at each stage to counter it.

The coalition brings together tech companies, public officials, civil society and international organizations to exchange best practices and coordinate actions aimed at reducing online harms.

Advancing our approach to digital safety

As AI continues to transform our world, it is imperative to advance our approach to digital safety and information integrity.



Through enhanced collaboration, innovation and regulation, we can harness the benefits of AI while safeguarding against its risks, ensuring a future where technology uplifts rather than undermines public trust and democratic values.

By working together, we can ensure that AI serves as a tool for truth and progress, not manipulation and division.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.

© 2024 World Economic Forum