The US is drafting new laws to protect against AI-generated deepfakes

New laws to protect against deepfake technology are being drafted. Image: Unsplash/Markus Spiske

Simon Torkington
Senior Writer, Forum Agenda

  • The US is drafting new laws to protect against harmful deepfake content.
  • Deepfakes can be used to destroy reputations and undermine democracy.
  • The World Economic Forum’s Digital Trust Initiative and the Global Coalition for Digital Safety aim to counter harmful online content.

What do Tom Cruise, Barack Obama, Rishi Sunak and Taylor Swift have in common? The answer is they have all been the subject of deepfake videos posted online.

Deepfakes are content created using AI. They can be used to impersonate real people and can be so lifelike that only a computer can detect they are not the real thing.

Deepfake videos featuring an AI-generated likeness of Tom Cruise went viral on TikTok. Image: Chris Umé/TikTok

The Tom Cruise deepfakes went viral on TikTok and were intended as lighthearted entertainment. Cruise himself has made no moves to have the videos taken offline, according to a report by CNN.

When the deepfake fun stops

The proliferation of AI tools is raising serious concerns about the use of deepfakes to destroy reputations, undermine democratic elections and damage trust in genuine online sources of verified information. The World Economic Forum’s Global Risks Report 2024 ranks misinformation and disinformation as the number one threat the world faces in the next two years.

In February 2024, an audio deepfake emerged that mimicked the voice of US President Joe Biden. The audio clip was used in an automated telephone call targeting Democratic voters in the US state of New Hampshire. In the faked message, an AI-generated version of Biden’s voice is heard urging people not to vote in the state’s primary election.

With a potentially divisive US election scheduled for November 2024, in which Biden looks likely to contest the presidency with Donald Trump, US authorities are drafting new laws that would ban the production and distribution of deepfakes that impersonate individuals.

Legal moves to outlaw deepfakes

The proposed laws are being put forward by the US Federal Trade Commission (FTC). The FTC warns that advanced technology is driving a huge rise in deepfakes used to defraud unsuspecting members of the public.

“Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever,” said FTC Chair Lina M. Khan. “Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC’s toolkit to address AI-enabled scams impersonating individuals.”

While these proposed laws are aimed at stopping scammers, they also cover the impersonation of government and business entities and so could, if and when signed into law, provide legal cover that would protect the election process.

Deepfakes and divided societies

One of the key themes of the World Economic Forum’s Annual Meeting in Davos, in January 2024, was rebuilding trust.

The potential implications of deepfakes for public trust are profound. When they are used to misrepresent politicians, civic leaders and heads of industry, they can erode people’s faith in government, media, justice systems and private institutions.

As the public becomes increasingly sceptical of the authenticity of digital content, this scepticism can lead to a general disengagement from civic life and a decline in informed public discourse.

The Forum’s Digital Trust Initiative aims to counter these negative consequences by ensuring technology is developed and used in a way that protects and upholds society’s expectations and values. The Forum’s Global Coalition for Digital Safety also advocates for a whole-of-society approach that mobilizes governments, the private sector, and citizen groups to build media and information literacy to combat disinformation.

“Open and democratic societies rely on truthful information in order to function,” says Daniel Dobrygowski, Head of Governance and Trust at the World Economic Forum. “Where new technologies are used to intensify attacks on truth and on democratic institutions, action by governments, the private sector and citizens is required to fend off the potential for a digital disruption of democracy,” Dobrygowski adds.

How to spot a deepfake

As AI emerges as the key technology for producing deepfakes, it also offers great potential for detecting faked content. Stanford University has developed AI tools that can detect the lip-synching processes frequently used to put words never spoken into the mouths of the people depicted.
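
To make the broad idea behind this kind of detection concrete, here is a minimal sketch, assuming two signals have already been extracted from a clip: a per-frame measure of how wide the mouth is open (from a face-landmark tracker) and a per-frame measure of audio loudness. In genuine footage the two tend to rise and fall together, so a weak correlation can flag a clip for closer review. This is an illustrative Python example only, not the Stanford tool mentioned above; the function names and the threshold are assumptions made for the sketch.

    # Illustrative sketch: flag clips whose mouth movement and audio loudness
    # are poorly correlated. Real deepfake detectors are far more sophisticated.
    import numpy as np

    def lip_sync_score(mouth_openness: np.ndarray, audio_energy: np.ndarray) -> float:
        """Pearson correlation between per-frame mouth opening and audio energy.

        Both inputs are assumed to be 1-D arrays sampled at the video frame
        rate, produced upstream by a face-landmark tracker and a loudness meter.
        """
        n = min(len(mouth_openness), len(audio_energy))
        m = mouth_openness[:n] - mouth_openness[:n].mean()
        a = audio_energy[:n] - audio_energy[:n].mean()
        denom = np.sqrt((m ** 2).sum() * (a ** 2).sum())
        return float((m * a).sum() / denom) if denom > 0 else 0.0

    def looks_suspicious(mouth_openness, audio_energy, threshold=0.3):
        # The threshold is an arbitrary placeholder: natural speech usually
        # shows a clearly positive correlation, while manipulated lip-sync
        # often does not. A low score is a prompt for closer inspection,
        # not proof of fakery.
        return lip_sync_score(np.asarray(mouth_openness, dtype=float),
                              np.asarray(audio_energy, dtype=float)) < threshold

In practice, production systems combine simple cues like this with machine-learning models trained on large collections of real and manipulated video.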

Big tech organizations, including Microsoft, have developed toolkits to keep families safe online. These include guides that help people detect misinformation and disinformation by thinking critically and questioning whether what they are looking at is authentic.

As the world heads deeper into the age of AI, legal measures to control its misuse will be vital to protect trust in information and institutions. As online users, we may all have to challenge ourselves more frequently with the question: is this real?

