Deep fakes could threaten democracy. What are they and what can be done?

A still from BuzzFeed's video, in which technology was used to fake footage of Barack Obama, accompanied by an impersonation of his voice by Jordan Peele.

Andrew Chakhoyan
Senior Manager Public Affairs, Strategic Engagement, Booking.com

"Will deep-fake technology destroy democracy?" asked the New York Times. While some of us are just learning the jargon, others expect that we'll enter this brave new post-fact world within a year or two. Concerned about the emerging tech trend, a group of US lawmakers sent a letter to the Director of National Intelligence with this warning:

“Hyper-realistic digital forgeries use sophisticated machine learning techniques to produce convincing depictions of individuals doing or saying things they never did, without their consent or knowledge. By blurring the line between fact and fiction, deep fake technology could undermine public trust in recorded images and videos as objective depictions of reality.”

At present, we can spot most forgeries with the naked eye. But, as with all things digital, the improvements are exponential. And where the technology fails to convince, confirmation bias will bridge the gap. If the content conforms to your beliefs, you are likely to miss the unnatural facial expressions or the muffled voice track.

A deep-fake classic is the BuzzFeed video featuring President Obama making outrageous proclamations; the clip also demonstrates how the hoax was made. One could take solace in the fact that producing the video required an actor mimicking Obama's voice. But that requirement is about to expire, as a recent BBC experiment with voice-imitation technology proved. You can try the BBC test yourself; the majority of people polled agreed that the AI outshone a professional impersonator.

Whether with malicious intent or not, anyone can use open platforms like FakeApp to create rudimentary video forgeries. The internet didn't skip a beat in democratizing this dangerous capability. No longer are technical training or major resources required to wreak considerable havoc with misinformation.

This is not the first time a disruptive technology has posed a major civic challenge. And we'd be right to expect that as deep fakes become more advanced, so will the detection technology. Facebook, for instance, has just announced plans to accelerate its efforts to spot and weed out misinformation, and the European Union has funded InVID ("In Video Veritas"), a browser plugin to sniff out fake videos.

But our challenge goes beyond mere detection: it is the polarization of our society and the filter bubbles, dangerous side effects of an algorithm-powered, information-overloaded reality. Sensationalist headlines, factual or not, already spread like wildfire. Content of questionable quality or obscure origin can go viral in no time. Video appeals at a visceral level in a way no text or picture ever will, so the danger of a hoax spreading and shaping our perceptions is many times greater.

On one hand, greater awareness is the go-to antidote: if we understood how easy it is to manipulate a video and produce a high-quality forgery, we would all be more vigilant. On the other hand, greater vigilance means diminishing trust, and an impetus for further retreat into our digital echo chambers, extending credence to an ever-shrinking circle.

In addressing this threat, there are no easy answers. But here are a few ideas of where to start:

Policymakers must be proactive and bold. Pre-emptive measures might include early communication of penalties, whether for the producers or the publishers of malicious content, and clear protocols for enforcement. More important, though, would be outreach to traditional and digital media platforms to fundamentally rethink the safety net around online content. Take illicit peer-to-peer music sharing, for example: at its peak, the trend looked unstoppable, but it was halted by legal action combined with new business models.

The tech giants ought to re-evaluate their role in society. Filter bubbles weren't invented in Silicon Valley, but, with the help of the digital platforms that reinforce them, they have grown to a level that threatens our socio-political order. "If the product is free, you are the product," goes the old adage. As long as the customers of online services remain the product, our digital commons will remain vulnerable. When one pays to consume digital content on a platform, one has a direct stake in it and the standing to demand stronger quality controls. When the FT introduced a paywall in 2002, it went against many conventions of the time. Medium, a "social journalism" platform, launched a decade later and now offers paid subscriptions. And for a membership fee of $11.99 a month, one can now watch YouTube ad-free. This trend, it seems, is already picking up steam.

As cyber-threats evolved, so did internet security. Early firewalls and antivirus software helped weed out malicious code, but now Zero Trust architecture is emerging as the new standard. The old security model relied on blacklisting known bad content and letting everything else in; the new approach is built on the opposite assumption: all content is treated as a risk unless it has been whitelisted. Perhaps this paradigm shift is inevitable for all digital content going forward. In this respect, blockchain technology might prove helpful, as Antonio García Martínez argued in his article for Wired.
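To make the contrast concrete, here is a minimal sketch in Python of the two models. The registries, hash values, and function names are hypothetical illustrations, not any platform's actual system: in practice a whitelist would be populated from a provenance ledger (blockchain-based or otherwise) signed by the original publisher, and a blacklist from threat-intelligence feeds.

```python
import hashlib

# Hypothetical registries of content fingerprints (illustration only).
BLACKLIST = set()   # fingerprints of known forgeries
WHITELIST = set()   # fingerprints of verified, authentic content

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 digest identifying a piece of content."""
    return hashlib.sha256(content).hexdigest()

def blacklist_check(content: bytes) -> bool:
    """Old model: allow everything unless it is known to be bad."""
    return fingerprint(content) not in BLACKLIST

def whitelist_check(content: bytes) -> bool:
    """Zero-trust model: distrust everything unless it is known to be good."""
    return fingerprint(content) in WHITELIST

# A publisher registers authentic footage at release time...
original = b"frames of the authentic video"
WHITELIST.add(fingerprint(original))

# ...so a doctored copy fails verification even before anyone flags it.
forgery = b"frames of the doctored video"
print(blacklist_check(forgery))   # True  -- the old model waves it through
print(whitelist_check(forgery))   # False -- the new model rejects it
print(whitelist_check(original))  # True  -- registered content verifies
```

The design difference is the failure mode: blacklisting fails open, so a forgery circulates until someone flags it, while whitelisting fails closed, which is why a provenance registry would have to be populated at the moment of publication.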

Beyond the self-evident recommendation that we all apply critical thinking to the information we consume, it is high time for citizens to demand from their elected representatives a new social contract for the digital age.

Deep fakes will test the strength of our institutions and challenge such fundamental notions as fact and truth. They will sow doubt, if not defeatism and cynicism. If humanity is unable to act pre-emptively, the proliferation of AI-enabled fabrications could set off a vicious cycle: ever-widening divisions in society accompanied by the decay of trust. We can't afford to wait and see. We must take notice, act now, and redefine the framework of interaction between humans and algorithms.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
