Deepfake democracy: Here's how modern elections could be decided by fake news
Could deepfakes decide modern elections? Image: REUTERS/Jonathan Ernst
Alexander Puutio
PhD Researcher at the University of Turku, Founding Curator of the New York Queens Global Shapers Hub
- The emerging threat of deepfakes could have an unprecedented impact on this election cycle, raising serious questions about the integrity of democratic elections, policy-making and our society at large.
- A new ethical agenda for AI in political advertising and content on online platforms is required. Given the cross-border nature of the problem, the agenda must be backed by global consensus and action.
- Communities and individuals can also take action directly by setting higher standards for how to create and interact with political content online.
In a few months the United States will elect its next President. While some worry about whether campaigning and casting votes can be done safely during the COVID-19 pandemic, another question is just as critical: how many votes will be swayed by the manipulative influence of artificial intelligence?
Specifically, the emerging threat of deepfakes could have an unprecedented impact on this election cycle, raising serious questions about the integrity of elections, policy-making and our democratic society at large.
Understanding deepfakes
AI-powered deepfakes could have troubling consequences for the 2020 US election.
The technology that began as little more than a giggle-inducing gimmick for making homebrew mash-up videos has recently been supercharged by advances in AI.
Today, open-source software like DeepFaceLab and Faceswap allows virtually anyone with time and access to cloud computing to deploy sophisticated machine learning and graphical rendering without any prior development experience.
More worryingly, the technology itself is improving at such a rapid pace that experts predict deepfakes may soon be indistinguishable from real videos. The staggering results that AI can create today can be attributed to herculean leaps in generative adversarial networks (GANs), a technique that enables neural networks to make the jump from mere perception to creation.
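To make that leap from perception to creation concrete, the sketch below shows the adversarial training loop at the heart of a GAN: a generator learns to fabricate samples while a discriminator learns to tell them apart from real data, and each improves by competing with the other. This is a minimal, hypothetical illustration written in PyTorch (the article names no specific framework, and all names here are chosen for clarity); it learns a simple 1-D distribution rather than faces, but deepfake tools apply the same tug-of-war to images at vastly larger scale.

```python
# Minimal GAN sketch (PyTorch assumed): a generator learns to mimic a simple
# 1-D Gaussian while a discriminator learns to tell real samples from fakes.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to fake "data" samples.
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Real samples drawn from the target distribution (mean 4, std 1.5).
    real = torch.randn(64, 1) * 1.5 + 4.0
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # 1) Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# After training, generated samples should cluster around the real mean (~4).
print(generator(torch.randn(1000, 8)).mean().item())
```

The same competitive dynamic, scaled up to face imagery and video, is why each generation of deepfakes is harder to distinguish from the real thing than the last.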
As one might expect with viral technology, the number of deepfake videos is growing exponentially as the continuing democratization of AI and cloud computing makes the underlying tools ever more accessible.
A new infodemic?
As we have seen during the COVID-19 pandemic, the contagious spread of misinformation rarely requires more than a semblance of authority accompanying the message, no matter how pernicious or objectively unsafe the content may be to the audience.
Given how easily deepfakes can combine fake narratives and information with fabricated sources of authority, they have an unprecedented potential to mislead, misinform and manipulate, giving ‘you won’t believe your eyes’ a wholly new meaning.
In fact, according to a recent report from the Brookings Institution, deepfakes are well on their way not only to distorting democratic discourse but also to eroding trust in public institutions at large.
How can deepfakes become electoral weapons?
How exactly could deepfakes be weaponized in an election? To begin with, malicious actors could forge evidence to fuel false accusations and fake narratives. For example, subtle changes to how a candidate delivers an otherwise authentic speech could be used to call their character, fitness and mental health into question without most viewers knowing any better.
Deepfakes could also be used to create entirely new fictitious content, including controversial or hateful statements with the intention of playing upon political divisions, or even inciting violence.
Perhaps not surprisingly, deepfakes have already been leveraged in other countries to destabilize governments and political processes.
- In Gabon, the military launched an ultimately unsuccessful coup after the release of a video of President Ali Bongo, widely suspected to be fake, suggested that he was no longer healthy enough to hold office.
- In Malaysia, a video purporting to show the Economic Affairs Minister having sex generated considerable debate over whether it had been faked, causing reputational damage to the Minister.
- In Belgium, a political group released a deepfake of the Belgian Prime Minister giving a speech that linked the COVID-19 outbreak to environmental damage and called for drastic action on climate change.
The truth may win
As of today, we are woefully ill-equipped to deal with deepfakes.
According to the Pew Research Center, almost two-thirds of the US population say that fake content creates a great deal of confusion about political reality. Worse still, even our best efforts to correct and fact-check fake content may ultimately only strengthen the spread of the faked narrative.
For AI and democracy to coexist, we must urgently secure a common understanding of what is true and create a shared environment for facts from which our diverging opinions can safely emerge.
What is most desperately needed is a new ethical agenda for AI in political advertising and content on online platforms. Given the cross-border nature of the problem, the agenda must be backed by global consensus and action.
Initiatives like the World Economic Forum’s Responsible Use of Technology, which bring tech executives together to discuss the ethical use of their platforms, are a strong start.
At a more local level, legislatures have started to follow California's initiative to ban deepfakes during elections, and even Facebook has joined the fight with its own ban on certain forms of manipulated media and a challenge to develop technologies to spot them.
The future: fact or fiction?
Still, more can be done.
We do not necessarily need a technology or regulatory paradigm change in order to disarm deepfakes. Instead, communities and individuals can also take action directly by setting higher standards for how we create and interact with political content online ourselves.
In fact, unless voters themselves stand up for facts and truth in online discourse, it will be all but impossible to drive meaningful change, simply because the inherent subjectivity of online platforms puts reality at a disadvantage.
Whether we want it or not, deepfakes are here to stay. But November 2020 could mark the moment we take a collective stand against the threats AI poses, before it is too late.