From deepfakes to social engineering, here's what to know about elections, cybersecurity and AI

Collaboration is fundamental to building resilience in the electoral process. Image: Red Dot on Unsplash

Gretchen Bueermann
Daniel Dobrygowski
Head, Governance and Trust, World Economic Forum
  • AI-generated content is becoming a powerful tool for manipulation.
  • The prevalence of deepfakes is a particular concern when it comes to elections.
  • There is an urgent need for global cooperation to counter AI-related and cybersecurity threats to democracy.

In the days leading up to Slovakia's recent elections, a storm of disinformation erupted, challenging the very fabric of the democratic process. An audio recording surfaced on Facebook, allegedly capturing a conversation between a candidate and a media representative discussing plans to manipulate the election, including buying votes.

Although the denunciation of the audio as fake was swift, the damage had already been done. This incident serves as a chilling testament to the evolving threat landscape of elections, where artificial intelligence (AI) is becoming a powerful tool for manipulation.

The incident underscores the vulnerability of election systems to deepfakes, which can sway opinion while swiftly and cheaply exploiting the time it takes fact-checkers to respond. This issue is not confined to one election or one country. Because digital media are interconnected and communications and social media platforms are relatively homogeneous, bad actors can experiment in one country and quickly pivot and scale successful attacks on democratic processes worldwide. Given the number of elections next year, the range of targets is the largest it has been since the dawn of the digital age.

In 2024, over 40 national elections will be held in countries that together account for more than 50% of global GDP. Slovakia's experience serves as a cautionary tale for these upcoming elections. The world's largest democracies, including the UK, India, the EU and the US, are all preparing for elections next year, and the international community must heed the lessons from Slovakia. The prevalence of AI-generated content and an increasingly vast cyber-attack surface ripe for interference demand a re-evaluation of the tools used to protect democracy and urgent global cooperation to counter emerging threats.

The cybersecurity threat landscape during elections is multifaceted and evolving. While paper ballots are, by far, the most common form of voting worldwide, cyber and other digital threats can still impact citizens’ ability to vote and the information they use to make choices. Elections are vulnerable to a spectrum of technological threats that range from traditional cybersecurity concerns, such as hacking and data breaches, to more sophisticated forms of manipulation, such as deepfakes and AI-generated disinformation.

Threat actors may exploit vulnerabilities in election infrastructure, targeting electronic voting systems and voter databases. Compounding these vulnerabilities is the fact that attackers do not even need to succeed. Because democracies derive their strength from the legitimacy of their elections, merely casting doubt on the integrity of electoral processes can erode social cohesion and support for the results.

AI allows users to significantly increase the scale, scope and speed of their activities, even when those users pursue antisocial and harmful goals. Social engineering tactics, including phishing and misinformation campaigns, already pose significant risks to the integrity of electoral processes. The interconnected nature of technology in modern elections amplifies the potential impact of these threats, and the speed at which AI tools process information compounds it further. As witnessed in Slovakia, the manipulation of audio recordings using AI demonstrates the pressing need for comprehensive measures to safeguard the democratic foundation of elections against digital disruption.

What cyber risks especially impact elections?

• Voter registration systems: manipulation or compromise of voter registration databases can result in voter disenfranchisement or fraudulent registrations.

• Voting machine vulnerabilities: hacking or tampering with electronic voting machines can manipulate vote counts, while malware or ransomware attacks on those machines can disrupt the voting process.

• Election management systems: manipulation of election management systems can lead to misallocation of resources or inaccurate reporting of results.

• Phishing and social engineering: targeted phishing attacks on election officials or political parties can compromise sensitive information or introduce malware.

Where can malicious use of AI compound these risks?

• Misinformation and disinformation: organized campaigns spreading misinformation through social media or other channels can influence public opinion, cast doubt on election integrity and sway election outcomes.

• Deepfakes: in this particular form of disinformation, AI-generated fake videos or audio recordings can be used to spread false information about candidates or manipulate public perception.

• Automated disinformation: AI algorithms can be employed to generate and spread large volumes of disinformation, making it harder to detect and combat.

• Targeted advertising: AI-driven microtargeting of voters with mis- or disinformation through personalized advertisements can be used to manipulate opinions or suppress voter turnout.

• Data privacy concerns: where voting information is drawn from national ID, residence records or other methods that connect to personally identifiable information (PII), automated processing may create avenues for the leakage of personal data not relevant to voting eligibility determinations.

• Algorithmic manipulation of social media: AI algorithms on social media platforms can be manipulated to amplify certain political messages or suppress others, influencing public opinion.

To address these risks, election authorities, technology providers and policymakers need to work together to implement robust cybersecurity measures, ensure the integrity of AI systems and promote transparency in electoral processes. Regular security audits, public awareness campaigns and international cooperation are essential components of safeguarding elections in the digital age.

Organizations are already working to address these problems. CISA in the United States has published a Cybersecurity Toolkit to Protect Elections that includes guides to securing assets such as voter information websites, email systems and networks. Likewise, in the EU, the European Commission offers support on tackling online disinformation. Many other countries and international organizations have published similar guides, but are they enough?

Guidelines and best practices for protecting elections against technological disruption emphasize these main themes:

The importance of proactive cybersecurity measures

Proactive cybersecurity measures are paramount in safeguarding elections against evolving cyber threats. By anticipating potential vulnerabilities and staying ahead of malicious actors, election systems can significantly reduce the risk of cyberattacks. This includes continuous monitoring, regular security audits and the implementation of advanced technologies to detect and mitigate potential threats before they can compromise the integrity of electoral processes. A proactive approach ensures that election systems are fortified against emerging cyber risks, contributing to the overall resilience of the electoral infrastructure.
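
To make continuous monitoring a little more concrete, the sketch below shows one illustrative building block: a file-integrity check that hashes election-system configuration files against a known-good baseline so that unexpected changes can be flagged for investigation. The directory path and helper names are hypothetical assumptions for illustration only, not a reference to any particular election system or product.

```python
# Minimal sketch of file-integrity monitoring for election-system files.
# Assumptions: the monitored directory ('/var/election/config') and the
# baseline workflow are illustrative, not taken from the article.
import hashlib
import pathlib


def sha256_of(path: pathlib.Path) -> str:
    """Hash a file in chunks so large exports (e.g. voter rolls) are handled safely."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_baseline(directory: str) -> dict[str, str]:
    """Record known-good hashes for every file under the monitored directory."""
    root = pathlib.Path(directory)
    return {str(p): sha256_of(p) for p in root.rglob("*") if p.is_file()}


def detect_changes(directory: str, baseline: dict[str, str]) -> list[str]:
    """Return files that were altered, added or removed since the baseline was taken."""
    current = build_baseline(directory)
    altered_or_added = [p for p, h in current.items() if baseline.get(p) != h]
    removed = [p for p in baseline if p not in current]
    return altered_or_added + removed


if __name__ == "__main__":
    # Illustrative usage: take a baseline once, then re-run detect_changes on a
    # schedule and alert on any difference.
    baseline = build_baseline("/var/election/config")
    print(detect_changes("/var/election/config", baseline))
```

In practice, a check like this would be one small input into a broader monitoring and alerting programme, alongside network monitoring, access logging and regular third-party audits.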

Use of risk-limiting audits to ensure election integrity

Risk-limiting audits are meant to achieve two important goals. The first is to confirm that an election was conducted as expected, including that election technology functioned as designed. The second is to give citizens and election observers the information they need to be assured of the proper functioning of the voting process. Such audits serve as valuable credibility-building mechanisms in the face of a rapidly changing technological and risk landscape.
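
For readers unfamiliar with how such audits work in practice, here is a simplified sketch of a ballot-polling audit in the style of the BRAVO method for a two-candidate contest: ballots are drawn at random until the statistical evidence that the reported winner really won reaches a pre-set risk limit, or the audit escalates to a full hand count. The vote totals, risk limit and sample cap are illustrative assumptions, not figures from this article.

```python
# Simplified BRAVO-style ballot-polling risk-limiting audit (two candidates).
# Assumptions: 'W' marks a ballot for the reported winner, 'L' for the loser;
# reported_share is the winner's reported vote share; risk_limit bounds the
# chance of confirming a wrong outcome. All numbers below are illustrative.
import random


def bravo_audit(ballots, reported_share, risk_limit=0.05, max_samples=5000):
    """Sample ballots (with replacement) and update Wald's sequential test
    statistic until the outcome is confirmed or the sample cap is reached."""
    threshold = 1.0 / risk_limit
    test_stat = 1.0
    for draws in range(1, max_samples + 1):
        ballot = random.choice(ballots)
        if ballot == "W":
            test_stat *= 2 * reported_share          # evidence for the reported winner
        else:
            test_stat *= 2 * (1 - reported_share)    # evidence against the reported winner
        if test_stat >= threshold:
            return f"outcome confirmed after {draws} sampled ballots"
    return "risk limit not met -- escalate to a full hand count"


if __name__ == "__main__":
    # Illustrative contest: 5,400 of 10,000 ballots for the reported winner (54%).
    ballots = ["W"] * 5400 + ["L"] * 4600
    print(bravo_audit(ballots, reported_share=0.54))
```

The key property is that the audit only stops early when the sampled ballots provide strong statistical evidence for the reported outcome; otherwise it keeps sampling and ultimately falls back to a full hand count, which is what gives observers confidence in the result.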

The role of education and training in building resilience against cyber threats and disinformation

Education and training play a crucial role in enhancing cyber resilience, especially in the context of elections. Providing election officials, IT personnel and other stakeholders with comprehensive training on cybersecurity best practices equips them with the skills needed to identify, respond to and mitigate cyber threats effectively. Both election officials and the public at large would also benefit from greater media and information literacy, including awareness of social engineering tactics, phishing prevention and better training and systems to identify deepfake and other AI-generated content. A well-informed and trained workforce is better prepared to navigate the complex landscape of cyber threats, thereby reinforcing the overall resilience of election systems.

Collaboration between government agencies, cybersecurity experts and the public

Collaboration is fundamental to building robust cyber resilience in the electoral process. Government agencies, cybersecurity experts and the public must work together to share information, expertise and resources. Government agencies can implement and enforce cybersecurity standards, while cybersecurity experts contribute specialized knowledge to identify and address emerging threats. Engaging the public through awareness campaigns fosters a sense of collective responsibility and encourages individuals to adopt secure online practices. A collaborative approach ensures a multi-faceted defence strategy, enhancing the overall resilience of election systems against a diverse range of cyber threats.

As the world grapples with the fallout of Slovakia's elections, one thing is clear: the intersection of elections and cybersecurity is more perilous than ever. Malicious actors using AI have emerged as formidable adversaries, challenging the traditional tools of fact-checkers and exploiting vulnerabilities in social media policies. The international community must act swiftly, developing robust strategies and tools to safeguard the integrity of elections in the face of AI-driven disinformation. The battle for secure elections has entered a new era and the world must rise to the occasion.
