How artificial intelligence silently took over democracy


Vyacheslav Polonski
Alumni, Global Shapers Community, Google

There has never been a better time to be a politician. But it’s an even better time to be a machine learning engineer working for a politician.

Throughout modern history, political candidates have had a limited number of tools to take the temperature of the electorate. More often than not, they’ve had to rely on instinct rather than insight when running for office.

Big data can now be used to maximize the effectiveness of a campaign. The next step is using artificial intelligence (AI) in election campaigns and political life.

Machine learning systems can already predict which US congressional bills will pass by making algorithmic assessments of the text of the bill as well as other variables, such as how many sponsors it has and even the time of year it is being presented to Congress.
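As a rough illustration of how such a system might work, the sketch below combines a bill’s text with simple metadata (sponsor count, month of introduction) to estimate its chance of passing. The data, feature choices and model are hypothetical stand-ins, not a description of any real system:

```python
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: bill texts, metadata and pass/fail labels.
bill_texts = [
    "A bill to fund renewable energy research and development.",
    "A bill to rename a post office in Springfield.",
]
metadata = np.array([
    [45, 3],   # [number of sponsors, month introduced]
    [2, 11],
])
passed = np.array([1, 0])  # 1 = enacted, 0 = died in committee

# Represent each bill as TF-IDF text features plus the metadata columns.
vectorizer = TfidfVectorizer()
text_features = vectorizer.fit_transform(bill_texts)
features = hstack([text_features, metadata])

# A simple logistic regression stands in for the production model.
model = LogisticRegression()
model.fit(features, passed)

# Score a new bill: the estimated probability that it passes.
new_text = vectorizer.transform(["A bill to expand rural broadband access."])
new_features = hstack([new_text, np.array([[30, 4]])])
print(model.predict_proba(new_features)[0, 1])
```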

Machine intelligence is also now being deployed in election campaigns to engage voters and help them become better informed about important political issues.

Did you vote because of AI?

This use of technology raises ethical issues, as artificial intelligence can be used to manipulate individual voters.

During the 2016 US presidential election, for example, the data science firm Cambridge Analytica rolled out an extensive advertising campaign to target persuadable voters based on their individual psychology.

Using big data and machine learning, voters received different messages based on predictions about their susceptibility to different arguments. The paranoid received ads with messages based on fear, while people with a conservative predisposition received ads with arguments based on tradition and community.

This was made possible by the availability of real-time data on voters – from their behaviour on social media to their consumption patterns and relationships. Their internet footprints were used to build unique behavioural and psychographic profiles.
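To make the mechanism concrete, here is a deliberately simplified sketch of routing ad variants from a psychographic profile. The trait names, thresholds and ad copy are invented for illustration; real campaigns infer such traits with machine-learning models trained on exactly the footprint data described above:

```python
# Toy ad routing from a psychographic profile (all values invented).
AD_VARIANTS = {
    "fear": "Keep your family safe. Vote for tougher security.",
    "tradition": "Protect our traditions and communities. Vote.",
    "default": "Make your voice heard on election day.",
}

def pick_ad(profile: dict) -> str:
    """Return the ad variant predicted to resonate with this voter."""
    if profile.get("neuroticism", 0.0) > 0.7:    # the 'paranoid' segment
        return AD_VARIANTS["fear"]
    if profile.get("conservatism", 0.0) > 0.7:   # the traditionalist segment
        return AD_VARIANTS["tradition"]
    return AD_VARIANTS["default"]

print(pick_ad({"neuroticism": 0.9, "conservatism": 0.2}))
```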

The problem with this approach is not the technology itself but the covert nature of the campaigning and the insincerity of the political messages being sent out. A candidate with flexible campaign promises, like President Donald Trump, is particularly well suited to this tactic: every voter can be sent a tailored message that emphasizes a different side of a particular argument, and each voter gets a different Trump. The key is simply to find the right emotional triggers to spur each person into action.

Attack of the bots

Massive swarms of political bots were used in the 2017 general election in the UK to spread misinformation and fake news on social media. The same happened during the US presidential election in 2016 and several other key political elections around the world.

Bots are autonomous accounts programmed to aggressively spread one-sided political messages, manufacturing the illusion of public support. Typically disguised as ordinary human accounts, they can be used to surface negative social media messages about a candidate to the demographic groups most likely to vote for that candidate, with the aim of discouraging them from turning out on election day.

In the 2016 election, pro-Trump bots are claimed to have infiltrated Twitter hashtags and Facebook pages used by Hillary Clinton supporters in order to spread automated content. Bots were also deployed at a crucial point in the 2017 French presidential election, pushing out on Facebook and Twitter a deluge of emails leaked from candidate Emmanuel Macron’s campaign team. The information dump also contained what Macron says was false information about his financial dealings. The aim of #MacronLeaks was to build a narrative that Macron was a fraud and a hypocrite, using the common bot tactic of pushing trending topics to dominate social feeds.

Using AI for good

It is easy to blame AI for the world’s wrongs (and for lost elections) but the underlying technology itself is not inherently harmful. The algorithmic tools that are used to mislead, misinform and confuse could equally be repurposed to support democracy.

AI can be used to run better campaigns in a more legitimate way. An ethical approach to AI can work to inform and serve an electorate. New AI start-ups like Factmata and Avantgarde Analytics are already providing these technological solutions.

We can, for example, programme political bots to step in when people share articles that contain known misinformation. They could issue a warning that the information is suspect and explain why. This could help to debunk known falsehoods, like the infamous article that falsely claimed that Pope Francis had endorsed Trump.
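A minimal sketch of such a warning bot appears below. The hard-coded lookup table is a placeholder for what would, in practice, be a fact-checking database or a trained misinformation classifier, and the URL and wording are hypothetical:

```python
# Known-misinformation lookup (placeholder for a fact-checking service).
KNOWN_FALSE = {
    "example.com/pope-endorses-trump":
        "Fact-checkers found no evidence of any such endorsement.",
}

def check_shared_link(url: str) -> str | None:
    """Return a warning message if the URL matches known misinformation."""
    key = url.removeprefix("https://").removeprefix("http://")
    explanation = KNOWN_FALSE.get(key)
    if explanation is None:
        return None  # nothing known about this link; stay silent
    return f"Warning: this article is disputed. {explanation}"

print(check_shared_link("https://example.com/pope-endorses-trump"))
```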

We can use AI to better listen to what people have to say and make sure their voices are clearly heard by their elected representatives. Based on these insights, we can deploy micro-targeting campaigns that educate voters on a variety of political issues so they can make up their own minds.

People are often overwhelmed by political information in TV debates and newspapers. AI can help them discover the political positions of each candidate based on what they care about most. For example, if a person is interested in environmental policy, we could use an AI targeting tool to help them find out what each party has to say about the environment.
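In its simplest form, such a tool is a lookup from the topics a voter cares about to each party’s stated position. The parties and positions below are invented placeholders; a real system would extract them from manifestos, speeches and voting records:

```python
# Invented placeholder positions, keyed by party and topic.
PARTY_POSITIONS = {
    "Party A": {"environment": "A carbon tax and renewable energy subsidies."},
    "Party B": {"environment": "Deregulation and expanded nuclear power."},
}

def positions_on(topic: str) -> dict:
    """Collect every party's stated position on the given topic."""
    return {
        party: positions.get(topic, "No stated position.")
        for party, positions in PARTY_POSITIONS.items()
    }

for party, stance in positions_on("environment").items():
    print(f"{party}: {stance}")
```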

In other words, we can use AI techniques to counteract computational propaganda and break up echo chambers. Crucially, personalized political ads must always serve the voters who receive them and help them become better informed, rather than undermine their interests.

Lost in regulation

An alternative scenario is more regulation to restrict computational propaganda. Stricter rules on data protection and algorithmic accountability could also reduce the extent to which machine learning can be abused in political contexts. But stricter rules could equally stifle the beneficial uses of AI described above.

Regulation always moves slower than technology. The EU General Data Protection Regulation promises EU citizens a universal “right to explanation” when they are affected by automated decision-making systems.

However, there are many misconceptions around the extent to which this can bring about new standards for algorithmic accountability and transparency. In particular, the legislation lacks precise language and any well-defined safeguards against the abuse of AI systems. Furthermore, it only mandates a right to be informed, rather than the right to opt out of any data operations and automated decision-making systems altogether.

The use of AI techniques in politics is not going away anytime soon; it is simply too valuable to politicians and their campaigns. In addition to the long-term efforts of regulators, the political world should commit to using AI ethically and judiciously in the short term, to ensure that attempts to sway voters do not end up undermining democracy. Artificial intelligence is part of our politics now – so let’s make it work for everyone.


