Is there a more significant issue in the public domain than the spread of misinformation online? “Post-truth” has entered the lexicon, fake news has played a significant role in political referendums in Europe and the election of a United States President, media organizations across the board have been accused of lying, and serious political players are saying that there's no such thing as facts anymore. At a time like this, finding a commonly agreed basis for reality has become a fundamental challenge for all citizens.

For the social media platforms that serve as the ether through which both truth and lies travel, these are existential challenges. Governments in Europe have warned Facebook, and the UK government has opened a formal inquiry into the issue. Facebook has recognised the problem at the highest levels and is rolling out fixes around the world. The BBC has established a dedicated team, and Le Monde has launched a suite of fact-checking products.

The World Economic Forum identified the spread of misinformation online as a major risk in its Global Risks Report as far back as 2013. The vulnerability of digital platforms to attack has been clear for years: our Annual Meeting has also been a victim of spambot attacks in the past. Our hashtag #wef12 for Davos 2012 was mobbed by a bot, and the digital communications team spent all night working with Twitter to manually delete thousands of unmentionable tweets.

Risks are not limited to a single platform. On Twitter, fake news can go viral through large botnets, managed by both software agents and unwitting users whose accounts have been hijacked for that purpose. In other words, malware can take control of large numbers of user accounts, and an army of bots then shares fake news via millions of people's Facebook or Twitter feeds.

Facebook pages and groups can also be used for spreading false news. In 2013, a study reported in the Guardian demonstrated how pages with millions of "likes" were used to spread links to sites carrying false news.

There are numerous "black hat" techniques for boosting sites that share false information to the top of Google's search results. Very often, the criminals who spread false information also use such techniques to penalise sites that disseminate legitimate news. New platforms such as Instagram and Snapchat, and apps like WhatsApp and WeChat, are not immune. Indeed, uploading a video containing false news to Instagram, posting a self-destructing story on Snapchat, or mass-spamming false news to a list of numbers via WhatsApp or WeChat is not complicated. Defending against these attacks is difficult.

The question on everyone's mind is: how do we deal with this issue? Unfortunately, there is no algorithm that can block misinformation, and no government alone can keep its citizens safe from misinformation and other digital threats. However, many initiatives could produce positive and tangible outcomes, but only if carried out in collaboration between the various stakeholders.

An experimental factbot

As an example, we developed an experimental Twitterbot able to follow specific news and commentary about our Davos meeting and to provide context or corrections: that is, linking to a webpage on our official website addressing items such as fake or unverified news, and responding with our viewpoint on the issue.

For instance, we identified an article published by Fox News that was spreading false facts about the World Economic Forum. We drafted a lengthy, accurate reply and published it on our own platform. We then programmed the bot to identify significant users on Twitter (those who are verified, or who have over 2,000 followers) and to reply to those users with an @mention, attaching the accurate story. The bot account was transparent: openly owned by the World Economic Forum, and labelled as automated.
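The filtering logic described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration only: the `User` class, the `CORRECTION_URL`, and all names are stand-ins for whatever a real Twitter client library would provide, not the Forum's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the user object a Twitter client library
# would return; field names are illustrative, not the real API.
@dataclass
class User:
    handle: str
    verified: bool
    followers: int

FOLLOWER_THRESHOLD = 2000  # the threshold described in the text
# Placeholder URL standing in for the published correction.
CORRECTION_URL = "https://www.weforum.org/agenda/example-correction"

def is_significant(user: User) -> bool:
    """A user is 'significant' if verified or above the follower threshold."""
    return user.verified or user.followers > FOLLOWER_THRESHOLD

def build_reply(user: User) -> str:
    """Compose the @mention reply linking to the accurate story."""
    return f"@{user.handle} Here is our response to this story: {CORRECTION_URL}"

# Example: only the verified account and the large account get a reply.
users = [
    User("major_outlet", verified=True, followers=500),
    User("casual_user", verified=False, followers=120),
    User("influencer", verified=False, followers=25000),
]
replies = [build_reply(u) for u in users if is_significant(u)]
```

In practice the reply would be posted through the platform's API rather than collected in a list, but the selection criterion, verified status or follower count, is the editorially meaningful part.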

Our bot was designed to help social media users quickly identify fake news and respond appropriately, fostering collaboration between our editorial and technical teams. In contrast to bots that exist to automate and amplify opinions on Twitter, it was a benevolent experiment, and one that has fostered new ideas and projects to alert users worldwide to fake news. This is the combination of editorial and technical responses required to battle a challenge that is editorial in origin but technical in its amplification.

Needless to say, we are living in an increasingly interconnected world that needs collaboration, dialogue and transparency to prevent the spread of misinformation on any given topic or event, with all its nefarious consequences. In this case, our organisation and technology companies have achieved significantly positive results. The challenges before us are complex, and it is only through collaboration and cooperation between the different stakeholders that we will have the strength to deal with them.