• The use of deepfake technology is growing in volume and sophistication.
  • Awareness and vigilance are our best weapons against this threat.
  • These technologies may be smart - but they are not human. We are.

Just when you thought modern life couldn’t get any crazier, a video emerged during the run-up to the recent UK election in which the Prime Minister, Boris Johnson, appeared to endorse his political opponent Jeremy Corbyn.

“Appeared” is the important word here, because this was actually just one of the latest in a steady stream of deepfakes – video and audio clips in which artificial intelligence simulates real people doing unreal things.

Of course, humans have been faking it for centuries. From tattoos and piercings to face paints and wigs, we love altering ourselves and indulging in a bit of make-believe. My own little secret for years was that I wore green-coloured contact lenses. I did need them for short-sightedness – the colour was purely a personal choice. And who can seriously say they haven’t tried an Instagram filter or two?

But there’s a darker side to this story.

In an increasingly divisive political climate, sabotaging politicians by showing them in fictionalized situations could be personally and politically devastating. The use of deepfakes in creating pornography is another disturbing trend.

It’s something that could potentially start to do real damage to businesses, too. Cybercriminals have already fooled a company into making a $243,000 wire transfer using an AI-powered deepfake of its CEO’s voice – and who knows how many other stories have gone unreported?

Recently, one of my team received a WhatsApp voice message from someone pretending to be our managing partner. Others received an email, supposedly from me, asking for a wire transfer to be made. In both cases, the phishing attempts didn’t succeed, but it was human instinct rather than formal security controls that saved the day.

So how can we protect ourselves as individuals and organisations from deepfake attacks?

It comes down to being super vigilant. Organizations can help by making sure employees undergo a thorough cybersecurity awareness programme that is updated frequently to inform them about the latest threats, and how to react. Here are a few things to think about.

1. Choose your information sources wisely

There has been a rise in the use of social media as a news source, particularly among younger people – which is not surprising, given they have grown up with digital media. Meanwhile, in non-Western countries like Brazil, Malaysia and South Africa, WhatsApp has become a primary network for sharing and discussing news.

In both scenarios, people will be seeing a mixture of genuine news, fake news, and subjective opinion presented as fact – sometimes from authentic sources, and sometimes from bots.

Perhaps this is why the same research also reveals that 55% of those surveyed are concerned about their ability to tell what’s real from what’s fake online, while 26% said they had started relying on “more reputable” news sources. Of course, what counts as reputable is still highly subjective, so if you want a clear, evidence-based analysis of a story, fact-checking sites like Snopes are probably the most useful resource.

2. Be careful about the information you share online

I wouldn’t say I’m reckless about my internet profile, but like many people there are certain things I do without thinking too much, because they are just the norm these days, and convenience outweighs caution. My husband, on the other hand, wants nothing to do with any of it and is extremely careful about any information he puts out there. He even deleted his Facebook account and was upset when a photo of him appeared on Google.

He probably has the right approach. Before you share information with any organization online, it’s incredibly important to be sure you can trust it. There are lots of tips available out there, like this post from security firm ASecureLife.

Another thing to do on a regular basis is to Google yourself – and your kids, if you have them – to get an idea of your online footprint. The same goes for any accounts that you’ve created, for example to buy something online. You’ll probably be horrified by the number of places that hold information about you. If there’s anything lying dormant, don’t just leave it there – get it deleted.

3. Run the “Real person, or bot?” test

Although bots are capable of more and more, there’s one thing they haven’t cracked yet – coming across as convincingly human. Just think about the last time you used the chat facility with a brand – you were probably able to tell from the language whether it was a chatbot or a person. We instinctively know what sounds right, and how real people speak and write.

Most people tend to message in a fragmented way, so if you’re getting only full and formal sentences in response, that’s probably a sign – as is getting the same answer more than once, or superfast replies. Even when we’re LOLing or communicating in emoji, humans tend to take longer than a split second to compose and send what we want to say. Especially when we’re looking for an emoji that doesn’t exist.
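The tells above – uniformly formal sentences, verbatim repeated answers, near-instant replies – can be turned into a simple screening heuristic. Here is a minimal Python sketch; the function name, thresholds and signals are all illustrative assumptions, not a production bot detector.

```python
def looks_like_bot(messages, reply_delays_seconds):
    """Collect bot-like signals from a chat transcript.

    Heuristics (illustrative thresholds, not empirically tuned):
      1. every message is a full, formally punctuated sentence
      2. the same answer appears more than once, verbatim
      3. a reply arrives faster than a human could compose it
    Returns the list of signals found; empty means nothing suspicious.
    """
    signals = []

    # 1. Humans message in fragments; bots often send only complete,
    #    capitalised sentences ending in punctuation.
    formal = all(
        m and m[0].isupper() and m.rstrip().endswith((".", "!", "?"))
        for m in messages
    )
    if formal and len(messages) >= 3:
        signals.append("uniformly formal sentences")

    # 2. Identical repeated answers suggest canned responses.
    if len(set(messages)) < len(messages):
        signals.append("repeated identical answer")

    # 3. Humans rarely compose and send a reply in under a second.
    if reply_delays_seconds and min(reply_delays_seconds) < 1.0:
        signals.append("near-instant reply")

    return signals


replies = [
    "Thank you for contacting us. How may I assist you today?",
    "I am sorry to hear that. Could you provide your order number?",
    "Thank you for contacting us. How may I assist you today?",
]
print(looks_like_bot(replies, [0.4, 0.5, 0.3]))
```

Any single signal can have an innocent explanation – a fast typist, a careful writer – so a sketch like this is best read as a prompt for human judgement, not a verdict.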


What should leaders do?

Deepfake attacks are probably the most sinister threat we’ve experienced yet – if you’re not familiar with the person being impersonated, how are you to know it’s a fake? To me, this gives organizations a responsibility that goes beyond any cybersecurity measures they have in place.

It falls to anyone with the authority to make financial or other important business decisions to make themselves known to employees in a genuinely authentic way. That means getting out there and talking with people, being in their company, listening to one another and letting them get a feel for who you really are. It means not hiding anything, or trying to be something you’re not. And it means fostering a culture of genuine openness, so that people feel comfortable questioning something that doesn’t feel right, even if it has seemingly come from a figure of authority.

Lots of research has already shown the benefits of this type of approach, and the better employees get to know the real you, the lower the risk that they’ll be duped by any pretenders. Because while mimics are clever, and deepfakes are designed to be convincing, at the end of the day they’re not the real deal. And we are.