How can we combat the worrying rise in the use of deepfakes in cybercrime?

Deepfake news is a growing global concern. Image: Bank Phrom on Unsplash

Gretchen Bueermann
Knowledge Lead, Centre for Cybersecurity, World Economic Forum
Natasa Perucica
Research and Analysis Specialist, Cybersecurity Industry Solutions, World Economic Forum

This article is part of: Centre for Cybersecurity

  • In 2022, 66% of cybersecurity professionals experienced deepfake attacks within their respective organizations.
  • Researchers predict that as much as 90% of online content may be synthetically generated by 2026.
  • With the development of new detection technologies and a continued focus on education and ethical considerations, we can work together to combat deepfakes and ensure that deepfake technology is used for the greater good.

In recent years, we have seen a rise in deepfakes. Between 2019 and 2020, the amount of deepfake content online increased by 900%. Forecasts suggest that this worrisome trend will continue in the years to come – with some researchers predicting that “as much as 90% of online content may be synthetically generated by 2026.” Often used to deceive and to conduct social engineering attacks, deepfakes erode trust in digital technology and increasingly pose a threat to businesses.

In 2022, 66% of cybersecurity professionals experienced deepfake attacks within their organizations. One example of deepfake crime is the creation of fake audio messages from CEOs or other high-ranking executives, using voice-altering software to impersonate them. These manipulated messages often contain urgent requests for the recipient to transfer money or disclose sensitive information.

Research shows that the banking sector is particularly concerned about deepfake attacks, with 92% of cyber practitioners worried about fraudulent misuse of the technology. Services such as personal banking and payments are of particular concern, and these worries are not baseless. To illustrate, in 2021, a bank manager was tricked into transferring $35 million to a fraudulent account.

The high cost of deepfakes is also felt across other industries. In the past year, 26% of small companies and 38% of large companies experienced deepfake fraud, resulting in losses of up to $480,000.

Deepfakes also have the potential to undermine election outcomes, social stability and even national security, particularly in the context of disinformation campaigns. In some instances, deepfakes have been used to manipulate public opinion or spread fake news, sowing distrust and confusion among the public.

AI's impact on the risk of deepfakes

The development of artificial intelligence (AI) has significantly increased the risk of deepfakes. AI algorithms, including generative models, can now create media that are difficult to distinguish from real images, videos or audio recordings. Moreover, these algorithms can be acquired at a low cost and trained on easily accessible datasets, making it easier for cybercriminals to create convincing deepfakes for phishing attacks and scam content.

As deepfakes evolve, so do the technologies and tools to detect them. Deepfake detectors can now analyse biometric signals, such as a heartbeat or the frequency of a human voice, to help determine whether video or audio content is authentic.

That said, as a dual-use technology, AI can also complicate matters further by generating synthetic content that is specifically designed to evade current deepfake detection tools.

The compounding risks of deepfake scams and identity theft

While deepfake scams pose significant risks, they can also compound the risks of other cybercriminal activities, such as identity theft. Deepfakes, for example, can be used to create fake identification documents, making it easier for cybercriminals to impersonate individuals or gain access to secure systems. Moreover, deepfakes can be used to create fake audio or video recordings of individuals, which can be used to blackmail or extort them.

Identity theft, in turn, can exacerbate the risks posed by deepfake scams. For instance, cybercriminals can use stolen identities to create more convincing deepfakes, or use deepfakes to perpetrate further identity theft.

To mitigate these compounding risks, we must take a multi-pronged approach. This includes investing in more sophisticated deepfake detection technologies, as well as improving identity verification systems, including the use of biometric and liveness verification, to prevent the misuse of deepfakes in identity theft.

Potential solutions

To address these emerging threats, we must continue to develop and improve deepfake detection technologies. This can involve the use of more sophisticated algorithms, as well as the development of new methods that can identify deepfakes based on their context, metadata or other factors.
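
As a simple illustration of what metadata-based screening could look like, the hedged Python sketch below flags media files whose metadata carries common red flags. The field names, synthesis-tool markers and thresholds are assumptions for demonstration only; real detectors combine many stronger signals, and metadata alone is easily stripped or forged.

```python
# Illustrative sketch only, not a production deepfake detector.
# In practice, metadata would be extracted with a tool such as exiftool
# or Pillow; here it is modelled as a plain dict (an assumption).

# Hypothetical list of generation tools that sometimes appear in a
# file's "software" tag.
SYNTHESIS_MARKERS = {"stable diffusion", "dall-e", "midjourney", "deepfacelab"}

def metadata_red_flags(metadata: dict) -> list[str]:
    """Return heuristic warnings for a media file's metadata."""
    flags = []

    # Red flag 1: the editing-software tag names a known synthesis tool.
    software = metadata.get("software", "").lower()
    if any(marker in software for marker in SYNTHESIS_MARKERS):
        flags.append(f"known synthesis tool in 'software' tag: {software!r}")

    # Red flag 2: no camera make/model, so the file may not originate
    # from a physical capture device.
    if "camera_make" not in metadata and "camera_model" not in metadata:
        flags.append("no camera make/model recorded")

    # Red flag 3: internally inconsistent timestamps.
    created, modified = metadata.get("created"), metadata.get("modified")
    if created and modified and modified < created:
        flags.append("modification timestamp precedes creation timestamp")

    return flags

suspicious = metadata_red_flags({"software": "DeepFaceLab 2.0"})
clean = metadata_red_flags({
    "camera_make": "Canon", "camera_model": "EOS R5",
    "software": "Adobe Lightroom",
    "created": "2023-01-01", "modified": "2023-01-02",
})
```

Such checks are cheap to run in bulk, which is why context and metadata can usefully complement, but never replace, content-level detection models.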

Another potential solution is to promote media literacy and critical thinking. By educating the public on the dangers of deepfakes and how to spot them, we can reduce the impact of these malicious campaigns. Incorporating a digital trust framework into everyday use can help reassure individuals that digital technologies and services – and the organizations providing them – will protect all stakeholders’ interests and uphold societal expectations and values.

Finally, we must consider the ethical implications of AI and deepfake technology. Governments and regulatory bodies can play a significant role in shaping policies that regulate deepfake technology and promote transparent, accountable and responsible technology development and use. By doing so, we can ensure that AI does not cause harm.

In conclusion, deepfake technology is a growing threat, particularly in the hands of cybercriminals. With the rise of AI, the risks posed by deepfakes are becoming more significant. However, with the development of new detection technologies and a continued focus on education and ethical considerations, we can work together to mitigate these risks and ensure that deepfake technology is used for the greater good.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
