AI and cybersecurity: How to navigate the risks and opportunities

AI is being used to bolster cybersecurity.

Giulia Moschetta
Research and Analysis Specialist, Centre for Cybersecurity, World Economic Forum
Joanna Bouckaert
Community Lead, Centre for Cybersecurity, World Economic Forum
This article is part of: Centre for Cybersecurity
  • In the race for AI dominance, cybersecurity considerations take centre stage.
  • Although AI will augment the effectiveness of cyberattacks, their impact can be offset by using AI technologies to enhance cyber defence capabilities.
  • Harnessing the advantages brought by AI will require global public-private cooperation to ensure its applications can be translated in an equitable and secure manner across society.

New technological developments in artificial intelligence (AI) have taken the world by storm, prompting a race for governments to gain strategic advantage and for tech companies to develop and commercialize new AI systems.

Emerging AI applications have the potential to bring numerous benefits to society but can also have severe security implications, ranging from threats to national security and the destabilization of democracy to large-scale economic disruption.

These risks are amplified in this election year, when over 4 billion people will head to the ballot box. Cyberattacks are a key risk highlighted in both the Munich Security Report 2024 and the World Economic Forum’s Global Risks Report 2024, the latter also highlighting the emergence of AI-generated misinformation and disinformation as the second most severe global risk anticipated over the next two years.

The implications of AI for cybersecurity are numerous and evolving, with threat actors leveraging these new technologies to their advantage, augmenting their capabilities for cyberattacks.

AI will lead to the evolution and enhancement of existing tactics, techniques and procedures, and lower the barrier to entry for cybercriminals by reducing the technical know-how required to launch cyberattacks. Social engineering is also being boosted by new large language models (LLMs), with threat actors creating increasingly sophisticated spear-phishing campaigns.

As the technology evolves, the difference between synthetic media and human-generated content is becoming harder to discern, even for detection technologies, making deepfakes more realistic, more targeted and more dangerous than ever before.

Recently, the Hong Kong branch of a multinational company was affected by a deepfake-enabled fraud. Malicious actors used a deepfake to pose as the company’s chief financial officer in a video conference call, deceiving an employee into paying out $25 million.

How can AI enhance cybersecurity?

The increasing threats brought by the advancement of AI should not overshadow the great benefits new AI models can also bring to cybersecurity. As these tools are exploited by malicious cyber actors, cyber defenders can in turn use AI to improve their cybersecurity capabilities.

Although AI technologies have been used to develop cybersecurity solutions for several years, the advent of generative AI has prompted more organizations to bolster their investments in AI technologies for cybersecurity.

As Cisco’s Jeetu Patel asserted, “it is a great time for tipping the scales in favour of the defenders”. This notion is widely shared across the tech community, as demonstrated at the Munich Security Conference, where over 15 leading enterprises pledged to help prevent deceptive AI content from interfering with this year’s global elections.

Moreover, Google’s Sundar Pichai announced a new workstream to bolster cyber defences by speeding up the work of detection and response teams. Microsoft is leading the effort to detect and block malicious threat actors’ use of Microsoft’s services, and has also launched a new tool enabling users to digitally sign and authenticate media. Meta announced new technological standards to identify and mark AI-generated content.

AI technologies can also bolster cybersecurity training, be it to educate the general public or to help train the next generation of cyber defenders. The estimated shortfall of 4 million cybersecurity professionals reported in the ISC2 Cybersecurity Workforce Study is alarming.

To help tackle this gap, in April 2023 the Forum launched a multi-stakeholder initiative entitled “Bridging the Cyber Skills Gap”. Its aim is to create a strategic cybersecurity talent framework and devise actions to help individuals enter and thrive in the cybersecurity workforce.

What's being done to ensure responsible use of AI?

Governments and international organizations are also playing a crucial role in establishing guidelines and regulatory frameworks for the development of safe AI. Such regulations will guide the development, use and implementation of AI technologies in a way that will benefit societies while limiting the harm they may cause.

Some recent examples include: the development of the European Union’s AI Act; the establishment of the UN’s advisory body on AI governance; the UK’s Guidelines for secure AI system development; the White House Executive Order on AI Safety and the creation of the US AI Safety Institute.

To foster public-private cooperation, the Forum launched the AI Governance Alliance in April 2023, uniting industry leaders, governments, academic institutions, and civil society organizations to champion responsible global design and release of transparent and inclusive AI systems.

In the race for AI dominance, the intricate relationship between cybersecurity and these cutting-edge technologies takes centre stage. Although AI will augment the effectiveness of cyberattacks, their impact can be offset by the benefits of using AI technologies to enhance cyber defence mechanisms.

As organizations navigate the complex interplay between AI implementation, security threats, and defence strategies, a comprehensive understanding of the risks and rewards will be paramount for unlocking the full potential of AI while ensuring growth.

Building on the work of the Forum’s AI Governance Alliance, the Centre for Cybersecurity is teaming up with the University of Oxford to steer global leaders’ strategies and decision-making on cyber risks and opportunities in the context of AI.

The rapid technological advancements in the development of AI have unleashed a transformative era, with both unprecedented benefits and challenges. Harnessing the advantages brought by AI will require global public-private cooperation to ensure its applications can be translated in an equitable and secure manner across society.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
