
Securing innovation: A leader’s guide to managing cyber risks from AI adoption

Cybersecurity must be considered across the whole path of AI adoption


Sadie Creese
Professor of Cybersecurity, University of Oxford
Akshay Joshi
Head, Centre for Cybersecurity, World Economic Forum
  • Leaders must embed cybersecurity at every stage of artificial intelligence (AI) adoption to safeguard sensitive data, ensure resilience and enable responsible innovation.
  • A risk-reward approach aligns AI adoption with organizational goals by identifying vulnerabilities, mitigating risks and reinforcing stakeholder trust.
  • Multistakeholder collaboration among AI experts, regulators, and policymakers is essential to addressing AI-driven vulnerabilities and building confidence in AI technologies.

In a digital-first world, artificial intelligence (AI) systems have become a cornerstone of organizational innovation and operational efficiency. However, as leaders drive this transformation, cybersecurity must remain paramount.

AI systems are not immune to attack, facing threats that include adversarial attacks, data poisoning, and the hacking of sensitive algorithms. Leaders must recognize that integrating AI expands their organization’s attack surface, making robust cybersecurity measures non-negotiable.

In 2024, the World Economic Forum’s Centre for Cybersecurity joined with the University of Oxford’s Global Cyber Security Capacity Centre on the AI & Cyber: Balancing Risks and Rewards initiative to steer global leaders’ strategies and decision-making on the cyber risks and opportunities of AI adoption.

The research culminated in the white paper Industries in the Intelligent Age - Artificial Intelligence & Cybersecurity: Balancing Risks and Rewards, published in January 2025. The paper is a guide to managing the cyber risks of AI adoption, empowering leaders to invest and innovate in AI with security and resilience in mind so they can seize emerging growth opportunities.

To unlock AI’s full potential, developing a comprehensive understanding of the related cyber risks and required mitigation measures is essential.


The critical need for cybersecurity

The Global Cybersecurity Outlook 2025 reveals that 66% of organizations expect AI to significantly impact cybersecurity in the coming year. Yet, only 37% have processes to evaluate the security of AI systems before deployment.

This gap highlights the risk that organizations are adopting AI systems without fully assessing and addressing the associated cybersecurity risks, potentially exposing their environments to new vulnerabilities.

Leaving AI systems susceptible to data breaches, algorithm manipulation or other hostile activity could lead to significant operational and reputational damage. By assessing and mitigating cyber risks, leaders can align AI adoption with organizational goals and resilience needs.

Moreover, the data that fuels AI models is often proprietary or sensitive; a compromise could mean financial loss, penalties and reputational harm. From ensuring secure data pipelines to implementing stringent access controls, cybersecurity should be embedded at every stage of the AI lifecycle.

A clear guide to addressing these risks is essential for informed decision-making and for ensuring that strategic choices are both secure and compliant with regulations. It also reinforces trust among stakeholders, enabling sustainable and responsible AI-driven growth.

Leaders must champion a culture in which cybersecurity is seen not as a barrier to innovation but as a foundational pillar of sustainable growth. Senior risk owners also have a critical role to play in establishing oversight and control of AI-related cyber risks and in managing them proactively.

Strategically aligning AI initiatives with a robust cybersecurity framework also reassures stakeholders, from customers to investors, of the organization’s commitment to safeguarding digital assets.

By prioritizing these considerations, top executives protect their enterprises and position them as trusted, resilient and forward-thinking players.


A risk-based approach

Taking a risk-based approach is critical for secure AI adoption. Organizations must assess potential vulnerabilities and risks that AI might introduce in light of the opportunities it brings, evaluate the possible negative impacts on the business and identify the necessary controls to mitigate these risks.

This approach ensures that AI initiatives align with the organization's overall business goals and remain within the scope of its risk tolerance.

Embedding cybersecurity throughout AI deployment

All organizations should address AI-related cyber risks regardless of where they are in the AI adoption journey. Businesses already using AI should map their implementations and apply bolt-on security solutions.

Organizations earlier in the adoption journey should conduct a risk-reward analysis to determine whether a given AI implementation aligns with their operational and business goals. This approach fosters security by design, ensuring that AI adoption advances both innovation and resilience.

Taking an enterprise view

AI systems do not exist in isolation. Organizations must consider how the business processes and data flows surrounding AI systems can be designed to reduce the impact of a cybersecurity failure.

This involves integrating controls into wider governance structures and enterprise risk management processes.

Collaborative responsible innovation

To harness AI’s benefits, organizations must adopt a multistakeholder approach that prioritizes risk-reward analysis and cybersecurity. This ensures resilience, safeguards investments and supports responsible innovation.

Collaboration between AI and cybersecurity experts, regulators, and policymakers is crucial to aligning tools, sharing best practices, and establishing accountability. This joint approach can address AI-driven vulnerabilities while fostering trust and confident innovation.

This work was developed in collaboration with the AI Governance Alliance – launched in June 2023 – to provide guidance on the responsible design, development and deployment of artificial intelligence systems. Read more on its work here.

Additional contributors to this article include Louise Axon, Research Fellow, Global Cyber Security Capacity Centre, University of Oxford; Joanna Bouckaert, Community Lead, Centre for Cybersecurity, World Economic Forum; and Jamie Saunders, Oxford Martin Fellow, University of Oxford.
