Artificial Intelligence

Navigating AI: What are the top concerns of Chief Information Security Officers?

The crucial distinction between immediate priority risks and long-term concerns of AI must be assessed.

The crucial distinction between immediate priority risks and long-term concerns of AI must be assessed. Image: Google Deepmind/Unsplash

Sheryl Bunton
Senior Vice-President and Chief Information Officer, Gulfstream Aerospace
Janus Friis Bindslev
Chief Digital Risk Officer, PensionDanmark
Joanna Bouckaert
Community Lead, Centre for Cybersecurity, World Economic Forum
Luna Rohland
Community Coordinator, Centre for Cybersecurity, World Economic Forum
Share:
Our Impact
What's the World Economic Forum doing to accelerate action on Artificial Intelligence?
The Big Picture
Explore and monitor how Artificial Intelligence is affecting economies, industries and global issues
A hand holding a looking glass by a lake
Crowdsource Innovation
Get involved with our crowdsourced digital platform to deliver impact at scale
Stay up to date:

Artificial Intelligence

Listen to the article

  • The cyber risk landscape has changed significantly with the release of generative AI systems, prompting organizations to assess the cybersecurity implications thereof.
  • The crucial distinction between immediate priority risks and long-term concerns of AI must be assessed.
  • We talked to CISOs (Chief Information Security Officers) to identify their immediate top-of-mind concerns related to AI and the actions they are taking to tackle these.

As the transformative potential of generative AI unfolds, associated risks and apprehensions regarding future developments of the technology are being discussed widely. Society’s increasing reliance on AI, the potential loss of control over systems, the misalignment of the system’s goals with human values, and the prospect of resource competition with AI systems are among the intensely debated topics.

While discussions about these long-term risks are essential, cybersecurity leaders confront different, immediate risks linked to the widespread adoption of generative AI systems. Several cases of data breaches through those systems, such as the incident earlier this year when Samsung engineers unintentionally leaked internal source code by uploading it to ChatGPT, are first examples of how generative AI advancements materialize into novel cybersecurity vulnerabilities.

Given that cybersecurity leaders must take prompt action to address these immediate concerns, it becomes imperative to pinpoint those pressing issues and, as perhaps often overlooked, differentiate them from the more long-term risks.

Discover

How is the Forum tackling global cybersecurity challenges?

CISO’s immediate concerns

We talked to CISOs to identify the key immediate concerns they are encountering regarding the recent AI advancements.

  • Enhancement of Adversarial Capabilities: Generative AI capabilities decrease the cost of developing and launching cyber-attacks - such as phishing, malicious code generation and deep fakes - and enhance capabilities for automating and scaling such attacks. Therefore, generative AI makes it easier for cybercriminals to detect and exploit vulnerabilities and security gaps.
  • Improper Use of Generative AI Systems & Data Leaks: The improper use of generative AI systems poses a range of challenges, including inadvertent disclosure of sensitive information. This also implies that confidential information can intentionally or unintentionally be utilized to train generative AI systems, raising severe confidentiality concerns. There is also a risk of inadvertently making generative AI systems do what they are not supposed to, e.g. through prompt injection – or using it for purposes that it was not meant for initially, leading to adverse results.
  • The black box effect: The integrity of AI models is difficult to evidence. The complex codes used to develop the models are not always fully understood, including by the developers themselves. This leads to a lack of transparency of those systems, making them harder to trust. In addition, a holistic security by design approach and adequate security controls are also often lacking in the development of generative AI systems.
  • Cost of Implementing and Securing Generative AI Systems: The cost implications of generative AI advancements are multi-faceted, encompassing the expenses associated with purchasing, developing, and implementing generative AI systems. Moreover, increased energy expenditure is tied to the utilization of AI systems. Among further cost implications are also those regarding the cybersecurity team, including additional costs for upskilling employees, launching awareness campaigns, and developing novel ways of securing those systems.
Discover

How is the World Economic Forum creating guardrails for Artificial Intelligence?

Concrete actions in the face of novel AI-related challenges

Although significant efforts are still needed to understand the ramifications of generative AI systems and address them holistically, organizations are taking initial steps to navigate emerging issues, regarding implications in the field of cybersecurity and beyond:

  • Organization-Wide Consultations: Encourage employees to test, discuss the technology, and share their related concerns. In order to help a successful adoption, fostering a culture of engagement and innovation related to AI advancements is key.
  • Working Groups: Establish targeted working groups to address priority challenges, from intellectual property protection to insider threats.
  • Governance Boards: Define guiding principles and guardrails for the utilization of AI systems by cross-disciplinary teams, which may involve cybersecurity, IT, legal, HR, data protection, and client-facing departments.
  • Education and Awareness Programs: Implement organization-wide AI education and awareness programs fostering the responsible use of tools and the comprehension of associated risks.
  • Adaption of Cybersecurity Strategies, Policies and Practices: Review existing cybersecurity strategies and policies to cater to emerging cyber threats associated with AI advancements. This encompasses, for instance, mandating employees to disclose their use of generative AI tools, restricting their use or providing guidelines on how to employ them when handling company data on personal devices.
  • Review of Technical Controls: Adapt cybersecurity controls in light of the shifting threat landscape and the adoption of novel AI systems.
Have you read?
  • Global Risks Report 2023

A call for multi-stakeholder action

While initial actions have been taken across organizations, these efforts have primarily been characterized by tactical and isolated approaches. The journey to comprehensively address cybersecurity implications posed by generative AI has just begun. Establishing cybersecurity fundamentals for generative AI is part of the necessary groundwork for establishing trustworthy, ethical and responsible AI.

Loading...

In this landscape, cybersecurity leaders are confronted with opportunities for action, encompassing tasks such as identifying and harnessing AI's defensive potential, clarifying the governance of cybersecurity risks, and providing inputs on regulation intended to ensure the safe adoption and utilization of generative AI.

A holistic, multi-stakeholder approach is undoubtedly essential to navigating the implications of generative AI. As organizations, regulators, and technologists collaborate, the transformational power of those technological advancements can be harnessed responsibly, thereby securing our digital future.

In this context, the World Economic Forum launched the AI Governance Alliance in June 2023. It aims to provide guidance on the responsible design, development and deployment of artificial intelligence systems. Read more on its work here.

Don't miss any update on this topic

Create a free account and access your personalized content collection with our latest publications and analyses.

Sign up for free

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.

Share:
World Economic Forum logo
Global Agenda

The Agenda Weekly

A weekly update of the most important issues driving the global agenda

Subscribe today

You can unsubscribe at any time using the link in our emails. For more details, review our privacy policy.

Causal AI: the revolution uncovering the ‘why’ of decision-making

Darko Matovski

April 11, 2024

About Us

Events

Media

Partners & Members

  • Join Us

Language Editions

Privacy Policy & Terms of Service

© 2024 World Economic Forum