
Cybersecurity is on the frontline of our AI future. Here's why

Cybersecurity is one of AI's core applications, against a backdrop of a fast-evolving threat landscape.


Gary Steele
President and Chief Executive Officer, Splunk

This article is part of: World Economic Forum Annual Meeting
  • Cybersecurity is one of AI's core applications, against a backdrop of a fast-evolving threat landscape.
  • Over the next year, AI will transform cyberthreat detection and risk assessment.
  • Keeping a human in the loop will be essential to responsible AI-powered cybersecurity.

Three innovations have revolutionized the digital world in the last few decades: the internet, the cloud and AI. Each took time for organizations to understand and apply in the most productive and impactful way. By that measure, AI is still in its infancy. Yet in just over a year, generative AI has moved from limited, mostly experimental applications to the verge of becoming an essential core technology.

As of 2023, there were more than 60,000 AI-focused companies, all seeking to ride the boom. While many of these organizations will have a benign impact on the broader technology landscape, others will introduce capabilities that dramatically simplify, accelerate and strengthen the ways in which technology is embedded in society and business. As with the cloud, AI does not revolutionize anything by itself; what matters is its purposeful application rather than widespread, undirected use. Organizations that figure out how to adapt to and apply AI quickly and in the right way will have a tremendous advantage in achieving their missions, just as they did with the internet and cloud.

One of the most essential applications of AI – and, therefore, a significant opportunity for continued investment – is the domain of cybersecurity. The threat landscape is vast, complex and rapidly evolving. Security teams are up against relentless bad actors and nation-states that unleash an increasingly sophisticated volley of phishing attacks, ransomware and breach attempts. In 2023, global cybercrime was estimated to amount to $8 trillion in damages.


While AI doesn't necessarily expand an organization's attack surface, it makes the existing attack surface more vulnerable and boosts the productivity of bad actors. In a recent survey of chief information security officers (CISOs), 70% said they believe AI gives attackers the advantage over defenders.

We're already seeing generative AI make phishing attacks more convincing and sophisticated while increasing their volume and efficiency, making it even harder for defenders to protect people and assets. Attackers are even using AI to translate deceptive emails, letting them scale operations across the globe virtually at will. It is this combination of greater volume and greater fragmentation that creates the biggest problems for defenders.

Harnessing AI for security resilience

The answer to these escalating attacks lies in focused, directed applications of AI. Government agencies and industry leaders must act swiftly and prudently to apply AI-powered solutions and methodologies that strengthen the effectiveness and efficiency of their security teams in this challenging landscape. In the same CISO survey, 35% reported that they are already experimenting with AI for cyber defence, including malware analysis, workflow automation and risk scoring.
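Risk scoring of the kind those CISOs describe can be as simple as combining weighted threat signals into a single priority value that orders the triage queue. A minimal sketch in Python; the factor names, weights and alert fields below are hypothetical illustrations, not any vendor's actual model:

```python
# Illustrative risk scoring: combine weighted, normalized threat signals
# into one 0-100 priority value so the riskiest alerts are handled first.
# Factor names and weights are hypothetical, not a real product's model.

WEIGHTS = {
    "asset_criticality": 0.40,  # how important the affected system is
    "exploit_available": 0.35,  # whether a public exploit exists
    "external_exposure": 0.25,  # whether the asset is internet-reachable
}

def risk_score(signals: dict) -> float:
    """Weighted sum of 0-1 signals, scaled to a 0-100 score."""
    return round(100 * sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS), 1)

def prioritize(alerts: list[dict]) -> list[dict]:
    """Sort alerts so the highest risk score is investigated first."""
    return sorted(alerts, key=lambda a: risk_score(a["signals"]), reverse=True)

alerts = [
    {"id": "A-1", "signals": {"asset_criticality": 0.2, "exploit_available": 0.0, "external_exposure": 1.0}},
    {"id": "A-2", "signals": {"asset_criticality": 1.0, "exploit_available": 1.0, "external_exposure": 0.5}},
]
queue = prioritize(alerts)  # A-2 (critical asset, known exploit) sorts first
```

Real scoring models weigh far more signals and learn their weights from data, but the design goal is the same: turn scattered evidence into a single, explainable ranking for the security team.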

I fundamentally believe that AI will transform how the world's largest and most complex organizations keep their digital systems secure and reliable. Over the coming year, we will see AI bring enormous value by automatically detecting anomalies and by leveraging predictive models that help security teams distill information, find patterns and prioritize threats. It will also recommend actions and focus users' attention where it is most needed, based on intelligent assessment of risk.
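Automatic anomaly detection of the kind described above often starts from simple statistics: flag activity that deviates sharply from a learned baseline. A minimal sketch using a z-score test; the hourly counts and threshold are illustrative assumptions, and production systems use far richer models:

```python
import statistics

def detect_anomalies(counts: list[int], threshold: float = 2.0) -> list[int]:
    """Flag indices whose count deviates from the mean by more than
    `threshold` population standard deviations (a simple z-score test).
    Real deployments use richer models, but the principle is the same."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # perfectly flat baseline: nothing stands out
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# Hypothetical hourly login-failure counts; hour 6 is a sudden spike
# that a human would want surfaced immediately.
hourly_failures = [12, 9, 11, 10, 13, 8, 250, 11]
anomalous_hours = detect_anomalies(hourly_failures)
```

The value for a security team is in the inversion of effort: instead of reading every hour's logs, analysts are pointed only at the hours that break the baseline.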

Figure: AI is already reinforcing cybersecurity. Image: Splunk 2023 CISO Report

Having a foundation of AI built into an organization’s cybersecurity strategy will be crucial in its journey to become more digitally resilient. It is likely to be one of the first frontiers within the business that adopts AI to drive real impact. This is an important moment as society’s reliance on digital services will only increase over the coming years, and a greater focus on keeping digital systems secure and reliable is now a minimum expectation from citizens and consumers. To stay on top of these expectations in a world full of disruptions, organizations are going to have to look to AI to play a significant role in shouldering the burden.

Responsible, ethical and fit for purpose

There is an abundance of opportunity for industries and governments worldwide to integrate AI across their systems and services to improve outcomes and drive progress. But leaders ought to think soberly about whether an equivalent abundance of strategy and responsibility is also present. Responsible and ethical use of AI is paramount, and that starts with transparency. At Splunk, this means AI systems and their use of data should be explainable, transparent and understandable to our customers and stakeholders. And AI systems should, by design, be unbiased and respectful of personal and organizational data.

AI should also be fit for purpose, built to solve a specific problem set. Consider the use of machine learning to better detect potential problems and help remediate them; this is an immediate and valuable application of the technology. For AI to deliver a truly meaningful and positive impact, we have to be crystal clear and intentional about where it can be applied to catalyze a more resilient digital world.

Keeping humans in the loop

Finally, it’s important to note that AI should be considered an accelerator to human decision-making, not a replacement. AI is fallible and lacks the emotional context, human understanding and common sense to fully supplant humans. Our path forward must be a human-in-the-loop approach. It is essential that AI assist human decision-making, not dictate it.

Just as we would not (and should not) trust an airline's autopilot alone to deliver passengers from departure to destination, we can't hand over responsibility for cybersecurity preparedness and response to AI. Cybersecurity expertise in the global workforce is more essential than ever, and keeping a human in the loop represents a best-of-breed opportunity: technology empowers us to scale with the growth and evolution of cyberthreats, while we continue to leverage the contextual human strengths that AI isn't ready to match yet.
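In practice, the human-in-the-loop principle can be expressed as a triage gate: the model recommends, but anything it is not highly confident about is routed to an analyst rather than acted on automatically. A schematic sketch; the field name and threshold are illustrative assumptions, not a real product's interface:

```python
def triage(alert: dict, auto_threshold: float = 0.95) -> str:
    """AI assists, humans decide: only alerts the model rates benign with
    very high confidence are closed automatically; everything else is
    escalated to an analyst with the model's assessment as context."""
    if alert["predicted_benign_confidence"] >= auto_threshold:
        return "auto-close"           # high-confidence benign: safe to automate
    return "escalate-to-analyst"      # a human reviews, informed by the model

routine = triage({"predicted_benign_confidence": 0.99})   # automated away
ambiguous = triage({"predicted_benign_confidence": 0.60}) # goes to a person
```

The threshold is the policy lever: raising it keeps more decisions with humans, lowering it automates more. Where it sits should be a deliberate organizational choice, not a model default.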

Doubtless, the cybersecurity landscape will continue to evolve, bringing new risks to business and society and greater rewards for threat actors when organizations lack resilient digital systems. How we approach the practical, responsible use of AI in cybersecurity, how we set realistic goals for industry and government, and how we measure our progress over the coming year will help shape how we manage AI's use across society to drive resilience for the long haul.


We have a lot of work to do to stay the course and drive progress, and it isn’t easy. However, I’m optimistic that continued focus on partnerships, information-sharing and discourse across industries and governments through forums like WEF can help us think and act in new ways, and further guide technologies to have widespread and lasting positive impact worldwide.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.

Related topics: Artificial Intelligence, Cybersecurity, Cybercrime, Davos Agenda
© 2024 World Economic Forum