AI agents could tip the cybersecurity balance towards defenders
AI must be harnessed to enhance cybersecurity.
- AI is becoming a powerful shield and a potential attack vector for cybersecurity.
- AI presents an opportunity to resolve vulnerabilities before code is ever deployed.
- Agentic AI has the potential to establish a new era of cyber resilience, but only if we seize this moment and shape the future of cybersecurity together.
AI is fast becoming one of the linchpins of modern business – and with it, modern IT and cybersecurity. In a few short years, our use of AI has shifted from experimental to essential, transforming the way we work and think about work. The overlap between AI adoption and cybersecurity is significant: the same technology that is transforming business is also transforming how we must defend it.
The emergence of AI-powered systems is reshaping the nature of cyber defence and the rise of Agentic AI introduces both unprecedented opportunities and complex new risks. As AI becomes a powerful cyber shield and a potential attack vector, security leaders must evolve their thinking and tooling to match.
AI’s role in shifting the cybersecurity balance
Cyber defence has historically played catch-up. Threat actors innovate faster, coordinate better and exploit gaps before organizations can patch them. In the cat-and-mouse game of cybersecurity, the advantage has sat with attackers; after all, they only need to succeed once, while defenders must block threats every time to avoid a breach.
AI presents a unique opportunity to flip the script. Imagine a future where vulnerabilities are flagged and resolved before code is ever deployed, where systems can autonomously correct security flaws as they arise and where every endpoint and agent participates in a global, self-healing defence network.
If attackers are still leading the innovation curve a few years from now, we’ll have missed the moment. Agentic AI promises to play a leading role in this shift.
Agentic AI represents a breakthrough and a burden. On one hand, these autonomous agents can respond to threats faster than any human, collaborate across environments and proactively defend against emerging risks by learning from a single intrusion attempt. It's cyber defence at machine speed. On the other hand, these same capabilities can be weaponized. Adversarial AI may soon launch highly targeted attacks that evolve in real time. Using agents, it could execute without human input and bypass traditional defences entirely. When both attackers and defenders operate at microsecond intervals, the nature of cyber conflict transforms. The line between shield and sword has never been thinner.
A new class of risk
With AI workloads, traditional cybersecurity risks still apply, but they are now compounded by entirely new threats. Prompt injection, LLM jailbreaking, model integrity manipulation and unpredictable agent behaviours are fundamentally shifting how we prepare for, monitor, detect and respond to attacks. Securing AI agents is harder than securing traditional systems because they don't operate on static logic: they learn, evolve and act on dynamic inputs.
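To make the prompt-injection threat concrete, here is a minimal sketch of a heuristic input guard. The patterns below are illustrative assumptions, not a real product's rule set; production defences pair heuristics like these with model-based classifiers and output filtering.

```python
import re

# Illustrative phrasings seen in prompt-injection attempts (assumed
# examples, not an authoritative list).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now .*(unrestricted|jailbroken)",
    r"reveal (your )?(system prompt|hidden instructions)",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

A guard like this is cheap to run on every request, but pattern lists alone cannot keep up with attacks that evolve in real time – which is precisely why adaptive, learning defences matter.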
This learning scale is only accelerating. Unlike human users, AI agents will perform millions of operations continuously and autonomously. That means managing and protecting vast new populations of non-human identities and transactions. The volume, velocity and variety of this activity demands new security models built for real-time orchestration and adaptability.
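One way to manage vast populations of non-human identities is to issue short-lived, narrowly scoped credentials that expire automatically. The sketch below is a toy in-memory version under assumed names and TTLs, purely to illustrate the pattern; real systems use a dedicated secrets or identity platform.

```python
import secrets
import time

TOKEN_TTL_SECONDS = 300  # assumed short lifetime for agent credentials
_tokens = {}             # in-memory store, illustrative only

def issue_agent_token(agent_id: str, scope: str) -> str:
    """Mint a short-lived token bound to one agent and one scope."""
    token = secrets.token_urlsafe(16)
    _tokens[token] = {
        "agent": agent_id,
        "scope": scope,
        "expires": time.time() + TOKEN_TTL_SECONDS,
    }
    return token

def is_valid(token: str, scope: str) -> bool:
    """Accept only unexpired tokens presented with their exact scope."""
    meta = _tokens.get(token)
    return bool(meta) and meta["scope"] == scope and time.time() < meta["expires"]
```

Because every credential dies quickly and carries a single scope, a compromised agent exposes far less than a long-lived, broadly privileged account would.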
Before we can secure this new AI-powered environment, we must first see it clearly. The rise of Shadow AI – unauthorized or unmanaged AI deployments – makes visibility our first priority. Discovery must become continuous, dynamic and comprehensive, spanning endpoints, networks, cloud workloads and every enforcement point. Once we have visibility, the next step is intelligent control – understanding which models are in use, what data they interact with and whether sensitive information is adequately protected. Data loss prevention, encryption and contextual access controls must evolve to match the fluidity and autonomy of AI.
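A minimal sketch of the discovery step: scanning egress logs for traffic to known AI service endpoints. The log format and domain list here are assumptions for illustration; real inventories combine many signals, including endpoint agents, cloud telemetry and SaaS audit logs.

```python
# Assumed list of AI service domains to watch for (illustrative only).
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(proxy_log_lines):
    """Return (user, domain) pairs for traffic to known AI endpoints.

    Assumes each log line starts with "<user> <domain> ..." – a
    hypothetical format for this sketch.
    """
    hits = []
    for line in proxy_log_lines:
        user, domain = line.split()[:2]
        if domain in KNOWN_AI_DOMAINS:
            hits.append((user, domain))
    return hits

log = ["alice api.openai.com 443", "bob intranet.local 80"]
```

Even a crude scan like this surfaces who is using which AI services – the raw material for the intelligent controls described above.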
The need for an AI operating system
What we truly need is an AI operating system for cybersecurity. In essence, this is an intelligent platform with real-time situational awareness of users, assets, applications and threats across the entire enterprise. It should not just detect change, but anticipate it, acting with context and precision. Think of it as a virtual administrator that understands every employee’s intent, behavioural history and risk profile and can make instant decisions to protect the environment.
But internal context isn’t enough. This AI operating system must also plug into the broader world, ingesting global threat intelligence, adapting to emerging risks and reconfiguring defences based on external events. Imagine a future where security is autonomous, adaptive and always-on. The development of AI-native protocols, such as MCP (model context protocol) and A2A (agent-to-agent communication), is the first step. These standards will allow AI systems to reason collectively and operate as a unified, secure defence fabric.
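The collective-defence idea can be sketched in a few lines. Note that this is a toy message format invented for illustration – the real MCP and A2A specifications define their own schemas – but it shows the core mechanic: one agent's detection immediately updates every peer's defences.

```python
from dataclasses import dataclass, field
import time

@dataclass
class ThreatBulletin:
    """Toy threat-sharing message (hypothetical schema, not MCP or A2A)."""
    indicator: str      # e.g. an IP address, domain or file hash
    source_agent: str
    observed_at: float = field(default_factory=time.time)

class DefenceAgent:
    """Minimal peer that folds received indicators into its blocklist."""
    def __init__(self, name: str):
        self.name = name
        self.blocklist = set()

    def receive(self, bulletin: ThreatBulletin) -> None:
        self.blocklist.add(bulletin.indicator)

# One intrusion attempt observed by a single agent propagates to all peers.
agents = [DefenceAgent("edge"), DefenceAgent("cloud")]
bulletin = ThreatBulletin("203.0.113.7", source_agent="endpoint")
for a in agents:
    a.receive(bulletin)
```

In a real fabric, the transport, authentication and trust decisions around such messages are where the hard engineering lives; the broadcast pattern itself is the easy part.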
Collaboration is the catalyst
The biggest barrier to this future isn’t technology – it’s fragmentation. Today, too many organizations still operate in silos, deploying point solutions that don’t talk to each other. That’s a losing strategy when adversaries are more coordinated than ever.
To truly harness AI’s potential, we need radical collaboration, shared intelligence across cloud platforms, cybersecurity tools and AI systems. Vendors, customers and even competitors must work together to close the gaps and eliminate blind spots. Ultimately, we must reason together, combining human insight and machine intelligence to anticipate threats before they materialize.
Real-time resilience for a real-time world
In the world of AI-driven attacks, time is the most precious commodity. Traditional patch cycles and response protocols are too slow. We need infrastructure designed for machine-speed resilience. The speed of innovation will spark fresh thinking around trust models, governance and ethics. Ultimately, the promise of AI won’t be realized unless we can trust the platforms supporting it. Building that trust requires shared standards, transparent policies and relentless focus on securing data, identities and outcomes.
The convergence of AI, cybersecurity and cloud computing is reshaping the digital landscape. The challenges are immense, but so is the opportunity. By embracing collaboration, prioritizing real-time observability and developing intelligent, adaptive systems, we can tip the balance in favour of defenders.
Agentic AI can learn from every attack, adapt in real time and prevent threats before they spread. It has the potential to establish a new era of cyber resilience, but only if we seize this moment and shape the future of cybersecurity together.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.