Is the AI-cyber bubble about to burst?

Cyber leaders must expand mandates beyond AI hype towards cyber resilience and information integrity.
- AI's market boom appears headed for a correction, exposing overreliance on a few highly visible firms.
- That concern is pushing firms and governments toward a greater focus on digital resilience and investment in diversified infrastructure.
- Long-term value in cybersecurity lies in investing in fundamentals over speculative AI tools, which depends on developing resilient and adaptive security strategies.
After a year of euphoric AI valuations, the warnings have begun. Earlier this month, the Bank of England warned of a "sudden correction" in AI-driven asset valuations. Jamie Dimon, CEO of JPMorgan Chase, echoed the concern last week: "a lot of assets look like they're entering bubble territory." Former Meta executive Sir Nick Clegg went further, calling current AI valuations "crackers."
Nowhere do these assessments matter more than in the rapidly consolidating cybersecurity industry. As one of the most technically advanced industries, cybersecurity has made AI adoption central to its entire value proposition — for both defenders and attackers. Last week, Palo Alto Networks and CrowdStrike reached new record highs, with Palo Alto's market capitalization hitting $145 billion — larger than the GDP of Kazakhstan. If the AI economy is in a bubble, cybersecurity might sit at its centre.
Any coming correction is not a catastrophe; it is a clarification. Rather than bursting, the cyber bubble is coming into focus. As inflated expectations meet cyber reality, three fractures are opening, and each creates an opportunity for cyber leaders. Durable advantage will come from mastering the post-hype realities they expose: sovereign resilience, psychological defence and security fundamentals. We'll examine each fracture in turn.
Fracture 1. Geopolitics: Diversify before the regulators force you to
The industry's eleven largest providers, all US- or Israeli-owned, have captured the lion's share of AI-driven valuations. As the sector moves through its "shake-out" consolidation phase, these players are gaining market share on the assumption that AI-enabled security requires immense computational resources and vast datasets. That assumption has positioned hyperscalers as the primary architects of global defence.
The logic is sound: more data improves capability for operational teams, fewer providers reduce complexity for CISOs and consolidation ultimately delivers cost benefits to the board. But cybersecurity is not a commodity or a consumer staple; it is a strategic asset for governments, deeply entwined with national security, concerns over systemic risk and defence against state-backed adversaries.
The current market concentration reflects a dangerous bet: that a handful of geographically concentrated providers can serve as the world's primary defence architects. History suggests otherwise. From Reagan's 1987 semiconductor tariffs on Japan through the UK and EU’s Huawei exclusions, concentration in strategic sectors has consistently triggered political intervention.
Governments are already acting decisively to counter sovereign cyber risk. Europe's Cyber Resilience Act mandates supply-chain diversification. China's 2025 National Cyberspace Strategy emphasizes self-reliance in critical AI infrastructure. Japan's Economic Security Promotion Act requires domestic alternatives for critical infrastructure by 2027. Across Africa, from Kenya to South Africa, sovereign-cloud initiatives are localizing digital-defence capacity.
This shift creates the first opportunity for cyber leaders. Enterprises that act now to diversify their technology stacks, building relationships with regional providers, validating alternative architectures and stress-testing for geopolitical resilience, will capture advantage as regulatory mandates inevitably accelerate.
Fracture 2. Cognitive security: Stop fighting the wrong AI-cyber war
For years, the cybersecurity industry has warned that AI will spawn undetectable technical threats overnight. The evidence is starting to tell a different story.
In 2025, the leading AI providers released unprecedented analyses of malicious use of their systems. OpenAI's October report found most misuse involved fraud and phishing. Multiple state-affiliated groups focused on scaling disinformation operations — OpenAI identified over a dozen coordinated campaigns producing multilingual propaganda at scale. Anthropic's August analysis showed AI primarily enabling similar state-led information operations and helping cybercriminals improve social engineering attacks for financial gain. Google's Gemini study concluded that fewer than 2% of observed incidents involved direct attacks on software vulnerabilities, while reporting a marked rise in deepfake-enabled financial scams across platforms.
A pattern is clear. The most tangible malicious use cases lie in cognitive manipulation. The World Economic Forum's own 2025 Global Risks Report confirms this shift, ranking misinformation and disinformation as the most severe short-term risk facing global leaders. AI-driven automation has made phishing, deepfake identity creation and online fraud dramatically faster. Yet most cybersecurity budgets remain squarely focused on traditional information security, not on the cognitive front where AI-driven threats are now concentrating.
This creates the second opportunity for cyber leaders. Organizations that expand their security mandates now, building capabilities in deepfake detection, identity verification and information-operations monitoring, will defend against the AI threats that actually exist. The shift requires moving beyond traditional CISO responsibilities to encompass fraud operations, counter-subversion and reputation defence. These capabilities sit outside conventional cyber operations but are becoming critical for protecting organizational integrity and wider resilience.
Fracture 3. “Boring” has not gone away: Don’t ignore the fundamentals
Within the cybersecurity industry, many have branded AI systems an entirely new paradigm demanding revolutionary tools. The operational reality is more prosaic.
Securing AI agents, along with their underlying models, data pipelines and APIs, relies on the same cyber principles that have protected mission-critical systems for decades. Input validation and sanitization remain essential, preventing prompt injection, which threatens language models much as SQL injection once threatened databases. Rigorous access control and authentication define who can query, modify or retrain models, ensuring that privileges are carefully governed. And vulnerability management, coupled with timely patching, preserves the integrity of the underlying infrastructure and software.
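As a minimal sketch of the first of those principles, the Python below screens untrusted input before it reaches a model, echoing the input-validation discipline that once defanged SQL injection. The pattern list, length cap and function name are illustrative assumptions, not a production detection scheme.

```python
import re

# Illustrative deny-list of phrases associated with prompt-injection attempts.
# A real deployment would layer this with allow-lists, model-side guardrails
# and output filtering; the patterns here are purely for demonstration.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal .{0,40}system prompt", re.IGNORECASE),
    re.compile(r"disregard your (rules|guidelines)", re.IGNORECASE),
]

MAX_INPUT_LENGTH = 4_000  # arbitrary cap; tune to the application


def sanitize_user_input(text: str) -> str:
    """Validate and sanitize untrusted input before prompt assembly."""
    if len(text) > MAX_INPUT_LENGTH:
        raise ValueError("input exceeds maximum allowed length")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(text):
            raise ValueError("input matches a known injection pattern")
    # Strip control characters that could smuggle hidden instructions.
    return "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")


if __name__ == "__main__":
    ticket = "Customer reports login failures since Monday."
    prompt = f"Summarize the following ticket:\n{sanitize_user_input(ticket)}"
    print(prompt)
```

The same wrapper can sit in front of any model call; the point is that the control predates AI entirely.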
The recent high-profile Salesloft–Drift breach illustrates the point: attackers exploited a third-party integration, not the AI system itself. Containment was achieved through standard practices, including environment isolation, credential rotation and infrastructure hardening, underscoring that foundational disciplines like third-party risk management remain central, even in an AI-driven environment.
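Credential rotation, one of those containment steps, follows an issue-then-revoke pattern whether or not AI is involved. The sketch below is hypothetical: store_token and revoke_token stand in for calls to a real secrets manager and a vendor's API, neither of which is specified here.

```python
import secrets
from datetime import datetime, timezone


def store_token(integration: str, token: str) -> str:
    """Hypothetical stand-in for writing a secret to a secrets manager."""
    token_id = f"{integration}-{secrets.token_hex(4)}"
    print(f"stored new credential {token_id}")
    return token_id


def revoke_token(integration: str, token_id: str) -> None:
    """Hypothetical stand-in for revoking a credential via a vendor API."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="minutes")
    print(f"[{stamp}] revoked {token_id} for {integration}")


def rotate_integration_credential(integration: str, old_token_id: str) -> str:
    """Issue a replacement secret first, then revoke the old one, so the
    integration never operates without a valid credential."""
    new_token = secrets.token_urlsafe(32)  # cryptographically strong secret
    new_id = store_token(integration, new_token)
    revoke_token(integration, old_token_id)
    return new_id


if __name__ == "__main__":
    rotate_integration_credential("chat-widget", "chat-widget-a1b2c3d4")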
As organizations deploy AI at scale — McKinsey reports that over three-quarters now use AI in at least one business function — practical experience is confirming what disciplined security teams already knew. AI introduces new attack surfaces, but not new physics. While this demands capability adjustments and is driving welcome innovation, it also demands the evergreen leadership principles of strategic foresight and measured governance.
This creates the final opportunity for cyber leaders. Organizations that continue to invest in foundational capabilities and principles, as part of a sustained, holistic approach to security, will prove more resilient than those chasing speculative security solutions.
A new mandate for cyber leadership
History shows that market corrections are not endings but inflection points. The dot-com bubble gave rise to more resilient digital infrastructure. The 2008 financial crisis reshaped capital markets and led to a more resilient and integrated global economy. Any recalibration of the AI-cyber ecosystem will follow the same pattern — not catastrophe, but ultimately a course correction toward greater cyber resilience.
Three strategic imperatives are already emerging: invest in sovereign resilience before regulatory enforcement accelerates; expand security mandates to encompass cognitive resilience and organizational integrity; and focus on foundational capabilities rather than speculative AI-security tools. Ultimately, the winners of any post-bubble era won't be those who buy the most "AI". They will be the organizations disciplined enough to ignore the hype and keep mastering the fundamentals.
Akshay Joshi
November 5, 2025






