Cybersecurity

Anthropic’s Mythos moment: How frontier AI is redefining cybersecurity

AI has entered its security-first era.


Chiara Barbeschi
Specialist, Cyber Resilience, World Economic Forum
Tarik Fayad
Strategic Integration, MENA, Centre for AI Excellence, World Economic Forum
This article is part of: Centre for Cybersecurity
  • Anthropic's decision to limit access to its latest model reflects a
    growing focus on the safe, secure and responsible deployment of
    advanced AI.
  • Frontier AI is expanding both defensive capabilities and potential cyber
    risks, requiring organizations to adapt how they manage security.
  • This moment highlights the need for clearer guidance and coordination
    to support the secure adoption of advanced AI.

On 7 April, Anthropic announced Claude Mythos Preview, a frontier AI model so powerful (or risky) that the company decided not to release it to the public. This decision signals a critical shift in the AI landscape: constraints on deployment are no longer commercial, but security-driven.

According to Anthropic, Mythos can autonomously identify previously unknown vulnerabilities, generate working exploits and carry out complex cyber operations with minimal human input. Testing identified a large number of related weaknesses across widely used systems, though these results remain subject to further validation and vary in terms of severity and real-world exploitability.

This reflects a broader turning point: frontier AI systems are becoming more autonomous and powerful, but also harder to control once deployed. The cautious way forward is to treat these models less as consumer products and more as strategic assets. Ultimately, it underscores a new reality: AI capability is advancing faster than our ability to safely govern it, making security the primary gatekeeper for release.

Global-systemic risks of frontier AI

This is less about one model, and more about a new reality for cybersecurity and societies alike. The implications are already being felt by governments, regulators and companies worldwide. A new concern is emerging: Companies can build advanced AI systems but are not yet confident they can deploy them safely, without unintended consequences.

Tasks that once required highly specialized teams working for weeks or months can now be performed in hours. This has two immediate consequences. First, it could significantly strengthen defences by accelerating the discovery of vulnerabilities. Second, it could lower the barrier for launching sophisticated cyberattacks, enabling a wider range of actors to operate at a higher level.

This is not just a cybersecurity issue. It is a resilience issue for global stability. Critical infrastructure, financial systems and supply chains all depend on digital systems that could be exposed to faster, more scalable forms of attack.

When AI risks move markets

The market reaction has been equally striking. Reports suggest that fears linked to Mythos and similar frontier AI systems have contributed to significant volatility in global technology stocks, highlighting investor concern about disruption to cybersecurity, business models and the stability of the digital economy.

Recent developments underscore how quickly this issue is moving from theory to practice. Reports indicate that US officials have begun urging major financial institutions to actively test advanced AI systems like Mythos in controlled environments, reflecting growing concern at the highest levels about both the risks and defensive potential of such tools.

This moment reinforces warnings already highlighted in the World Economic Forum’s Global Cybersecurity Outlook 2026, which points to a growing gap between the pace of cyberthreats and organizations’ ability to respond. Frontier AI could widen that gap further – at least in the short term.

Against this backdrop, these questions are expected to be central to discussions at the World Economic Forum’s Annual Meeting on Cybersecurity in May 2026, where leaders will assess how AI is reshaping global cyber risk and what coordinated responses are needed.

Three questions for leaders

For non-specialists, the Mythos episode raises three practical and urgent questions:

Could AI make cyberattacks easier to launch?

Yes – but unevenly. By automating complex technical tasks, systems like Mythos can lower the barrier to entry for attacks on simpler systems, which can be carried out with limited human input. Attacks on more complex, well-secured systems likely still require steering from experienced attackers. This could increase the overall frequency of cyber incidents, while keeping the most sophisticated attacks primarily in the hands of skilled actors.

Are organizations ready to respond at AI speed?

In most cases, not yet; for many, not even close. Even today, organizations struggle to keep pace with a fast-evolving threat landscape, with 87% of leaders identifying AI-related vulnerabilities as the fastest-growing cyber risk. If AI systems dramatically increase the number of vulnerabilities identified, the challenge will shift from discovering problems to fixing them fast enough. Patch cycles measured in weeks may no longer be sufficient in a world where vulnerabilities can be identified and exploited within hours.

Who controls access to these capabilities?

This remains an open question. Anthropic has chosen to restrict access and work with a small group of trusted partners, rather than releasing the model broadly. But there are no globally agreed rules for who should have access to such powerful systems or how their use should be governed.

From scarcity to overload

One of the less obvious implications of systems like Mythos is that they could create a new kind of bottleneck. For years, cybersecurity has struggled with limited visibility: organizations simply did not know where their vulnerabilities were. AI changes that, enabling rapid, large-scale discovery of weaknesses across systems.

This, however, creates a different problem: overload. If thousands of vulnerabilities can be identified quickly, organizations may not have the capacity to address them all. Prioritization becomes critical, and errors become more costly. In this environment, greater visibility does not automatically mean greater security.

As offensive capabilities become more automated, defensive systems must match that speed and sophistication. Static, rule-based approaches will not keep pace. Instead, organizations will need adaptive systems that continuously monitor and respond in real time.

Cybersecurity is shifting from defending fixed perimeters to managing dynamic, intelligent systems, fundamentally reshaping how risk is understood and controlled.

What leaders should do now

Anthropic’s response has been to limit access and emphasize collaboration, working with a small group of trusted organizations to secure critical systems before such capabilities spread more widely. But this approach alone is not enough.

These capabilities are unlikely to remain confined to a single organization. Similar systems are expected to emerge across the industry, increasing the urgency for action.

For business and policy leaders, four priorities stand out:

  • Elevate cyber risk to the strategic level: AI-driven cyberthreats should be treated as a board-level issue, with clear accountability and oversight.
  • Invest in AI-native defence: Organizations will need capabilities that match the speed and scale of AI-enabled attacks, including automated detection and response.
  • Strengthen public-private collaboration: No single organization or government can manage this risk alone; coordinated action across sectors will be essential.
  • Prepare for compressed timelines: Response cycles – from detection to patching – must accelerate significantly to keep pace with AI-driven threats.

Cybersecurity is no longer just a technical function. It is a core component of economic resilience, trust and stability.


A turning point for digital trust

Anthropic’s Mythos offers a preview of a near future in which AI both strengthens and destabilizes the digital systems that underpin the global economy.

The transition may not be smooth. Defensive capabilities are improving, but unevenly. At the same time, offensive capabilities may spread more quickly, creating a period of heightened risk before a new equilibrium is established.

As the speed of AI development continues to outpace governance, coordination and security practices, the key challenge is not just technological. It is institutional and increasingly geopolitical. As countries and companies race to develop and deploy frontier AI capabilities, there is a risk that approaches to access, control and security diverge. Without coordination, this could lead to fragmented standards, uneven levels of protection and greater systemic vulnerability.

The World Economic Forum’s Centre for Cybersecurity is advancing this effort by fostering holistic collaboration through its Cyber Frontiers: AI and Cybersecurity initiative. Its upcoming report, to be released in May 2026, explores how AI can strengthen cyber defence and resilience. The insights from this work and the discussions it informs will be critical in shaping how leaders respond to this next phase of cyber risk. The next phase of the initiative will explore cyber risks introduced by agentic systems and develop guidance on how to secure the agentic AI economy.

The question is no longer whether such capabilities will emerge, but whether institutions can adapt quickly enough to manage them. The answer will shape not only the future of cybersecurity, but the resilience of the digital systems on which societies and economies increasingly depend.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
