How we can enhance cybersecurity defences before attackers do in an AGI world

Artificial general intelligence will deliver highly autonomous systems that will reshape cybersecurity. Image: Unsplash/Adi Goldstein
- Artificial general intelligence will deliver highly autonomous systems that will reshape cybersecurity and our threat landscape.
- Countering cyber threats will mean responding to campaigns of malicious attacks with the same tools; success will depend on whether attackers or defenders adopt those tools more quickly.
- For secure AI adoption, all stakeholders in the AI ecosystem, including governments, industry, academia and experts, will need to take intentional, proactive and coordinated action.
Artificial general intelligence (AGI) refers to highly autonomous artificial intelligence (AI) systems that can perform most cognitive tasks as well as humans. Unlike today’s AI, which is often built for specific tasks such as chatbots or image recognition, AGI would demonstrate intellectual reasoning, learning and adaptability across multiple domains.
And unlike current AI tools, which respond only when prompted, we expect AGI systems to initiate actions, pursue goals and sustain operations without direct supervision. (AGI should not be confused with artificial superintelligence, a still-hypothetical further stage at which machines could accomplish any cognitive work far beyond the human level.)
However, AGI remains more of a moving target than a settled definition, with academics and industry offering different interpretations. What is clear is that if it emerges, it will reshape cybersecurity and our threat landscape.
An uneven contest
There is growing concern about how this technology intersects with cybersecurity. In the digital realm, attackers enjoy an inherent advantage: they only need to succeed once, whereas defenders must continuously secure every potential weakness. Advanced AI models have already amplified this asymmetry, and AGI will magnify it further unless policy-makers, industry and researchers act decisively.
If today’s frontier AI already challenges defenders, AGI raises the stakes further. Security operations will need to counter campaigns, not just discrete incidents.
In the past, organizations worried about single incidents – a phishing email or a piece of malicious code. AGI could allow attackers to run campaigns driven not by a human attacker behind the keyboard, but by machine intelligence that learns and adjusts in real time.
Imagine a data breach timed to coincide with a disinformation campaign and a disruption to supply chains, exerting coordinated pressure on leaders and infrastructure across the cyber, physical and information domains. Responding would require a much more integrated view of the threats, intelligence and remedies across these domains.
Equally worrying is how AGI shortens the stages of an attack. Cyber operations that once unfolded over weeks could be completed in hours. AGI systems could chain together different stages of an attack (e.g. reconnaissance, vulnerability discovery and exploit development) and run thousands of cheap attempts until one succeeds.
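To make the shift from single incidents to automated campaigns concrete, the deliberately toy sketch below models how an autonomous agent might chain reconnaissance, weakness discovery and exploitation into a cheap retry loop. Every function is a hypothetical placeholder with no real capability; the structure of the loop, not its content, is what defenders need to plan against.

```python
# Purely illustrative: an autonomous agent chaining attack stages into a
# feedback loop. All functions are hypothetical stand-ins with no real
# capability; the point is the retry structure defenders must model.
import random

def recon(target):
    """Stand-in for reconnaissance: lists candidate attack surfaces."""
    return [f"{target}:service-{i}" for i in range(3)]

def find_weakness(surface):
    """Stand-in for vulnerability discovery: most cheap attempts fail."""
    return random.random() < 0.05

def attempt_exploit(surface):
    """Stand-in for exploit development and execution."""
    return random.random() < 0.5

def autonomous_campaign(target, budget=1000):
    """Chain the stages and retry cheaply until success or budget exhaustion."""
    attempts = 0
    while attempts < budget:
        for surface in recon(target):
            attempts += 1
            if find_weakness(surface) and attempt_exploit(surface):
                return f"success after {attempts} attempts on {surface}"
    return f"no success within {budget} attempts"

print(autonomous_campaign("example-org"))
```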
A double-edged sword
Yet, AGI is not destined to be an attacker’s tool alone. Studies show that AI systems are often better at defensive tasks, such as patching, than at exploit development. Used well, AGI can be a potent force multiplier. It can relieve overworked security teams of routine triage and remediation.
Months-long patching cycles might be cut to days, narrowing the window of opportunity for adversaries. AGI can also shift cybersecurity from reactive firefighting to proactive resilience. Systems can be made more resilient by continuously scanning for misconfigurations, simulating fixes and flagging the most critical exposures before they are exploited.
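As a rough illustration of what such proactive scanning could look like in practice, the sketch below ranks misconfiguration findings so that the most critical, internet-exposed issues surface first. The asset names, checks and scoring weights are all illustrative assumptions, not a real scanner or a prescribed methodology.

```python
# A minimal sketch of continuous exposure triage: gather misconfiguration
# findings and flag the most critical ones first. The findings and the
# scoring formula are hypothetical stand-ins for illustration only.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    issue: str
    severity: float   # 0.0 (low) to 1.0 (critical)
    exposed: bool     # reachable from the internet?

def scan():
    """Stand-in for a scanner; a real one would query cloud APIs, configs, etc."""
    return [
        Finding("db-prod", "public storage snapshot", 0.9, True),
        Finding("web-01", "legacy TLS version enabled", 0.4, True),
        Finding("ci-runner", "stale admin credential", 0.7, False),
    ]

def score(f):
    """Weight internet-exposed findings more heavily."""
    return f.severity * (2.0 if f.exposed else 1.0)

def prioritize(findings):
    """Return findings ordered from most to least urgent."""
    return sorted(findings, key=score, reverse=True)

for f in prioritize(scan()):
    print(f"{f.asset}: {f.issue} (score={score(f):.1f})")
```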
By taking over routine tasks and providing support in decision-making, AGI can give human analysts more space to focus on complex investigations and strategic challenges.
However, the real challenge is not whether AGI can strengthen defences but how quickly defenders can adopt it compared to attackers. Adversaries are moving faster and experimenting freely with new tools, while defenders are often slowed by bureaucracy, legacy processes and risk aversion.
This pace differential is exacerbating the offence-defence asymmetry. To shift the balance, defenders must accelerate the use of AI and embed it into practice, or face ceding the initiative to attackers.
The wider international context
Already, AI is central to critical infrastructure, economic systems and national security. It has become both a means and a goal of national and international advancement, and the global AI landscape is poised to reshape economic markets and security paradigms.
Yet, small and developing countries without access to AI are already missing out on the boost in efficiency, innovation and economic growth that AI brings. The arrival of AGI will likely widen this gap, putting them at risk of being left further behind.
It is thus imperative that we continue to close all digital divides and advance an equitable digital environment for all, as outlined in the United Nations Global Digital Compact.
A call to action
As AGI becomes embedded in our digital infrastructure, it will not only shape the threat landscape but also become a direct target within it.
Opportunities to compromise AI systems will continue to grow. Stakeholders should therefore secure the AI tech stack (data, model, infrastructure, applications and governance) and take a life-cycle approach to securing AI systems now, before AGI is fully realized and before AI is woven into our common digital infrastructure.
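One way to make "securing the AI tech stack" actionable is to treat each layer as a checklist and track control coverage across the system's life cycle. The sketch below does this in miniature; the layer names follow this article, while the specific controls listed are illustrative assumptions rather than a prescribed standard.

```python
# Illustrative mapping of AI tech-stack layers to example security controls,
# plus a simple coverage check. Layer names follow the article; the controls
# are assumptions for illustration, not an authoritative baseline.
AI_STACK_CONTROLS = {
    "data":           ["provenance tracking", "poisoning detection"],
    "model":          ["adversarial-robustness testing", "signed model artifacts"],
    "infrastructure": ["isolated training environments", "secrets management"],
    "applications":   ["prompt-injection filtering", "output monitoring"],
    "governance":     ["incident-response playbooks", "third-party audits"],
}

def coverage_report(implemented: dict[str, list[str]]) -> None:
    """Print which stack layers still lack any of the listed controls."""
    for layer, recommended in AI_STACK_CONTROLS.items():
        done = implemented.get(layer, [])
        missing = [c for c in recommended if c not in done]
        status = "OK" if not missing else f"missing: {', '.join(missing)}"
        print(f"{layer:15s} {status}")

# Example: an organization partway through the life cycle.
coverage_report({"data": ["provenance tracking"], "model": []})
```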
To ensure the secure adoption and use of AI, all stakeholders in the AI ecosystem, including governments, industry, academia and experts, will need to take intentional, proactive and coordinated action today.
First, a good understanding of the technical aspects of AGI is no longer optional for policy-makers and cybersecurity professionals; it is a necessity. Policy-makers need to grasp the technical intricacies of AI and AGI so as to develop frameworks and approaches that balance innovation with safe and secure adoption.
For small and developing countries, capacity-building could be one avenue to address this skills gap.
The task before us is clear and urgent: to elevate AI security as a shared global priority, to embed it in governance and design and to act together, decisively and early, so that AGI strengthens our digital resilience instead of undermining it.
Next, there needs to be a concerted effort to raise the security baseline for AI. The autonomy and adaptability of AGI will demand new security paradigms and approaches. Clearer guidelines, harmonized standards and practical tools can help organizations make informed choices as they adopt frontier AI.
For example, the Cyber Security Agency of Singapore has developed guidelines and a companion guide on securing AI systems throughout their life cycle. As AI systems become increasingly interconnected, security standards will emerge as the common language enabling safe and seamless interoperability.
Ultimately, international and industry cooperation will be crucial in mitigating the security risks associated with AGI. Given the transboundary nature of cyber threats, we are only as strong as our weakest link.
The international community must work together towards an open, secure, stable, accessible, peaceful and interoperable cyberspace even as we embrace new and emerging technologies.
International platforms, such as the World Economic Forum, are key to bringing attention to such issues, brainstorming solutions and encouraging international cooperation.
The road ahead
AGI will not simply accelerate today’s threats; it will reshape how campaigns are planned, executed and defended against. With the right investments and coordination, AGI can be secured and harnessed as a defensive equalizer rather than an attacker’s force multiplier. That, however, will only happen if defenders adopt it, and adopt it faster than attackers do.
- For policy-makers, this means moving beyond broad principles to concrete action: building technical literacy and developing frameworks that balance innovation with resilience.
- For industry and technical leaders, it means hardening the AI tech stack today and operationalizing strategies, testing and benchmarks for the safe and secure adoption of AI systems that keep pace with frontier AI.
- At the international level, it requires strengthening international cooperation and ensuring nobody gets left behind.
The task before us is clear and urgent: to elevate AI security as a shared global priority, to embed it in governance and design and to act together, decisively and early, so that AGI strengthens our digital resilience instead of undermining it.