The real danger of AI lies in how it will enable attackers
Artificial intelligence (AI) is quickly becoming a critical component in how governments, businesses and citizens defend themselves against cyber attacks. The field began with technology designed to automate specific manual tasks and advanced to machine learning systems that parse increasingly complex data; breakthroughs in deep learning will soon make these capabilities an integral part of the security agenda. Much attention is paid to how they help build a defensive posture. Far less is paid to how adversaries might harness AI to drive a new generation of attack vectors, and to how the community might respond. Ultimately, the real danger of AI lies in how it will enable attackers.
Adversarial AI is the malicious development and use of advanced digital technologies and systems that exhibit intellectual processes typically associated with human behaviour. These include the ability to learn from past experience, and to reason or discover meaning in complex data.
Changes in the threat landscape are already apparent. Criminals are harnessing automation for reconnaissance, target exploitation and network penetration end-to-end. Soon, technology will enable them to automate every element of their attack cycle, including processes that remain highly manual today, such as learning a target's fraud controls and industrializing cash-out and money-laundering tactics.
Based on core big data principles, adversarial AI will impact the threat landscape in three key ways.
1. Larger volume of attacks against a wider attack cycle
By introducing new scalable systems, which would typically require human labour and expertise for attacks, criminals will be able to invest resources and capacity into building and modifying new infrastructure against target sets.
2. New pace and velocity of attacks, which can adapt to their environment
Technology will enable criminal groups to become increasingly effective and more efficient. It will allow them to finely tune attacks in real-time, adapt to their environment and learn defence postures faster. This will be reflected across all attack scenarios, industries and technology platforms.
3. New varieties of attacks, which were previously impossible when dependent on human interaction
Finally, and most importantly, the next generation of technical systems will enable attack methodologies that were previously unfeasible. This will alter the threat landscape in a series of shifts, bypassing entire generations of controls that have been put in place to defend against attacks.
Advances in the new generation of systems available to attackers will require a holistic, multi-tiered response built on solid security foundations. The modification and multiplication of attack vectors will require current defence frameworks to evolve and adapt. And a new generation of attack methodologies capable of bypassing traditional detect-and-respond postures will demand entirely new approaches.
1. Know the current threat environment
It is impossible to defend an organization from all attacks across all channels all the time. Understanding and defending against adversarial AI will require a deep understanding of where adapting threats will challenge current postures, and where to focus limited resources.
2. Invest in skills and analytics
To adapt to this new security environment, government, business and education systems need to ensure they have the requisite skills base and talent pipeline. The ability to attract and recruit people, and to retrain them in highly sought-after skills across multiple disciplines, including advanced computing, mathematics and orchestrated analysis, will soon be of paramount importance.
3. Invest in suppliers and third parties
Full-scale deep defence is based on an integrated set of services and third parties who actively manage and refine defences in line with the changing threat landscape at an operational and strategic level. The community needs to have a clear roadmap, strategy and requisite skill set to be able to adapt to the new forms of attack generated by adversarial AI. Investment in integrated cluster and machine learning analysis for host-based detection and network monitoring is increasingly becoming industry standard, but such advanced technology will soon be needed across an organization’s entire attack surface.
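To make the kind of machine-learning-assisted monitoring described above concrete, here is a minimal sketch of statistical anomaly detection over per-host network activity. It uses the median absolute deviation (MAD), which, unlike a plain mean and standard deviation, is robust to the very outliers it is trying to find. The hostnames, counts and threshold are all illustrative assumptions, not part of any particular product; a production system would use far richer features and trained models.

```python
# Flag hosts whose outbound connection counts deviate sharply from the
# baseline of their peers, using a MAD-based modified z-score.
from statistics import median

def flag_anomalies(conn_counts, threshold=3.5):
    """Return hosts whose modified z-score exceeds `threshold`.

    conn_counts: mapping of host -> observed connection count.
    """
    counts = list(conn_counts.values())
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:  # all hosts behave identically; nothing stands out
        return []
    # 0.6745 scales MAD to be comparable to a standard deviation
    return [host for host, c in conn_counts.items()
            if 0.6745 * (c - med) / mad > threshold]

# Hypothetical per-host outbound connection counts from flow logs
observed = {"10.0.0.1": 42, "10.0.0.2": 38, "10.0.0.3": 45,
            "10.0.0.4": 40, "10.0.0.5": 900}
print(flag_anomalies(observed))  # the 900-connection host stands out
```

The design choice here, preferring a robust statistic over the mean, matters because a single compromised host generating enormous traffic would otherwise inflate the baseline enough to hide itself.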
4. Invest in new operational policy partnerships
The introduction of new AI-driven systems will stretch and challenge traditional defence postures, and require a much deeper and wider partner base to defend an organization. Large-scale attacks generated worldwide across multiple technologies and platforms will require proactive partnerships and alliances beyond traditional structures. Integral to this will be proactive strategies with cross-sector partners, regulators and governments, who can collaborate on strategy and actively respond to new security challenges. The need to harmonize rules for certification and communication between partners will only increase.
5. Continue to integrate critical business processes and capabilities
A new generation of attacks, operating across an organization's full attack surface and exploiting any lack of defensive cohesion, will put increasing pressure on structural and operational silos. Integrating critical business processes and capabilities closes the gaps those silos create.
Cyber security will continue to be the security challenge of the 21st century. It has already radically changed how business, government and citizens partner together to combat crime. But this is just the beginning. While criminals continue to operate largely unattributed in the margins of global cooperation, they will seize on new technology and launch new generations of attacks.
The views expressed in this article are those of the author alone and not the World Economic Forum.