
What is adversarial artificial intelligence and why does it matter?


William Dixon
Global Head, Research, ISTARI

Artificial intelligence (AI) is quickly becoming a critical component in how government, business and citizens defend themselves against cyber attacks. The field began with technology designed to automate specific manual tasks, advanced to machine learning systems that parse increasingly complex data, and is now producing breakthroughs in deep learning that will become an integral part of the security agenda. Much attention is paid to how these capabilities help build a defence posture. But how adversaries might harness AI to drive a new generation of attack vectors, and how the community might respond, is often overlooked. Ultimately, the real danger of AI lies in how it will enable attackers.

The challenge

Adversarial AI is the malicious development and use of advanced digital technologies and systems that exhibit intellectual processes typically associated with human behaviour. These include the ability to learn from past experience, and to reason or discover meaning in complex data.

Changes in the threat landscape are already apparent. Criminals are harnessing automated reconnaissance, target exploitation and network penetration end-to-end. Soon, technology will enable them to automate every element of their attack cycle, including currently manual processes such as learning an organization's fraud controls and industrializing cash-out and money-laundering tactics.

Based on core big data principles, adversarial AI will impact the threat landscape in three key ways.

1. Larger volume of attacks across a wider attack cycle

By introducing scalable systems to perform work that would otherwise require human labour and expertise, criminals will be able to redirect resources and capacity into building and modifying new attack infrastructure against a wider set of targets.

2. New pace and velocity of attacks, which can adapt to their environment

Technology will enable criminal groups to become increasingly effective and more efficient. It will allow them to finely tune attacks in real-time, adapt to their environment and learn defence postures faster. This will be reflected across all attack scenarios, industries and technology platforms.

3. New varieties of attacks, which were previously impossible when dependent on human interaction

Finally, and most importantly, the next generation of technical systems will enable attack methodologies that were previously unfeasible. This will alter the threat landscape in a series of shifts, bypassing entire generations of controls that have been put in place to defend against attacks.

5 ways the cyber security community needs to respond

Advances in the new generation of systems available to attackers will require a holistic, multi-tiered response built on solid security foundations. A greater number and variety of attack vectors will require current defence frameworks to evolve and adapt. And a new generation of attack methodologies capable of bypassing entire layers of traditional controls will require moving beyond detect-and-respond postures alone.

1. Know our current threat environment

It is impossible to defend an organization from all attacks across all channels all the time. Understanding and defending against adversarial AI will require a deep understanding of where adapting threats will challenge current postures, and where to focus limited resources.
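The prioritisation this implies can be sketched as a simple risk-scoring exercise: estimate how likely each asset is to be targeted and how much a compromise would cost, then focus limited resources on the highest expected risk. A minimal illustration follows; the asset names, likelihood and impact figures are invented for the sketch, not real data.

```python
# Hypothetical illustration: rank defensive priorities by expected risk.
# All asset names and scores below are invented for this sketch.

def risk_score(likelihood: float, impact: float) -> float:
    """Classic risk estimate: likelihood of compromise times business impact."""
    return likelihood * impact

assets = [
    {"name": "payment gateway", "likelihood": 0.30, "impact": 9.0},
    {"name": "public website",  "likelihood": 0.60, "impact": 4.0},
    {"name": "internal wiki",   "likelihood": 0.50, "impact": 1.5},
]

# Defend the highest expected risk first, not the most frequently attacked asset.
ranked = sorted(
    assets,
    key=lambda a: risk_score(a["likelihood"], a["impact"]),
    reverse=True,
)
for a in ranked:
    print(f'{a["name"]}: {risk_score(a["likelihood"], a["impact"]):.2f}')
```

Note that under this scoring the rarely attacked but high-impact payment gateway outranks the frequently attacked public website, which is the point: resource allocation should follow expected loss, not raw attack volume.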

2. Invest in skills and analytics

To adapt to this new security environment, government, business and education systems need to ensure they have the requisite skills base and talent pipeline. The ability to attract and recruit people, and to retrain them in highly sought-after skills across multiple disciplines, including advanced computing, mathematics and orchestrated analysis, will soon be of paramount importance.

3. Invest in suppliers and third parties

Full-scale defence in depth is based on an integrated set of services and third parties who actively manage and refine defences in line with the changing threat landscape at both an operational and a strategic level. The community needs a clear roadmap, strategy and requisite skill set to adapt to the new forms of attack generated by adversarial AI. Investment in integrated clustering and machine-learning analysis for host-based detection and network monitoring is becoming industry standard, but such advanced technology will soon be needed across an organization's entire attack surface.
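The kind of network monitoring described here rests on a simple idea: learn each host's normal behaviour, then flag sharp deviations from it. Production systems cluster many features with machine learning; the sketch below stands in for that with a one-feature statistical baseline (a z-score on event counts), and the host names and figures are hypothetical.

```python
import statistics

# Minimal sketch of baseline-based network monitoring: flag hosts whose
# event volume deviates sharply from their own historical norm. Real
# deployments use clustering/ML over many features; this z-score check
# on a single feature is a simplified stand-in. Data below is invented.

def flag_anomalies(history, current, threshold=3.0):
    """Return hosts whose current event count is more than `threshold`
    standard deviations above the mean of their own history."""
    flagged = []
    for host, counts in history.items():
        mean = statistics.fmean(counts)
        stdev = statistics.pstdev(counts) or 1.0  # guard against zero variance
        z = (current.get(host, 0) - mean) / stdev
        if z > threshold:
            flagged.append(host)
    return flagged

# Hypothetical hourly outbound-connection counts per host.
history = {
    "web-01": [100, 110, 95, 105, 102],
    "db-01":  [20, 22, 19, 21, 20],
}
current = {"web-01": 108, "db-01": 450}  # db-01 suddenly spikes

print(flag_anomalies(history, current))  # flags only db-01
```

The design choice worth noting is that each host is compared against its own baseline rather than a global one, so a busy web server's normal volume does not mask a quiet database host that suddenly starts exfiltrating.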

4. Invest in new operational policy partnerships

The introduction of new AI-driven systems will stretch and challenge traditional defence postures, and require a much deeper and wider partner base to defend an organization. Large-scale attacks generated worldwide across multiple technologies and platforms will require proactive partnerships and alliances beyond traditional structures. Integral to this will be proactive strategies with cross-sector partners, regulators and governments, who can collaborate on strategy and actively respond to new security challenges. The need to harmonize rules for certification and communication between partners will only increase.

5. Continue to integrate critical business processes and capabilities

A new generation of attacks operating across an organization's full attack surface, and exploiting any lack of defensive cohesion, will put increasing pressure on structural and operational silos.

Why acting now matters

Cyber security will continue to be the security challenge of the 21st century. It has already radically changed how business, government and citizens partner together to combat crime. But this is just the beginning. While criminals continue to operate largely unattributed in the margins of global cooperation, they will seize on new technology and launch new generations of attacks.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
