AI raises the risk of cyberattack – and the best defence is more AI

Image: A visitor steps over the projected word "Cybertech" at the Cybertech 2019 conference in Tel Aviv, Israel, January 29, 2019. Our tendency to trust one another will forever be a barrier to perfect cyber defence. REUTERS/Amir Cohen



This article is part of the World Economic Forum's Geostrategy platform

Most cyberattacks today do not occur instantaneously at the push of a button. They are resource- and time-intensive endeavours (especially the reconnaissance phase), orchestrated by humans and unfolding at human speeds; the time and resources an adversary needs are directly proportional to the quality and depth of the defences employed on the network, argue Meg King and Jacob Rosen of the Wilson Center.

The need for persistent access to a network – whether to scan for further vulnerabilities, monitor behaviour, move laterally within the network, or manipulate or exfiltrate data – increases the likelihood of strong defensive systems keying in on suspicious activity before too much damage is done. Cyberattacks persist, but time, money, effort, and the potential for failure all act as deterrents to would-be cyber criminals.

The application of artificial intelligence – in particular machine learning – to cyber operations, however, promises to upset this balance, offering more efficient and more effective tools for carrying out attacks that occur at machine speeds.

The most time-consuming cyberattack preparation tasks, like sifting through large amounts of data in search of vulnerabilities in software to exploit or creating better spear phishing campaigns, will no longer require time and deep pockets to pay for human labour, but will occur constantly and quickly.

Machines don’t need to pause for breaks and won’t suffer from fatigue or weariness that might allow them to miss a potential vulnerability or exploit. The ability to synthesize unstructured data will allow machines to make potential connections that might be blind to the human eye, or at a minimum would not be immediately obvious.

Not only will automation provide a significant advantage of speed over those who do not use it, but the adaptive nature of some tools will mean that any prolonged exposure on a network may allow an intelligent piece of malicious code to respond to and circumvent non-automated security software in real time.

A well-coded algorithm may even predict the target computer’s response before it happens, and thus operate in a manner that will not trigger the machine or operating system’s defences. This is particularly concerning when considering how few networks today are adequately equipped to defend against digital attacks carried out by humans alone.

Despite the security concerns raised by artificial intelligence, AI will also have critical defensive applications. As network defenders and policymakers consider this shifting technology landscape, four key trends should be considered carefully.

1. Spear-phishing works…and is easier than ever

Symantec’s 2018 Internet Security Threat Report found that spear-phishing was used by nearly three-quarters of all adversaries, making it by far the most common and most successful cyberattack vector. Automating this process – as demonstrated more than two years ago at the Black Hat conference – will only compound this pervasive problem.

Using a combination of natural language processing, histograms, and the scraping of publicly available data (especially from social media sites like Twitter, where content tends towards colloquial vocabulary and syntax), adversaries will be able to craft more credible-looking malicious emails while reducing their workload and increasing the speed at which they can run such campaigns.

The simultaneous increase in quality and reduction in effort, time, and resources mean that spear-phishing will remain one of the most intractable cybersecurity problems we face.

Improving basic cyber hygiene through education and public service campaigns will certainly make end-users more aware of the dangers of phishing emails and more attuned to suspicious incoming messages. But the human tendency to trust one another will forever be a substantial barrier to perfect defence against phishing attacks. Artificial intelligence can help pick up the slack. Email filtering services that use AI to detect anomalies and suspicious-looking content, such as those offered by RedMarlin and Inky, can alert users in real time so they can make a more calculated, educated decision. The combination of human and machine will serve as a better defence.
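At its core, this kind of filtering is a text-classification problem: score an incoming message against what phishing and legitimate mail each tend to look like. The following is a minimal sketch of that idea using a naive Bayes word model in plain Python; the training phrases are invented for illustration, and a real service such as those named above would learn from millions of labelled messages and many more signals than word counts.

```python
from collections import Counter
import math

# Hypothetical toy training data for illustration only.
PHISH = ["verify your account password urgently",
         "your account is suspended click to verify",
         "urgent wire transfer needed click here"]
LEGIT = ["meeting moved to friday afternoon",
         "quarterly report attached for review",
         "lunch on friday to review the report"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

def log_likelihood(text, counts, vocab_size):
    # Sum of smoothed log-probabilities (add-one smoothing),
    # so unseen words do not zero out the whole score.
    total = sum(counts.values())
    return sum(math.log((counts[w] + 1) / (total + vocab_size))
               for w in text.split())

def is_suspicious(text):
    phish, legit = word_counts(PHISH), word_counts(LEGIT)
    vocab_size = len(set(phish) | set(legit))
    return (log_likelihood(text, phish, vocab_size) >
            log_likelihood(text, legit, vocab_size))
```

A message such as "urgent click to verify your password" scores higher under the phishing model and is flagged, while routine office mail is not; the point is that the score arrives in real time, leaving the final judgement to the human reader.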

2. Behavioural detection

Behavioural detection using machine learning is hailed by industry experts as one of the most promising software solutions for defence-in-depth cybersecurity. Behavioural detection systems work by monitoring network activity in order to establish a baseline of “normal” behaviour for a particular network. New activity on the network is then checked against this baseline for anomalies that might indicate abnormal or illicit behaviour, which can be flagged for review. Machine-learning algorithms are far better suited than humans to observing and analysing user behaviour dispersed across large networks.
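The baseline-and-anomaly loop described above can be sketched very simply: learn summary statistics from a user's history, then flag new observations that deviate too far from them. This is a deliberately minimal illustration using a single feature (bytes transferred per session, with invented numbers); production systems model many features at once and use far more sophisticated statistics.

```python
import statistics

# Hypothetical history for one user: bytes (in KB) moved per session.
history = [120, 135, 128, 140, 122, 131, 138, 125]

def baseline(samples):
    # The "normal" profile: mean and spread of past behaviour.
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    # Flag anything more than `threshold` standard deviations
    # from the learned baseline for analyst review.
    return abs(value - mean) / stdev > threshold

mean, stdev = baseline(history)
```

A session that suddenly moves thousands of kilobytes would be flagged against this baseline, while ordinary variation passes silently; the threshold is the knob that trades false positives against missed detections.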

Unfortunately, these defensive tools are being reverse-engineered by cyber adversaries and reconfigured as effective cyber weapons. Last year a company in India suffered an attack on its network in which the adversary used machine learning to observe patterns of normal user behaviour and then mimicked those patterns to try to hide its presence on the system.

3. Shift to the Cloud

A little more than a decade old, cloud computing offers economies of scale to users as well as substantive advantages in security. Today, threat information about important tactics and procedures of adversaries (e.g. strains of malware and preferred attack methods) can be shared rapidly and at a scale not possible before. Leveraging this capability, the cybersecurity industry now offers commercial products that integrate machine learning into their endpoint and malware-protection programs. By automating the threat collection and analysis process, defenders are able to detect malicious activity more accurately and more quickly, and respond to any such threat.
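The simplest form of the shared threat intelligence described here is an indicator feed: cryptographic fingerprints of known-bad files, distributed to every subscriber the moment one of them sees a new sample. A minimal sketch, with invented payloads standing in for real malware samples:

```python
import hashlib

# Hypothetical shared threat feed: SHA-256 digests of known malware,
# pushed to all subscribers at cloud scale as new samples are analysed.
known_bad = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
    hashlib.sha256(b"malicious payload v2").hexdigest(),
}

def is_known_malware(file_bytes: bytes) -> bool:
    # An endpoint agent checks a local file against the feed by digest,
    # without ever uploading the file itself.
    return hashlib.sha256(file_bytes).hexdigest() in known_bad
```

Exact-hash matching only catches samples someone has already seen; this is precisely the gap the machine-learning endpoint products mentioned above aim to close, by generalising from known samples to novel variants.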

Unfortunately, various clouds also present major targets for adversaries, and automated attacks from denial of service to those against cloud services using software with known vulnerabilities will be an ongoing problem. The scalability that makes using the cloud so advantageous for many users may also lead to devastating consequences if not adequately defended or configured.

4. Vulnerability discovery in software

Vulnerability discovery remains to this day a laborious, time-intensive process conducted by humans, which limits both the speed at which it occurs and the scope of what can be found, given our own unavoidable blind spots. For this reason, it is an area primed for automation.

In 2016, DARPA hosted its Cyber Grand Challenge, in which teams competed against one another in a capture-the-flag competition. But humans were not the primary players. The challenge consisted entirely of automated programs, running on supercomputers, tasked with scanning their own networks for vulnerabilities and patching them against hostile adversaries (the other competitors). ForAllSecure, the company behind the supercomputer known as MAYHEM, emerged the victor and has since commercialized its product, selling automated vulnerability-discovery capabilities to the Department of Defense as well as commercial enterprises.

The results thus far are telling. In one public test case of a widely used open-source program, MAYHEM found 10 exploits in just 60 hours of work; one of those vulnerabilities had the potential to bring down the entire program. The program in question had previously been scanned by other vulnerability researchers, who felt relatively confident they had found the most dangerous exploits. Adversaries will seek to develop similar tools as well.
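One workhorse technique behind automated vulnerability discovery is fuzzing: mutate an input over and over, feed it to the target, and record every input that triggers a crash. The sketch below is a toy illustration of that loop; the `target` function is an invented stand-in with a planted bug, whereas systems like MAYHEM combine fuzzing with symbolic execution against real binaries.

```python
import random

def target(data: bytes) -> None:
    # Invented stand-in for the program under test:
    # it crashes only on one rare input shape.
    if len(data) > 3 and data[0] == 0xFF and data[1] == data[2]:
        raise RuntimeError("crash")

def fuzz(seed: bytes, iterations: int = 20000) -> list:
    rng = random.Random(0)  # fixed seed so the run is reproducible
    crashes = []
    for _ in range(iterations):
        data = bytearray(seed)
        # Mutate a few random bytes of the seed input.
        for _ in range(rng.randint(1, 3)):
            data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            target(bytes(data))
        except RuntimeError:
            crashes.append(bytes(data))  # keep the crashing input
    return crashes

crashes = fuzz(b"\x00\x00\x00\x00\x00")
```

Even this blind loop stumbles onto the planted bug within seconds, which is the broader point of the section: a machine can try tens of thousands of inputs without fatigue, at a cost no human review can match.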

What Next?

Artificial intelligence promises to accelerate the speed and success rate of cyber attacks by sophisticated actors and eventually by those less-skilled (if off-the-shelf tools are developed and made available). It will also further blur traditionally understood lines between cyber offence and defence.

Whichever side deploys these automated technologies faster will hold an advantage. AI will enable attacks for which a majority of the public and many private sector companies will not be prepared. The good news is that the cybersecurity industry is applying the same methods to defence. But these services require sustained investment, and the incentives for evolving cybersecurity defences do not yet exist at scale.

Humans will continue to be important players in defending their own networks against adversaries. But it is imperative that autonomous systems play a central role in any such strategy.

Effectively using artificial intelligence for defensive purposes will require a hybridization of various tactics and tools of both a proactive and responsive nature. Policymakers must encourage analysis of best practices for employing such tools and consider setting standards for their use.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
