The double-edged sword of artificial intelligence in cybersecurity

Cybersecurity is a complicated business. Image: Dan Nelson/Unsplash

Deryck Mitchelson
Global Chief Information Security Officer, Check Point Software Technologies

This article is part of: Annual Meeting on Cybersecurity
  • For businesses and individuals, AI is nothing short of a game-changer.
  • AI is also starting to have a profound influence on the offensive and defensive sides of cybersecurity.
  • The symbiotic relationship between AI and cybersecurity suggests that as one side advances, the other will adapt and innovate in response.

The pace of change in the field of artificial intelligence (AI) is difficult to overstate. In the past few years, we have seen a groundbreaking revolution in the capabilities of AI and the ease with which it can be used by ordinary businesses and individuals.

ChatGPT, a form of generative AI that leverages large language models (LLMs) to generate original human-like content, already has 180 million regular users and has had 1.4 billion visits to its website so far in 2023. Businesses are leveraging this kind of technology to automate tasks, run predictive analytics, derive insights for decision-making and even hire new employees – and that’s just scratching the surface.

For businesses and individuals, AI is nothing short of a game-changer. But where game-changing technologies can be used for good, they can also be used for evil. While AI is being utilized to enhance cybersecurity operations and improve network monitoring and threat detection, it is also being leveraged by threat actors to enhance their attack capabilities and methods. As we approach 2024, the race between offensive and defensive AI has never been closer or more important.

As detailed in Check Point’s 2023 Mid-Year Cyber Security Report, cybercriminals are harnessing AI to create more sophisticated social engineering tactics. By leveraging generative AI, they can create more convincing phishing emails, develop malicious macros in Office documents, produce code for reverse shell operations and much more. Even more concerning is that AI can be used to scale these operations more easily, allowing threat actors to target more victims in a shorter space of time. I think we can all agree that artificial intelligence is most definitely a protector, but it is also a threat.


AI for – and against – cybersecurity

This year has witnessed AI's profound influence on both the offensive and defensive sides of cybersecurity. It has emerged as a potent tool in defending against sophisticated cyberattacks, significantly improving threat detection and analysis. AI-driven systems excel at identifying anomalies and detecting previously unseen attack patterns, mitigating potential risks before they escalate. For instance, AI algorithms can draw on real-time intelligence to monitor networks continuously and accurately defend against threats as they emerge, while reducing the occurrence of false positives.
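To make that concrete, here is a minimal sketch of the kind of anomaly detection such systems rely on, using scikit-learn's IsolationForest on hypothetical network-flow features. The feature names, traffic values and contamination rate are illustrative assumptions, not a description of any particular vendor's detector.

```python
# A minimal sketch of AI-assisted anomaly detection on network-flow
# features. The features and thresholds are illustrative assumptions,
# not a production detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical baseline traffic: [bytes_sent, duration_s, dest_port_entropy]
normal = rng.normal(loc=[5_000, 2.0, 1.5], scale=[1_000, 0.5, 0.3], size=(500, 3))

# Fit on traffic assumed to be mostly benign; 'contamination' is the
# expected fraction of anomalies and would be tuned per environment.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score new flows as they arrive: -1 flags an anomaly, 1 looks normal.
new_flows = np.array([
    [5_200, 1.8, 1.4],     # resembles baseline traffic
    [90_000, 45.0, 4.8],   # large, long-lived, scattered destinations
])
for flow, label in zip(new_flows, model.predict(new_flows)):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"{flow} -> {status}")
```

In practice, the value of this approach is that the model learns what "normal" looks like for a given network rather than relying on fixed signatures, which is why it can surface attack patterns it has never seen before.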

However, the same AI that fortifies our defences is also being weaponized by cyber adversaries. Tools such as ChatGPT have been manipulated by malicious actors to create new malware, accelerate social engineering tactics and produce deceptive phishing emails that can pass even the most stringent scrutiny. Such advancements underscore the cyber arms race, where defence mechanisms are continually challenged by innovative offensive strategies.

With deepfake video and voice-cloning capabilities now within reach, we should expect AI-powered social engineering tactics to become even more sophisticated. As if the waters weren't murky enough, ChatGPT can also be used to spread misinformation, and the risk of 'hallucinations' – where AI chatbots fabricate details to answer user queries satisfactorily – makes it increasingly difficult to see tools like these purely as a force for good.


The democratization of AI

One of the things that has made ransomware such a prevalent threat to businesses around the world is the rise of Ransomware-as-a-Service (RaaS). This refers to organized groups that operate almost like legitimate businesses, creating ransomware tools and selling intelligence around exploits and vulnerabilities to the highest bidder. That means even less experienced threat actors, or those with limited resources, can still orchestrate sophisticated attacks against large targets – all they need to do is buy access to the right tools.

Just as RaaS amplified the capabilities of threat actors by democratizing malicious software and making it more accessible, AI-as-a-Service (AIaaS) is amplifying capabilities around artificial intelligence. The democratization of AI tools, such as ChatGPT and Google Bard, has made them accessible to a broader audience.

While these tools hold immense potential for business and society, they are also being exploited for malicious purposes. For instance, Russian-affiliated cybercriminals have already bypassed OpenAI's geo-fencing restrictions and used generative AI platforms to craft sophisticated phishing emails, keylogging malware and even basic ransomware code. In a white hat exercise, Check Point achieved similar results with Google's Bard AI, eventually convincing the platform, through a series of prompts, to help create keylogging and ransomware code – something any user with even a little knowledge could achieve.

Regulatory challenges

The evolving landscape of AI presents a host of regulatory challenges that underscore the importance of a well-rounded framework to govern its application. The ethical considerations at the forefront of these issues centre on fairness, accountability and transparency. AI systems – in particular generative AI – are susceptible to inherent biases that could perpetuate or even exacerbate existing human prejudices. For instance, decision-making AI in hiring or lending could unfairly favour certain demographics over others, necessitating regulatory oversight to ensure equitable practices.
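As an illustration of what such oversight might look for, the sketch below computes selection rates by group from a hypothetical hiring model's decision log and applies the common 'four-fifths' rule of thumb for disparate impact. The decision data and the 80% threshold are assumptions for illustration only, not a regulatory standard for any specific jurisdiction.

```python
# A minimal sketch of one fairness check an auditor might run on an
# AI screening tool's decisions: comparing selection rates across
# groups. Data and the 80% threshold are illustrative assumptions.
from collections import defaultdict

# Hypothetical (group, hired) log from an AI hiring tool.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, hires = defaultdict(int), defaultdict(int)
for group, hired in decisions:
    totals[group] += 1
    hires[group] += hired  # bool counts as 0 or 1

rates = {g: hires[g] / totals[g] for g in totals}
print("selection rates:", rates)

# Flag any group whose rate falls below 80% of the highest group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"{group} selection rate {rate:.0%} is below 80% of {best:.0%}; review for bias")
```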

As AI becomes integral in various sectors, the potential for adverse outcomes, be it in cybersecurity, privacy infringements or misinformation campaigns, escalates. Regulatory frameworks are needed to ensure that the development and deployment of AI technologies adhere to one collective standard. This is important for 'good' AI, but regulation isn’t something that nefarious actors typically worry about. This could further widen the gap in the race between defensive and offensive AI.

Securing the present and the future

Amidst the focus on AI's future potential, it is crucial not to overlook the basics. Fundamental security practices, such as patching vulnerabilities, running regular scans and shoring up endpoints, remain essential. While it is tempting to invest all efforts into the threats of the future, addressing present-day challenges is equally important.

As AI continues to evolve, its capabilities will undoubtedly expand, serving both defenders and attackers. While AI-driven solutions are enhancing defence mechanisms, cybercriminals are also harnessing AI to refine their tactics. The symbiotic relationship between AI and cybersecurity suggests that as one side advances, the other will adapt and innovate in response.
