Opinion

Could fear of AI pose the biggest risk of all to humanity? 

Keeping a close watch on the progress of AI. Image: Possessed Photography on Unsplash

Mika Lauhde
Head of Technology, International Committee of the Red Cross

This article is part of: Annual Meeting of the New Champions

A version of this article originally appeared in Huawei’s magazine, Transform.

  • Few technologies generate the fear factor induced by artificial intelligence (AI).
  • But excessive caution creates another risk: while cyber criminals move full speed ahead to use AI for malign purposes, everyone else proceeds carefully, waiting until every last lawmaker, sceptic and late adopter is fully convinced that AI should be trusted, rather than feared.
  • The key to solving these issues is international cooperation: AI’s security implications don’t respect national borders.

Few technologies generate the fear factor induced by artificial intelligence (AI). Ever since Alan Turing introduced the idea in 1948, people have wondered what would happen if machines outsmarted their creators and took charge of the planet. Even AI researchers themselves are uneasy. In March 2023, a group of researchers and technology leaders, including Elon Musk, signed an open letter calling for a six-month moratorium on all training of “AI systems more powerful than GPT-4.”

Legal protections could avert such a calamity, and the first AI regulations have been published and are awaiting public comment. But some of these draft rules set impossibly high standards. A proposed EU regulation on AI released in 2021, for example, requires that all data sets used for machine learning be “free of errors.”

Few data sets are. An MIT review of ten major data sets found an average error rate of 3.4%, which translates into tens of thousands of errors, including mislabeled images, text and audio.
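To put that percentage in context, here is a minimal back-of-the-envelope sketch in Python. The data set sizes below are hypothetical, chosen purely to show how quickly a 3.4% error rate adds up; they are not figures from the MIT study.

```python
# Back-of-the-envelope estimate of mislabeled examples at a 3.4% error rate.
# The data set sizes are hypothetical and used only to illustrate scale.
ERROR_RATE = 0.034

hypothetical_datasets = {
    "small image set": 50_000,
    "medium text corpus": 500_000,
    "large audio collection": 5_000_000,
}

for name, size in hypothetical_datasets.items():
    expected_errors = round(size * ERROR_RATE)
    print(f"{name}: ~{expected_errors:,} mislabeled examples out of {size:,}")
```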

Tech companies are already expressing concern about the EU regulations. Google was diplomatic, saying the company “is concerned that the opportunity cost of not using AI is not sufficiently reflected in policy debates.”

It’s understandable that legislators are cautious. But excessive caution creates another risk: while cyber criminals move full speed ahead to use AI for malign purposes, the well-intentioned players proceed carefully, waiting until every last lawmaker, sceptic and late adopter is fully convinced that AI should be trusted, rather than feared. If we take this two-track approach – bad actors moving quickly while good ones drag their feet – the results could be grim.

Hackers are already taking advantage of AI

Hackers are already using AI to create botnets, guess passwords, break CAPTCHA systems, make illegal robocalls and engage in other forms of cyber mischief. They don’t care about collateral damage and they don’t need to think about certification, testing or regulatory compliance. Unfortunately, this means that right now criminals are using AI in more advanced and innovative ways than law-abiding people are. That will likely cause some – perhaps many – to distrust AI even more than they do now.

But good actors outnumber bad ones and, over the long term, the odds are high that AI will be used in ways that benefit society. In the meantime, what can be done to build trust in AI?

The simple answer is that for now, we should not try to achieve full trust in AI. Instead, we need to build just enough trust to avoid over-regulating AI in a way that lets the criminals pull ahead. We can do that in several ways.

1. Increasing familiarity with AI

First, we must ensure that cyber security experts are familiar enough with AI to avoid unintended consequences. For example, in trying to use AI to solve a conventional security issue, one might inadvertently cause it to create a totally unforeseen and undesirable solution.

Again, criminals don’t have this issue. In fact, they are probing for loopholes in cyber defences against AI. For them, unintended consequences are a boon that could reveal hidden weaknesses to be exploited.

The need for AI-savvy cyber security people will compound an existing talent shortage: by some estimates, the world needs around three million more cyber security professionals than it currently has. But in addition to conventional skills – knowledge of network architecture, access control, encryption and so on – cyber security experts increasingly need the ability to work with AI to create trustworthy solutions.

2. Learning to defend against AI cyber attacks

Second, we will need to create the right IT environments to defend against AI-led attacks. AI is often considered to be a general-purpose technology – one with so many uses that it affects all aspects of society.

But AI will be less general purpose when operating within specific environments. For example, every corporate IT system is different. They have different password schemes, access controls and firewalls; their users behave differently. This means that, in a badly structured or poorly operated IT environment, AI will learn bad habits. It will generate false positives and false negatives. People will eventually conclude that AI can’t be trusted.

But in the right environment – one created using best practices, clear processes, good management and good tools – AI can be trained to spot anomalies and deviations from normal activity patterns that signal a security breach. AI will function like a well-trained guard dog that spots intruders and keeps them away. When it begins behaving that way, people will start to trust it.
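As a rough illustration of that “guard dog” in practice, the sketch below trains an unsupervised anomaly detector on synthetic login-activity data. The features, thresholds and the choice of scikit-learn’s IsolationForest are illustrative assumptions, not a description of any specific product or of the approach the author has in mind.

```python
# Minimal sketch: anomaly detection on login activity, assuming scikit-learn
# is available. The data here is synthetic; a real deployment would learn its
# baseline from logs produced by the organisation's own, well-managed systems.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" behaviour: [login hour, failed attempts, MB downloaded]
normal_activity = np.column_stack([
    rng.normal(loc=10, scale=2, size=1000),   # logins cluster around 10:00
    rng.poisson(lam=0.2, size=1000),          # failed attempts are rare
    rng.normal(loc=50, scale=15, size=1000),  # typical download volume
])

# Train only on activity believed to be normal, so deviations stand out.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

# Score new events: a 3 a.m. login with many failures and a huge download,
# versus an ordinary mid-morning session.
suspicious_event = np.array([[3, 12, 900]])
routine_event = np.array([[11, 0, 45]])

for label, event in [("suspicious", suspicious_event), ("routine", routine_event)]:
    flag = detector.predict(event)[0]  # -1 = anomaly, 1 = normal
    print(f"{label} event flagged as {'ANOMALY' if flag == -1 else 'normal'}")
```

The sketch also shows why the surrounding environment matters so much: the detector only knows what “normal” looks like from the data it was trained on, so a baseline drawn from a badly structured or poorly operated IT environment would teach it the same bad habits, producing exactly the false positives and false negatives described above.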

3. Narrowing the digital divide

Third, we must work even harder to narrow the digital divide. Most people don’t link the issue of digital inequity with cyber security, but the connection is real. AI can rapidly harness computers for botnets or attacks. In some developing countries, companies may lack the capabilities to create a better-structured, more robustly protected IT environment. That makes these countries a rich hunting ground for cyber criminals.

Just because a problem isn’t in your network doesn’t mean it’s not your problem. Vulnerabilities can migrate – another reason to help poorer parts of the world start benefiting from more advanced technology.

The key to solving these issues is international cooperation. Like COVID-19 and climate change, AI’s security implications don’t respect national borders.

To be sure, there are significant barriers to trust among nations at the moment. But if we cannot establish a degree of trust sufficient to collaborate in this vital area, we will inevitably start to view AI not as a trusted tool to be utilised, but as a threat to be feared. If that happens, cyber criminals will have an insuperable advantage – not just for now, but forever.

The views expressed in this article are those of the author alone and not the World Economic Forum.
