Why regulating AI can be surprisingly straightforward, when teamed with eternal vigilance

Rahul Tongia
Senior Fellow, Centre for Social and Economic Progress
  • AI has arrived and we need to regulate it.
  • But regulating AI is difficult because, like many technologies, AI is neither inherently good nor bad.
  • The nature and rapid evolution of AI demand that regulations continually evolve and focus on outcomes, which requires eternal vigilance.

Artificial Intelligence (AI) is unlikely to destroy humanity, but it is already creating societal upheavals across industry, government, education and the creative arts. 'Deepfakes' are being blamed for phoney political media. AI has arrived and we need to regulate it. But regulating AI is difficult because, like many technologies, AI is neither inherently good nor bad. It depends on how it is used.

The EU recently issued rules on AI, classifying its uses into tiers. These span unacceptably high-risk and banned uses down to minimal-risk and lightly regulated applications. While this is a useful framework and easy to conceptualize, determining risk levels isn’t easy. The best regulation focuses not on the AI tools themselves, but on their users and usage; we apply the same approach to regulating knives. Up-front regulations aren’t going to be enough. Instead, we need to continuously examine AI’s outcomes.

Many traditional regulations specify in advance what is allowed or disallowed. There are limits, for example, on how much of a chemical can be used in a particular process. These are easy for users to understand and comply with. But the nature of AI means we cannot rely on such regulation. AI isn’t just a black box technology, opaque to outsiders; its outcomes are unknown even to its creators. It relies on learning and produces a form of 'emergent behaviour' that cannot be known a priori. To make matters more complicated, the learning process itself is uncertain, creating inherent risks of bias based on the training data.

Addressing these challenges requires AI regulation built on at least three principles:

1. Traceability

First, there should be traceability of AI, covering its applications and interactions. Even when companies outsource or have a large supply chain, each layer must comply individually and in aggregate.

If someone was inappropriately denied a loan and that process involved AI, we should know what caused that outcome. Traceability is linked to testability – was that the right outcome?
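
To make this concrete, the sketch below shows, in Python, one way the audit trail that traceability implies could be recorded: every AI-assisted decision is logged with the model version, the inputs it saw and the outcome, so that a denied loan can later be reconstructed. This is a minimal illustration only; the function, field names and log location are hypothetical, not drawn from any standard or product.

```python
import json
import time
import uuid

AUDIT_LOG = "decisions.jsonl"  # hypothetical append-only audit trail

def record_decision(model_version, applicant_features, outcome, reason_codes):
    """Append one AI-assisted decision to the audit trail so it can be traced later."""
    entry = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,    # which model (and training snapshot) decided
        "inputs": applicant_features,      # what the model actually saw
        "outcome": outcome,                # e.g. "denied"
        "reason_codes": reason_codes,      # whatever the system can report about why
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["decision_id"]

# Example: a loan denial that an applicant or regulator could later trace
record_decision(
    model_version="credit-scorer-2024-03",
    applicant_features={"income": 42000, "requested_amount": 15000},
    outcome="denied",
    reason_codes=["debt_to_income_above_threshold"],
)
```

In a real system such a log would need to be tamper-evident and span the whole supply chain, with each outsourced layer contributing its own entries.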

2. Testability

Testability is a complex issue that requires perpetual vigilance: AI cannot simply be tested once against a pre-specified dataset, because companies could then game the system. And if we worry that AI will game the test system, the first mandate for AI should be honesty in reporting what it does. There are already instances of AI systems figuring out how to lie to increase their success towards objectives, without any programming or requests to lie from their creators or users.
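
As one hedged illustration of what continuous, outcome-focused testing could look like (as opposed to a one-off benchmark), the Python sketch below scans a stream of logged decisions and flags a large gap in approval rates between groups for human review. The group key, threshold and sample data are invented for the example.

```python
from collections import defaultdict

def approval_rate_by_group(decisions, group_key):
    """Compute approval rates per group from a stream of logged decisions."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for d in decisions:
        group = d["inputs"].get(group_key, "unknown")
        totals[group] += 1
        if d["outcome"] == "approved":
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

# Illustrative decisions; in practice this would run continuously over recent,
# real outcomes rather than once against a fixed test set.
decisions = [
    {"inputs": {"household_type": "single_parent"}, "outcome": "denied"},
    {"inputs": {"household_type": "single_parent"}, "outcome": "approved"},
    {"inputs": {"household_type": "two_parent"}, "outcome": "approved"},
    {"inputs": {"household_type": "two_parent"}, "outcome": "approved"},
]
rates = approval_rate_by_group(decisions, "household_type")
if max(rates.values()) - min(rates.values()) > 0.2:  # illustrative threshold
    print("Disparity detected, escalate for review:", rates)
```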

3. Liability

Third, we need a system of liability that disincentivizes cheating or simply treating penalties as a cost of doing business. If the financial consequences are tough enough, users will care about how AI is used and what outcomes it produces.

Aspects of these three regulatory tenets are partially covered elsewhere, but general-purpose regulatory frameworks aren’t geared for the speed, volume and opacity of AI. For example, we have intellectual property laws, but most AI systems are trained on the public web. While the web is ostensibly public, content owners may not want their material used to train AI systems; media companies are now suing AI companies over exactly this. As a subset of traceability, we may need regulations that extend to cover input data as well.

Regulating algorithms is a useful starting point for regulating AI, but only a few countries regulate algorithms at all. Canada does this better than most. It extends algorithm regulation to include AI and focuses on impacts, examining “Type of automation (full or partial); duration and reversibility of the decision; and areas impacted (e.g., rights, privacy and autonomy, health, economic interests, the environment).”

Focusing on impacts, i.e., after-the-fact regulation, isn’t new. Medicines are regulated for approval and are also heavily monitored afterwards. Similarly, algorithms are subject to scrutiny – or at least should be – for the outcomes they cause. For example, do banks deny loans, or welfare systems deny benefits, to single mothers in a discriminatory way?

Regulations won’t prevent all AI-based problems, so we must also design AI systems to handle failures. Citizens must have recourse for actions taken through AI, e.g., a loan denial. Even without AI, denied loan applicants are often not given sufficient reasons. Fixing such power asymmetries requires broader regulation, such as improved consumer and citizen rights, a challenge that goes beyond AI.

Critics fear stringent regulations will stifle a fledgling industry. Not only is this fear unfounded; a lack of regulation actually creates more uncertainty and open-ended risk. Liability doesn’t require inherently new regulations or burdens – we already impose liability for making threats, fraud, discrimination, copyright infringement and so on. Note that traceability and testability do not demand explainability, a holy grail for AI systems that would remain burdensome for the foreseeable future.

Companies using AI also want protection. A traditional framework has been to offer technology companies 'safe harbour' for specific allowed activities. An Internet service provider isn’t liable when an end-user breaks the law using its services, for example by using an online social platform to sell illegal copies of movies. AI technologists should enjoy such protections, but the deal is that once a violation is detected, the entity is obligated to act on it, much as platforms today must act on takedown notices for illegal content they host. The experience of the Electronic Frontier Foundation shows that takedown notices are often erroneous or abused, but the principle of takedown after the fact remains valuable.

Banks gave us 'too big to fail.' AI shouldn’t lead to 'too embedded to fix' (without shutting everything down). Companies using AI tools in their processes should demonstrably build them as modular components, so that a problematic component can be switched off or replaced after a violation without jeopardizing the broader system.
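
The Python sketch below is one illustrative way such modularity could look: each AI component sits behind a switch, and disabling it routes work to a fallback such as manual review instead of halting the whole system. The class, component names and fallback are hypothetical.

```python
class ModularPipeline:
    """Toy pipeline in which any AI component can be disabled and replaced
    by a fallback (e.g. manual review) without stopping the broader system."""

    def __init__(self):
        self.components = {}  # name -> [function, enabled flag]
        self.fallbacks = {}   # name -> fallback function

    def register(self, name, func, fallback):
        self.components[name] = [func, True]
        self.fallbacks[name] = fallback

    def disable(self, name):
        """Kill switch: turn off a single component after a violation is found."""
        self.components[name][1] = False

    def run(self, name, *args):
        func, enabled = self.components[name]
        return func(*args) if enabled else self.fallbacks[name](*args)

# Example: switch off an AI credit scorer and fall back to human review
pipeline = ModularPipeline()
pipeline.register("credit_scoring",
                  func=lambda application: "auto-decision",
                  fallback=lambda application: "queued for manual review")
print(pipeline.run("credit_scoring", {"income": 42000}))  # auto-decision
pipeline.disable("credit_scoring")
print(pipeline.run("credit_scoring", {"income": 42000}))  # queued for manual review
```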

Regulating AI faces other challenges similar to those of regulating the Internet, including issues of sovereignty, jurisdiction and domain. Transparency and harmonization of regulations can help to some extent. The fact that AI is evolving so rapidly is a challenge, but it also reminds us that our regulations must keep evolving too, and that we must accept eternal vigilance.
