Rethinking risk and compliance for the Age of AI

Integrated audit solutions are needed to address the risks associated with Artificial Intelligence (AI).

Kay Firth-Butterfield
Senior Research Fellow, University of Texas at Austin
Lofred Madzou
Project Lead, Artificial Intelligence and Machine Learning, World Economic Forum

  • Artificial Intelligence (AI) is rapidly changing risk management and compliance.
  • However, AI can create new types of risks for businesses, such as amplifying bias or leading to opaque decisions.
  • Integrated audit software solutions are needed to manage existing and potential risks.

Artificial Intelligence (AI) has become an imperative for companies across industries. Despite the hype, AI is creating business value and, as a result, is rapidly being adopted around the world. Last year, the McKinsey Global Survey reported “a nearly 25 percent year-over-year increase in the use of AI in standard business processes”. The transformative power of AI is already affecting a range of functions, including customer service, brand management, operations, people and culture, and more recently, risk management and compliance.

This latter development should not surprise anyone. At its core, risk management refers to a company’s ability to identify, monitor and mitigate potential risks, while compliance processes are meant to ensure that it operates within legal, internal and ethical boundaries. These are information-intensive activities: they require collecting, recording and, above all, processing significant amounts of data, and as such they are particularly well suited to deep learning, the dominant paradigm in AI.

Indeed, this statistical technique for classifying patterns – using neural networks with multiple layers – can be leveraged effectively to improve analytical capabilities in risk management and compliance.
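
To make this concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of model this refers to: a small neural network with multiple hidden layers that flags transactions for compliance review. Everything in it (the feature set, the synthetic data and the labelling rule) is a hypothetical stand-in, not a production risk model.

```python
# A minimal sketch, not a production system: a small multi-layer neural
# network that flags transactions for compliance review. All features,
# data and thresholds below are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical transaction features: amount, hour of day, country risk score
X = rng.normal(size=(5000, 3))
# Synthetic labels: "needs review" when a noisy combination of features is large
y = ((0.8 * X[:, 0] + 1.2 * X[:, 2]
      + rng.normal(scale=0.5, size=5000)) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers: a (very small) "deep" model in the sense used above
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The same pattern applies to credit scoring, transaction monitoring or any other classification-heavy risk and compliance task.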

AI systems create new types of risks

However, early experience shows that AI can create new types of risks for businesses. In hiring and credit, AI may amplify historical bias against women and applicants from minority backgrounds, while in healthcare it may lead to opaque decisions because of its black-box problem, to name just a few examples. These risks are amplified by the inherent complexity of deep learning models, which may contain hundreds of millions of parameters. This complexity also encourages companies to procure solutions from third-party vendors whose inner workings they know little about.

[Chart: Businesses are employing AI for a back-office boost. Image: Statista/Technalysis Research]

Consequently, executives face a fundamental challenge: how to maximise the benefits of AI for various business functions without creating intractable risk and compliance issues?

Previously, we called for the introduction of risk/benefit assessment frameworks to identify and mitigate risks in AI systems. Yet such frameworks are highly contextual and require deep interdisciplinary expertise and multistakeholder collaboration. Not every organisation can afford such talent or has the required processes in place. Further, it's perfectly reasonable to assume that a given company has deployed different AI solutions for various use cases, each requiring a distinct framework. Designing and keeping track of these frameworks could quickly become an impossible task even for the most experienced risk managers. In this situation, an intuitive response would be to proceed with caution and limit the use of AI to low-risk applications in order to avoid potential regulatory violations. But this can only be a temporary fix. In the long run, it would be a self-defeating strategy, considering the immense potential of AI for business growth.

So, what is a sensible alternative?

The need for Enterprise Audit Software for AI systems

We argue that maximising the benefits of AI solutions for businesses while mitigating their adverse risks could be partially achieved by using appropriate audit software. There is already a plethora of audit software for ensuring that companies’ processes meet legal and industry standards across industries, from finance to healthcare.

What’s needed now is an integrated audit solution that also covers the management of AI-related risks. Such a solution should have three core functions:

1. Documenting the behaviour of all AI solutions used by a company. This implies monitoring AI solutions and analysing their feature distributions to investigate statistical dependencies. Consider the case of an AI solution for hiring: one should have clear insight into which features (e.g. university attended, years of experience, gender, etc.) have the most impact on its recommendations; the sketch after this list illustrates one simple way to surface this.

2. Assessing compliance with a set of defined requirements. Once one understands the outcome of a model (i.e. why a hiring model is making a particular recommendation), it’s important to assess its compliance with specifications that could range from legislation (such as EU non-discrimination law) to internal organisational guidelines; the sketch below includes a simple illustrative check of this kind.

3. Enabling cross-department collaboration. This audit software should ease multistakeholder collaboration – especially between risk managers and the data scientists who oversee AI solutions – by providing each group with the appropriate information. For instance, risk managers need non-technical explanations of which requirements are or are not met, while data science teams may be more interested in the performance characteristics of the model. When a non-compliance issue is identified, the audit software should recommend appropriate interventions to the technical teams.
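
As an illustration of how these three functions could fit together, here is a minimal sketch in Python with scikit-learn. Everything in it is a hypothetical stand-in: the random-forest "hiring model", the synthetic applicant data, the feature names, and the four-fifths selection-rate check, a common statistical proxy for disparate impact used here purely for illustration, not a statement of what any particular law requires.

```python
# An illustrative sketch of the three audit functions, not a real product.
# The model, data, feature names and thresholds are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["years_experience", "university_tier", "gender"]  # hypothetical

# Synthetic applicant data; gender deliberately leaks into the historic labels
X = np.column_stack([
    rng.normal(5, 2, 2000),    # years of experience
    rng.integers(1, 4, 2000),  # university tier (1-3)
    rng.integers(0, 2, 2000),  # gender, encoded 0/1
])
y = ((X[:, 0] > 5) & ((X[:, 2] == 1) | (rng.random(2000) > 0.4))).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# 1. Document behaviour: which features drive the recommendations?
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(features, imp.importances_mean):
    print(f"Impact of {name}: {score:.3f}")

# 2. Assess compliance: compare selection rates across the protected
#    attribute (the 0.8 'four-fifths' threshold is an illustrative proxy)
preds = model.predict(X)  # on training data, for brevity
rate_0 = preds[X[:, 2] == 0].mean()
rate_1 = preds[X[:, 2] == 1].mean()
ratio = min(rate_0, rate_1) / max(rate_0, rate_1)
compliant = ratio >= 0.8

# 3. Enable collaboration: a plain-language summary for risk managers
status = "meets" if compliant else "FAILS"
action = "no action needed" if compliant else "escalate to the data science team"
print(f"Selection-rate ratio: {ratio:.2f}; {status} the 0.8 threshold ({action})")
```

A real audit tool would draw its thresholds and explanation text from the defined requirements in function 2, log results continuously rather than on demand, and route the same findings in different forms to risk managers and to data science teams.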

Developing such audit software for AI systems would go a long way towards addressing the risks associated with AI. Yet responsible AI cannot be fully automated. There is no universal list of requirements that one must meet to mitigate all existing and potential risks, because the context and industry domain will often determine which checks are needed. As a consequence, risk managers and their ability to exercise judgment will remain essential. The rise of AI will simply free them to focus on what they do best: engaging with colleagues across departments to design and execute a sound risk-management policy.
