Regulation could transform the AI industry. Here's how companies can prepare

Lofred Madzou
Project Lead, Artificial Intelligence and Machine Learning, World Economic Forum
Kay Firth-Butterfield
Senior Research Fellow, University of Texas at Austin

  • A recent white paper from the European Commission (EC) could lay the groundwork for a regulatory framework for AI.
  • Such regulation could have widespread impact on the AI industry.
  • Companies can prepare for it by building robust auditing and reporting capabilities.

Given artificial intelligence's rapid adoption in high-stakes domains, and growing public concern about potential AI misuse, regulation is around the corner. In fact, the European Commission is currently developing a regulatory framework that, much like the GDPR, could have a wide impact on any company looking to do business in the EU. In anticipation, companies should pre-emptively introduce a sound vetting process for AI products and services to minimise disruption.

A framework in development

In February 2020, the European Commission released a White Paper on Artificial Intelligence, widely seen by experts as a step toward a new regulatory framework. It formally outlined the Commission's vision to support the development of a trustworthy and innovative AI ecosystem in Europe. This vision is guided by two core objectives: 1) to promote a human-centric approach to AI, ensuring that AI primarily serves people and increases their well-being; and 2) to leverage the EU market to spread the EU's approach to AI regulation globally.

Crucially, the paper specified that any new regulatory regime would apply only within high-risk sectors (such as healthcare, transport, energy and parts of the public sector) wherein "significant risks can be expected to occur". Such regulations would also apply to specific AI applications, such as recruitment, facial recognition technology, and applications linked to workers' and consumers' rights. The advent of a formal AI regulatory framework within the EU will have a significant impact on companies that provide or procure AI systems, as well as on their relationships with regulators.

The white paper proposes a tiered set of AI compliance guidelines, including both mandatory and voluntary compliance schemes. Mandatory requirements would apply to companies that operate in the above-mentioned "high-risk" sectors and/or deploy the specific AI applications mentioned above. These relate to "training data; record-keeping; information to be provided; robustness and accuracy; human oversight; [and] specific requirements for facial recognition". In turn, those operating in lower-risk sectors and/or applications may have the option to take part in a voluntary labelling scheme to signal that their AI-enabled products and services are trustworthy. These guidelines have already gained traction within the EU: 14 member states recently endorsed a position paper advocating such compliance schemes, as well as a soft-law approach (e.g. self-regulation) to AI regulation more broadly.

As yet, areas of ambiguity regarding these classifications remain. For example, the designation of operators and applications as either high or low risk will be performed by the European Commission, and these designations may evolve to include sectors and applications not yet deemed "high-risk" but which are peripheral to higher-risk AI operations. It is therefore prudent for sectors that rely heavily on AI to come into regulatory compliance of their own accord.

To be sure, no regulation has yet been enacted in the EU and much is still being established. Crucial components for enacting AI oversight remain undeveloped or unclear: there is not yet a finalised list of specific compliance requirements, an enforcement mechanism overseeing the systematic adoption of these regulations, or a clear process for coordinating AI protocols between the European Commission and EU member states. It is certain, however, that the groundwork is being laid. Indeed, during her State of the Union address on September 16, 2020, the President of the European Commission, Ursula von der Leyen, stressed the need for "a set of rules that puts people at the centre" of AI and confirmed that "the Commission will propose a law to this effect next year".

How companies can prepare now

This commitment to regulatory action is fueled by the perception that various AI systems have been deployed without being properly vetted. It has become clear that the application of AI to an ever-widening range of life decisions in high-stakes domains (e.g. employment, medicine, education) must not be allowed to reproduce, reinforce or widen existing disparities. These are legitimate concerns, and ones that forward-looking companies can and should proactively address through a sound "vetting process". We argue that, for AI products and solutions, this process should be structured around four key steps:

1. Document the lineage of AI products/services, and their behaviour while in operation. Such documentation should include information about aims/purposes, training datasets, safety and fairness testing results, performance characteristics, and usage scenarios. In recent years this has become an industry-wide priority, and most major actors have created or adopted workable documentation models (from Google's Model Cards to IBM's FactSheets). Furthermore, the behaviour of AI systems should be actively monitored while in operation; here, companies can draw on the growing body of literature on tools that help understand and interpret the predictions made by machine learning models.
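By way of illustration, the sketch below shows what such a lineage record might look like in Python. The schema and field names are our own illustrative choices, not a standard drawn from the white paper or from Google's or IBM's formats.

```python
# A minimal, illustrative lineage record, loosely inspired by model cards
# and factsheets; the schema and field names are assumptions, not a
# standard from the EC white paper, Google or IBM.
from dataclasses import dataclass, field, asdict
from typing import Dict, List
import json

@dataclass
class ModelDocumentation:
    name: str
    purpose: str                         # aims/purposes of the system
    training_datasets: List[str]         # provenance of the training data
    fairness_tests: List[str]            # safety/fairness test results or refs
    performance: Dict[str, float]        # headline performance metrics
    intended_usage: List[str]            # supported usage scenarios
    out_of_scope: List[str] = field(default_factory=list)  # explicit non-uses

# Hypothetical example for an AI-powered recruitment tool
card = ModelDocumentation(
    name="cv-screening-model-v2",
    purpose="Rank incoming job applications for human review",
    training_datasets=["internal applications archive, 2018-2020"],
    fairness_tests=["selection-rate parity across gender (2021-03 audit)"],
    performance={"auc": 0.87},
    intended_usage=["pre-screening with human oversight"],
    out_of_scope=["fully automated rejection decisions"],
)

# Export as JSON so the record can be versioned and shared with auditors
print(json.dumps(asdict(card), indent=2))
```

Keeping such records in a machine-readable form means they can be versioned alongside the model itself and handed to auditors or regulators on demand.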

2. Develop an in-house interpretation of the proposed EC white paper requirements. The adopted regulatory framework is unlikely to fully explain how to put its requirements into practice, mostly because these will vary significantly depending on the specific use-context. For instance, if the law requires companies to use data sets that are sufficiently representative to prevent potential discrimination, it won't specify what the exact composition of a given data set should be. A cross-functional team will therefore need to develop an in-house interpretation of these requirements by asking relevant questions. For an AI-powered recruitment app, for example, one might ask: how does the demographic composition of the training datasets affect the system's recommendations?
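As a concrete starting point for such a question, the sketch below (in Python, using pandas) reports the demographic composition of a training set and flags thin groups. The "gender" column and the 10% floor are purely illustrative in-house choices, not thresholds drawn from the white paper.

```python
# An illustrative representativeness check on tabular training data;
# the 'gender' column and the 10% floor are hypothetical in-house
# choices, not thresholds taken from the EC white paper.
import pandas as pd

def composition_report(df: pd.DataFrame, column: str,
                       floor: float = 0.10) -> pd.Series:
    """Print each group's share of the training set and flag thin groups."""
    shares = df[column].value_counts(normalize=True)
    for group, share in shares.items():
        flag = "UNDER-REPRESENTED" if share < floor else "ok"
        print(f"{column}={group}: {share:.1%} ({flag})")
    return shares

# Toy training set: one group makes up well under 10% of the data
train = pd.DataFrame({"gender": ["F"] + ["M"] * 19})
composition_report(train, "gender")
```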

3. Assess compliance of AI products/services with the proposed EC white paper requirements. An independent cross-functional team, consisting of risk and compliance officers, product managers and data scientists, should perform an internal audit that assesses compliance with the EC requirements against the in-house interpretation defined in step 2.
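One automatable audit check of this kind might compare selection rates across demographic groups, as in the hypothetical sketch below. The four-fifths (80%) ratio used here is an illustrative in-house criterion borrowed from US employment practice, not a requirement of the white paper.

```python
# An illustrative audit check comparing selection rates across groups.
# The four-fifths (80%) ratio is a hypothetical in-house criterion
# (borrowed from US employment practice), not an EC requirement.
import pandas as pd

def selection_rate_audit(df: pd.DataFrame, group_col: str,
                         outcome_col: str, min_ratio: float = 0.8) -> bool:
    """Pass if every group's positive-outcome rate is within min_ratio
    of the best-treated group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    ratio = rates.min() / rates.max()
    print(f"Selection rates by group:\n{rates}\nworst/best ratio = {ratio:.2f}")
    return ratio >= min_ratio

# Toy audit log of the system's recommendations for two groups
results = pd.DataFrame({
    "group":       ["A"] * 5 + ["B"] * 5,
    "recommended": [1, 1, 1, 0, 1,  1, 0, 0, 1, 0],
})
print("PASS" if selection_rate_audit(results, "group", "recommended") else "FAIL")
```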

4. Report findings to relevant stakeholders. These audit reports should be available upon request to:

Regulators. To streamline this process, the reporting solutions companies use should include an application programming interface (API) that facilitates interaction with regulators (a minimal sketch of such an interface follows this list);

Demand-side (buyers). As part of their procurement process, companies should be able to access audit reports drafted by their technology providers, and should mandate a competent third-party actor to cross-check the information provided;

Supply-side (providers). When bidding for a contract, technology providers should include audit reports to situate their companies as leaders in responsible AI. These can also be used for liability purposes, to demonstrate that they have performed due diligence in implementing the right verification processes.

Consumer associations/civil society organizations. Through appropriate channels, consumers and citizens should be able to access reports from companies that deploy AI, so that they can choose to interact with trustworthy systems.
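As a sketch of the reporting API mentioned above, the snippet below exposes stored audit reports through a single read-only endpoint, using Flask. The route, storage layout and report format are all hypothetical; real deployments would add authentication and access control per stakeholder type.

```python
# A minimal sketch of a read-only audit-reporting API; the endpoint,
# storage layout and report format are hypothetical assumptions,
# not part of any EC proposal.
import json
import pathlib

from flask import Flask, abort, jsonify

app = Flask(__name__)
REPORTS_DIR = pathlib.Path("audit_reports")  # hypothetical report store

@app.route("/reports/<system_id>")
def get_report(system_id: str):
    """Return the latest audit report for a given AI system as JSON."""
    path = REPORTS_DIR / f"{system_id}.json"
    if not path.is_file():
        abort(404, description="No audit report found for this system")
    return jsonify(json.loads(path.read_text()))

if __name__ == "__main__":
    # Regulators, buyers or third-party auditors could query, e.g.:
    #   GET http://localhost:5000/reports/cv-screening-model-v2
    app.run(port=5000)
```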

One way to ensure a smooth transition into compliance with new regulations would be to designate personnel responsible for conformity. Risk and compliance managers are likely to play a key role in this process and, given appropriate audit software for AI systems, much of it could be automated.

Taking action

Companies likely to be affected should act promptly, because implementing such a process takes time. Failure to build these auditing and reporting capabilities may carry a significant cost: preventable harms, poorly defined lines of accountability, lower business resilience and, ultimately, a weakened organizational capability to fulfil the company's obligations.

Yet there are reasons for optimism among companies willing to embrace responsible AI before they have to. A growing body of evidence demonstrates that business leaders should embrace a form of technological social responsibility when it comes to AI, not only to protect their brand reputation but, more importantly, to build trust with their workers, consumers and society as a whole. This proactive attitude will make the difference between those adapting to win and those reacting to cope.
