A 5-step guide to scale responsible AI

"Deploying AI at scale will be problematic until companies engage in fundamental change to become ‘Responsible AI’-driven organizations'

"Deploying AI at scale will be problematic until companies engage in fundamental change to become ‘Responsible AI’-driven organizations' Image: shutterstock/metamorworks

Lofred Madzou
Project Lead, Artificial Intelligence and Machine Learning, World Economic Forum
Danny Lange
Vice-President, AI and Machine Learning, Unity Technologies

  • Deploying AI at scale will remain problematic until companies engage in a fundamental change to become ‘responsible AI’-driven organizations.
  • Companies should embrace this transformation, as trust in AI systems will be the defining factor in determining who is worth doing business with.
  • Here is a guide to help them achieve responsible AI at scale.

Machine Learning is a revolutionary technology that has started to fundamentally disrupt the way companies operate, so it is not surprising that businesses are rushing to embed it in their processes, as reported by the McKinsey & Company Global AI Survey. Yet only a tiny percentage of these companies have managed to deploy Artificial Intelligence (AI) at scale – a goal made harder to reach by regular reports of unethical uses of AI and growing public concern about its potential adverse impacts.

These difficulties are likely to persist until companies engage in a fundamental change to become ‘responsible AI’-driven organizations. In practice, this requires addressing the governance challenges associated with AI, and then designing and executing a sound strategy. To help companies deploy responsible AI at scale, we offer a five-step guide.

AI creates unique governance challenges

We live in a world filled with uncertainty, and the ability to build learning systems that can cope with this basic reality – by discovering patterns and relationships in data without being explicitly programmed – represents an immense opportunity.

However, there remain reasons for concern, because Machine Learning also creates unique governance challenges. First, these systems are heavily reliant on data, which incentivises companies to collect personal data on a massive scale, creating potential privacy issues in the process.

Second, collecting, cleaning and processing high-quality data is a costly and complex task. Consequently, business datasets often don’t accurately reflect the “real world”. Even when they do, they may simply replicate or exacerbate human bias and lead to discriminatory outcomes, because the feedback loop in an AI system tends to amplify any bias already embedded in its training data.
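
To make that feedback loop concrete, here is a toy, purely hypothetical Python sketch; the groups, rates and update rule are invented for illustration. A model is “retrained” each cycle on the outcomes of its own decisions, so a small initial disparity between two groups keeps growing:

```python
# Toy simulation of bias amplification through a retraining feedback loop.
# All numbers are illustrative, not drawn from any real system.
approval_rate = {"A": 0.52, "B": 0.48}  # small gap inherited from historical data

for cycle in range(1, 6):
    total = approval_rate["A"] + approval_rate["B"]
    for group in approval_rate:
        # Retraining nudges each group's rate towards its share of past
        # approvals, which favours whichever group was already ahead.
        share = approval_rate[group] / total
        approval_rate[group] = min(1.0, max(0.0, approval_rate[group] + (share - 0.5) * 0.3))
    print(f"cycle {cycle}: A={approval_rate['A']:.3f}, B={approval_rate['B']:.3f}")
```

No new prejudice is ever added, yet the initial four-point gap widens every cycle: the loop simply amplifies what was already in the data.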

Lastly, the power of massive computational systems with near-limitless storage eliminates the option of anonymity, as detailed personal behavioural information can be used to target individuals at an unprecedented level of granularity.

Risks that organizations consider relevant and are working to mitigate. Image: McKinsey & Company

More fundamentally, because AI-powered systems evolve with data and use, their behaviours are hard to anticipate; and when they misbehave, they are harder to debug and maintain. Unlike with classic software, one cannot simply correct the instructions given to the system to re-establish consistency with its intended functionality. Put simply, when something goes wrong, it is harder to determine why it happened and to implement corrective measures. In this context, an innocent objective such as maximising revenue could lead a highly capable learning system to develop subtle, hard-to-detect ways of deceiving users into additional spending, which raises legitimate ethical concerns.
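
A minimal sketch of how such misalignment can arise, under invented assumptions (the two actions, their payoffs and the “harm” values are hypothetical): a simple bandit-style learner that optimises revenue alone settles on the manipulative option, because the harm it causes never enters its objective.

```python
import random

random.seed(0)

# Hypothetical action space: revenue is visible to the learner, user harm is not.
ACTIONS = {"honest_offer": (1.0, 0.0), "dark_pattern_upsell": (1.5, 1.0)}

estimates = {action: 0.0 for action in ACTIONS}
counts = {action: 0 for action in ACTIONS}

for step in range(1000):
    # Epsilon-greedy choice driven by estimated revenue only.
    if random.random() < 0.1:
        action = random.choice(list(ACTIONS))
    else:
        action = max(estimates, key=estimates.get)
    revenue, harm = ACTIONS[action]
    reward = revenue + random.gauss(0, 0.1)  # harm never appears in the reward
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the learner converges on the upsell
```

Nothing here is “malicious”; the learner does exactly what it was asked to do, which is why such behaviour is hard to detect from the objective alone.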

5 steps to deploy responsible AI at scale

As a company, how do you successfully deploy AI at scale while mitigating the risks discussed above? You should engage in a fundamental organizational change to become a responsible AI-driven company. To help navigate that change, we offer the following process as a starting point:

1. Define what responsible AI means for your company: To make sure the entire organization is pushing in the same direction, executives must define what constitutes responsible use of AI for their company through a collaborative process, involving board members, executives and senior managers across departments. This can take the form of a set of principles that guide the design and use of AI services or products. The drafting process of such principles should be structured around a practical reflection on how AI can create value for the organization and what risks (e.g. brand reputation, employee safety, unfair outcomes for customers, increased polarisation in the public discourse) need to be mitigated along the way.

Major industry actors, including Google and Microsoft, have already moved in this direction and released their responsible AI principles. More companies should follow their example. Drafting such principles provides two main benefits. First, it gives a chance to everyone, particularly top management, to get educated about responsible AI. Second, it could form the basis of a responsible AI business strategy, detailing how your organization plans to build a pipeline of responsible AI services and products.

2. Build organizational capabilities: Designing and deploying trustworthy AI systems should be an organization-wide effort. It requires sound planning, cross-functional and coordinated execution, employee training, and significant investment in resources to drive the adoption of responsible AI practices. To pilot these activities, companies should build an internal “Centre of AI Excellence”, which would concentrate its efforts on two core functions: training and driving adoption.

Indeed, to do their jobs, employees need to be trained to understand how risks manifest in their day-to-day interactions with AI systems and, more importantly, how to identify, report and mitigate them. This is where even the most well-intentioned company can fall short if it focuses training exclusively on technical teams. The Centre should also operate in close collaboration with business “champions” in charge of overseeing the implementation of trustworthy AI solutions and products.

3. Facilitate cross-functional collaboration: Risks are highly contextual, meaning different business functions perceive them differently. While designing your strategy, make sure to gather complementary perspectives from various departments to develop a sound risk prioritisation scheme.

This will reduce top management “blind spots” and ensure stronger support from your workforce during the execution. Also, because learning systems tend to drive unanticipated behaviours, there will be risks that need to be addressed while the system is in operation. Here, close cross-functional collaboration, coordinated by risk and compliance officers, will be key for designing and implementing effective remedies.

4. Adopt more holistic performance metrics: In industry today, AI systems are usually assessed on their average performance on benchmark datasets. Yet AI practitioners and researchers acknowledge that this is a rather narrow approach to performance assessment and are actively investigating alternative methods.

We suggest a more holistic approach: companies should, on a regular basis, monitor and assess the behaviour of their systems against their responsible AI principles. From that perspective, a system is deemed performant if its behaviour is consistent with the organizational definition of what is considered a responsible AI-powered service or product.
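
As one illustration of what such monitoring could look like, consider the sketch below; it is a hypothetical example, not a prescribed method, and the policy thresholds, group names and data are invented. It complements a single average accuracy score with per-group checks derived from responsible AI principles:

```python
from collections import defaultdict

# Assumed policy thresholds, purely illustrative:
MIN_GROUP_ACCURACY = 0.90   # no user group may fall below 90% accuracy
MAX_ACCURACY_GAP = 0.05     # gap between best and worst group <= 5 points

def evaluate(records):
    """records: list of (group, prediction, label) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, prediction, label in records:
        total[group] += 1
        correct[group] += int(prediction == label)
    per_group = {g: correct[g] / total[g] for g in total}
    average = sum(correct.values()) / sum(total.values())
    gap = max(per_group.values()) - min(per_group.values())
    compliant = min(per_group.values()) >= MIN_GROUP_ACCURACY and gap <= MAX_ACCURACY_GAP
    return average, per_group, compliant

# A high average can hide a group the system serves badly.
records = [("A", 1, 1)] * 9 + [("B", 1, 0), ("B", 0, 0)]
print(evaluate(records))  # average ~0.91, but group B sits at 0.50 -> not compliant
```

In this sketch the headline accuracy is above 90%, yet the system fails its evaluation because one group of users sits at 50% – exactly the kind of behaviour an average-only metric would hide.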

5. Define clear lines of accountability: Having the right training and resources is not enough to implement lasting change if you fail to build the right lines of accountability. In other words, employees must have the right incentives and be recognized when they do the right thing. Unsurprisingly, that’s one of the biggest challenges that responsible AI practitioners report. Here, we suggest two remedies.

First, you should introduce a vetting process, either as part of your AI product pre-launch review or independent of it, to make sure that ethical considerations have been addressed. This vetting process should be underpinned by an organizational framework that maps the roles and responsibilities of each team involved, along with an escalation procedure to follow if there is persistent disagreement, for instance between product and privacy managers. Second, employees who have reported problematic use cases and taken the time to introduce corrective measures should be rewarded as part of their annual performance assessment.

The way forward

There is an increasing awareness among business leaders that a responsible approach to AI is needed to ensure the beneficial and trustworthy use of this transformative technology. However, they are unsure about how to do this at scale while creating value for their companies. We want to reassure them that this is possible, but it requires profound organizational change.

As with any important change in life, the first steps are usually the hardest, and we hope that our guide will help business leaders navigate that transition. We also encourage them to persevere because, in the long run, responsible AI-driven companies are likely to be the most competitive. Indeed, the need for trust in AI systems is not a passing trend; it is the defining factor that will determine who is worth doing business with.
