How to put AI ethics into practice: a 12-step guide

How can organizations best ensure they are deploying AI-powered systems responsibly? Image: Pete Linforth on Pixabay

Lofred Madzou
Project Lead, Artificial Intelligence and Machine Learning, World Economic Forum

  • As AI-powered services become ever more widely used, there remains a lack of consensus about how to ensure these systems are deployed responsibly.
  • To address this issue, we are calling for the introduction of risk/benefit assessment frameworks.
  • Here are 12 considerations for organizations aiming to design such frameworks.

Over the past decade, artificial intelligence (AI) has emerged as the software engine that drives the Fourth Industrial Revolution, a technological force that affects all disciplines, economies, and industries.

AI-powered services are already being applied to create more personalized shopping experiences, drive productivity and increase farming efficiency. This progress is remarkable in important respects, but it also creates unique challenges. Various studies have established that without proper oversight, AI may replicate or even exacerbate human bias and discrimination, or lead to other unintended consequences. This is particularly problematic when AI is deployed in high-stakes domains such as criminal justice, healthcare, banking or employment.

Policy-makers and industry actors are increasingly aware of both the opportunities and risks associated with AI. Yet there is a lack of consensus about the oversight processes that should be introduced to ensure the trustworthy deployment of AI systems – that is, to make sure that the behaviour of a given AI system is consistent with a set of specifications, which could range from legislation (such as EU non-discrimination law) to a set of organizational guidelines.

These difficulties stem largely from how deep learning systems operate: they classify patterns using neural networks that may contain hundreds of millions of parameters, which can produce opaque and non-intuitive decision-making processes. This makes detecting bugs or inconsistencies extremely difficult.

Ongoing efforts to identify and mitigate risks in AI systems

To address these challenges and unlock the benefits of AI for all, we call for the introduction of risk/benefit assessment frameworks to identify and mitigate risks in AI systems.

These frameworks could ensure the identification, monitoring and mitigation of the risks associated with specific AI systems by grounding them in assessment criteria and usage scenarios. This differs from prevailing industry practice, in which a system is trained on one dataset and then tested on another, revealing only its average-case performance.
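To make the contrast concrete, here is a minimal sketch, in Python with scikit-learn and entirely synthetic data, of the difference between reporting a single average-case score and grounding assessment in usage scenarios. The model, features and scenario labels are all hypothetical.

```python
# Sketch only: average-case testing vs. scenario-grounded assessment.
# Model, features and the "scenario" grouping are synthetic/hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic dataset: the 'scenario' label (e.g. a user segment or deployment
# context) is NOT a model feature, but it matters for assessment.
X = rng.normal(size=(1000, 5))
scenario = rng.integers(0, 3, size=1000)  # three usage scenarios
y = (X[:, 0] + 0.5 * scenario + rng.normal(size=1000) > 1).astype(int)

X_train, X_test = X[:800], X[800:]
y_train, y_test = y[:800], y[800:]
s_test = scenario[800:]

model = LogisticRegression().fit(X_train, y_train)
preds = model.predict(X_test)

# Prevailing practice: one average-case number.
print(f"average accuracy: {accuracy_score(y_test, preds):.2f}")

# Scenario-grounded assessment: the same predictions broken out per usage
# scenario, which can reveal pockets of poor performance the average hides.
for s in np.unique(s_test):
    mask = s_test == s
    print(f"scenario {s}: accuracy {accuracy_score(y_test[mask], preds[mask]):.2f}")
```

A single headline number can look acceptable even when one scenario performs far worse than the others; breaking results out per scenario is what surfaces that gap.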

Various efforts to design such frameworks are under way, both within governments and in industry. Last year, Singapore released its Model AI Governance Framework, which provides readily implementable guidance to private-sector organizations seeking to deploy AI responsibly. More recently, Google released an end-to-end framework for the internal audit of AI systems.

Key considerations for the design of risk/benefit assessment frameworks

Building on the existing literature, we have co-designed guidelines to support organizations interested in building auditable AI systems underpinned by sound risk/benefit assessment frameworks:

1. Justify the choice of introducing an AI-powered service

Before considering how to mitigate the risks associated with AI-powered services, organizations planning to deploy them should clearly lay out the objectives assigned to those services and how they are expected to benefit various stakeholders (such as end users, consumers, citizens and society at large).

2. Adopt a multistakeholder approach

Project teams should identify the internal and external stakeholders who should be involved in each project, and provide them with relevant information about the envisioned usage scenarios and the specification of the AI system under consideration.

3. Consider relevant regulations and build on existing best practices

When weighing the risks and benefits associated with a specific AI-powered solution, take account of the relevant regulations and include relevant human and civil rights in impact assessments.

4. Apply risk/benefit assessment frameworks across the lifecycle of AI-powered services

An important distinction between AI software and traditional software development is the learning aspect: the underlying model evolves with data and use. Any sensible risk assessment framework must therefore cover both build-time (design) and runtime (monitoring and management), and it should support assessment from a multistakeholder perspective at both stages.
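As an illustration of what carrying assessment into runtime might look like, here is a minimal sketch of a drift monitor that compares incoming data against build-time statistics. The z-score statistic and alert threshold are assumptions for illustration, not a prescribed method.

```python
# Illustrative sketch: extend assessment from build-time into runtime by
# monitoring incoming data for drift against the training distribution.
import numpy as np

class DriftMonitor:
    """Flags runtime batches whose feature means drift from build-time stats."""

    def __init__(self, train_data: np.ndarray, threshold: float = 3.0):
        self.mean = train_data.mean(axis=0)    # build-time reference statistics
        self.std = train_data.std(axis=0) + 1e-9
        self.threshold = threshold             # z-score alert level (assumed)

    def check(self, batch: np.ndarray) -> bool:
        # Standardize the batch mean against the training distribution; a large
        # z-score suggests the system now sees data it was not built for.
        z = np.abs((batch.mean(axis=0) - self.mean) / self.std)
        return bool((z > self.threshold).any())

rng = np.random.default_rng(1)
monitor = DriftMonitor(rng.normal(size=(5000, 4)))
print(monitor.check(rng.normal(size=(100, 4))))         # False: in-distribution
print(monitor.check(rng.normal(loc=5, size=(100, 4))))  # True: drifted inputs
```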

5. Adopt a user-centric and use case-based approach

To ensure that risk/benefit assessment frameworks are effectively actionable, they should be designed from the perspective of the project teams that will use them and around specific use cases.

6. Clearly lay out a risk prioritization scheme

Diverse groups of stakeholders have different perceptions of risks and benefits and different levels of tolerance. It is therefore essential to implement processes that explain how risks and benefits are prioritized and how competing interests are resolved.
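One simple way to make such a scheme explicit is to score each identified risk on stakeholder-agreed scales. The sketch below uses a likelihood-times-severity score; the scales and example entries are illustrative assumptions, not a recommended standard.

```python
# Sketch of one possible prioritization scheme: score each risk by
# likelihood x severity, as judged by stakeholders. Entries are hypothetical.
risks = [
    {"risk": "biased outcomes for a user subgroup", "likelihood": 4, "severity": 5},
    {"risk": "model drift after deployment",        "likelihood": 3, "severity": 3},
    {"risk": "service downtime",                    "likelihood": 2, "severity": 2},
]

# Highest-priority risks first; ties and scale choices would be resolved
# through the stakeholder process described above.
for r in sorted(risks, key=lambda r: r["likelihood"] * r["severity"], reverse=True):
    print(f"priority {r['likelihood'] * r['severity']:>2}: {r['risk']}")
```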

7. Define performance metrics

Project teams, in consultation with key stakeholders, should define clear metrics for assessing the AI-powered system’s fitness for its intended purpose. Such metrics should cover the system’s narrowly defined accuracy as well as other aspects of the system’s more broadly defined fitness for purpose (including factors such as regulatory compliance, user experience and adoption rates).
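As a sketch of what such a metrics definition might look like in code, the hypothetical report below combines narrow accuracy with a per-group recall gap and an adoption-rate signal. All names and values are assumptions a project team would replace with stakeholder-agreed ones.

```python
# Hedged sketch of a fitness report that goes beyond a single accuracy number.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

def fitness_report(y_true, y_pred, group, adoption_rate):
    # Per-group recall reveals disparities that aggregate accuracy can mask.
    recalls = {g: recall_score(y_true[group == g], y_pred[group == g])
               for g in np.unique(group)}
    return {
        "accuracy": accuracy_score(y_true, y_pred),                   # narrow accuracy
        "recall_gap": max(recalls.values()) - min(recalls.values()),  # group disparity
        "adoption_rate": adoption_rate,                               # broader fitness
    }

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # e.g. a user segment (hypothetical)
print(fitness_report(y_true, y_pred, group, adoption_rate=0.62))
```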

8. Define operational roles

Project teams should clearly define the roles of human agents in the deployment and operation of any AI-powered system. The definition should include a clear specification of the responsibilities of each agent required for the effective operation of the system, the competencies needed to fill each role, and the risks associated with a failure to fill these roles as intended.

9. Specify data requirements and flows

Project teams should specify the volumes and nature of data required for the effective training, testing and operation of any AI-powered system. They should map the data flows expected in the operation of the system (including data acquisition, processing, storage and final disposition) and identify provisions to maintain data security and integrity at each stage in the data lifecycle.
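A data-flow map of this kind can be kept machine-readable so it stays auditable alongside the system. The sketch below is one possible shape; the stages, fields and example entries are illustrative assumptions, not a standard schema.

```python
# Minimal sketch of a machine-readable data-flow specification (hypothetical).
from dataclasses import dataclass

@dataclass
class DataFlowStage:
    stage: str     # acquisition, processing, storage or disposition
    data: str      # what data moves through this stage
    volume: str    # expected scale
    security: str  # provisions for security and integrity

pipeline = [
    DataFlowStage("acquisition", "user transaction records", "~1M rows/day",
                  "TLS in transit; consent recorded at collection"),
    DataFlowStage("processing", "pseudonymized feature vectors", "~1M rows/day",
                  "access limited to the training-pipeline service account"),
    DataFlowStage("storage", "model training sets", "90-day retention",
                  "encrypted at rest; integrity checksums on each snapshot"),
    DataFlowStage("disposition", "expired training sets", "monthly purge",
                  "verified deletion logged for audit"),
]

for s in pipeline:
    print(f"{s.stage:12} | {s.data} | {s.volume} | {s.security}")
```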

10. Specify lines of accountability

Project teams should map lines of responsibility for outcomes (both intermediate and final) generated by any AI-powered system. Such a map should enable a third party to assess responsibility for any unexpected outcome of the system.
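Such a map can be as simple as an explicit lookup from outcomes to responsible parties, so that a third party can trace accountability mechanically. The outcomes and roles below are hypothetical.

```python
# Illustrative only: an explicit accountability map for system outcomes.
ACCOUNTABILITY_MAP = {
    "training_data_quality":   "data engineering lead",
    "model_prediction":        "ML engineering lead",
    "human_override_decision": "operations manager",
    "end_user_harm_review":    "ethics review board",
}

def responsible_party(outcome: str) -> str:
    # An unmapped outcome is itself a finding: a gap in the lines of accountability.
    return ACCOUNTABILITY_MAP.get(outcome, "UNMAPPED - accountability gap")

print(responsible_party("model_prediction"))
print(responsible_party("third_party_data_leak"))
```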

11. Support a culture of experimentation

Organizations should establish a right to experiment around AI-powered services to encourage calculated risk-taking before deployment. In practice, this requires setting up feasibility and validation studies, encouraging collaboration across departments and fields of expertise, and sharing knowledge and feedback via a dedicated platform.

12. Create educational resources

Organizations should build a repository of risk/benefit assessment frameworks, their performance records and their revised versions; this is key to developing strong organizational capability in deploying AI-powered services.

We hope these guidelines will help organizations interested in putting AI ethics into practice to ask the right questions, follow best practices, and identify and involve the right stakeholders in the process. We do not claim to offer the final word on this vital conversation, but rather to empower organizations on their journey to deploy AI responsibly.
