Fourth Industrial Revolution

5 ways companies can adopt ethical AI

Does your company have an AI ethics officer? Image: Alex Knight/Unsplash

Kay Firth-Butterfield
Senior Research Fellow, University of Texas at Austin

This article is part of: World Economic Forum Annual Meeting

In 2014, Stephen Hawking said that AI would be humankind’s best or last invention. Six years later, as we welcome 2020, companies are looking at how to use Artificial Intelligence (AI) in their business to stay competitive. The question they are facing is how to evaluate whether the AI products they use will do more harm than good.

Many public and private leaders worldwide are thinking about how to address these questions around safety, privacy, accountability, transparency and bias in algorithms. For example, the incoming EU Commissioner has said she wishes to see legislation to ensure the production of ethical AI in Europe within the next few months.

One of the major risks of AI is that poor data and poorly constructed algorithms can produce poor results, which mean bad outcomes for the businesses using them: internally, because the business does not get the insights it needs, and externally, because customers may feel that decisions exclude or marginalise them. For example, an algorithm could make a biased decision against awarding a loan or in hiring.

So, how can your company get ahead and avoid the pitfalls? Here are five lessons for the ethical use of AI.

Employ a Chief AI Ethics Officer: A Chief AI Ethics Officer can guide a company in its use of AI, particularly in more controversial areas such as facial recognition and the exploitation of personal data. In 2014, an AI start-up recruited me to this position, which I then had to define from scratch. In addition to alerting the Board to any concerns, I organised a Panel of Advisors and worked with product teams to ensure that ethical frameworks were embedded from a product's inception. In 2017, in IEEE Spectrum, I suggested that companies more widely should employ a Chief AI Ethics Officer. Similarly, Salesforce has appointed a Chief Technology Ethics Officer and an AI ethicist to work with its AI production team.

Educate your leaders: Both your executives and your boards need to be educated about the benefits and challenges of using AI. To help companies do this, the World Economic Forum will release a toolkit for board directors at our Annual Meeting in Davos-Klosters, Switzerland, 21-24 January. At the heart of the toolkit is an ethics module that enables directors to ask good questions of the C-suite. The creation of ethics advisory boards could also help companies navigate how to produce or sell AI.

Watch government regulation: Government regulation can affect how a company approaches its AI product offerings. For example, the Forum worked with the UK government to co-create ethical guidelines for government procurement of AI. These guidelines have also been piloted in the UAE and Bahrain, and the objective is to scale their use globally. Such guidelines have three key impacts: they allow a government to set out what it expects of ethical AI development in its jurisdiction without going through a lengthy regulatory process; they give companies a baseline understanding of the government's tolerance levels, enabling them to spend R&D money with confidence; and they increase the number of companies thinking about the ethical design, development and use of AI tools.

Identify risks: Companies should be aware of where particular AI risks arise, for example in the use of AI in human resources. Once they have identified those risks, it is useful to look at the developing standards and certifications in this area. To address this, the Forum is creating a toolkit for human resources departments to use when considering deploying AI solutions.

Look ahead: Companies should start thinking now about how they will retrain and educate employees as AI is introduced to work alongside them. They must also consider what new markets might be opened by the ethical design, development and use of AI, and plan for how they will check for changes in algorithms or design to ensure ethical approaches.

It is my hope that we can avoid a techlash, which would cause companies to miss out on the benefits to be derived from AI. We must proactively and carefully consider the governance mechanisms needed to ensure ethics is embedded in the deployment of AI tools. Building trust in technological solutions and tools must be our principal goal, so that humans and the planet can benefit from their use.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.


