
How to ensure fair AI throughout the supply chain

Fair AI is far better for society and for business than algorithms built on poor data.

Mark Brayan
Chief Executive Officer, Appen

This article is part of: The Davos Agenda
  • To make AI fair, we need to cut through the hype and get back to basics.
  • Basic fairness applies to all levels of the immense AI development lifecycle.
  • The fair treatment of the people who collect the data is being overlooked.

For some time now, there has been talk about how leaders developing AI applications need to build “fair AI”: systems that are unbiased and equitable and, ideally, improve the quality of life of everyone they touch.

However, most of the thinking around ethical AI has focused on models, explainability, technical teams and data governance.

But how about considering basic fairness across the AI development lifecycle? This lifecycle is immense. It spans the contractors who collect and annotate data, the companies and individuals who manage that data, the technology specialists who build the AI models, the go-to-market experts who build the AI applications, and the businesses and individuals who use AI-powered products and services.

Such an approach is the only way we can ensure that technology continues to make the world a better place.

What is fair AI?

When an AI product is deployed in the real world, it must work as expected, deliver equitable results for all intended beneficiaries under all circumstances, and not harm anyone physically, mentally or emotionally. That’s a tall order. And it starts with building an unbiased and comprehensive data set.

While this may seem obvious, it’s easy for development teams to push unbiased data to the bottom of the pile and focus instead on achieving results as quickly as possible. Doing so, however, creates a long-term obligation that developers can easily neglect.

When is AI unfair?

Unfairness in the form of bias can appear in AI in multiple ways.

Consider criminal risk assessment algorithms that use behavioural and demographic data to determine the risk of reoffending.

One recent study found one such algorithm to be racially biased. For example, an 18-year-old black woman was charged with petty theft, having stolen an $80 bicycle. Despite having only one prior juvenile misdemeanour, she was rated as having a higher risk of reoffending than a 41-year-old white man who was charged with a similar crime but had several prior offences.

In the years that followed, the woman committed no further offences, while the man is now serving a prison term for stealing thousands of dollars’ worth of electronics. Clearly, the algorithm was based on poor data.
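
How might a team catch this kind of problem before deployment? One common approach is to compare error rates across demographic groups. The minimal Python sketch below illustrates that idea only; the group labels, scores, threshold and records are all hypothetical and are not drawn from the study above.

```python
# Illustrative bias audit: how often is each group wrongly rated high-risk?
# All names, scores and the threshold are hypothetical assumptions.
from collections import defaultdict

def false_positive_rate_by_group(records, threshold=7):
    """records: dicts with 'group', 'risk_score' (1-10) and 'reoffended' (bool)."""
    flagged = defaultdict(int)    # rated high-risk but did not reoffend
    negatives = defaultdict(int)  # everyone who did not reoffend
    for r in records:
        if not r["reoffended"]:
            negatives[r["group"]] += 1
            if r["risk_score"] >= threshold:
                flagged[r["group"]] += 1
    return {g: flagged[g] / n for g, n in negatives.items() if n}

records = [
    {"group": "A", "risk_score": 8, "reoffended": False},
    {"group": "A", "risk_score": 9, "reoffended": False},
    {"group": "B", "risk_score": 2, "reoffended": False},
    {"group": "B", "risk_score": 9, "reoffended": True},
]
print(false_positive_rate_by_group(records))  # {'A': 1.0, 'B': 0.0}
```

A large gap between groups on a measure like this is one concrete signature of the unfairness described above: members of one group are wrongly rated high-risk far more often than members of another.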


Unfair AI also lives inside everyday technology. If a company sells cars equipped with speech recognition in multiple countries but trains the product using only native male speakers for each language, the system may struggle to understand women or anyone with a different accent.

This could lead to drivers being taken to the wrong destination. Worse still, it could cause distracted driving, leading to accidents. In addition to being unfair to some users, biased data can saddle solution providers with substandard products that can damage their reputation.
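
Teams can test for this failure mode directly by scoring the recogniser separately for each speaker group before the product ships. Below is a minimal Python sketch of that idea; the groups, transcripts and the simple word-error-rate metric are illustrative assumptions, not a description of any particular vendor’s test suite.

```python
# Illustrative per-group evaluation of a speech recogniser.
# Groups and transcripts are hypothetical; real tests would use
# held-out recordings covering every target market.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# test_set: (speaker group, reference transcript, model output)
test_set = [
    ("male_native",   "navigate to the airport", "navigate to the airport"),
    ("female_native", "navigate to the airport", "navigate to the port"),
    ("male_accented", "call my office",          "all my of is"),
]

by_group = {}
for group, ref, hyp in test_set:
    by_group.setdefault(group, []).append(word_error_rate(ref, hyp))
for group, rates in by_group.items():
    print(group, round(sum(rates) / len(rates), 2))
```

If the average error rate for one group comes out several times higher than for another, the training data is not representative enough to ship.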

Where humans triumph

Developing a comprehensive and unbiased dataset requires data diversity and breadth. This ensures the product is trained in every situation it is likely to encounter in real life, such as all of the accents, voice tones and languages that a car’s speech recognition system may encounter in its target markets.

Achieving this means working with people who resemble the entire customer profile to collect, annotate and validate the AI model training data.
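
In practice, that starts with measuring whether the collected data actually matches the customer profile. The Python sketch below shows one simple way to flag under-represented categories; the accent labels, target shares and tolerance are hypothetical illustrations.

```python
# Illustrative coverage audit: compare the make-up of a training set against
# the target customer profile. Categories and percentages are hypothetical.
from collections import Counter

def coverage_gaps(samples, target_share, tolerance=0.05):
    """samples: one category label per data point.
    target_share: expected fraction of each category among real users.
    Returns categories under-represented by more than `tolerance`."""
    counts = Counter(samples)
    total = len(samples)
    return {
        cat: {"target": share, "actual": counts.get(cat, 0) / total}
        for cat, share in target_share.items()
        if counts.get(cat, 0) / total < share - tolerance
    }

training_accents = ["US"] * 70 + ["UK"] * 20 + ["Indian"] * 8 + ["Irish"] * 2
target = {"US": 0.4, "UK": 0.25, "Indian": 0.25, "Irish": 0.1}
print(coverage_gaps(training_accents, target))
# Flags 'Indian' and 'Irish' as needing more collection before the
# dataset matches the customer base.
```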


It also means working with a diverse team on the model building itself. A computer-based annotation system can’t do it alone when it comes to interpreting complex situations and catching subtle errors that could have real-life consequences.

For example, a human annotating images or video for a self-driving car application can infer that a person with a certain posture walking between two cars may be pushing a buggy that will emerge into traffic before the person does.

Even the best computer-based annotation systems would struggle to make this interpretation. Similarly, a human reading a product review is much more likely to detect sarcasm than a machine is.

The people behind the data

Leaders committed to fair AI must include another important link in the AI development lifecycle when building global AI products or services: the millions of people who collect and label the data. Engaging these people in a fair and ethical way is mission-critical and should be part of every organisation’s responsibility charter.

Fair treatment means committing to fair pay and flexible working hours, including people from all backgrounds, respecting privacy and confidentiality, and working with people in a way that makes them feel heard and respected.

Leaders should also inspire their contractors by instilling pride in working on some of the most impactful technology in the global economy.

Why does fair AI matter, beyond the obvious?

Quite simply, it’s good for society, and it’s good for business. Product teams, for example, are inspired when they’re building products that have a positive impact on their market and the world. But what else do fair products do?

  • They work for the entire target customer base: Products based on representative data will work for all users without bias, and so sell better, reduce frustration and lower returns.
  • They are safer: Comprehensive, unbiased training data will lead to safer, better-quality products, reducing the potential for failure.
  • They build loyalty: Great products and a great reputation are keys to increased customer loyalty.
  • They protect the brand: Products that work as expected often reduce the risk of serious and lasting brand damage.

According to one MIT Sloan study, only about one in ten enterprises currently report obtaining “significant” financial benefits from AI.

In 2021, as boards focus on closing the gap between AI’s potential and its reality, they will increasingly prioritise the adoption of the principles of fair AI. They know it will ensure projects work as designed, deliver expected benefits, and contribute to a better society.

Applications relying on AI are also spreading into every industry, including the public sector. AI developers therefore have a clear responsibility to ensure their products are built on unbiased and comprehensive data sets that work for everyone.

Business and technology leaders should embrace fair AI as a core tenet to improve their businesses whilst helping society as a whole.
