What companies can do to build more inclusive AI

Creating AI that’s inclusive requires a full shift in mindset throughout the development process

Mark Brayan
Chief Executive Officer, Appen

  • As the applications and influence of artificial intelligence (AI) grow, an ethical approach to AI development is a must-have;
  • Companies may struggle with the specific actions required to build AI that’s inclusive, responsible and ethical;
  • These actions relating to data, the AI model and the post-deployment phase can create more inclusive AI.

Conversations around responsible artificial intelligence (AI) are heating up as the ethical implications of its use are increasingly felt in our daily lives and society. With AI influencing life-changing decisions around mortgage loans, healthcare, parole and more, an ethical approach to AI development isn’t just a nice-to-have – it’s a requirement.

In theory, companies want to produce AI that’s inclusive, responsible and ethical – both in service of their customers and to maintain their brand reputation; in practice, they often struggle with the specifics.

Creating AI that’s inclusive requires a shift in mindset throughout the entire development process, and it means weighing every crucial decision made along the way. At a minimum, it calls for a revamp of strategies around the data, the AI model (the programme representing the rules, numbers and any other algorithm-specific data structures required to make predictions for a specific task) and the post-deployment phase.

An action plan for inclusivity

It’s the responsibility of the people who build AI solutions to ensure that their AI is inclusive and provides a net-positive benefit to society. To accomplish this, there are several essential steps to take during the AI life cycle:

1. Data: At the data stage, organizations collect, clean, annotate, and validate data for their machine learning models. At this phase of the AI life cycle there’s maximum opportunity to incorporate an inclusive approach, as the data serves as the foundation of the model. Here are two factors to consider:

  • Data diversity: Data diversity refers to how wide of a net is cast by your dataset. Does it cover all of your potential use cases, including those that may be less common? Does it represent all of your end users and how they may interact with your model? Lack of coverage is one of the top reasons why companies fail in their endeavours to build more inclusive AI. It’s this step that requires some of the most careful consideration in the AI life cycle.
  • Data governance: Data governance needs to be approached with an inclusive lens. Develop guidelines and policies around data management that ensure you are preparing training data of the highest quality and coverage, and supporting the ethical objectives of your business.

Without representative data, you can’t hope to create an inclusive product. Spend the majority of your project time making sure you’ve got the data right, or partner with an external data provider who can ensure the data is representative of the group for whom your model is built.
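
To make the coverage check concrete, here is a minimal sketch in Python (using pandas) of the kind of audit a team might run before training. The user_group column, the 5% minimum share and the sample data are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of a dataset-coverage audit. The "user_group" column and
# the 5% minimum share are illustrative assumptions.
import pandas as pd

def audit_coverage(df: pd.DataFrame, group_col: str = "user_group",
                   min_share: float = 0.05) -> pd.DataFrame:
    """Report each group's share of the dataset and flag under-represented groups."""
    report = df[group_col].value_counts(normalize=True).rename("share").to_frame()
    report["under_represented"] = report["share"] < min_share
    return report

# Made-up example: group "c" makes up only 2% of the data and gets flagged.
data = pd.DataFrame({"user_group": ["a"] * 90 + ["b"] * 8 + ["c"] * 2})
print(audit_coverage(data))
```

An audit like this can’t prove a dataset is representative, but it makes coverage gaps visible early, when they are cheapest to fix.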

The most lucrative use cases of AI until 2025. Image: Statista

2. Model: While perhaps less weighty than the data stage, the building of the model offers critical opportunities to incorporate inclusive practices.

  • Model governance: A governance framework for model development refers to the policies that ensure your models are used ethically and that the right metrics are prioritized. If you optimize your model to be accurate 85% of the time, what does that mean for the other 15%? If the negative impact on end users is significant enough to represent an ethical failure, then you need to rethink your priorities. A governance plan will also indicate which projects should be deployed (and which shouldn’t).
  • Test with a diverse set of end users at scale: You can’t always anticipate how your end users will interact with your technology. To counteract the possibility of unpleasant surprises, incorporate a large, representative set of your end users early in the testing process. You may uncover gaps in model performance or new use cases that would be far more problematic to discover once the model is in production.

Strategizing and delivering on the right objectives (for instance, a KPI that measures bias) will take you a long way toward building a responsible end product.
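
As one way to picture such a KPI, the sketch below computes accuracy per user group and reports the gap between the best- and worst-served groups. This is a hedged illustration rather than a standard metric implementation; the labels, predictions and any acceptable-gap threshold are assumptions for demonstration.

```python
# Illustrative bias KPI: per-group accuracy plus the gap between the
# best- and worst-served groups. All labels and data are made up.
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Return {group: accuracy} and the max-min accuracy gap across groups."""
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

accuracy, gap = per_group_accuracy(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 1],
    groups=["a", "a", "a", "b", "b", "b"],
)
print(accuracy, gap)  # group "a": ~0.67, group "b": 1.0 -> gap of ~0.33
```

Tracking the gap alongside overall accuracy keeps the 85%/15% trade-off discussed above visible, rather than hidden inside an aggregate number.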

3. Post-deployment: Some teams feel their work is mostly done after they deploy their model, but the opposite is true: this is only the beginning of the model’s life cycle. Models need significant maintenance and retraining to stay at the same performance level, and this can’t be an afterthought: letting performance dip could have serious ethical implications in certain use cases. Incorporate the following best practices as part of your post-deployment infrastructure:

  • Continuous monitoring and retraining: Designate a team to monitor the model post-deployment. Ask questions regularly: is it working as intended? Does it perform equally well for each user group? Is it maintaining its performance? If the answer to any of these questions is no, it’s time to retrain your model on new data (a simple version of this check is sketched after this list).
  • Create open feedback loops: You want to remain consistently aware of how users are experiencing your technology. You won’t necessarily have full or immediate insight into problems unless you create a protocol for collecting feedback. Build infrastructure to capture user feedback and, most importantly, take immediate action on any feedback that proves valid.
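
As a rough sketch of what such monitoring might look like in code, the check below compares live per-group accuracy against a deployment-time baseline and flags groups whose performance has dipped. The baseline numbers, group names and tolerated drop are all hypothetical.

```python
# Hypothetical monitoring check: compare live per-group accuracy against a
# deployment-time baseline. Names and thresholds are assumptions.
BASELINE_ACCURACY = {"group_a": 0.91, "group_b": 0.89}  # measured at deployment
MAX_DROP = 0.03  # tolerated per-group degradation before retraining

def groups_needing_retraining(live_accuracy):
    """Return groups whose live accuracy fell more than MAX_DROP below baseline."""
    return [group for group, baseline in BASELINE_ACCURACY.items()
            if live_accuracy.get(group, 0.0) < baseline - MAX_DROP]

degraded = groups_needing_retraining({"group_a": 0.90, "group_b": 0.84})
if degraded:
    print(f"Retraining needed: performance dipped for {degraded}")  # ['group_b']
```

Feeding validated user feedback into the same retraining pipeline closes the loop between the two practices above.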

The above isn’t an exact blueprint, but offers a starting point for transitioning inclusive AI from a theoretical discussion to an action plan for your organization. If you approach AI creation with an inclusive lens, you’ll ideally find many additional steps to take throughout the development life cycle. It’s a mission-critical endeavour: for AI to work well, it needs to work well for everyone.

Don't miss any update on this topic

Create a free account and access your personalized content collection with our latest publications and analyses.

Sign up for free

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
