10 steps to educate your company on AI fairness

Does your company have an AI fairness charter? Image: Hitesh Choudhary/Unsplash

Nadjia Yousif
Managing Director and Partner; Co-lead of the Financial Institutions practice for the UK, the Netherlands and Belgium, Boston Consulting Group
Mark Minevich
Chief Digital Strategist, International Research Centre on Artificial Intelligence under the auspices of UNESCO; Senior Advisor, Boston Consulting Group

  • As companies increasingly apply artificial intelligence, they must address concerns about trust.
  • Here are 10 practical interventions for companies to employ to ensure AI fairness.
  • They include creating an AI fairness charter and implementing training and testing.

Data-driven technologies and artificial intelligence (AI) are powering our world today - from predicting where the next COVID-19 variant will arise, to helping us travel on the most efficient route. In many domains, the general public has a high degree of trust that the algorithms powering these experiences are being developed fairly.

However, this trust can be easily broken. For example, consider recruiting software that, due to unrepresentative training data, penalizes applications that contain the word “women”, or a credit-scoring system that misses real-world evidence of credit-worthiness, with the result that certain groups get lower credit limits or are denied loans.

The reality is that the technology is moving faster than the education and training on AI fairness. The people who train, develop, implement and market these data-driven experiences are often unaware of the second- or third-order implications of their hard work.

As part of the World Economic Forum's Global Future Council on Artificial Intelligence for Humanity, a collective of AI practitioners, researchers and corporate advisors, we propose 10 practical interventions for companies to employ to ensure AI fairness.

1. Assign responsibility for AI education.

Assign a chief AI ethics officer (CAIO) who, along with a cross-functional ethics board (including representatives from data science, regulatory, public relations, communications and HR), is responsible for designing and implementing AI education activities. The CAIO should also be the “ombudsman” for staff to reach out to in case of fairness concerns, as well as a spokesperson to non-technical staff. Ideally, this role should report directly to the CEO for visibility and implementation.

2. Define fairness for your organization.

Develop an AI fairness charter template and then ask all departments that are actively using AI to complete it in their context. This is particularly relevant for business line managers and product and service owners.

Example of an AI Fairness Charter Image: Global Future Council on Artificial Intelligence for Humanity

3. Ensure AI fairness along the supply chain.

Require suppliers whose procured products and services have AI built in – for instance, a recruiting agency that uses AI for candidate screening – to also complete an AI fairness charter and to adhere to company policies on AI fairness. This is particularly relevant for the procurement function and for suppliers.

4. Educate staff and stakeholders through training and a “learn by doing” approach.

Require mandatory training and certification for all employees on AI fairness principles - similar to how staff are required to sign up to codes of business conduct. For technical staff, provide training on how to build models that do not violate fairness principles. All training should draw on the insights from the AI fairness charters to directly address issues facing the company. Ensure the course content is regularly reviewed by the ethics board.
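
As one illustration of the kind of hands-on exercise such technical training might include, the sketch below fits a classifier under an approximate demographic-parity constraint using the open-source Fairlearn library referenced in step 6. The toy data, feature names and choice of constraint are assumptions for illustration only, not a prescribed method.

```python
# Illustrative sketch only: training a classifier under a fairness constraint
# with Fairlearn's reductions API. Data and constraint choice are assumptions.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Toy training data: two features, a binary label and a sensitive attribute.
X = pd.DataFrame({
    "feature_1": [0.2, 0.4, 0.6, 0.8, 0.1, 0.9, 0.3, 0.7],
    "feature_2": [1, 0, 1, 0, 1, 0, 1, 0],
})
y = pd.Series([0, 0, 1, 1, 0, 1, 0, 1])
sensitive = pd.Series(["group_a"] * 4 + ["group_b"] * 4)

# Wrap an ordinary estimator so that its predictions are pushed towards
# (approximate) demographic parity across the sensitive groups.
mitigator = ExponentiatedGradient(
    estimator=DecisionTreeClassifier(max_depth=2),
    constraints=DemographicParity(),
)
mitigator.fit(X, y, sensitive_features=sensitive)

print(mitigator.predict(X))  # predictions from the constrained model
```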

5. Create an HR AI fairness people plan.

An HR AI fairness plan should include a yearly review by HR to assess the diversity of the team working on data-driven technologies and AI, as well as an explicit review and upgrade of the competencies and skills advertised for key AI-relevant product development roles (such as product owner, data scientist and data engineer), to ensure awareness of fairness is part of the job description.

6. Test AI fairness before any tech launches.

Require departments and suppliers to run and internally publish fairness outcomes tests before any AI algorithm is allowed to go live. Once you know which groups may be unfairly treated due to data bias, simulate users from those groups and monitor the results. Product teams can use this to iterate and improve their product or service before it goes live. Open-source tools, such as Microsoft's Fairlearn, can help provide the analysis for a fairness outcomes test.
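
As a minimal sketch of what such a fairness outcomes test could look like with Fairlearn, the example below breaks model accuracy and selection rate down by group and computes a single headline disparity number. The data, group labels and choice of metrics are illustrative assumptions.

```python
# Illustrative sketch of a fairness outcomes test; data and metrics are assumptions.
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import (MetricFrame, selection_rate,
                               demographic_parity_difference)

# y_true: observed outcomes, y_pred: the model's decisions before go-live,
# sensitive: the group attribute identified as at risk of unfair treatment.
y_true = pd.Series([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = pd.Series([1, 0, 1, 0, 0, 1, 1, 0])
sensitive = pd.Series(["group_a", "group_a", "group_a", "group_b",
                       "group_b", "group_b", "group_b", "group_a"])

# Break the metrics down by group to surface disparities for the internal report.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(frame.by_group)      # per-group accuracy and selection rate
print(frame.difference())  # largest between-group gap for each metric

# A single headline number the ethics board could track before and after launch.
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=sensitive)
print(f"Demographic parity difference: {dpd:.2f}")
```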

7. Communicate your approach to AI fairness.

Set up fairness outcomes learning sessions with customer- and public-facing staff to go through the fairness outcomes tests for any new or updated product or service. This is particularly relevant for marketing and external communications, as well as customer service teams.

8. Dedicate a standing item in board meetings to AI fairness processes.

This discussion should cover reporting on progress and adherence, themes raised by the chief AI ethics officer and ethics board, and the results of high-priority fairness outcomes tests.

9. Make sure the education sticks.

Regularly track and report participation in and completion of the AI fairness activities, along with the demonstrated impact of managing fairness in terms of real business value. Provide these updates to department and line managers to share with staff, reinforcing that making AI platforms and software fairer also makes the organization more effective and productive.

10. Document everything.

Document your approach to AI fairness and communicate it in staff and supplier training and at high-profile events, including those for customers and investors.
