5 ways to avoid artificial intelligence bias with 'responsible AI'

The more we make responsible AI an expectation and a known commodity, the more likely we are to make it our reality.


Kay Firth-Butterfield
Senior Research Fellow, University of Texas at Austin
Miriam Vogel
President & CEO, Equal AI

  • AI is an immense opportunity for humankind and many organizations.
  • An AI governance framework grounded in clear principles helps ensure an organization uses AI responsibly and optimally.

Over the last few years, responsible AI has gone from a niche concept to a constant headline. Responsible, trustworthy AI is the subject of several documentaries, books, and conferences. The more we make responsible AI an expectation and a known commodity, the more likely we are to make it our reality, enabling us to flourish with AI that is accessible to more people. This is our shared goal through the Responsible AI Badge certification programme for senior executives. Similarly, in our podcast, In AI We Trust?, we identify the key ingredients of responsible AI from which all organizations can learn and benefit.

We work with academics, organizations, and leading thinkers to understand best practices and processes for responsible AI. Cathy O’Neil has offered insight on the hazards of false reliance on AI, while Renée Cummings, founder of Urban AI, has shared thoughts on the impact of AI on our civil rights. Keith Sonderling, the US Equal Employment Opportunity Commission (EEOC) Commissioner, has shared guidance for employers on building, buying, and employing AI in HR systems. Rep. Don Beyer (D-VA) shared his enthusiasm for AI and the opportunities it offers for policy development.


Best practices for achieving responsible AI governance

From these and other discussions, we've identified five best practices critical to achieving responsible AI governance:

1) Establish AI Principles

The management team must be aligned around an established set of AI principles. Leaders must meet to discuss how the organization uses AI and how that use aligns with its values. Too often, tech is relegated to the IT team. With AI we must reset this approach: work cultures must be upgraded to match new working models. In our recent podcast with Richard Benjamins, he shares how Telefonica ambitiously implemented AI principles that the company strives to achieve.

2) Establish a responsible AI governance framework

Organizational values and tech are linked and must be handled together. Management's agenda should include sessions in which innovation heads explain how they develop and use AI in key functions. HR heads can report where AI is in use. General counsel can flag potential liabilities, from crisis headlines to lawsuits. These discussions should lead to a framework that guides future AI use and shapes the organization's AI culture. This checklist offers guidance on how to do so.

An increasing number of frameworks can guide management’s efforts, including the BSA Framework, GAO Risk Framework, and the NIST AI Risk Management Framework. The EqualAI Framework provides five pillars for responsible AI governance and the World Economic Forum produced the AI C-Suite Toolkit for AI leadership.
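
As an illustration only, even a minimal, machine-readable version of such a framework can make ownership and review cadence explicit. The Python sketch below is hypothetical; its pillar names, owners, and cadences are assumptions, not drawn from any of the frameworks named above.

```python
# Hypothetical sketch: a minimal, machine-readable governance framework
# that makes ownership and review cadence explicit. Pillar names, owners,
# and cadences are illustrative assumptions.
GOVERNANCE_FRAMEWORK = {
    "principles":        {"owner": "CEO and board",          "review": "annually"},
    "risk_assessment":   {"owner": "General counsel",        "review": "per project"},
    "lifecycle_testing": {"owner": "Head of engineering",    "review": "per release"},
    "documentation":     {"owner": "Product leads",          "review": "per release"},
    "auditing":          {"owner": "Responsible AI officer", "review": "quarterly"},
}

def unowned_pillars(framework: dict) -> list:
    """Flag any pillar without a named owner -- the gap this exercise closes."""
    return [name for name, spec in framework.items() if not spec.get("owner")]

assert unowned_pillars(GOVERNANCE_FRAMEWORK) == []  # every pillar has an owner
```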

3) Operationalize your governance framework

As important as it is to establish the guiding principles of your AI game plan, those principles will only be as effective as their implementation. There are several important steps to operationalizing responsible AI.

  • Designate a point of contact and support team

Clarifying C-suite accountability will reduce corporate liability. An organization must establish who is responsible for its AI governance. This executive will coordinate and handle incoming AI questions and concerns, both internal and external. They will oversee the implementation of the framework and supporting systems, take responsibility for the accuracy and timeliness of responses, and ensure new challenges are identified and addressed.

To ensure accountability, evaluations of designated team members should include clear expectations of their role. The team supporting responsible AI implementation should be as diverse as possible, including technical experts, engineers, ethicists, designers, lawyers, product developers, sales, and consumer relations executives. Diversity makes AI safer, because a diverse team envisions the broadest range of risks. It also makes AI better, because the product is crafted for a very broad consumer base.


  • Communicate testing procedures for each stage of the AI lifecycle

An organization must establish good AI hygiene. This includes routine testing at various stages of AI development, deployment, and use. Testing will depend on the type of system and its level of risk. Similarly, given AI’s flexibility and adaptability, testing must adapt with it. A thorough checklist will be needed to ensure consistency.

Broadcast the testing plan far and wide so that it is adhered to throughout the organization. The organization is best served when the plan and its goals are clear and potential problems are being tested for and sorted out. This reinforces your development of a responsible AI culture, one that supports and communicates trust and responsibility.
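
To make the idea concrete, a single routine check in such a plan might look like the minimal Python sketch below. It is a hypothetical illustration: the predict callable, the "group" field, and the 10% gap threshold are assumptions, and real testing regimes will vary with system type and risk.

```python
# Hypothetical sketch of one routine "AI hygiene" check, repeated at each
# lifecycle stage: measure per-group success rates and escalate if the gap
# between groups exceeds a threshold. All names and thresholds are
# illustrative assumptions.
from typing import Callable, Dict, List

STAGES = ["development", "pre-deployment", "post-deployment"]

def success_rates(predict: Callable[[dict], bool],
                  cases: List[dict]) -> Dict[str, float]:
    """Success rate of the system per demographic group in the test cases."""
    totals: Dict[str, int] = {}
    hits: Dict[str, int] = {}
    for case in cases:
        group = case["group"]
        totals[group] = totals.get(group, 0) + 1
        if predict(case["features"]) == case["expected"]:
            hits[group] = hits.get(group, 0) + 1
    return {g: hits.get(g, 0) / n for g, n in totals.items()}

def run_stage_check(stage: str, predict: Callable, cases: List[dict],
                    max_gap: float = 0.10) -> dict:
    """One checklist entry: test, record the result, flag if deficient."""
    rates = success_rates(predict, cases)
    gap = max(rates.values()) - min(rates.values())
    result = {"stage": stage, "rates": rates, "gap": gap, "pass": gap <= max_gap}
    if not result["pass"]:
        print(f"[{stage}] success-rate gap {gap:.0%} exceeds {max_gap:.0%}; escalate")
    return result

# Usage: run the same check at every stage and keep the records together, e.g.
# results = [run_stage_check(s, model.predict, held_out_cases) for s in STAGES]
```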

  • Document relevant findings at the completion of each stage

To promote consistency, accountability, and transparency, organizations should document findings, such as the origins of, and gaps in, the training data used in AI systems. Documentation should include populations who are under- or overrepresented and for whom the AI system may have different success rates. This puts those testing for gaps and harms at later stages, as well as downstream users of the AI systems, on notice. The practice is akin to the nutritional labels and ingredient lists we rely on for food: artifacts such as AI model cards document what, and who, is part of a system's datasets.
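
One lightweight way to keep such documentation consistent is to give it a fixed schema. The following Python sketch is a hypothetical illustration of a model-card-style record; the field names, the example system, and its data gaps are invented, and real model-card schemas differ.

```python
# Hypothetical sketch of a model-card-style "nutrition label". Field names,
# the example model, and its data gaps are invented for illustration.
from dataclasses import dataclass, field, asdict
from typing import List
import json

@dataclass
class DatasetLabel:
    name: str
    origin: str                      # where the training data came from
    collection_period: str
    underrepresented: List[str] = field(default_factory=list)
    overrepresented: List[str] = field(default_factory=list)
    known_gaps: List[str] = field(default_factory=list)

@dataclass
class ModelCard:
    model_name: str
    lifecycle_stage: str             # development / deployment / in use
    intended_use: str
    datasets: List[DatasetLabel] = field(default_factory=list)

card = ModelCard(
    model_name="resume-screener-v2",
    lifecycle_stage="pre-deployment",
    intended_use="Rank applicants for recruiter review, not auto-rejection",
    datasets=[DatasetLabel(
        name="historical-hires-2015-2020",
        origin="internal applicant-tracking records",
        collection_period="2015-2020",
        underrepresented=["applicants over 55", "non-US degree holders"],
        known_gaps=["no disability status recorded"],
    )],
)
print(json.dumps(asdict(card), indent=2))  # publish this alongside the system
```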

  • Implement routine auditing for responsible AI governance

We may not like them, but we keep routine dentist visits. Likewise, corporations should establish a cadence and process for routine audits, in which AI systems are queried with hypothetical cases. There is a growing body of outside experts and resources to help. Such audits may soon be required, as under the New York City AI Hiring bill and the proposed Algorithmic Accountability Act. In addition, an audit establishes a record that may prove helpful in the event of a lawsuit or a query from a regulatory body down the road.
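
To illustrate, one common auditing technique is to probe the system with paired hypothetical cases that differ only in a protected attribute and keep a dated record of any divergence. The Python sketch below is hypothetical; the scoring stub, field names, and 0.05 tolerance are assumptions.

```python
# Hypothetical sketch of a routine audit: query the system with paired
# hypothetical cases that differ only in one protected attribute, and keep
# a dated record of any divergence. All names and values are illustrative.
from datetime import date
from typing import Callable, List, Tuple

def audit_counterfactual_pairs(score: Callable[[dict], float],
                               pairs: List[Tuple[dict, dict]],
                               tolerance: float = 0.05) -> dict:
    """Return a dated audit record listing pairs whose scores diverge."""
    findings = []
    for base, variant in pairs:
        delta = abs(score(base) - score(variant))
        if delta > tolerance:
            findings.append({"base": base, "variant": variant, "delta": delta})
    return {"date": date.today().isoformat(),
            "pairs_tested": len(pairs),
            "findings": findings}

# Example probe: identical applicants except for the name, a classic test
# for hiring systems. The constant scorer here is a stand-in for the model.
pairs = [({"name": "Emily", "years_exp": 6}, {"name": "Lakisha", "years_exp": 6})]
record = audit_counterfactual_pairs(lambda applicant: 0.8, pairs)
print(record)  # retain these records; they may matter to a regulator later
```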

4) Training

Provide an AI ethics course that explains principles and obligations consistently across your organization. There is also the option of enrolling senior executives in the Responsible AI Badge certification programme, which focuses on implementing best practices.

5) Questionnaires

Make the process simple, routine, and mandatory. A basic questionnaire can be used when a team is planning the design and/or launch of an AI-enabled product. If the scores are deficient, the issue can be brought to an AI ethics committee for troubleshooting. Such a committee can consist of a variety of experts with technical AI knowledge.
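
As a hypothetical illustration, even a few lines of Python can make such a questionnaire simple, routine, and enforceable; the questions and the passing bar below are invented examples.

```python
# Hypothetical sketch of the launch questionnaire: a few yes/no questions,
# a simple score, and automatic escalation to the AI ethics committee when
# the score is deficient. Questions and passing bar are invented examples.
QUESTIONS = [
    "Has the training data been documented, including known gaps?",
    "Has the system been tested for differing success rates across groups?",
    "Is there a named owner for post-launch monitoring?",
    "Can affected users contest or appeal the system's output?",
]

def review(answers: dict, passing_score: int = len(QUESTIONS)) -> str:
    """Score the answers; anything short of the bar goes to the committee."""
    score = sum(1 for q in QUESTIONS if answers.get(q, False))
    if score < passing_score:
        missing = [q for q in QUESTIONS if not answers.get(q, False)]
        return "Escalate to AI ethics committee: " + "; ".join(missing)
    return "Cleared for launch"

print(review({QUESTIONS[0]: True, QUESTIONS[1]: True}))  # missing items flagged
```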

We know that bias can seep in throughout the AI product lifecycle, and that AI systems are constantly learning new patterns. Because these are powerful tools that learn patterns and offer recommendations, organizations must stay on top of them. Sharing this understanding, through thoughtful frameworks and the other measures proposed here, will allow executives to establish the rules and safety protocols their organizations need.
