Emerging Technologies

How to build AI that society wants and needs

Implementing ethical and trustworthy AI requires different forms of staff training for organizations. Image: Rawpixel/Teddy Rawpixel

Michael McCarthy
PhD, Assistant Professor of Data Science, Utica College
Michael Byrd
PhD, Founding team, Head of Product, So.Ai

  • The relative ambiguity of regulatory oversight prevents artificial intelligence from directly reflecting society's needs;
  • While some organizations place fairness, ethics, accountability and transparency at the heart of AI development, others "build until it breaks";
  • How can people, processes and technology best contribute to a responsible, ethical culture for AI development?

While AI holds great promise for society, the speed of its advancement has far outpaced the ability of businesses and governments to monitor and assess the outcomes properly. The relative ambiguity of regulatory oversight throughout the world prevents AI from directly reflecting society’s needs. It is important that organizations take steps to enable and showcase trustworthiness to all stakeholders and build the reputation of the organization’s AI.

Trust in AI starts with stakeholders understanding that a particular organization uses AI responsibly. It is unlikely that external stakeholders will identify individual AI systems as “trustworthy” or “untrustworthy”; rather, an organization is considered trustworthy or not and AI systems inherit the organization’s reputation. In the same way that an organization’s human staff showcases the organization’s values, the behaviours of the AI system are both a manifestation of and an influence on the organization’s reputation.

Training staff is a familiar challenge for most organizations, but the challenges of implementing ethical and trustworthy AI are new and different. They are, however, well documented, with more than 90% of surveyed companies reporting some form of ethical issue. How can an organization do better?

The most lucrative use cases of AI until 2025. Image: Statista

In the current ambiguous regulatory environment, the complexity of AI systems drives organizations to seek new means to support their AI development. Most sophisticated technology companies indicate, at a bare minimum, that they place the fairness, ethics, accountability and transparency (FEAT) principles at the centre of AI development, and more than 170 organizations have published AI and data principles. However, many organizations tend to “build until something breaks” without considering who is affected by the break.

We believe that tangible progress toward the responsible use of technology can be made by taking advantage of people, process and technology to bridge these gaps. Here’s how:

1. People

Organization leaders – from managers to the C-suite and board of directors – often have little understanding of the assumptions and decisions made throughout the development and implementation of AI. Regardless of leaders’ understanding of the AI, they own the reputational and financial outcomes (both positive and negative). Data scientists, on the other hand, can find it challenging to take all the guidelines, regulations and organizational principles into account during the development process.

In both cases, the challenge is not generally a lack of understanding of what it means to be responsible; it is a lack of insight into what factors are important at different levels of the organization and how they affect outcomes. Finding ways (processes and tools) to bring disparate groups together can be highly effective.

2. Process

Stakeholders expect AI to be overtly fair, aspirationally ethical, accountable and as transparent and open as possible. To maximize external stakeholders’ trust, organizations must use a systematic and visible process to develop all responsible technology.

Guidelines for the development of responsible technology are plentiful and varied, and it can be a challenge to know which to adopt. An organization must simply pick one and use it: it is better to be systematic with a framework that is 80% aligned with your needs than to have no framework at all.

When using a framework for the development of responsible technology, there must be some form of artifact (documentation or report) that can be shared with stakeholders. The process is then systematic and visible. These artifacts serve both as proof that a process was implemented and followed, and as a mechanism to guide iteration and feedback.
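As a concrete illustration, such an artifact can be as simple as a structured self-assessment record, in the spirit of a model card or datasheet, generated as part of the development process. The sketch below is a minimal example; every field name and value here is hypothetical, and in practice the fields would come from whichever framework the organization adopts.

```python
import json
from datetime import date

# Hypothetical self-assessment artifact for one AI system.
# All names and values below are illustrative, not prescribed
# by any particular framework.
artifact = {
    "system_name": "loan-approval-model",            # illustrative system
    "framework": "FEAT self-assessment",              # adopted framework
    "date": date.today().isoformat(),                 # when the review ran
    "intended_use": "Pre-screening of loan applications",
    "known_limitations": ["Not validated for applicants under 21"],
    "fairness_checks": {"demographic_parity_difference": 0.04},
    "reviewers": ["data science lead", "risk officer"],
}

# Persist the artifact so it can be shared with stakeholders
# and revisited on each iteration of the system.
with open("ai_assessment_artifact.json", "w") as f:
    json.dump(artifact, f, indent=2)
```

Because the record is machine-readable, it can be versioned alongside the model, compared across releases, and surfaced to both internal reviewers and external stakeholders.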

3. Tools and technology

Organizations must develop and adopt tools that can efficiently and effectively integrate these processes into AI development and deployment. Such tools must facilitate the use of guidance frameworks, expedite analyses that assess potential problems such as bias, and generate useful artifacts for internal and external stakeholders.

AI developers need objective tools to help assess their work and allow organizational stakeholders to provide feedback. Organizational leaders need standardized assessments of the AI’s overall risk and financial metrics like return on investment (ROI). Additionally, tools that explain how AI systems make decisions can enable executives and boards of directors to have better oversight and understand risks and any financial impacts. These tools can provide a standardized structure and artifacts that benefit both data scientists and organizational leaders and help overcome the disconnect between the requirements of building AI and the requirements of achieving ethical outcomes.
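One widely used example of such an objective assessment is the demographic parity difference: the gap in positive-outcome rates between demographic groups. The sketch below is a minimal, self-contained version of this check; the group labels and model predictions are invented for illustration, not drawn from the article.

```python
def positive_rate(predictions, groups, group):
    """Share of positive (1) predictions among members of `group`."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates across all groups.

    A value near 0 means the model selects members of every group
    at a similar rate; larger values flag a potential fairness issue.
    """
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical model outputs for two demographic groups "a" and "b".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A number like this is exactly the kind of standardized, explainable metric that can flow into the shared artifacts described above, giving data scientists a development check and leaders a comparable risk signal.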


To enable trust in an organization’s AI, people, processes and tools must allow internal stakeholders to build responsible technology. Without these concrete steps, the “build until it breaks” mentality will likely cause a breach of trust that could derail the organization. Empowering an ethical and transparent process, in which colleagues and managers hold each other accountable and mutually enable desirable behaviour, is how ethical people foster an ethical culture for responsible AI development.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.


