
Building trust in AI means moving beyond black-box algorithms. Here's why

Algorithms are exerting ever-growing influence in the world. We must better understand and control their outputs.


Eugenio Zuccarelli
Global Shaper, Genoa Hub

  • Building trust in AI, particularly for high-risk settings like healthcare, means understanding how and why algorithms make certain decisions.
  • But in the age of modern AI, these algorithms are too often black boxes: not even their creators fully understand why they produce certain results.
  • To introduce AI into high-risk settings like healthcare, we must use algorithms that are interpretable and free from bias.

Algorithms are everywhere. From shaping social media feeds to influencing loan approvals, algorithms are an integral part of our lives, and have been for a long time. Their pervasive presence is not a recent phenomenon; algorithms, as sets of instructions, have played a key role in determining outcomes for centuries.

This is especially true in today’s era, when technologies such as Artificial Intelligence (AI) are weaving algorithms into the fabric of our daily lives.

Since algorithms shape our existence, it’s imperative to build trust in them. AI systems, for instance, used to be relatively simple algorithms: rudimentary pieces of software that learned from the data they were fed, mostly to identify patterns and make approximate estimates. These precursors to the more advanced AI we see today are often still used in simpler applications such as sales forecasting or risk scoring.

In the past decade, though, we’ve seen AI improve to levels we could have only dreamed of.


Black-box algorithms

State-of-the-art AI, such as the models behind OpenAI’s ChatGPT, has become extremely powerful, able to generate text, images and now videos that often rival the work of expert writers, designers and video-makers. That power came at a cost, though. Older models are less accurate, but, partly because of their simplicity, they are much more interpretable, and for that reason more trustworthy, than the so-called “black-box” models that power most current AI.

Modern AI’s complexity makes it highly effective in low-risk scenarios, but almost impossible to trust in high-risk ones, such as healthcare, criminal justice and finance. As OpenAI co-founder and CEO Sam Altman said during this year’s World Economic Forum Annual Meeting, “OpenAI’s type of AI is good at some things, but it’s not as good in life-or-death situations”.

Because of that, we won’t be able to rely on such models for insights into critical and sensitive areas until we can fully trust them. We need to rebuild trust not only in each other but also in machines. It’s not an easy task but, with enough cooperation, it is within reach.

Explainability: Moving beyond black-box algorithms

One of the first steps is developing and advocating for AI systems that are “interpretable”. With the current architectures that power Large Language Models such as those behind ChatGPT, we are not able to peek under the hood of these algorithms. We’re not able to understand the decision-making process that led the AI to a specific answer or recommendation. This poses significant issues in high-risk scenarios. In the healthcare sector, we need to understand why a model diagnosed a patient with a specific disease, and the considerations behind that diagnosis. The doctor reviewing the algorithm’s output must be able to verify that the model reached its conclusion in a way that follows a doctor’s approach.

Generally speaking, we need to be able to ask the AI “why” it came to a conclusion and understand the logical process it went through, especially in sensitive applications. By inspecting that flow, we might discover flaws in the reasoning, including the fabricated claims commonly known as “hallucinations”.
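To make this concrete, below is a minimal sketch of what an interpretable, “glass-box” model can look like in practice. It is written in Python with scikit-learn, and the patient features and risk labels are entirely hypothetical, invented only to illustrate how a clinician could read a model’s decision rules directly:

```python
# A minimal sketch of an interpretable ("glass-box") model, assuming
# scikit-learn is installed. The features and risk labels below are
# hypothetical, chosen only to show how a decision path can be audited.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical patient records: [age, BMI, fasting glucose (mg/dL)]
X = np.array([
    [35, 22.0,  85],
    [52, 31.5, 130],
    [47, 28.0, 110],
    [60, 33.2, 145],
    [29, 24.1,  90],
    [55, 30.0, 125],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = hypothetical "high-risk" label

# A shallow decision tree is a classic interpretable model: every
# prediction can be traced through a handful of human-readable rules.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Print the learned if-then rules so a reviewing doctor can check
# whether the model's reasoning resembles accepted clinical practice.
print(export_text(model, feature_names=["age", "bmi", "glucose"]))
```

The printed rules answer the “why” directly, in terms a reviewing doctor can check against clinical practice; today’s black-box models offer no equivalently direct view, which is why dedicated explainability work is needed for them.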

Bias removal: Building equity into algorithmic decisions

Model explainability is just one part of the quest for AI we can trust. Since AI systems are only as good as the data they use, it’s key that the data used to train such models is of the highest quality. This means using high-quality data sources that provide accurate and factual information; in real-life scenarios, however, such data is not always easy to obtain.

Most data and information that we, and the algorithms, digest in digital format are subject to various limitations. For this reason, AI systems tend to show bias not because of inherently malicious behaviour, but because they have been trained on historically biased information. This is potentially harmful in high-risk scenarios. For instance, a hypothetical AI system predicting a person’s likelihood of recidivism might discriminate based on ethnicity, gender or other protected attributes. Several techniques can be used to mitigate, and ideally remove, such bias. As users, we need to know that bias is a possibility, and we have to ask probing questions of algorithm-makers to make sure they have safeguards in place to avoid discrimination.
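As a simple illustration of what one such safeguard might look like, the sketch below runs one of the most basic fairness checks: comparing a model’s rate of “high-risk” predictions across two groups (often called demographic parity). The predictions and group labels are hypothetical; real audits use richer metrics and dedicated toolkits such as Fairlearn or AIF360:

```python
# A minimal sketch of a basic bias check across groups, using only
# NumPy. The predictions and protected-attribute labels below are
# hypothetical, invented purely for illustration.
import numpy as np

# Hypothetical model outputs (1 = flagged as "high recidivism risk")
# and a protected attribute (group "A" or "B") for ten people.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups      = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# Share of each group that the model flags as high risk.
rates = {}
for g in np.unique(groups):
    rates[g] = predictions[groups == g].mean()
    print(f"Group {g}: {rates[g]:.0%} flagged as high risk")

# A large gap between groups suggests the model may be reproducing
# historical bias and warrants deeper investigation before deployment.
gap = abs(rates["A"] - rates["B"])
print(f"Demographic parity gap: {gap:.0%}")
```

A gap on its own does not prove discrimination, but it is precisely the kind of probing question users and regulators should expect algorithm-makers to have asked, and answered, before deployment.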

Policy framework: A unified approach to AI

Finally, we need to establish a common framework to provide shared guidance on AI development. Since AI is in its infancy, we still lack a unified framework that outlines best practices and requirements for AI systems. This causes fragmentation: private and public players are left to pick their own guidelines, or none at all, allowing potentially harmful systems to reach the broader population.

As Professor Luciano Floridi, Director of the Digital Ethics Center at Yale University, said in a recent interview, “We should design a win-win society, where technology operators can have a positive impact on the markets, society, and the environment.” To do so, we need to ensure consistency and shared guidance by having governments and international organizations step in to create frameworks that are applicable across countries and industries, ensuring regulation without stifling innovation.

As users of such algorithms, we have a duty to advocate for equitable and responsible AI that we can trust. As organizations involved with the development and regulation of such systems, we need to foster public-private cooperation across different stakeholders to leverage a wide range of expertise.

Only by bringing experts, policymakers and users to the same table can we truly succeed in rebuilding trust in AI systems that play a critical role even in high-risk scenarios, ensuring cooperation between humans and machines towards a better society.


