Towards actionable governance on trustworthy AI

Organizations need to build a strong AI governance model that promotes trust and confidence. Image: WEF/iStockphoto

Matthias Muhlert
Chief Information Security Officer, HARIBO

  • In 2022, the global artificial intelligence (AI) market was estimated to be worth $120 billion; forecasts expect that to reach $1.6 trillion by 2030.
  • As a dual-use technology, AI brings profound economic and social benefits, as well as challenges that neither governments nor companies can ignore.
  • Organizations need to implement AI governance models that enable trust and confidence.

Today, artificial intelligence (AI) technologies are present in most spheres of our lives, from driverless cars and finance to healthcare solutions. In 2022, IBM found that 35% of companies were using AI in their operations, while a further 42% of businesses indicated that they were exploring its use.

Moreover, Gartner predicts that by 2025 more than 30% of new medications and materials will be discovered using generative AI techniques.

This rise in AI across industries is also reflected in its projected market growth. In 2022, the global AI market was estimated at $120 billion; forecasts expect its worth to reach almost $1.6 trillion by 2030.

While AI technologies hold numerous benefits that contribute to the advancement of society, as a dual-use technology AI can also cause profound economic and societal disruptions that affect our privacy, safety and wellbeing. As such, it cannot be ignored by governments, the private sector or individuals.

To amplify the benefits and mitigate the risks associated with AI, in recent years public and private organizations have developed governance frameworks intended to guide the development and implementation of ethical, fair and trustworthy AI.

Governance on trustworthy AI

To illustrate, the OECD Council Recommendation on Artificial Intelligence, adopted in May 2019, identifies five key principles for the responsible stewardship of trustworthy AI that should be implemented by governments, organizations and individuals. Today, these principles serve as a reference framework for international standards and national legislation on AI.

Other examples of international and regional efforts on AI governance include UNESCO's Recommendation on the Ethics of AI, the Council of Europe's proposal for a legal framework on artificial intelligence based on the Council's standards on human rights, democracy and the rule of law, as well as a series of documents that define the EU's approach to AI. The EU's "Artificial Intelligence Act", for instance, proposes a legal framework for trustworthy AI.

At the national level, governments have developed AI strategies, which, among other things, outline approaches for trusted and safe AI.

From the private sector's perspective, a number of businesses, including Google and Microsoft, have developed governance tools and principles for building AI systems responsibly, which also outline practical approaches for avoiding the technology's unintended consequences.

However, despite these different public and private governance efforts, research shows that a majority of organizations have yet to take actionable measures to ensure the development and use of trustworthy and responsible AI. For instance, more than 70% of organizations have not taken the necessary steps to eliminate bias in AI, while an estimated 52% are unable to safeguard data privacy throughout the entire AI lifecycle.

Moving from policy to practice

To move from policy to practice on trustworthy AI, organizations should devise and implement clear, structured programmes that put AI governance frameworks into operation. Among other things, these programmes should:

  • Clearly define the purpose of the AI system. That purpose determines which data the system should process in order to make decisions and predictions, and this data must be obtained legitimately and match the system's purpose.
  • Identify and train the right algorithm to achieve the desired results. The training process must be designed carefully to avoid introducing human biases and falling into ethical pitfalls. Moreover, a monitoring mechanism should be introduced to ensure that the algorithm is learning effectively (a minimal example of such a check follows this list).
  • Consider human interaction during the AI decision-making process. The faster a decision needs to be made, the less likely it is that human interaction will be designed into the process. However, it is crucial to ensure openness and transparency in the decision-making process.
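
To make the monitoring point above more tangible, the short Python sketch below illustrates one possible check of this kind: it compares positive-outcome rates across demographic groups on a set of predictions and flags the model for human review when the gap exceeds an agreed threshold. The function names, the metric (demographic parity) and the threshold are illustrative assumptions for this article, not a prescribed standard.

    # Hypothetical monitoring check: compare positive-outcome rates across groups
    # and flag the model for human review when the disparity exceeds a threshold.
    # All names, thresholds and data below are illustrative assumptions.
    from collections import defaultdict

    def demographic_parity_gap(predictions, groups):
        """Largest difference in positive-outcome rates between groups."""
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += int(pred == 1)
        rates = [positives[g] / totals[g] for g in totals]
        return max(rates) - min(rates)

    def review_needed(predictions, groups, threshold=0.10):
        """Return whether a human review is warranted, plus the measured gap."""
        gap = demographic_parity_gap(predictions, groups)
        return gap > threshold, gap

    # Example: loan-approval predictions for two illustrative groups.
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    flag, gap = review_needed(preds, groups)
    print(f"Parity gap: {gap:.2f}; human review needed: {flag}")

In practice, such a check would run alongside whatever fairness metrics and thresholds an organization's governance programme defines, feeding a documented review process rather than replacing it.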

As a technology that transcends industries and geographies, trustworthy AI cannot be developed and implemented by any single actor; it requires collaboration among researchers, developers, businesses and policymakers. Society at large shares the responsibility for ensuring that AI systems align with social values and promote the public good.
