Emerging Technologies

This is what the world's CEOs really think of AI

Image: A robot waiter serves customers at a cafe in Budapest, Hungary, January 2019. AI must be explainable to be trusted, according to CEOs. (REUTERS/Bernadett Szabo)

Anand Rao
Global Leader, Artificial Intelligence, PwC
Flavio Palaci
Global Data & Analytics Leader, PwC Australia
Wilson Chow
Technology, Media & Telecom Leader, PwC China

This article is part of: Annual Meeting of the New Champions

From automation to augmentation and beyond, artificial intelligence (AI) is already changing how business gets done and opening up virtually limitless potential to benefit the whole of society.

Businesses in every sector are eager to claim their piece of the potential AI windfall, which PwC research estimates could add $15.7 trillion to the global economy by 2030.

AI concerns differ between consumers and business leaders

Amid this promise, the rapid pace and scale of change driven by ever smarter AI systems and increasingly pervasive human-machine interaction are also giving rise to markedly different concerns among business leaders and consumers.

Consumers want the convenience of services tailored to their needs, together with the peace of mind of knowing that companies are not biased against them and that their government will protect them with laws regulating how their data can be used.

Businesses, meanwhile, are in many cases still exploring the opportunities AI presents and, at the same time, educating themselves about the possible risks.

PwC’s most recent Global CEO Survey found that the risks, as well as the opportunities, around AI are a key focus for top executives. Eighty-five percent of CEOs agree that AI will significantly change the way they do business in the next five years. But on questions of how far AI can be trusted, opinions are less clear-cut. Over three-quarters of CEOs think AI is “good for society,” but an even higher proportion – 84% – agree that AI-based decisions need to be explainable in order to be trusted.

Image: CEOs' views on the impact of AI on the world

PwC’s recently released Responsible AI Diagnostic has surveyed around 250 senior business executives to date. It found that respondents’ understanding and application of responsible and ethical AI practices varied significantly across organizations and, in most cases, remained immature. The findings also highlighted challenges in accessing the skills needed to adopt responsible AI practices.

There is a clear need for the C-suite to review the AI practices within their organizations, ask a series of key questions and, where necessary, tackle potential risks by strengthening any controls or processes found to be lacking or inadequate.

Responding to AI challenges across five key dimensions

Alongside these risks, the rise of AI brings inherent challenges around trust and accountability. To tackle them effectively, organizations need to understand the challenges and risks around AI and take them fully into account when designing and deploying AI systems.

PwC has developed a comprehensive Responsible AI Toolkit to help organizations address five key dimensions when designing and deploying responsible AI applications:

1. Governance

The foundation for Responsible AI is end-to-end enterprise governance. At its highest level, AI governance should enable an organization to answer critical questions about the results and decision-making of its AI applications, including:

● Who is accountable?

● How does AI align with the business strategy?

● What processes could be modified to improve the outputs?

● What controls need to be in place to track performance and pinpoint problems?

● Are the results consistent and reproducible?

The ability to answer such questions and respond to the outcomes of an AI system requires a more flexible and adaptable form of governance than many organizations may be accustomed to.
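
To make these questions operational, the answers can be captured in a machine-readable registry entry per model. The sketch below is a minimal, hypothetical example in Python; the field names and the example model are illustrative assumptions, not part of any PwC toolkit.

```python
# A minimal, hypothetical governance record -- field names and the
# example model are illustrative, not an established standard.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ModelGovernanceRecord:
    """One registry entry per deployed AI model, capturing the
    governance questions above in machine-checkable form."""
    model_name: str
    accountable_owner: str           # who is accountable?
    business_objective: str          # how does the model align with strategy?
    monitored_metrics: list = field(default_factory=list)  # performance controls
    random_seed: int = 0             # fixed seed supports reproducible results
    last_review: date = field(default_factory=date.today)


record = ModelGovernanceRecord(
    model_name="credit-risk-scorer-v3",
    accountable_owner="head-of-retail-credit",
    business_objective="reduce default rates without narrowing approvals",
    monitored_metrics=["AUC", "approval-rate-by-segment", "drift-score"],
)
print(record.accountable_owner, record.last_review)
```

A registry of this kind turns “Who is accountable?” into a lookup rather than an investigation.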

2. Ethics and regulation

Organizations should strive to develop, implement, and use AI solutions that are legally compliant, morally responsible, and ethically defensible. More than 70 documents describing ethical principles for AI have been published in recent years.

While the ethical principles themselves are hard to dispute, businesses find it challenging to translate them into concrete actions that shape day-to-day decisions. For principles to become actionable, they must be contextualized into specific guidelines for front-line staff, identifying and mitigating the ethical risks of each AI solution.
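
As a rough illustration of what contextualizing principles into guidelines might look like, the sketch below maps a few principles to concrete pre-deployment checks. The principles, checks, and helper function are invented for this example, not drawn from any published framework.

```python
# Illustrative only: principles and checks are invented for this sketch.
ETHICS_CHECKLIST = {
    "transparency": [
        "Data subjects are told an automated system is involved",
        "Explanations are available on request",
    ],
    "non-discrimination": [
        "Protected attributes reviewed for proxy variables",
        "Outcome rates compared across demographic groups",
    ],
    "accountability": [
        "A named owner signs off before each release",
    ],
}


def open_checks(completed: set) -> list:
    """Return every checklist item not yet completed for a solution."""
    return [item for items in ETHICS_CHECKLIST.values()
            for item in items if item not in completed]


remaining = open_checks(completed={"Explanations are available on request"})
print(f"{len(remaining)} checks outstanding before deployment")
```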

3. Interpretability and explainability

At some point, any business using AI will need to explain why a given model reached a given decision, and those explanations should be tailored to different stakeholders, including regulators, data scientists, business sponsors, and consumers.

A lack of interpretability in AI decisions is not only frustrating for end-users and customers; it can also expose an organization to operational, reputational, and financial risks. To instill trust in AI systems, organizations need to enable people to look “under the hood” at the underlying models, explore the data used to train them, expose the reasoning behind each decision, and provide coherent explanations to all stakeholders in a timely manner.
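
There are many ways to look “under the hood”; one widely used, model-agnostic technique is permutation importance, sketched below on scikit-learn’s bundled breast-cancer dataset. This is a minimal example of a single technique, not a complete explainability program.

```python
# A minimal sketch of one interpretability technique: permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {score:.3f}")
```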

4. Robustness and security

To be effective and reliable, AI systems need to be resilient, secure, and safe. In terms of resilience, next-generation AI systems are likely to be increasingly “self-aware”, with a built-in ability to detect and correct faults and inaccurate or unethical decisions.

In terms of security, the potentially catastrophic consequences of AI data or systems being compromised or “hijacked” make it imperative to build security into the AI development process from the start, covering all AI systems, data, and communications. If, for example, the image of a ‘Stop’ sign is manipulated so that it is misinterpreted as a ‘30mph’ speed-limit sign, the result for an autonomous vehicle could be disastrous.
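
The stop-sign scenario belongs to a well-studied class of attacks known as adversarial examples. The toy sketch below applies the fast gradient sign method (FGSM) to a hand-written logistic “classifier”, purely to show the mechanics; real attacks target deep vision models, and every number here is an illustrative assumption.

```python
# Toy FGSM demo on a hand-written logistic model; all values are made up.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # hypothetical trained weights
b = 0.1


def predict(x):
    """Probability that the input is a stop sign, under the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))


x = np.array([0.8, -0.5, 0.3])   # a "clean" input (made-up features)
y = 1.0                          # true label: it really is a stop sign

# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
grad_x = (predict(x) - y) * w

# FGSM: nudge every feature by eps in the direction that raises the loss.
eps = 0.8                        # deliberately large so the effect is visible
x_adv = x + eps * np.sign(grad_x)

print(f"clean input:       P(stop sign) = {predict(x):.2f}")     # ~0.92
print(f"adversarial input: P(stop sign) = {predict(x_adv):.2f}")  # ~0.32
```

A small, structured perturbation flips a confident “stop sign” into a near coin-flip, which is why security needs to be designed in rather than bolted on.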

Above all, though, AI systems must be safe for the people whose lives they affect, whether they are users of AI or the subjects of AI-enabled decisions.

5. Bias and fairness

Bias is often identified as one of the biggest risks associated with AI. But eliminating bias is a more complex task than it may appear.

The public discussion about bias in AI often assigns blame to the algorithm itself, without taking the human component into account. And people perceive bias through the subjective lens of fairness – a social construct with strong local nuances and many different and even conflicting definitions.

In fact, it is impossible for every decision to be fair to all parties, whether AI is involved or not. But it is possible to tune AI systems to mitigate bias and produce decisions that are as fair as possible, adhere to an organization’s corporate code of ethics, and comply with anti-discrimination regulations.
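
One common starting point for that tuning is simply measuring selection rates across groups. The sketch below computes a disparate impact ratio against the highest-rate group and flags anything under 0.8, the threshold of the US “four-fifths” rule; the groups and outcomes are made up for illustration.

```python
# Illustrative disparate impact check; groups and outcomes are made up.
from collections import defaultdict

decisions = [            # (group, approved?)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok

rates = {g: approved[g] / totals[g] for g in totals}
best = max(rates.values())

for g, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {g}: approval rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```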

One thing is certain: for any organization to realize the full promise of AI, it must ensure that its use of AI is responsible by addressing the dimensions described above.

Put simply, if AI isn’t responsible, it isn’t truly intelligent. Organizations must bear this in mind as they plan and build their AI-enabled future.


