Trust in AI: Why the right foundations will determine its future

AI needs ethical guardrails to ensure it's used responsibly.

Niraj Parihar
CEO of Insight & Data, Capgemini

This article is part of: World Economic Forum Annual Meeting
  • AI is revolutionizing the way we work, do business, innovate, consume, communicate and even socialize.
  • However, AI's rapid proliferation is creating fundamental concerns around trust – which could make or break the uptake of this technology.
  • Regulation, transparency and ethical frameworks are critical to ensure AI is used responsibly.

Artificial intelligence (AI) is redefining the relationship between humans and technology. It is influencing how we perceive, consume, and interact with information. It is reshaping the way we work, do business, innovate, and communicate. As a result, AI will likely remain at the forefront of global discourse for decades to come.

However, its growing influence brings with it a deeper concern that could make or break the technology: can we trust it?

According to a Capgemini Research Institute study published in June 2023, 73% of consumers worldwide say they trust content created by generative AI; 53% trust it for financial planning, 67% for medical diagnoses and advice, and 66% for personal relationships or life and career planning.

In addition, despite the potential for cyberattacks and deepfakes, consumer awareness of the risks is low; nearly half (49%) of consumers are not concerned about the prospect of generative AI being used to create fake news stories, and only 34% are concerned about phishing attacks.

Image: Capgemini Research Institute, Generative AI consumer survey, April 2023, N = 8,596.

In parallel, 70% of organizations are currently in "exploration mode" when it comes to generative AI innovation. They are right to be: the opportunities and advantages of deploying AI in a controlled B2B environment, such as AI-powered intelligent shipping and warehouse operations, are clear. A recent IDC survey found that business leaders are realizing an average return of $3.50 for every $1 invested in AI.

The level of consumer trust in AI is both encouraging and concerning. Encouraging, because AI platforms will only reach their potential if their output can be trusted. Concerning, because AI, in particular generative AI, can be exploited for unethical purposes, can leak data, carries a significant carbon footprint, and has been known to "hallucinate" and produce incorrect results.

With these concerns in mind, it's important to remember that Large Language Models (LLMs) may talk about causality, but they are not causal; they are statistical, built on vast quantities of textual data. The quality of that data determines their accuracy: if it is poor, LLMs can produce factually incorrect information; if it is taken without permission, its use can cause reputational damage or copyright infringement.
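To make this concrete, below is a minimal, purely illustrative sketch (in Python; all names are hypothetical) of the statistical principle at work: a toy bigram model that predicts the next word only from observed word frequencies. It has no notion of truth or causation, so if its training text contains a falsehood, it will happily reproduce it.

```python
import random
from collections import defaultdict

# Toy bigram "language model": next-word prediction from observed
# frequencies alone. Purely illustrative; real LLMs are vastly more
# sophisticated, but the statistical principle is the same.
def train_bigram(corpus: str) -> dict:
    counts = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)  # duplicates encode frequency
    return counts

def generate(model: dict, start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # frequency-weighted sampling
    return " ".join(out)

# If the training text contains a falsehood, the model reproduces it:
corpus = "the moon is made of cheese and the moon is bright"
model = train_bigram(corpus)
print(generate(model, "the"))  # e.g. "the moon is made of cheese and ..."
```

The model never asks whether the moon is made of cheese; it only asks which words tend to follow which. Data quality, not reasoning, drives the output.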

While AI is not a new technology, the launch of ChatGPT in late 2022 gave it its "iPhone moment". Like the iPhone, the technology behind it will only get better; however, trust in AI cannot be taken lightly or for granted. Early adopters may have a higher tolerance for risk, but enthusiasm can fade quickly. To build and maintain trust in responsible AI, policy-makers, innovators, and corporations must establish strong levers of control.

Key foundations for AI success

The global potential of AI depends on the free flow of high-quality data. Yet, divergent data policies around the world – on privacy, government access, security, harmful content, and intellectual property rights – present a formidable challenge to society, consumers, and governments in managing information, ensuring data security, protecting consumer rights, and preserving democratic values.

In digital health, for example, this challenge shows in the need for interoperability and governance: balancing access with strong privacy measures to build trust and unleash innovation for better patient outcomes, from prevention to treatment.

The bottom line is that data has value, and organizations have a responsibility to protect their data and that of their customers. Uncontrolled use of AI could result in data being unexpectedly available for public consumption, as we have already seen.
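As one illustration of that responsibility, organizations can scrub obvious personal data from text before it reaches a third-party AI service. The sketch below, in Python with illustrative regular expressions and names, shows the idea; it is not a substitute for proper data loss prevention tooling.

```python
import re

# Hedged sketch: redact obvious PII (emails, phone-like numbers) from a
# prompt before it is sent to an external AI service. These patterns are
# illustrative and far from exhaustive.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def redact(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
print(redact(prompt))
# -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```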

Recognizing the complexities surrounding AI, the European Union's AI Act (proposed in 2021 and recently approved) has prompted the first large-scale public debate on the uses and risks of this technology. Its exact technical application may still evolve in the coming weeks, but by classifying AI systems into risk categories and requiring transparency about data sources, it lays the groundwork for the responsible use of AI.

Given that any regulation can take time to implement, business and technology partners have an important role to play in providing education and enforcing safeguards that address concerns about the ethics and misuse of generative AI in practice.

In 2019, Capgemini released its Code of Ethics for AI to help guide organizations and ensure AI is always adopted in a way that delivers clear benefits within a trusted framework. It rests on seven key principles:

  • Have a carefully delimited impact: AI designed for human benefit, with a clearly defined purpose setting out what the solution will deliver and to whom. That purpose must be clear to all stakeholders.
  • Be sustainable: AI developed to benefit the environment and all current and future members of our ecosystem, human and non-human alike, and to address pressing challenges such as CO2 emissions, improved health, and sustainable food production. The AI system's own carbon footprint should be minimized through lean data management and guardrails on the training and consumption of AI models.
  • Be fair: AI produced by diverse teams using sound data, for unbiased outcomes and the inclusion of all individuals and population groups. Automated tools should detect biases in algorithms on an ongoing basis (a minimal sketch of such a check follows this list).
  • Be transparent and explainable: AI with outcomes that can be understood, traced, and audited, as appropriate.
  • Be controllable, with clear accountability: AI that enables humans to make more informed choices and retain the final say.
  • Be robust and safe: Always keep a human in the loop, with fallback plans where needed.
  • Be respectful of privacy and data protection: Consider data privacy and security from the design phase, so that data usage is secure and compliant with privacy regulations.
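On the fairness principle above, ongoing bias detection can be automated as a recurring metric check. Below is a minimal sketch, in Python, of one common measure, the demographic-parity gap; the data, group labels, and alert threshold are illustrative assumptions, not a description of any specific tool.

```python
from typing import Sequence

def demographic_parity_gap(predictions: Sequence[int],
                           groups: Sequence[str]) -> float:
    """Largest difference in positive-prediction rates across groups."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    return max(rates.values()) - min(rates.values())

# Illustrative loan-approval predictions (1 = approve) for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)
if gap > 0.2:  # alert threshold chosen for illustration only
    print(f"Bias alert: parity gap {gap:.2f} exceeds threshold")
```

Run on a schedule against production predictions, a check like this turns the fairness principle from a statement into a measurable control.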

These principles aim to create a culture for AI that is emphatically responsible and human-centered. Generative AI brings new risks, such as loss of intellectual property and misinformation, reaffirming the need for organizations to enforce their codes of ethics. Human ethical values should never be undermined by the uses made of AI technologies, and all corporations have a fundamental responsibility to build trust through responsible AI. In the absence of proper regulation, a "wait and watch" approach is not viable.

Time will tell how these elements unfold. What is clear is that AI technology is part of our reality today and will continue to shape the world we live in. If organizations are to unlock the full potential of AI to accelerate innovation, now is the time to work together to establish the foundations that will build lasting trust in this transformative technology.
