How to trust systems with AI inside

DNV’s new Recommended Practice for “Assurance of AI-enabled Systems” will help build trust for stakeholders impacted by the AI-enabled system. Image: DNV Group

Frank Børre Pedersen
Vice President and Programme Director Group Research and Development, DNV

This article is part of: World Economic Forum Annual Meeting

  • As self-learning systems take on more decisions that affect people and the environment, ensuring the safe use of AI is a priority.
  • The upcoming EU AI Act will likely set a de facto global standard for how to regulate the use of AI.
  • DNV’s Recommended Practice for “Assurance of AI-enabled Systems” aims to enable demonstration of conformity to the EU AI Act.

Artificial Intelligence (AI) technologies have vast potential to advance business, improve lives, and tackle global challenges. Because they learn from data, they enable self-learning systems that improve as new data becomes available. This ability to learn dynamically and improve performance offers advantages and opportunities that are not easy to achieve with conventional software programs.

Early uses of AI focused primarily on systems like chatbots and automated consumer recommendation systems, which were not considered to pose a high risk if the AI failed to make good decisions. However, as self-learning systems become increasingly responsible for decisions that may ultimately affect the safety of personnel, assets or the environment, the need to ensure the safe use of AI has become a priority.

AI also creates new ethical challenges. Being data-driven, AI may reinforce unethical behaviour or bias represented in the data. Unintended behaviours and uses, such as reward hacking and deepfakes, highlight the need to address the ethical and responsible use of AI.

Recent advances in Generative AI, such as ChatGPT, show almost human-like creativity, which increases the potential impact on stakeholders. In such a landscape, conventional quality assurance and testing regimes are not sufficient.

To ensure safe and trustworthy use of AI, several aspects need to be in place:

  • Relevant regulations and appropriate requirements for the AI;
  • Understanding systems with AI inside;
  • Appropriate assurance.

Relevant regulations for the AI

The EU AI Act will likely set a de facto global standard for how to regulate the use of AI. The main objective of the AI Act is to accelerate the development and uptake of AI while ensuring that its use is in line with EU values. Since AI has long been expected to be regulated at some point, the upcoming regulation removes regulatory uncertainty and creates a level playing field for the industry.

The AI Act contains specific requirements for both the developer and the user of the AI throughout its lifecycle, focusing on the ethical, legal, and technical aspects of its use. For high-risk AI applications there are additional requirements, including a conformity assessment. However, the AI Act gives no specific guidance on how such conformity should be demonstrated in practice.

DNV has therefore developed a new Recommended Practice (RP) entitled “Assurance of AI-enabled Systems”. The RP responds to the EU AI Act and to customers’ need for trust in systems with AI inside. It takes a broad and deep view of how trust in AI can be built: it covers the dimensions of trust addressed in the AI Act (technical and non-technical) and describes a practical way of building trust through evidence-based reasoning about claims. The RP will be sent out for external hearing in the first quarter of 2023.

Understanding systems with AI inside

A key part of the RP is taking a systems approach to understanding AI and the potential consequences its use can have in the real world. Systems with AI inside can be cyber-physical systems (ships, wind turbines, power grids, medical devices, etc.) or purely digital systems (digital twins, blockchains, etc.). In either case, we are ultimately interested in the real-world consequences for people, assets, and the environment. Notably, our AI RP is part of a broader suite of mechanisms that DNV uses to assure the building blocks of industrial digitalization applications, including RPs addressing the quality of data, algorithms, sensor systems and simulation models, and the robustness of cyber security.

Systems with AI inside are often complex, and their behaviour can be difficult to fully explain. The interplay between technical, ethical, and legal aspects of the technology creates additional complexity, and the dynamic nature of AI makes it challenging to keep our understanding valid while the AI is in use. It is therefore necessary to consider the entire system in which the AI operates to better understand the potential consequences (both positive and negative) of its use.

Appropriate assurance

The assurance approach described in our RP is generic and not tied to any specific legal jurisdiction or regulatory framework, but one aim has been to enable users of the RP to demonstrate conformity to the EU AI Act’s requirements.

Assurance means establishing justified confidence that requirements are met, in a way that builds trust for the stakeholders impacted by the AI-enabled system. The RP uses a claims-and-evidence-based approach: regulatory and other requirements are formulated as claims, which are then validated against the interests of the stakeholders involved, so that unjustified or irrelevant claims can be identified and rectified. Evidence is collected to verify whether each claim is true with sufficient confidence, and this forms the basis of the conformity assessment.
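
To illustrate the claims-and-evidence idea, here is a minimal sketch of how such an assurance case might be represented in code: a top-level claim is decomposed into sub-claims, each supported by evidence carrying an assessor’s confidence. The class names, the numeric confidence model and the threshold are illustrative assumptions for this sketch, not the formalism used in DNV’s RP.

```python
# Hypothetical sketch of a claims-and-evidence assurance case.
# The structure and confidence model are illustrative assumptions,
# not DNV's actual Recommended Practice formalism.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    description: str   # e.g. "bias audit report for the training data"
    confidence: float  # assessor's confidence that it supports the claim (0..1)

@dataclass
class Claim:
    statement: str
    evidence: List[Evidence] = field(default_factory=list)
    subclaims: List["Claim"] = field(default_factory=list)

def is_supported(claim: Claim, threshold: float = 0.9) -> bool:
    """A claim is supported if all its sub-claims are supported and,
    where direct evidence exists, at least one item meets the threshold."""
    if not claim.evidence and not claim.subclaims:
        return False  # a bare claim with no support fails
    subs_ok = all(is_supported(c, threshold) for c in claim.subclaims)
    direct_ok = not claim.evidence or any(
        e.confidence >= threshold for e in claim.evidence
    )
    return subs_ok and direct_ok

# Example: a regulatory requirement expressed as a claim with sub-claims.
case = Claim(
    "The AI-enabled system meets the applicable risk-management requirements",
    subclaims=[
        Claim("Training data is representative of the operating environment",
              evidence=[Evidence("data coverage analysis", 0.95)]),
        Claim("Model performance is monitored during operation",
              evidence=[Evidence("review of drift-monitoring setup", 0.70)]),
    ],
)
print(is_supported(case))  # False: the monitoring evidence is below threshold
```

In this toy model, one weakly supported sub-claim makes the whole case fail, mirroring how missing or weak evidence would block a conformity assessment until the evidence is strengthened or the claim is revised.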

The views expressed in this article are those of the author alone and not the World Economic Forum.
