
Why trust in automotive AI will determine the future of mobility


Cockpit of a driverless car driving on the highway viewed from rear seat. Image: Getty Images/iStockphoto

Stan Zurkiewicz
Chairman and Chief Executive Officer, DEKRA
Mark Thomä
Executive Vice-President, Strategy and Corporate Development, Dekra
This article is part of: World Economic Forum Annual Meeting
  • Technology has often advanced faster than society was ready to adopt it, and automotive AI is now at that transition point.
  • We already have advanced automotive AI systems, but can we deploy them in a way that earns society's trust and adoption?
  • Global leaders must prioritize digital trust as a prerequisite for AI-enabled mobility to become mainstream.

Technology has often advanced faster than society was ready to adopt it. Electricity only scaled once it earned public trust; aviation was technically feasible before passengers felt safe; and digital finance expanded only after security and transparency were firmly established.

Today, automotive AI is at that same transition point.

The question is no longer whether we can build advanced, intelligent automotive AI systems – we already do. The real question is whether we can deploy them in a way that society trusts enough to adopt them.

Timing now matters because confidence, not capability, will determine adoption.

Safety benefits of automotive AI have wider societal impact

Mobility systems worldwide are expected to support decarbonization, reduce congestion and improve road safety. According to the World Health Organization, 1.19 million people die each year in road traffic collisions. If AI-enabled mobility can prevent even a fraction of those incidents, the societal impact is significant.

Yet the digital trust gap around automotive AI remains wide. One UK study found that just one in six people said they would feel safest in an autonomous vehicle, compared with nearly two-thirds who said they would feel safest in one driven by a human.


If we do not close this trust gap now, we risk slowing deployment at a moment when cities, regulators and industries depend on smarter mobility to meet safety, sustainability and efficiency goals.

Unlike past eras, where a single type-approval signalled that the vehicle “is safe”, AI-enabled mobility requires a different approach. Trust cannot be certified once; it must be monitored and continuously demonstrated.

This challenge comes down to three essential questions.

1. Can we understand and explain what the system does?

For more than a century, vehicle safety was based on deterministic, mechanical systems. Failures were predictable. Behaviours could be modelled, validated and certified.

AI changes that foundation. Vehicles now rely on data-driven functions, neural networks, sensor fusion and continuous learning. These systems adapt and evolve in ways even experts cannot fully anticipate.

Explainability is becoming a safety requirement. To ensure reliability, traditional automotive safety must converge with:

  • AI transparency and model interpretability
  • Cybersecurity
  • Functional safety

This convergence is becoming known as digital trust – the ability to reliably demonstrate that intelligent systems behave correctly, securely and consistently.

Practically, this requires applying standards such as ISO/SAE 21434, ISO 26262, ISO/PAS 8800 and UNECE R155 from the earliest stages of development. It also requires secure hardware architectures for long-term protection, including technologies such as hardware security modules and Arm TrustZone.

These elements represent the baseline for public confidence.

2. Can we verify that the system remains trustworthy over time?

In traditional mobility, validation was a one-time event: a crash test, a type approval, a compliance check. But AI-enabled systems evolve after deployment through over-the-air updates, additional datasets, changing environments, or supplier modifications.

This shift requires continuous assurance, not static certification.

At the same time, the cybersecurity exposure of vehicles is increasing. In 2023 alone, the European Union Agency for Cybersecurity (ENISA) reported more than 200 automotive cybersecurity incidents, and the trend is accelerating.


To meet this challenge, safety and security evaluations must cover the entire supply chain – including electronic control units, components, cloud services and infrastructure – across original equipment manufacturers (OEMs), tier 1 and tier 2 suppliers.

Lifecycle monitoring frameworks, such as ISO/PAS 8800, represent a significant step: they introduce an operational safety lifecycle requiring monitoring, risk management and adaptation as systems evolve.

Trust in automotive AI will depend on long-term validation and continuous oversight.

3. Who is accountable when the system fails?

As AI-driven functions assume more decision-making, accountability shifts from individual drivers to a distributed ecosystem of developers, integrators, infrastructure operators and regulators.

This raises difficult questions. Who is accountable – the OEM, the software provider, the component manufacturer, the cloud operator, or is responsibility shared among all of them?

Traditional testing cannot answer these questions for systems that evolve over time.

Regulators are beginning to respond. The EU AI Act and Japan’s Road Safety and Ethics Protocol (RSEP) introduce requirements for algorithmic transparency, human oversight and real-time decision logging.

These mechanisms ensure that AI behaviour is explainable, traceable and auditable across the entire lifecycle.
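As an illustration of what traceable, auditable decision logging could look like in practice, here is a minimal sketch in Python. It is not any regulator's or vendor's actual scheme; the class, field names and example decisions are all hypothetical. The idea is an append-only log in which each entry is chained to the previous one by a SHA-256 hash, so altering any recorded decision after the fact breaks the chain and is detectable during an audit.

```python
import hashlib
import json
import time

class DecisionLog:
    """Hypothetical append-only log for AI driving decisions.

    Each entry embeds the hash of the previous entry, forming a
    tamper-evident chain: modifying any stored record invalidates
    every hash from that point on.
    """

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the first entry

    def record(self, decision, context):
        """Append one decision plus its context; return the entry's hash."""
        entry = {
            "timestamp": time.time(),
            "decision": decision,      # e.g. "brake", "lane_change_left"
            "context": context,        # sensor summary, model version, etc.
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry["hash"]

    def verify(self):
        """Re-walk the chain; return False if any entry was altered."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = DecisionLog()
log.record("brake", {"obstacle_distance_m": 12.4, "model_version": "v3.1"})
log.record("lane_keep", {"lane_offset_m": 0.1, "model_version": "v3.1"})
assert log.verify()
log.entries[0]["decision"] = "accelerate"  # simulated tampering
assert not log.verify()
```

A production system would pair such a log with secure hardware (for instance, signing each entry inside a hardware security module) so that the chain itself cannot be silently rewritten.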

Without clear accountability, trust will not scale. And without trust, autonomous mobility will not scale either.

Why digital trust is key to the future of AI-enabled mobility

We are living through the largest mobility transformation since the shift from horse to engine – this time from mechanical control to intelligent decision-making.

The question shaping the coming decade is not whether automotive AI works. It already does.

The question is whether society will trust it. Decision-makers in industry, government and civil society must therefore prioritize digital trust now – as a prerequisite for AI-enabled mobility to become mainstream.


The conversation has moved beyond technology optimism. It needs reliability. It needs transparency. It needs accountability.

Only then will intelligent mobility fulfil its potential.

Across the value chain, independent third parties play a critical role in embedding this trust – from AI validation and cybersecurity assurance to functional safety evaluation and lifecycle monitoring.

As one of the world’s leading independent testing, inspection and certification organizations, DEKRA supports this transition by providing independent assurance and lifecycle oversight, enabling responsible, safe and trusted deployment at scale.

Innovation builds capability, yet it is trust that builds adoption.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
