Artificial Intelligence

Physical AI: Lessons in building trust from self-driving cars

How do we build trust between people and intelligent systems operating in the physical world? Image: Unsplash/Valeria Nikitina

Dave Ferguson
Founder and Co-Chief Executive Officer, Nuro
This article is part of: World Economic Forum Annual Meeting
  • Physical AI will increasingly share our public space but we still need to tackle the challenge of trust.
  • No field has grappled with the question of trust as intensely (or as publicly) as autonomous vehicles, among the earliest and most scrutinized large-scale deployments of physical AI.
  • The World Economic Forum’s 2026 Annual Meeting in Davos, Switzerland, will underscore the growing importance of open, impartial dialogue under the theme, "The Spirit of Dialogue."

Physical AI is reshaping the world around us. In the coming decades, autonomous physical agents such as self-driving cars, delivery bots and home robots will increasingly share our streets, workplaces and living spaces.

Their potential is enormous: safer roads, cleaner air, more time for what matters, and more equitable access to services. But to realize this potential, in addition to myriad technical challenges, we must also solve the issue of trust: how do we build trust between people and intelligent systems operating in the physical world?

No field has grappled with this question as intensely (or as publicly) as autonomous vehicles. Self-driving cars are not only one of the earliest large-scale deployments of physical AI, they are also among the most scrutinized.

The lessons emerging from this dialogue among communities, policymakers, automakers and technologists offer a powerful blueprint for how society can guide all forms of physical AI toward responsible, human-centred progress.

What problem are autonomous vehicles solving?

Over a million people around the world are killed in motor vehicle accidents each year. Congestion drains billions of hours of productivity annually. And a billion people lack access to safe and reliable transportation, limiting their ability to work, learn or receive care.

Autonomous vehicles can meaningfully bend each of these curves. They’re designed to avoid distractions, follow the rules of the road and maintain a 360-degree view at all times. They don’t get tired, impaired or emotionally overloaded.

In industry trials and commercial deployments, autonomous systems have demonstrated the ability to save lives, give us back time and expand mobility options for people who can’t drive.

However, none of these benefits can be fully realized without public trust. And trust, especially for intelligent physical systems operating in public spaces, is earned through dialogue.

How to build trust in autonomous vehicles through dialogue

To lay the groundwork for safe and responsible deployment, two forms of open dialogue are necessary.

1. Dialogue among humans

Every new technology operating in public space must align with social expectations and norms. For self-driving vehicles, this means early, ongoing engagement between technology developers, automakers, regulators, emergency responders, disability rights advocates and community groups. The goal is not just compliance but mutual understanding.

Many cities have already established joint working groups where technology developers share safety data, simulation results and learnings from real-world driving. Policymakers, in turn, express their priorities – whether that’s protecting vulnerable road users, reducing emissions, improving transit connections or ensuring equitable service coverage.

This dialogue also requires transparency around safety. The most credible safety frameworks today rigorously combine scenario-based testing, accountability against industry standards, and independent validation. This level of clarity allows regulators and communities to evaluate claims and build trust.

2. Dialogue between people and AI systems

The second form of dialogue is not among institutions but between people and the technology itself.

Thanks to recent advances in explainable and interpretable AI, we have reached a new level of communication with AI systems. Using vision-language models and natural-language interfaces, autonomous systems can increasingly describe, in plain language, what they are doing and why.

For example, a passenger could ask: “Why did we slow down here?” and the vehicle could respond: “A cyclist ahead signalled a lane change and I’m giving them extra space.”
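The question-and-answer exchange above can be sketched in code. Production systems would route the rider's question through a vision-language model grounded in the vehicle's perception stream; the minimal sketch below substitutes a hypothetical template lookup keyed to the planner's rationale, purely to illustrate the interaction pattern. All names (`DrivingEvent`, `explain`, the rationale codes) are illustrative, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class DrivingEvent:
    """One planner decision, with the actor and rationale behind it."""
    action: str   # e.g. "slow_down"
    actor: str    # road user that triggered the action, e.g. "a cyclist"
    reason: str   # hypothetical internal rationale code from the planner

# Hypothetical rider-facing templates keyed by rationale code; a real system
# would generate these with a vision-language model instead.
EXPLANATIONS = {
    "yield_space": "{actor} ahead signalled a lane change and I'm giving them extra space.",
    "crosswalk": "{actor} is waiting at a crosswalk, so I'm stopping to let them cross.",
}

def explain(event: DrivingEvent) -> str:
    """Answer a rider's 'Why did we slow down here?' for the latest event."""
    template = EXPLANATIONS.get(event.reason, "I adjusted my driving for safety.")
    return template.format(actor=event.actor.capitalize())

print(explain(DrivingEvent("slow_down", "a cyclist", "yield_space")))
# → A cyclist ahead signalled a lane change and I'm giving them extra space.
```

The design point the sketch makes is that the explanation is tied to the planner's actual rationale for the manoeuvre, not generated after the fact, which is what makes the system's reasoning observable.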

This capability not only enhances user experience but also builds confidence by making the system’s reasoning observable, interpretable and predictable. In other words, AI is no longer a black box. Like a person, an AI system can explain and justify its decisions.

This combination of human-to-human and human-to-machine dialogue is exactly the kind of cooperation needed in our contested world.

What does a framework for trust in physical AI look like?

Autonomous vehicles are the first wave. Soon, intelligent machines will be an integral and normal part of everyday life – maintaining infrastructure, operating factory equipment, folding laundry and caring for the young and elderly.

From autonomous vehicles, we can derive a framework for facilitating open dialogues that foster trust and can be generalized to other forms of physical AI.

1. Demonstrate utility

When an autonomous system measurably improves safety outcomes, efficiency or access to essential services, the value becomes intuitive. Early deployments should prioritize clear, shared public benefits, not novelty.

2. Demonstrate reliability

Demonstrating reliability means publishing performance data, sharing testing methodologies and aligning with independent standards. Responsible innovation requires not just breakthroughs but interpretable, predictable and repeatable results.

3. Ensure transparency

Transparent systems – those that can communicate their observations, explain their decisions and show how they learn – invite engagement rather than suspicion. This transparency must extend to governance as well: how data is used, how safety is enforced and how public concerns shape product design.

This framework helps ensure that physical AI is designed for real people living real lives – not idealized models or theoretical environments.

Importantly, we must recognize that intelligent machines are just that – intelligent machines. They have no inherent morality or ethics. What they do have is a vast capacity to learn. It is our job to train them to exhibit the behaviour we want to see.

The technology’s growing ability to explain itself offers us a chance to deeply examine and refine our biases and assumptions about the way the world should work. In this way, AI systems can learn to make meaningful and positive contributions to society.

The path forward

Accessible autonomy will transform everyday life. It will make our cities safer, our infrastructure more efficient and our communities more connected.

As autonomous systems grow more capable, we have an unprecedented opportunity not only to build smarter machines but to build smarter relationships with them.

With genuine dialogue as our guide, physical AI can prove itself to be more than a technological milestone – it can become a collective leap in how people live, move and thrive, propelling us toward a better world.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.

© 2026 World Economic Forum