The trust gap: why AI in healthcare must feel safe, not just be built safe

Verifiable clinician checkpoints help build trust in healthcare AI. Image: Shutterstock
- Trust in healthcare AI is lagging behind adoption, particularly in sensitive domains.
- Key metrics, such as clinician override rates, can provide the minimum viable assurance patients need in order to trust AI systems.
- South-East Asian health systems provide case studies for making safe AI healthcare an observable reality.
AI is rapidly becoming the first layer of engagement in healthcare, powering everything from symptom-checkers to therapy bots. Yet adoption is outpacing trust. In Singapore, a highly digitized and well-regulated environment, AI is ubiquitous (80% of residents use it) – but trust drops sharply once the advice moves into sensitive or emotionally charged domains like mental health. The pattern holds across South-East Asia, where one in four people in Indonesia and Hong Kong has tried an AI mental-health tool, yet concerns about safety and empathy persist. In the United States, nearly 60% of people feel uneasy about AI-aided diagnosis. The bottleneck is no longer technical accuracy; it is a crisis of emotional assurance.
This trust deficit arises from three systemic gaps, visible even in advanced implementations:
1. Transparency. Patients and clinicians often cannot see how AI risk scores are generated, creating a foundational trust gap. This structural opacity isn't merely an inconvenience; it can have direct clinical consequences and be a key factor in patient harm. The OECD AI Incidents Monitor, for instance, documents healthcare cases where flawed AI design led to biased outcomes, including one system that unintentionally prioritized white patients over black patients by using healthcare costs as a proxy for medical need. Regulatory audits confirm this real-world risk: a 2025 study in npj Digital Medicine found that over 90% of FDA-approved AI devices fail to report basic information about their training data or architecture. When the "why" behind a decision is invisible, safety becomes a matter of guesswork.
2. Accountability. The ITU’s AI Governance Report 2025 documents how responsibility fragments across developers, hospitals and ministries when private AI models enter public health workflows. This creates a tangible accountability gap: When an error occurs, there is no clear owner. The result is that patient grievances can enter a bureaucratic void, with no single party obligated to investigate, explain or provide redress, eroding trust following system failures.
3. Human control. When AI suggests a diagnosis first, it can reverse the traditional clinical workflow. Recent research confirms that when clinicians engage with AI-proposed diagnoses, their role shifts toward verification, with acceptance hinging on the model's ability to explain its reasoning. But the assumption of human oversight means little without explicit, operationalized checkpoints, such as requiring clinicians to read, then explicitly accept or override, each AI-proposed diagnosis; a minimal sketch of such a checkpoint follows this list.
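To illustrate what an operationalized checkpoint could look like in software, the sketch below refuses to finalize an AI suggestion until a named clinician records an explicit accept or override decision, and writes every decision to an audit log. All names here (AISuggestion, ClinicianDecision, finalize_diagnosis) are hypothetical illustrations under assumed record fields; this is a minimal sketch of the idea, not the implementation of any system mentioned in this article.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Literal, Optional

@dataclass
class AISuggestion:
    """A diagnosis proposed by a model, with the rationale shown to the clinician."""
    patient_id: str
    proposed_diagnosis: str
    risk_score: float
    rationale: str  # human-readable explanation surfaced to the reviewer

@dataclass
class ClinicianDecision:
    """An explicit, attributable human decision recorded against an AI suggestion."""
    suggestion: AISuggestion
    clinician_id: str
    action: Literal["accept", "override"]
    final_diagnosis: str
    note: Optional[str] = None
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def finalize_diagnosis(suggestion: AISuggestion,
                       clinician_id: str,
                       action: str,
                       override_diagnosis: Optional[str] = None,
                       note: Optional[str] = None,
                       audit_log: Optional[list] = None) -> ClinicianDecision:
    """Refuse to finalize unless a clinician explicitly accepts or overrides."""
    if action not in ("accept", "override"):
        raise ValueError("A clinician must explicitly accept or override the AI suggestion.")
    if action == "override" and not override_diagnosis:
        raise ValueError("An override must record the clinician's own diagnosis.")

    decision = ClinicianDecision(
        suggestion=suggestion,
        clinician_id=clinician_id,
        action=action,
        final_diagnosis=suggestion.proposed_diagnosis if action == "accept" else override_diagnosis,
        note=note,
    )
    if audit_log is not None:
        audit_log.append(decision)  # every decision leaves a traceable record
    return decision
```

A real deployment would add authentication, persistence and integration with the clinical record, but even this minimal gate makes the human checkpoint explicit and auditable.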
Why perceived safety is the next KPI
In healthcare, digital trust is a prerequisite for effectiveness. Evidence from digital mental-health deployments shows that unease with AI leads to lower engagement and earlier dropout – even when clinical accuracy is high. Users disengage not because the model is wrong, but because the experience feels unsafe.
Since AI cannot genuinely experience empathy, trust cannot be built on its capacity for human-like connection. Instead, as research into human-AI interaction clarifies, trust is established through respect for the patient's vulnerability: a dynamic defined by predictability, clarity and user control. This requires making three elements clear:
- Which data is used. A 2024 study in Nature Medicine provides direct evidence that belief in AI involvement decreases trust in medical advice, demonstrating how unclear data flows significantly reduce willingness to disclose sensitive information.
- How decisions are made. The npj Digital Medicine transparency audit cited above found that over 90% of FDA-approved medical AI devices fail to report basic information about their training data or architecture, a structural opacity that undermines the perceived accuracy of their recommendations.
- When humans are involved. Recent work on AI responsibility gaps shows that most “gaps” are actually problems of diffused accountability across many hands and institutions rather than a lack of any responsible agent at all. In that context, meaningful appeal pathways – clear routes for patients to request explanation, review, and where necessary revision of an AI-supported decision – become the practical mechanism for restoring accountability at the point of care. The availability of such recourse strongly shapes users’ perceptions of fairness and trust.
Trust by demonstration in South-East Asia
Health systems are moving from designing trust on paper to demonstrating it in practice. These reforms make safety observable – shifting trust from a promise to proof:
- Singapore’s latest reforms, including the HPRG Innovation Office, consolidate agile pathways for AI diagnostics and, through initiatives like the SaMD Change Management Programme, require demonstrable audit trails and cybersecurity postures before deployment. Cross-border collaboration is also increasing, with the Singapore-Malaysia Medical Device Regulatory Reliance Programme accelerating evaluations through shared oversight.
- Indonesia is laying the groundwork to embed assurance principles into frontline care. Its BPJS Digital Health Transformation Strategy is creating the integrated digital infrastructure necessary for future AI-supported triage.
- Malaysia’s rapid digitization, including cloud-based systems that support 156 public clinics, creates a data backbone for observable performance. During its ASEAN chairmanship, it has prioritized regional cooperation on ethical AI, promoting frameworks that make safety and traceability core to the user experience.
- Hong Kong is establishing the foundational infrastructure for trusted data-sharing, a critical technical backbone for auditable and traceable AI. In January this year, a consortium led by the Chinese University of Hong Kong and the Hong Kong Science Park announced the region's first cross-border medical data space. This initiative is explicitly designed to ensure secure and credible data handling through decentralized operations and cryptographic solutions.
Introducing minimum viable assurance
Building trust at scale requires minimum viable assurance – turning governance principles into visible, user-facing signals of safety. Three metrics, already within reach, do this:
- Clinician override rates. Evidence from real-world deployments confirms that tracking how often clinicians reject AI recommendations provides a practical signal of model reliability. A 2025 Diagnostics study that developed a framework for AI trust found that override patterns were a direct measure of clinician scepticism, with override rates of just 1.7% for trustworthy, transparent AI predictions compared to over 73% for opaque ones. This demonstrates that override rates function as a tangible, real-world safety indicator.
- Audit trail visibility. Guidelines from the WHO on AI for Health mandate mechanisms for audit and human oversight, creating the foundational requirement for model-level logging and verifiable accountability. This principle is echoed in the EU AI Act and operationalized in platforms like Singapore's MOH TRUST environment, making accountability something users can verify.
- Patient comprehension scores. Clarity directly affects whether patients follow recommended actions – from medication instructions to self-management steps in digital care. Simple "teach-back" checkpoints, where patients confirm their understanding, can transform this principle into a measurable signal of assurance. Verifying comprehension before a patient acts on an AI recommendation provides a tangible checkpoint for safety and trust; a sketch of how all three signals can be computed from routine logs follows this list.
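To make the three metrics concrete, the sketch below computes them from hypothetical decision and teach-back logs. The record fields, function names and thresholds are illustrative assumptions, not a standard; real systems would define them with clinicians and regulators.

```python
from typing import Iterable, Mapping

def override_rate(decisions: Iterable[Mapping]) -> float:
    """Share of AI recommendations that clinicians rejected (hypothetical 'action' field)."""
    decisions = list(decisions)
    if not decisions:
        return 0.0
    overrides = sum(1 for d in decisions if d["action"] == "override")
    return overrides / len(decisions)

def audit_trail_completeness(decisions: Iterable[Mapping],
                             required_fields=("model_version", "inputs_ref",
                                              "rationale", "clinician_id", "timestamp")) -> float:
    """Share of decisions whose log entry carries every field needed to reconstruct the decision."""
    decisions = list(decisions)
    if not decisions:
        return 0.0
    complete = sum(1 for d in decisions if all(d.get(f) for f in required_fields))
    return complete / len(decisions)

def comprehension_score(teach_back_checks: Iterable[Mapping]) -> float:
    """Share of patients who correctly restated the recommendation before acting on it."""
    checks = list(teach_back_checks)
    if not checks:
        return 0.0
    passed = sum(1 for c in checks if c["understood"])
    return passed / len(checks)

if __name__ == "__main__":
    # Illustrative log entries only; the second decision lacks a logged rationale.
    decisions = [
        {"action": "accept", "model_version": "v1.2", "inputs_ref": "enc-001",
         "rationale": "elevated HbA1c", "clinician_id": "dr-17", "timestamp": "2025-01-05T09:00Z"},
        {"action": "override", "model_version": "v1.2", "inputs_ref": "enc-002",
         "rationale": None, "clinician_id": "dr-17", "timestamp": "2025-01-05T09:30Z"},
    ]
    checks = [{"understood": True}, {"understood": False}, {"understood": True}]
    print(f"Override rate: {override_rate(decisions):.1%}")
    print(f"Audit-trail completeness: {audit_trail_completeness(decisions):.1%}")
    print(f"Teach-back comprehension: {comprehension_score(checks):.1%}")
```

Reported regularly alongside accuracy figures, simple ratios like these are what turn governance principles into signals that patients and clinicians can actually see.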
A new policy agenda for leaders
As nations deploy AI across health systems, a new priority must guide governance: patient trust as a core performance indicator. This requires a fundamental shift from measuring only technical efficacy to evaluating human confidence.
Current assessments still prioritize model accuracy and efficiency. Yet real-world adoption hinges on whether tools feel safe and fair to use. To close this gap, policy must mandate continuous trust assurance alongside technical validation, moving beyond one-time audits to ongoing monitoring of real-world impact.
Critical actions include:
- Operationalizing equity: Systematically tracking and addressing lower uptake among older and lower-income populations, even for free tools.
- Building visible recourse: Establishing clear pathways for patients to question or challenge AI outputs, turning regulatory compliance into tangible user control.
- Addressing digital discomfort: Specifically in mental health, where provider and patient hesitancy can transform AI from a bridge into a barrier, widening access gaps.
The next phase of digital health will be defined not by algorithmic performance in labs, but by confidence in clinics. Governance must be experienced, not just documented. For AI to fulfill its promise, trust must be engineered into every interaction.