Trust in healthcare AI can't just be designed – it must be felt by clinicians and patients

Human intervention in healthcare is not a system error, but data that needs to be incorporated. Image: Shutterstock
- Trust in healthcare AI currently over-relies on system design, not lived medical realities.
- Continuous feedback loops are necessary to embed trust in healthcare AI that is responsive to clinician and patient needs.
- Initiatives in South-East Asia show how trust in technology can be extended from policy to practice.
Artificial intelligence (AI) is reshaping healthcare – from diagnostics to hospital operations. Yet governance lags behind. Trust in the technology, which is essential for clinical adoption, remains inconsistent. Most frameworks focus on system design or compliance, overlooking the lived realities of care where trust is built through everyday use by patients and clinicians.
Singapore’s approach to healthcare AI reflects both progress and challenges. In 2018, the SingHealth cyberattack, one of the country’s worst data breaches, compromised the records of nearly 1.5 million patients and exposed critical infrastructure gaps.
A 2023 YouGov survey found that only 14% of Singapore residents would engage with AI-enabled mental health counselling. In 2021, the Ministry of Health released the AI in Healthcare Guidelines, promoting safeguards such as explainability, human oversight and risk communication; a step toward embedding trust, not just enforcing compliance.
These developments signal a broader imperative. Governance must move beyond fixed standards to embrace deployment-phase trust loops: dynamic, context-aware mechanisms that evolve alongside clinical realities. Governance in healthcare must be responsive, observable and embedded – not only legislated.
Trust cannot be retrofitted
A radiologist hesitates before accepting an AI-generated interpretation of a scan. A nurse overrides an AI-generated triage alert. A patient asks: “Who’s seeing my data?” These three examples are not system errors per se; they are signals. Trust friction emerges when systems do not align with real-world needs. “Trust by design” helps, but governance must be continuous and guided by feedback, not assumptions.
A 2025 npj Digital Medicine perspective found that continuous-monitoring pipelines and retraining workflows are rapidly becoming a major operating cost, while adjustments for shifting data patterns (“data drift”) place added strain on institutions without dedicated engineering teams. These findings reinforce the need for deployment-phase trust loops that are scalable and context-sensitive, or we risk widening capability gaps between well-resourced centres and smaller hospitals.
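As a purely illustrative sketch (not drawn from the npj Digital Medicine paper or any specific hospital system), a deployment-phase drift check might compare recent model inputs against a training-time reference window. The feature, thresholds and data below are hypothetical.

```python
# Illustrative drift check for a deployed clinical model (all names are hypothetical).
# Compares a model input's distribution between a reference window and the most recent
# window using the population stability index (PSI), a common drift metric.
import numpy as np

def population_stability_index(reference, current, bins=10):
    """PSI between two samples of the same feature; values above ~0.2 are often treated as drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the proportions to avoid division by zero and log(0).
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Hypothetical usage: lactate values seen at training time vs. the last 30 days of deployment.
reference_lactate = np.random.normal(2.0, 0.8, 5000)   # stand-in for training-time data
recent_lactate = np.random.normal(2.6, 1.1, 800)        # stand-in for recent live inputs

psi = population_stability_index(reference_lactate, recent_lactate)
if psi > 0.2:
    print(f"PSI={psi:.2f}: input drift detected, trigger review or retraining workflow")
```

A check like this is cheap to run on a schedule, which is part of why monitoring and retraining become a recurring operating cost rather than a one-off build expense.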
What does operational trust look like?
Operational trust refers to how well AI systems perform under real-world clinical conditions, in ways that are transparent, accountable and usable by frontline staff. It encompasses:
Contextual explainability: Can junior clinicians understand and challenge AI outputs in real time?
A 2025 AMA survey found that 68% of physicians saw value in AI tools, and 66% were already using them. But nearly half (47%) cited increased oversight from medical practitioners as the most important regulatory step to build trust in AI-generated recommendations.
Singapore General Hospital’s AI2D model (Augmented Intelligence in Infectious Diseases) exemplifies contextual explainability in clinical AI. The tool helps doctors determine whether antibiotics are necessary for conditions like pneumonia using real-time patient data and has achieved 90% accuracy in early validations. By recommending treatment decisions before lab results are available, AI2D supports clinician judgment without replacing it, reinforcing trust through timely, interpretable outputs.
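The article does not describe AI2D’s internals, so the following is a hypothetical sketch only of what an interpretable, overridable recommendation could look like at the interface level; every class name, field and value is illustrative.

```python
# Hypothetical illustration, not AI2D's actual design. Shows how a recommendation can
# carry a plain-language rationale and remain open to clinician override.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AntibioticRecommendation:
    patient_id: str
    suggestion: str                          # e.g. "hold antibiotics pending cultures"
    confidence: float                        # model score surfaced to the clinician
    rationale: List[str] = field(default_factory=list)  # contributing factors in plain language
    clinician_override: Optional[str] = None             # filled in if the doctor disagrees

rec = AntibioticRecommendation(
    patient_id="P-1042",
    suggestion="Hold antibiotics pending cultures",
    confidence=0.87,
    rationale=[
        "No fever recorded in the last 12 hours",
        "Procalcitonin within normal range",
        "Chest imaging shows no new consolidation",
    ],
)

# A junior clinician can read, challenge and override the output in real time.
rec.clinician_override = "Start empirical antibiotics: immunocompromised patient"
```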
Auditability in motion: Are predictions traceable during patient handovers?
A 2022 Nature Medicine study reported a 17% performance drop in a sepsis model within months of deployment due to environmental drift. This highlights why auditability matters: It enables real-time traceability of model performance as conditions evolve.
Singapore’s aiTriage and CARES 2 tools, forecasting cardiac and post-surgical events respectively, earn confidence with predictions that are logged, time-stamped and embedded in clinical records, allowing clinicians to verify past recommendations during handovers or follow-up care. This traceability supports accountability while reinforcing human judgment.
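A minimal sketch of what time-stamped, traceable prediction logging could look like is shown below; the record fields, file path and helper names are assumptions for illustration, not a description of how aiTriage or CARES 2 are implemented.

```python
# Hypothetical append-only audit log for model predictions (illustrative only).
# Each prediction is time-stamped and keyed to the patient record, so it can be
# retrieved and verified during handovers or follow-up care.
import json
from datetime import datetime, timezone

AUDIT_LOG = "prediction_audit.jsonl"   # illustrative storage location

def log_prediction(patient_id, model_name, model_version, inputs, output, score):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "patient_id": patient_id,
        "model": model_name,
        "model_version": model_version,   # which model build produced this output
        "inputs": inputs,                 # snapshot of features at prediction time
        "output": output,
        "score": score,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

def history_for_patient(patient_id):
    """Replay every logged prediction for a patient, e.g. during a handover."""
    with open(AUDIT_LOG) as f:
        return [e for line in f if (e := json.loads(line))["patient_id"] == patient_id]
```

Logging the model version alongside the inputs is what makes the 17% drift scenario above auditable: a clinician or reviewer can see exactly which model, fed which data, produced a past recommendation.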
End-user feedback mechanisms: Can frontline users flag unsafe patterns and help evolve the algorithm?
Usability enhancements: SingHealth’s Note Buddy transcribes multilingual consultations live, reducing administrative burden and enabling more attentive care – a tangible example of trust through interface alignment.
Mental health platforms like mindline.sg and Holmusk’s NeuroBlu embed trust-aware oversight by design. Mindline.sg uses a clinician-reviewed triage framework with category-based routing, offering self-care tools or professional referrals without retaining identifiable data. While not built for ongoing clinical override, it is updated iteratively to maintain relevance and safety. Holmusk, by contrast, links predictive models to clinician-facing dashboards via its NeuroBlu platform, enabling care teams to monitor risk and refine interventions through continuous data updates.
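As a sketch of how a frontline feedback channel might be structured (none of this is drawn from mindline.sg or NeuroBlu; the fields and severity scheme are assumptions), unsafe patterns could be captured as structured flags that feed the next governance review.

```python
# Hypothetical end-user feedback channel (illustrative only). Frontline staff flag
# suspect outputs; flags are triaged and fed into the next model review cycle.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

@dataclass
class SafetyFlag:
    reporter_role: str        # e.g. "nurse", "radiologist"
    model_output_id: str      # links back to the audit-log entry above
    concern: str              # free-text description of the unsafe pattern
    severity: str             # "low", "medium" or "high"
    created_at: str = ""

FLAG_QUEUE: List[SafetyFlag] = []

def submit_flag(flag: SafetyFlag) -> None:
    flag.created_at = datetime.now(timezone.utc).isoformat()
    FLAG_QUEUE.append(flag)

def flags_for_review(min_severity: str = "medium") -> List[SafetyFlag]:
    """Pull the flags that should go to the model governance committee this cycle."""
    order = {"low": 0, "medium": 1, "high": 2}
    return [f for f in FLAG_QUEUE if order[f.severity] >= order[min_severity]]
```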
South-East Asia: A trust lab in motion
South-East Asia is pioneering governance through practice, not just policy. In Singapore, the MOH-led TRUST exchange now hosts around 40 anonymized national healthcare datasets and supported 17 approved data requests from 107 users in its first year. Meanwhile, AI Verify, Singapore’s national framework for testing and evaluating AI systems, is being adopted for clinical settings through the 2025 Global AI Assurance Pilot. It combines technical testing with process checks to assess AI systems for fairness, robustness and transparency. In this pilot, 17 real-world GenAI deployments, including a workflow-summarization tool at Changi General Hospital, were paired with 16 specialist testing firms across 10 industries.
Beyond Singapore, ASEAN has moved from principles to pilots, with member states field-testing language-specific safety tools. Most notably, Thailand’s Typhoon2-Safety classifier has been released to help enforce cultural and contextual guardrails on Thai-language LLM outputs. In healthcare, such tools are increasingly relevant for applications like mental health support, triage assistants and patient-facing generative AI systems, where linguistic precision and cultural sensitivity are essential to safety and trust.
Bridging these pilots to sustained impact requires governance infrastructure – co-designed by vendors, healthcare institutions and regulators – that can embed accountability and trust into daily workflows. Public procurement can lock in trust by tying a core set of KPIs – such as clinician-override rate, diagnostic-error reduction and patient-reported experience (PREM) scores – to bonuses, or to repayment penalties if targets are not met, as outlined in the AI for IMPACTs evaluation framework.
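To make the procurement idea concrete, here is a hedged sketch of how such a KPI clause might be operationalized; the thresholds, field names and the bonus/penalty rule are illustrative assumptions, not taken from the AI for IMPACTs framework or any existing contract.

```python
# Illustrative KPI check for an AI procurement contract (thresholds and field names
# are assumptions, not drawn from the AI for IMPACTs framework).

def clinician_override_rate(decisions):
    """Share of AI recommendations that clinicians overrode in a reporting period."""
    overridden = sum(1 for d in decisions if d["overridden"])
    return overridden / len(decisions) if decisions else 0.0

def payment_adjustment(decisions, prem_score, override_target=0.15, prem_target=4.0):
    """Bonus if both targets are met; repayment penalty if either is missed."""
    override_ok = clinician_override_rate(decisions) <= override_target
    prem_ok = prem_score >= prem_target
    return "bonus" if (override_ok and prem_ok) else "repayment_penalty"

# Hypothetical quarter of deployment data: 15 overrides out of 100 recommendations.
quarter = [{"overridden": False}] * 85 + [{"overridden": True}] * 15
print(clinician_override_rate(quarter))             # 0.15
print(payment_adjustment(quarter, prem_score=4.3))  # "bonus"
```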
Principles for trust
Instilling trust at the heart of AI operations requires institutional leadership. The conditions – transparency, accountability, user alignment – do not emerge by default. Leaders should set the terms for how AI is deployed, evaluated and improved, via the following actions:
- Build trust: Embed operational trust metrics in public-sector AI tenders (such as requiring vendors to track clinician-override rates, PREM scores and diagnostic-error reduction). Strengthening accessible channels for users to safely flag risks and influence system evolution is also important.
- Co-design with users: Trust must be co-developed, not imposed. This means embedding granular privacy controls and patient advisory input into every design iteration.
- Coordinate regionally: Use ASEAN and APEC platforms to align deployment governance, as seen in Singapore’s MOU.
- Invest in governance literacy: Initiatives like Singapore’s Centre of AI in Medicine (C-AIM) and AI for Science Initiative are embedding governance principles that support safety, ethics and real-world implementation.
AI can transform healthcare, but only if it is trusted. Trust is not static; it is built through context, feedback and use. Context means aligning AI systems with the clinical environment, including workflows, data sources and frontline roles. Feedback refers to continuous input from users and performance monitoring to catch risks and improve functionality. And use emphasizes that trust must be earned through repeated, safe interaction – not assumed at launch.
To shape the future of healthcare AI, we must govern from the ground up – with trust that is earned through lived experience, not just designed.