
AI in healthcare could exclude 5 billion people; here’s what we can do about it

A doctor reads medical images on a screen during a diagnostic competition between an AI machine and human experts at the China National Convention Center in Beijing, China, June 30, 2018.

Without representative data, the use of AI in healthcare could deepen inequalities. Image: REUTERS

Gabriel Onuh
This article is part of: Centre for AI Excellence
  • Most artificial intelligence (AI) health systems are trained on data from high-income countries, leaving billions of people in the Global South invisible in diagnostic models, risk assessments and treatment algorithms.
  • Without representative data, AI-powered tools can misdiagnose or fail to recognize conditions in underrepresented populations, deepening global health inequalities rather than reducing them.
  • Bridging the gap demands global cooperation by building diverse global health datasets, investing in local digital infrastructure and creating fair governance frameworks that ensure AI health systems benefit all.

Artificial intelligence (AI) is rapidly transforming healthcare. From early cancer detection to personalized treatment recommendations, AI systems seek to make medicine faster, more precise and more affordable.

In high-income countries, these tools are already being tested in hospitals, research centres and clinics, and the results are impressive. Yet for most of the world’s population, the nearly 5 billion people living in low- and middle-income countries, the benefits of medical AI remain out of reach.

Instead of narrowing health inequality, current approaches to AI development risk widening it. Most AI health systems are built on data sourced from a narrow set of populations, which means billions of people, mainly from the Global South, remain largely invisible in diagnostic models, risk assessments and treatment algorithms.

Without corrective action, AI may inadvertently reinforce existing structural inequities in global health rather than helping to overcome them.


Data problems at the heart of medical AI

It is well known that data is central to AI development. In healthcare, this typically refers to electronic health records, imaging scans, genomic information or biometric signals collected from millions of patients.

According to a report by Deutsche Welle, more than 80% of genetics studies include only people of European descent, a group that represents less than 20% of the world’s population.


Similarly, most health data used to train AI models comes from patients in the United States, parts of Europe and China, even though it is only with comprehensive and diverse datasets that AI health systems can learn to recognize patterns and make accurate predictions across demographics.

AI systems trained on biased data often perform poorly when applied to populations with different genetic backgrounds, disease prevalence or environmental exposures. For example:

  • Skin cancer detection algorithms trained primarily on images of lighter skin tones have been shown to perform less accurately on darker skin tones.
  • Cardiovascular risk calculators built on European and American cohorts may underestimate or overestimate risks for African, South Asian or Latin American populations.
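The mechanism behind these failures can be illustrated with a toy simulation (entirely synthetic data, not a real clinical model): a diagnostic threshold learned almost entirely from one population breaks down in a second population whose “healthy” baseline for the same biomarker is shifted.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, baseline, effect=2.0):
    """Synthetic cohort: half diseased, half healthy.
    The biomarker rises by `effect` when disease is present."""
    y = np.repeat([0, 1], n // 2)
    x = baseline + effect * y + rng.normal(0, 0.5, size=n)
    return x, y

# Training data drawn entirely from population A (baseline 0)
x_train, y_train = simulate(10_000, baseline=0.0)

# A naive model: a single decision threshold fitted to the training data
threshold = x_train.mean()  # sits midway between healthy and diseased in A

def accuracy(x, y):
    return float(np.mean((x > threshold).astype(int) == y))

# Population A matches the training distribution
x_a, y_a = simulate(2_000, baseline=0.0)
# Population B has the same disease effect but a shifted healthy baseline,
# so the threshold learned on A flags most healthy B patients as diseased
x_b, y_b = simulate(2_000, baseline=2.0)

print(f"accuracy on population A: {accuracy(x_a, y_a):.2f}")
print(f"accuracy on population B: {accuracy(x_b, y_b):.2f}")
```

The model is highly accurate on the population it was trained on and close to chance on the other, even though the underlying disease biology (the `effect` term) is identical in both groups. This is the same failure mode, in miniature, as a skin-cancer classifier trained on lighter skin tones or a risk calculator built on European cohorts.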

This creates a troubling paradox: the populations who stand to gain the most from scalable, low-cost diagnostic tools are the least represented in the systems being built.

The risk of exclusion

The dangers of exclusion posed by biased AI health systems are not abstract; they have tangible, real-life consequences. A cancer detection algorithm that misses tumours on darker skin is not just a technical error; it is a life-or-death issue. If medical AI systems are deployed without addressing these gaps, the results could include:

  • Misdiagnosis and harm: Patients in underrepresented regions may receive less accurate or inappropriate diagnoses, leading to delayed or ineffective treatment.
  • Erosion of trust: Communities that experience systematic errors from AI tools may come to distrust not only digital technologies but also healthcare institutions.
  • Deepening inequality: As advanced AI systems help improve outcomes in the Global North, the Global South may be left further behind, compounding existing global health disparities.

AI has the potential to transform entire economies. But in healthcare especially, its power will only be realized equitably if it is built on foundations that reflect global diversity.

Building inclusive AI for global health

Building inclusive AI health systems is no mean feat. It requires coordinated action across governments, international organizations, private sector innovators and local communities. To achieve this, the following priorities should guide action:

1. Building diverse global health datasets

Representative data is the core of inclusive AI. To ensure AI systems serve populations in Africa, Asia and Latin America, we must invest in creating and curating datasets that capture diverse demographics. This could involve supporting local hospitals and clinics in digitizing records. Rwanda’s Digital Health Initiative is an exemplary step in this direction.

Also, encouraging collaborations that make datasets interoperable across borders while respecting data sovereignty is vital. International initiatives such as the Global Alliance for Genomics and Health have shown that cross-border cooperation on data sharing is possible.

Similar frameworks are needed to ensure that broader health data informs AI models, reflecting global realities.

2. Investing in digital infrastructure, local capacity and governance

Even when inclusive datasets exist, they are only useful if local healthcare systems can deploy and adapt AI tools effectively. Many medical facilities in the Global South lack the digital infrastructure, from reliable internet connectivity to secure data storage, required to support advanced AI.

Equally important is human capacity. Clinicians, data scientists and policymakers need training and resources to evaluate AI tools, customize them for local contexts and ensure they align with clinical workflows.

Building local innovation ecosystems around AI in healthcare ensures solutions are not only imported but also created within and for the communities they serve.

Alongside investments in digital infrastructure and local capacity, it is imperative to establish governance systems and regulatory frameworks that ensure fairness in the design, engineering, deployment and use of these systems.

India’s National Digital Health Blueprint is another strategic initiative aimed at addressing these gaps.

AI health systems for all

The risk of excluding 5 billion people from the future of AI-enabled healthcare is real, but it is not inevitable. The choices made today by policymakers, innovators and healthcare leaders will determine whether AI deepens the existing divides or helps bridge them.

International organizations such as the World Health Organization, in its Global Strategy on Digital Health, emphasize the importance of digital health equity. Technology companies and research institutions are increasingly recognizing the need for diverse data. However, progress remains uneven and the scale of the challenge demands greater global cooperation.

AI holds extraordinary promise in healthcare. It can help clinicians detect diseases earlier, guide treatment decisions and extend quality care to communities far from medical centres. However, for these benefits to be accessible globally, we must address the structural inequities surrounding how AI health systems are currently developed.


The views expressed in this article are those of the author alone and not the World Economic Forum.
