Health and Healthcare Systems

For technology to boost global health, these 3 obstacles must be overcome

Image: The right treatment: Big data is fuelling the rise of precision medicine (Photo by Ibrahim Boran on Unsplash)

Stefan Oschmann
CEO, Merck Group

This article is part of the World Economic Forum Annual Meeting.
  • The age of big data and AI offers huge promise for improving healthcare.
  • We’ve gone from around 2,000 images per MRI scan of a human head to over 20,000.
  • Efficiency, ethics and collaboration will underpin how successfully we use technology.

“I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.”

When U.S. author Richard Brautigan wrote these lines of his poem, All Watched Over by Machines of Loving Grace, in 1967, he was poet-in-residence at the California Institute of Technology.

Only two years earlier, Gordon Moore – a graduate of the same university and co-founder of Intel – had first formulated what is commonly known today as “Moore’s Law.” According to his prediction, the number of transistors that can fit onto a microchip would double every two years. For decades, this has resulted in a corresponding increase in computing speed and efficiency.
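
To get a feel for what this doubling implies in practice, here is a minimal back-of-envelope sketch in Python. The starting point (roughly 2,300 transistors on the 1971 Intel 4004) is an illustrative assumption, not a figure from this article:

```python
# Illustrative compounding under Moore's Law: transistor counts
# doubling every two years from an assumed 1971 baseline.

def projected_transistors(year: int,
                          base_year: int = 1971,
                          base_count: int = 2_300,
                          doubling_period: float = 2.0) -> float:
    """Projected transistor count if the count doubles every two years."""
    doublings = (year - base_year) / doubling_period
    return base_count * 2 ** doublings

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
# 25 doublings over 50 years already put the projection in the
# tens of billions, roughly where flagship chips sit today.
```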

Having been a paradigm of computing progress for more than half a century, Moore’s Law of continual chip miniaturization is finally reaching its limits. At the same time, the questions Brautigan raised in his ambiguous poem could not be more relevant today. All Watched Over by Machines of Loving Grace – the title juxtaposes two fundamental poles of the debate around man-machine interaction: the incredible promise and intangible threat posed by future technology.

Disruptive tech, big breakthroughs

Looking at healthcare alone, there is no doubt that disruptive innovation can lead to unprecedented human progress. In combination with medical breakthroughs from biotech and genome editing through to gene and cell therapy, AI and Big Data can revolutionize how we diagnose, treat and monitor patients, increase overall efficiency via outcomes-based healthcare systems, and broaden access to healthcare for remote communities.

Healthcare is also a compelling example of the immense computing power it will take to seize these opportunities.

In 2020, the volume of global healthcare data generated is expected to be 15 times higher than in 2013. In terms of ever more precise diagnostic imaging, we’ve gone from around 2,000 images per MRI scan of a human head to over 20,000. The deep learning needed to advance the use of Big Data and AI will catapult computing demand to entirely new dimensions. Today’s top-end supercomputer is already more than one million times faster than a high-end laptop. The supercomputing power that can give rise to considerable future advances in fields such as personalized medicine, carbon capture or astrophysics will be yet another 1,000 times faster than that. This shows that while much of the debate around digital transformation centers on software, hardware is an increasingly critical part of the picture. Many AI-related mathematical concepts already existed back in the 1960s, but computing capacity and memory clearly did not.
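
As a rough illustration, the scale factors quoted above can be turned into growth rates. The sketch below uses only the paragraph’s own figures, plus the assumption that the health-data comparison spans the seven years from 2013 to 2020:

```python
# The scale factors quoted above, expressed as simple ratios.
# All inputs are the article's own estimates, not measurements.

health_data_growth = 15              # global health data, 2020 vs 2013
mri_images_then, mri_images_now = 2_000, 20_000
supercomputer_vs_laptop = 1_000_000  # today's top system vs a laptop
next_gen_factor = 1_000              # projected next generation vs today

print(f"MRI images per head scan: {mri_images_now // mri_images_then}x")
annual = health_data_growth ** (1 / 7) - 1   # compounded over 7 years
print(f"Implied health-data growth, 2013-2020: ~{annual:.0%} per year")
print(f"Next-gen supercomputer vs laptop: "
      f"{supercomputer_vs_laptop * next_gen_factor:.0e}x")
```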

Factoring this often underestimated point into the equation, and returning to the question raised in Brautigan’s poem: how do we make sure our technological future is hardwired for human progress? To shape the future of disruptive innovation to everyone’s benefit, there are at least three major challenges we must tackle on a global, multi-stakeholder scale.

Three big challenges

First, there is the question of efficiency – both in computing capacity and in energy consumption. Experts have calculated that training a single AI model can emit as much carbon as five cars over their entire lifetimes, including their manufacture. Information and communications technology already accounts for more than 2% of global emissions. According to the most alarming estimates, its share of electricity use could exceed 20% of the global total in around ten years’ time. This means that along the entire value chain of smart applications and products, it is essential to develop materials and technologies that deliver considerable improvements in computing performance while driving energy efficiency.
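
The “five cars” comparison is usually traced to a 2019 study by Strubell and colleagues. A minimal sketch of the underlying arithmetic, using the rounded figures reported in that study, looks like this:

```python
# Back-of-envelope check of the "five cars" claim, based on figures
# reported by Strubell et al. (2019): ~626,000 lbs of CO2 for training
# one large NLP model (incl. architecture search) vs ~126,000 lbs for
# an average US car over its lifetime, manufacture included.

CO2_MODEL_TRAINING_LBS = 626_000
CO2_CAR_LIFETIME_LBS = 126_000

cars = CO2_MODEL_TRAINING_LBS / CO2_CAR_LIFETIME_LBS
print(f"One training run is roughly {cars:.1f} car lifetimes of CO2")
# -> roughly 5.0
```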

Industry’s efforts to achieve this are considerable, but still mainly evolutionary, using new materials that enable more efficient processor, memory, sensor and display technologies. As Moore’s Law reaches its limits, further advances will call for fundamentally new material solutions, empowering next-stage technologies such as neuromorphic and quantum computing. Not least, the quest for efficiency is moving beyond IT – to nature and DNA, a 3.8-billion-year-old data source that has opened new worlds of scientific opportunity since the pioneering work of Gregor Mendel. Roughly 160 years after Mendel discovered what we know today as genetic inheritance, scientists are now working to make DNA usable as a data storage medium. DNA’s amazing storage density would make it possible to store all the information currently available on the Internet in a shoebox – with a half-life of around 500 years and practically zero energy needed for maintenance.
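
A quick plausibility check of the shoebox claim is possible from first principles. Every input in the sketch below (the molar mass of a base pair, two bits per base, a ten-litre shoebox, a rough bulk density) is an assumption for illustration, not a figure from this article:

```python
# Rough upper bound on DNA storage density: ~2 bits per base pair,
# with one base pair weighing about 650 g/mol.

AVOGADRO = 6.022e23
GRAMS_PER_MOL_BP = 650      # assumed molar mass of one base pair
BITS_PER_BP = 2             # four bases encode two bits each

bytes_per_gram = AVOGADRO / GRAMS_PER_MOL_BP * BITS_PER_BP / 8

shoebox_cm3 = 10_000        # assume a ~10-litre shoebox
density_g_per_cm3 = 1.0     # rough bulk density of dry DNA

capacity_zb = bytes_per_gram * shoebox_cm3 * density_g_per_cm3 / 1e21
print(f"Theoretical shoebox capacity: ~{capacity_zb:,.0f} zettabytes")
# Global data was estimated at a few tens of zettabytes around 2020,
# so the claim holds with orders of magnitude to spare.
```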

Image: A brief history of Moore's Law (chart by Max Roser)

A number of hurdles remain before this promising tool can become a viable option. Beyond those, there is at least one major challenge that applies to DNA-based technology as much as it does to digital disruption: the ethical guidelines that technological advancement urgently needs. From entrusting medical decisions to AI to entrusting our own safety to self-driving cars, disruptive technologies are potentially so powerful that they can alter the human condition. At the same time, ethical standards, including data safety and security, still lag behind considerably. One question that is increasingly and rightly discussed in this context is human bias. The data used to train AI systems, and the algorithms employed, inevitably reflect human biases, including discriminatory assumptions about gender or race. What’s more, we all know that in the digital world, manipulation can be produced easily and at mass scale. If we fail to address these issues, AI could not only reinforce but greatly increase inequality.

No doubt, sound ethics require clear legal frameworks. At the same time, globally harmonized standards are needed to foster innovation and ensure a globally level playing field. Not least, academic and industry innovators can and must do their part – by setting and following stringent ethical standards of their own and discussing critical ethical questions with externally acknowledged experts, for example in dedicated ethics boards.

Ethics is one of many reasons why there is a third and final major challenge: stepping up global, interdisciplinary collaboration. By market logic alone, the scale it will take to meet the world’s gigantic computing demand makes regulatory debates conducted at a purely national or even European level a lost cause. And while markets typically reward those who stick to their core competencies, political institutions should incentivize research that draws on cross-industry, interdisciplinary expertise in new innovation fields such as DNA storage. Not least, given the fundamental impact technology can have on society, we must make sure that the world’s leading industrial nations follow a collaborative approach based on clear international guidelines – ideally at UN level, despite geopolitical rivalries.

Efficiency, ethics, collaboration – three simple words, three great challenges on our path to making disruptive technologies what they are meant to be: not “machines” to watch over us (however “loving”), but tools that help us build human progress, as forces for good. The risks of technological disruption remain considerable. But leaving its opportunities untapped is a risk we should not take.
