• The age of big data and AI offers huge promise for improving healthcare.
  • We’ve gone from around 2,000 images per MRI scan of a human head to over 20,000.
  • Efficiency, ethics and collaboration will underpin how successfully we use technology.

“I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.”

When U.S. author Richard Brautigan wrote these lines of his poem, All Watched Over by Machines of Loving Grace, in 1967, he was poet-in-residence at the California Institute of Technology.

Only two years earlier, Gordon Moore – a graduate of the same university and co-founder of Intel – had first formulated what is commonly known today as “Moore’s Law.” According to his prediction, the number of transistors that can fit onto a microchip would double every two years. For decades, this has translated into a similar increase in computing speed and efficiency.
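
To put that doubling rule in concrete terms, here is a minimal back-of-envelope sketch in Python. The 1971 starting point (Intel’s first microprocessor, with roughly 2,300 transistors) and the perfectly clean two-year doubling are illustrative assumptions, not figures from this article.

```python
# Rough sketch of Moore's Law: transistor counts doubling every two years.
# The starting point (Intel 4004, ~2,300 transistors in 1971) is an assumption for illustration.

def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Project a transistor count assuming an uninterrupted two-year doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors per chip")
```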

Having been a paradigm of computing progress for more than half a century, Moore’s Law of continual chip miniaturization is finally reaching its limits. At the same time, the questions Brautigan raised in his ambiguous poem could not be more relevant today. All Watched Over by Machines of Loving Grace – the title juxtaposes two fundamental poles of the debate around man-machine interaction: the incredible promise and intangible threat posed by future technology.

Disruptive tech, big breakthroughs

Looking at healthcare alone, there is no doubt that disruptive innovation can lead to unprecedented human progress. In combination with medical breakthroughs from biotech and genome editing through to gene and cell therapy, AI and Big Data can revolutionize how we diagnose, treat and monitor patients, increase overall efficiency via outcomes-based healthcare systems and enable access to healthcare for remote communities.

Healthcare is also a compelling example of the immense computing power it will take to seize these opportunities.

In 2020, the volume of global healthcare data generated is expected to be 15 times higher than in 2013. In terms of ever more precise diagnostic imaging, we’ve gone from around 2,000 images per MRI scan of a human head to over 20,000. The deep learning needed to advance the use of Big Data and AI will catapult computing demand to entirely new dimensions. Today’s top-end supercomputer is already more than one million times faster than a high-end laptop. The supercomputing power that can give rise to considerable future advances in fields such as personalized medicine, carbon capture or astrophysics will be yet another 1,000 times faster than that. This shows that while much of the debate around digital transformation centers on software, hardware is an increasingly critical part of the picture. Many AI-related mathematical concepts already existed back in the 1960s, but computing capacity and memory clearly did not.
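
Putting the multipliers quoted above side by side makes the jump in scale easier to grasp. A small sketch, using only the factors mentioned in the paragraph above:

```python
# Combining the multipliers quoted above; no new data, just arithmetic.

mri_images_then, mri_images_now = 2_000, 20_000   # images per MRI scan of a human head
supercomputer_vs_laptop = 1_000_000               # today's top-end supercomputer vs. a high-end laptop
next_gen_vs_supercomputer = 1_000                 # projected next-generation machines vs. today's best

print(f"MRI images per scan: {mri_images_now // mri_images_then}x more")
print(f"Next-generation machines vs. a laptop: {supercomputer_vs_laptop * next_gen_vs_supercomputer:,}x faster")
```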

Factoring this often underestimated point into the equation, and going back to the question raised in Brautigan’s poem: how do we make sure our technological future is hardwired for human progress? To shape the future of disruptive innovation to everyone’s benefit, there are at least three major challenges we must tackle on a global, multi-stakeholder scale.

Three big challenges

First, there is the question of efficiency – both in computing capacity and energy consumption. Experts have calculated that training a single AI model can emit as much carbon as five cars in their lifetimes (including car manufacture itself). Information and communications technology already accounts for more than 2% of global emissions. According to the most alarming estimates, its share of electricity use could be more than 20% of the global total in around ten years’ time. This means that along the entire value chain of smart applications and products, it is essential to develop materials and technologies that enable considerable improvements in computing performance while driving energy efficiency.
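
The “five cars” comparison can be reconstructed with a short back-of-envelope calculation. The figures below are in the ballpark of the widely cited Strubell et al. (2019) study; they are assumptions used for illustration, not numbers taken from this article.

```python
# Rough reconstruction of the "training one AI model ~ five car lifetimes" comparison.
# Both constants are assumed, ballpark figures (lbs of CO2-equivalent).

model_training_co2_lbs = 626_000   # large NLP model trained with neural architecture search
car_lifetime_co2_lbs = 126_000     # average car over its lifetime, manufacture included

ratio = model_training_co2_lbs / car_lifetime_co2_lbs
print(f"Training emits roughly {ratio:.1f}x a car's lifetime CO2 footprint")
```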

Industry’s efforts to achieve this are considerable, but still mainly evolutionary, using new materials that enable more efficient processor, memory, sensor and display technologies. As Moore’s Law reaches its limits, further advances will call for fundamentally new material solutions, empowering next-stage technologies such as neuromorphic and quantum computing. Not least, the quest for efficiency is moving beyond IT – to nature and DNA, a 3.8-billion-year-old data source that has opened new worlds of scientific opportunity since the pioneering work of Gregor Mendel. Roughly 160 years after Mendel discovered what we know today as genetic inheritance, scientists are now working to make DNA usable as a future data storage medium. DNA’s amazing storage density would make it possible to store all the information currently available on the Internet in a shoebox – with a half-life of around 500 years and practically zero energy needed for maintenance.
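
The shoebox claim can be sanity-checked with a short calculation. Both constants below are illustrative assumptions, not figures from this article: roughly 455 exabytes per gram is an often-quoted theoretical density for DNA storage, and the total volume of data on the Internet is taken as a round 40 zettabytes.

```python
# Back-of-envelope check of the "Internet in a shoebox" claim. All constants are assumptions.

EB_PER_GRAM = 455      # theoretical DNA storage density, exabytes per gram (assumed)
INTERNET_ZB = 40       # assumed total global data volume, zettabytes
EB_PER_ZB = 1_000      # 1 zettabyte = 1,000 exabytes

grams_needed = INTERNET_ZB * EB_PER_ZB / EB_PER_GRAM
print(f"~{grams_needed:.0f} grams of DNA")   # well under a kilogram - comfortably shoebox-sized
```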

[Figure: A brief history of Moore’s Law. Image: Max Roser]

A number of technical hurdles remain before this promising tool can become a viable option. Beyond them, there is at least one major challenge that applies to DNA-based technology as much as it does to digital disruption: the ethical guidelines that technological advancement urgently needs. From entrusting medical decisions to AI to placing our own safety in the hands of self-driving cars, disruptive technologies are potentially so powerful that they can alter the human condition. At the same time, ethical standards, including data safety and security, still lag behind considerably. One question that is increasingly and rightly discussed in this context is human bias. The data used to train AI systems, and the algorithms employed, inevitably reflect such biases, including discriminatory assumptions about gender or race. What’s more, we all know that in the digital world, manipulation can be carried out easily and at massive scale. If we fail to address these issues, AI could not only reinforce but greatly amplify inequality.

No doubt, sound ethics require clear legal frameworks. At the same time, globally harmonized standards are needed to foster innovation and ensure a level playing field worldwide. Not least, academic and industry innovators can and must do their part – by setting and following stringent ethical standards of their own and by discussing critical ethical questions with recognized external experts, for example in dedicated ethics boards.

Ethics is one of many reasons why there is a third and final major challenge: stepping up global, interdisciplinary collaboration. By market logic alone, the scale it takes to meet the world’s gigantic computing demand will make regulatory debates conducted at a purely national or even European level a lost cause. And while markets typically reward those who stick to their core competencies, political institutions should incentivize research that draws on cross-industry, interdisciplinary expertise in new innovation fields such as DNA storage. Not least, given the fundamental impact technology can have on society, we must make sure that the world’s leading industrial nations follow a collaborative approach based on clear international guidelines – ideally at UN level, despite geopolitical rivalries.

Efficiency, ethics, collaboration – three simple words, three great challenges on our path to making disruptive technologies what they are meant to be: not “machines” to watch over us (however “loving”) – but tools that help us advance human progress, as forces for good. The risks that lie in technological disruption remain considerable. But leaving its opportunities untapped is a risk we should not take.