Artificial Intelligence

5 reasons why we need to start talking about existential risks

A Russian Soyuz rocket lifts off. Image: REUTERS/Kirill Kudryavtsev/Pool

Risalat Khan
This article is part of: World Economic Forum Annual Meeting

The indigenous Māori people of Aotearoa had a warring culture. Numerous tribes with diverse traditions fiercely fought one another as they competed for resources and struggled to survive.

In the early 16th century, some Māori fled the violence by boat and arrived at the island of Rēkohu. They found it harsh and barren, inhospitable to agriculture. Nevertheless, they settled, and over time became the Moriori.

Despite the unforgiving conditions, their population grew. Food became scarce. Conflicts started, fuelled by their warring traditions. Many were killed. The population was left hanging by a thread.

Then a wise and visionary chief, Nunuku Whenua (pronounced feh-nua), had a moment of clarity. He declared an end to violence and cannibalism, and cursed anyone who disobeyed to suffer great pain.

His dictum became the divine Nunuku’s Law for the Moriori. They followed it, living peacefully for centuries in the harsh climate. They enforced tough choices, such as castrating male infants to maintain a stable population and killing only old male seals to keep their primary food source sustainable. They created a largely equitable society, to save themselves from themselves.

Unfortunately, they did not survive. When Māori from the mainland and European settlers made contact centuries later, they exterminated the peaceful population and made slaves of the survivors. The last full-blooded Moriori died in 1933. Today, the few remaining descendants of Moriori slaves are trying to revive a lost culture.

Infinite future possibilities

I find the story of the Moriori profound. It teaches me two lessons. Firstly, that human culture is far from immutable. That we can struggle against our baser instincts. That we can master them and rise to unprecedented challenges. Secondly, that even this does not make us masters of our own destiny. We can make visionary choices, but the future can still surprise us.

This is a humbling realization. Because faced with an uncertain future, the only wise thing we can do is prepare for possibilities. Standing at the launch pad of the Fourth Industrial Revolution, the possibilities seem endless. They range from an era of abundance to the end of humanity, and everything in between. How do we navigate such a wide and divergent spectrum?

I am an optimist. From my bubble of privilege, life feels like a rollercoaster ride full of ever more impressive wonders, even as I try to fight the many social injustices that still blight us. However, the accelerating pace of change amid uncertainty elicits one fundamental observation. Among the infinite future possibilities, only one outcome is truly irreversible: extinction.

Concerns about extinction are often dismissed as apocalyptic alarmism. Sometimes, they are. But the retort that mankind is still here, 70 years after the first existential warnings about nuclear warfare, is a straw man. The fact that a 1,000-year flood has not happened does not negate its possibility. And there have been far too many nuclear near-misses to rest easy.

As the World Economic Forum’s Annual Meeting in Davos discusses how to create a shared future in a fractured world, here are five reasons why the possibility of existential risks should raise the stakes of conversation:

1. Extinction is the rule, not the exception

More than 99.9% of all the species that ever existed are gone. Deep time is unfathomable to the human brain. But if one cares to take a tour of the billions of years of life's history, we find a litany of forgotten species. And we have discovered only a fraction of the extinct species that once roamed the planet.

In the speck of time since the first humans evolved, more than 99.9% of all the distinct human cultures that have ever existed are extinct. Each hunter-gatherer tribe had its own mythologies, traditions and norms. They wiped each other out, or coalesced into larger formations following the agricultural revolution. However, as major civilizations emerged, even those that reached incredible heights, such as the Egyptians and the Romans, eventually collapsed.

It is only in the very recent past that we became a truly global civilization. Our interconnectedness continues to grow rapidly. “Stand or fall, we are the last civilization”, as Ricken Patel, the founder of the global civic movement Avaaz, put it.

2. Environmental pressures can drive extinction

More than 15,000 scientists just issued a 'warning to humanity'. They called on us to reduce our impact on the biosphere, 25 years after their first such appeal. The warning notes that we are far outstripping the capacity of our planet on every measure but one (ozone depletion), including emissions, biodiversity, freshwater availability and more. The scientists, not a crowd known to overstate facts, conclude: "soon it will be too late to shift course away from our failing trajectory, and time is running out".

In his 2005 book Collapse, Jared Diamond charts the history of past societies. He makes the case that overpopulation and resource use beyond the carrying capacity have often been important, if not the sole, drivers of collapse. Even though we are making important incremental progress on fronts such as climate change, we must still achieve tremendous step changes in our response to several major environmental crises. We must do this even while the world's population continues to grow. These pressures are bound to exert great stress on our global civilization.

Trends over time for environmental issues identified in the 1992 scientists' warning to humanity (grey line), updated with data up to 2016 (black line). Image: Ripple et al., 2017
3. Superintelligence: unplanned obsolescence?

Imagine a monkey society that foresaw the ascendance of humans. Fearing a loss of status and power, it decided to kill the proverbial Adam and Eve. It crafted the most ingenious plan it could: starve the humans by taking away all their bananas.

Foolproof plan, right? This story describes the fundamental difficulty with superintelligence. A superintelligent being may always do something entirely different from what we, with our mere mortal intelligence, can foresee. In his 2014 book Superintelligence, Swedish philosopher Nick Bostrom presents the challenge in thought-provoking detail, and advises caution.

Bostrom cites surveys of AI experts that projected a 50% chance of human-level machine intelligence by around 2050, and a 90% chance by 2075. The latter date is within the life expectancy of many alive today.

Visionaries like Stephen Hawking and Elon Musk have warned of the existential risks from artificial superintelligence. Their opposite camp includes Larry Page and Mark Zuckerberg. But on an issue that concerns the future of humanity, is it really wise to ignore the guy who explained the nature of space to us and another guy who just put a reusable rocket in it?

4. Technology: known knowns and unknown unknowns

Many fundamentally disruptive technologies are coming of age, from bioengineering to quantum computing, 3-D printing, robotics, nanotechnology and more. Lord Martin Rees describes potential existential challenges from some of these technologies, such as a bioengineered pandemic, in his book Our Final Century.

Imagine if North Korea, feeling secure in its isolation, could release a virulent strain of Ebola, engineered to be airborne. Would it do it? Would ISIS?

Projecting decades forward, we will likely develop capabilities that are unthinkable even now. The unknown unknowns of our technological path are profoundly humbling.

5. 'The Trump Factor'

Despite our scientific ingenuity, we are still a confused and confusing species. Think back to two years ago, and how you thought the world worked then. Has that not been upended by the election of Donald Trump as US President, and everything that has happened since?

The mix of billions of messy humans will forever be unpredictable. When the combustible forces described above are added to this melee, we find ourselves on a tightrope.

What choices must we make now to create a shared future, in which we are not at perpetual risk of destroying ourselves?

Common enemy to common cause

Throughout history, we have rallied against the 'other'. Tribes have overpowered tribes, empires have conquered rivals. Even today, our fiercest displays of unity typically happen in wartime. We give our lives for our motherland and defend nationalistic pride like a wounded lion.

But like the early Moriori, we 21st-century citizens find ourselves on an increasingly unstable island. We may have a violent past, but we have no more dangerous enemy than ourselves. Our task is to find our own Nunuku's Law. Our own shared contract, based on equity, would help us navigate safely. It would ensure a future that unleashes the full potential of our still-budding human civilization, in all its diversity.

We cannot do this unless we are humbly grounded in the possibility of our own destruction. Survival is life’s primal instinct. In the absence of a common enemy, we must find common cause in survival. Our future may depend on whether we realize this.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.

© 2024 World Economic Forum