This futurist isn't scared of AI stealing your job. Here's why

Ray Kurzweil sees the future as more nuanced. Image: A Buddhist monk looks at a 'robot priest' wearing a Buddhist robe during its demonstration at Life Ending Industry EXPO 2017 in Tokyo, Japan, August 23, 2017. REUTERS/Kim Kyung-Hoon

Michal Lev-Ram

You know a topic is trending when the likes of Tesla’s Elon Musk and Facebook’s Mark Zuckerberg publicly bicker about its potential risks and rewards. In this case, Musk says he fears artificial intelligence will lead to World War III because nations will compete for A.I. superiority. Zuckerberg, meanwhile, has called such doomsday scenarios “irresponsible” and says he is optimistic about A.I.

But another tech visionary sees the future as more nuanced. Ray Kurzweil, an author and director of engineering at Google, thinks, in the long run, that A.I. will do far more good than harm. Despite some potential downsides, he welcomes the day that computers surpass human intelligence—a tipping point otherwise known as “the singularity.” That’s partly why, in 2008, he cofounded the aptly named Singularity University, an institute that focuses on world-changing technologies. We caught up with the longtime futurist to get his take on the A.I. debate and, well, to ask what the future holds for us all.

Fortune: Has the rate of change in technology been in line with your predictions?

Kurzweil: Many futurists borrow from the imagination of science-fiction writers, but they don’t have a really good methodology for predicting when things will happen. Early on, I realized that timing is important to everything, from stock investing to romance—you’ve got to be in the right place at the right time. And so I started studying technology trends. If you Google how my predictions have fared, you’ll get a 150-page paper analyzing 147 predictions I made about the year 2009, which I wrote in the late ’90s—86% were correct, 78% were exactly to the year.

What’s one prediction that didn’t come to fruition?

That we’d have self-driving cars by 2009. It’s not completely wrong. There actually were some self-driving cars back then, but they were very experimental.

Why are we so bad at predicting certain things? For example, Donald Trump winning the presidency?

He’s not technology.

Have you tried to build models for predicting politics or world events?

The power and influence of governments is decreasing because of the tremendous power of social networks and economic trends. There’s some problem in the pension funds in Spain, and the whole world feels it. I think these kinds of trends affect us much more than the decisions made in Washington and other capitals. That’s not to say they’re not important, but they actually have no impact on the basic trends I’m talking about. Things that happened in the 20th century like World War I, World War II, the Cold War, and the Great Depression had no effect on these very smooth trajectories for technology.

What do you think about the current debate about artificial intelligence? Elon Musk has said it poses an existential threat to humanity.

Technology has always been a double-edged sword, since fire kept us warm but burned down our houses. It’s very clear that overall human life has gotten better, although technology amplifies both our creative and destructive impulses. A lot of people think things are getting worse, partly because that’s actually an evolutionary adaptation: It’s very important for your survival to be sensitive to bad news. A little rustling in the leaves may be a predator, and you better pay attention to that. All of these technologies are a risk. And the powerful ones—biotechnology, nanotechnology, and A.I.—are potentially existential risks. I think if you look at history, though, we’re being helped more than we’re being hurt.

How will artificial intelligence and other technologies impact jobs?

We have already eliminated all jobs several times in human history. How many jobs circa 1900 exist today? If I were a prescient futurist in 1900, I would say, “Okay, 38% of you work on farms; 25% of you work in factories. That’s two-thirds of the population. I predict that by the year 2015, that will be 2% on farms and 9% in factories.” And everybody would go, “Oh, my God, we’re going to be out of work.” I would say, “Well, don’t worry, for every job we eliminate, we’re going to create more jobs at the top of the skill ladder.” And people would say, “What new jobs?” And I’d say, “Well, I don’t know. We haven’t invented them yet.”

That continues to be the case, and it creates a difficult political issue because you can look at people driving cars and trucks, and you can be pretty confident those jobs will go away. And you can’t describe the new jobs, because they’re in industries and concepts that don’t exist yet.


