Artificial Intelligence

An Oxford professor on AI’s potential for humanity – and what might block it


Beyond the scaling era: Why AI’s next breakthrough depends on human-like efficiency, not just more data. Image: Luke Jones/Unsplash

David Elliott
Senior Writer, Forum Stories

Quotes have been lightly edited for length and clarity.

  • Will AI's promise to benefit humanity come good? What might stop it fulfilling that promise?
  • Historian and economist Carl-Benedikt Frey spoke to the Forum about the pace of change, how AI's "era of scaling" is ending, and what future historians might say about this period.
  • The Forum's Radio Davos podcast looks at some of the world's biggest problems and how they might be solved.

AI promises to unlock a new era of progress – but how and when will that happen?

For historian and economist Carl-Benedikt Frey, Dieter Schwarz Associate Professor of AI & Work at Oxford University, the answers lie beyond the technology itself. Innovation and institutions will be vital to AI delivering on its potential.

Speaking to the Forum, here's what he had to say on the pace of change, why we are approaching the end of the "era of scaling", and how future historians might look back on the period we’re living through.



What happens when technological change outpaces institutional adaptation? Are we seeing early signs of that with AI?

Technology is not all that matters. Institutions and incentives matter too. In academia, for example, the incentive is ‘publish or perish’. So, academics tend to pursue multiple projects at any given time. With a powerful new productivity tool, you can either do more things or use the tool to drill deeper. And the evidence shows we’ve opted for doing more things. That means our attention is spread more thinly across more projects, and we’re less likely to push the boundaries and make a breakthrough in any given domain. We need to solve institutional challenges to realize the productivity potential of AI.

When future historians look back at the period we’re in now, what might they say?

What we've seen in AI over the past decade is an era of scaling: growing existing approaches with more data and compute. That paradigm is gradually coming to an end, and we need new ideas to push AI forward. Take, for example, the board game Go. Most people know that [Google DeepMind's AI program] AlphaGo beat the world champion back in 2016. What few people know is that human amateurs, using standard computers, later beat the best Go programs by exposing them to positions they would not have encountered in training. That tells us that even when machines achieve superhuman performance, we cannot be sure how well those algorithms work when circumstances change. And the world changes all the time. So, what we will need going forward is more resilient AI that is better able to generalize to novel circumstances and unseen situations. And that will require more innovation and research.

Carl-Benedikt Frey at the 2026 World Economic Forum Annual Meeting in Davos. Image: World Economic Forum

So, do we need to move from the scaling era to one of inventiveness?

What we need is a new paradigm of research because we need new ideas. It's far from clear that the future of AI is large language models. It might be small language models, it might be symbolic AI, it might be a fusion between large language and symbolic AI. It might be something else. And to explore those trajectories, we need a new paradigm of research, and we need more decentralization.

What might this mean for different countries and regions around the world?

We live in an increasingly fragmented world. It’s a big shift. If you go back to the post-war period, growth around the world relied heavily on American technology. America essentially exported the system of mass production to the world, in large part through the Marshall Plan that aided Europe, for example. We're in a very different situation now because America is showing much less interest in free-flowing technology. And if Europe cannot rely on the US as a partner, it will need to develop its own technology. I think that's something we will see in many places around the world that used to rely on catch-up growth, adopting technology invented elsewhere. They will now have to shift towards a more innovation-led model.


How long do societies need to course correct once a new technology’s trajectory has become clear?

If you go back to the first Industrial Revolution, it took decades for the benefits of the factory system to trickle down to the broader population. Now, of course, Britain was not a democracy back then by any stretch of the imagination. Today, in societies that have democratic institutions in place, there are more mechanisms for self-correction. We saw this in the US during the Gilded Age [late 1870s/early 1900s], when the public mobilized against rising tech monopolies, and systemic political corruption became apparent, leading to the Pendleton Act and a meritocratic civil service. That meritocratic civil service then went on to regulate American monopolies through the Sherman Antitrust Act, for example. This played out over several decades - that's how long it can often take for such correction to happen. I only hope it happens a lot quicker in the case of AI.


What about for emerging economies that may be in a race to catch up on deploying the technology – what is the right strategy for them?

I'm actually more optimistic about developing and emerging economies because I think AI, in its current form, reduces barriers to entry in knowledge work. It's a bit like the introduction of GPS technology for taxi services: knowing the name of every street in New York City or Beijing was no longer such a valuable skill. New companies could enter with digital platforms, matching supply with demand. What we're also seeing is that AI is making language barriers less of an issue. Most trade in services has historically been confined to English-speaking countries. But with machine translation, more countries can participate in services trade. So, I think there's a real opportunity for emerging economies to pursue service-led growth. That does not mean they automatically catch up, but I think it creates a new opportunity for low-income countries to do so.

What kind of institutional flexibility would that require? Decentralizing models?

You wouldn't necessarily need to be at the technological frontier to achieve this. It's similar to what we've seen in manufacturing: the auto industry emerged in Detroit, spread across locations in the West and eventually migrated to regions with lower labour costs, like China. This migration lifted 800 million people out of poverty, and we may now see AI trigger a similar shift in the service sector.

Finally, what is your view on AI’s energy use?

Going back to the first Industrial Revolution and the development of the steam engine, the reason we associate it with James Watt is that he first made it energy efficient. AI is still waiting for that moment. And because we’ve seen such strong capital inflows into AI infrastructure, we’ve been able to support scaling. Future models will need to be more like humans – more energy and data efficient, capable of learning from only a few examples. And so, if capital flowing into the AI sector slows, it will likely push firms to focus more on data, compute efficiency and energy-saving AI – ultimately making energy less of a bottleneck than it is under today’s scaling-driven paradigm.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
