Artificial Intelligence

We need to talk about AI - find out why on the new Radio Davos podcast series

Picture generated by the AI Dall-E with the prompt 'the face of Rodin's Thinker as a robot'. Image: Dall-E/Robin Pomeroy

Robin Pomeroy
Podcast Editor, World Economic Forum
Charlotte Edmond
Senior Writer, Forum Agenda
  • The World Economic Forum's weekly podcast, Radio Davos, is doing a special series on generative artificial intelligence (AI).
  • The first episode explores why AI is suddenly dominating the headlines, with views from IBM's AI ethics chief and a senior AI professor.
  • AI is a 'powerful wild beast' - can it be tamed for the good of humanity?
  • Subscribe to Radio Davos; find all episodes here.

“It's the first time that people all over the world can use and interact with an AI system. It's really a game-changer because everybody can experience the capabilities of an AI system.”

Francesca Rossi, Head of Ethics at IBM Research, is talking about ChatGPT. If you haven’t yet used the generative AI tool, at the very least you will have probably heard about it. Since being released late last year, it has quickly become the fastest-growing consumer app in history.

The explosion of ChatGPT and other “large language models” has brought the potential of AI into the public consciousness, raised many questions about ethics and opportunities, and left many regulators struggling to catch up.

According to the World Economic Forum's latest Future of Jobs Report, employers predict that almost a quarter of all jobs will be affected by technology, and that 44% of the skills needed in the workplace will change within five years, as cognitive skills, complex problem-solving and technology literacy become increasingly relevant.

[Chart: total job growth and loss]
The green transition, technology and the economic environment are all changing the jobs market. Image: World Economic Forum

To explore these issues in depth, the World Economic Forum’s weekly podcast Radio Davos has launched a six-part series, talking to experts and thinkers to hear about some of the biggest challenges and how we address them, as well as the potential of AI to change the way we live and work.

In the first episode, we talk to Rossi and to Pascale Fung, professor of computer engineering and director of the Centre for AI Research at the Hong Kong University of Science and Technology. Below is an edited version of those conversations.

In the series, we will also be explaining some of the jargon around AI. In this episode, Cathy Li, Head of AI, Data and Metaverse at the World Economic Forum, defines 'large language models', or LLMs:

What is a large language model AI?

Cathy Li: In simple terms, a large language model is a smart computer program that can understand and generate humanlike language. It works by using a type of artificial intelligence called deep learning, and it's trained on a massive amount of text data from books, articles, websites and other sources to understand and learn the patterns and relationships between words and sentences.

During training, the model analyses the text data and tries to predict the next word in the sentence based on the words that came before it.
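
To make that training objective concrete, here is a minimal, hypothetical Python sketch of next-word prediction using simple word counts. Real LLMs learn these patterns with deep neural networks trained on billions of words, but the objective Li describes, predicting the next word from the words that came before it, is the same idea.

```python
from collections import Counter, defaultdict

# Toy training corpus; real models train on massive text datasets.
corpus = "the model reads text and the model learns patterns in text".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Predict the most likely next word, given the previous word."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # 'the' is followed by 'model' twice -> 'model'
```

A real model conditions on the whole preceding context rather than a single word, and outputs a probability for every word in its vocabulary, but the prediction task itself is this simple.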

When you interact with the language model, you provide it with a prompt or a question. The model uses its learned knowledge to generate a response by predicting the most likely words and sentences that fit the context of what you are trying to say.
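
As one illustration of that prompt-and-respond loop, the sketch below prompts a small open model using the Hugging Face transformers library; this is an assumed setup for demonstration, not what ChatGPT itself runs on. GPT-2 is tiny by today's standards, but it extends a prompt one predicted token at a time in the same way.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# Load a small, freely available language model for text generation.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt by repeatedly predicting the next token.
result = generator("Large language models work by", max_new_tokens=25)
print(result[0]["generated_text"])
```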

There's obviously a difference between a small language model and a large language model, and the threshold between them sometimes cannot really be predicted. But scientists have observed that capabilities jump exponentially after certain thresholds, and that's also where they are seeing surprising emergent properties that they had never seen and couldn't predict before.

What is conversational AI?

Pascale Fung: Conversational AI is basically the interaction between a human user and a machine. Technically speaking, there are two kinds of conversational AI systems.

The first kind is open-domain chatbots, where you can talk about any topic (hence 'open domain') and chat about anything for as long as you want.

The other kind of conversational AI system is called a task-oriented dialogue system. Your virtual assistants, your smartphone assistants, your call centre virtual assistants: these are all dialogue systems, or conversational AI systems, that try to accomplish a task or answer a query the user has.
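
A toy sketch of the task-oriented kind is below; the intents and canned responses are made up for illustration, and real assistants use learned models rather than keyword matching. The point is the contrast Fung draws: the system maps an utterance to a known task and fulfils it, instead of chatting freely about anything.

```python
# Hypothetical intents for a task-oriented dialogue system.
INTENTS = {
    "weather": ["weather", "rain", "sunny", "forecast"],
    "alarm": ["alarm", "wake", "timer"],
}

RESPONSES = {
    "weather": "Fetching today's forecast...",
    "alarm": "Setting your alarm.",
}

def handle(utterance: str) -> str:
    """Map the user's utterance to a known task, or refuse."""
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(keyword in words for keyword in keywords):
            return RESPONSES[intent]
    return "Sorry, I can't help with that task."

print(handle("will it rain tomorrow"))  # -> "Fetching today's forecast..."
```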

What is behind these huge advances in AI?

Pascale Fung: Generative AI has been around for a while – it predates deep learning and neural networks. But recent generative AI models are much more powerful than previous generations because they are trained on huge amounts of data and have huge numbers of parameters.

And these generative AI models, in particular the large language models, are used as foundational models to build conversational AI systems.

There's a common misunderstanding that ChatGPT is a conversational AI system. Technically, it is not. ChatGPT is what we call a foundational model: a large language model that can perform a multitude of tasks. On top of that sits a chat interface, like a UI that lets the user interact with the underlying large language model via chat.

So ChatGPT can be used to build other systems, including but not limited to conversational AI systems.
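
As a sketch of that idea, here is one way an application other than chat (a one-sentence summarizer) might be built on top of the same kind of underlying model, using the OpenAI Python client. The model name and prompt are illustrative assumptions, and an API key is required.

```python
# Requires: pip install openai, plus an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def summarize(text: str) -> str:
    """Use the underlying language model for a task other than chatting."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```

The chat interface people use and a summarizer like this are just different front ends to the same foundational model.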

Francesca Rossi: I didn't really see a big change; I saw an evolution over decades.

But what is different now compared to my experience with AI is that these research results – with a very simple interface – are available to everybody.

People were already using AI in almost everything they do in their online lives. But they were not realizing it, because it was hidden inside all the applications and things that we use online.

[Chart: proportion of tasks completed by humans vs machines]
The human-machine frontier is shifting. Image: World Economic Forum

What applications are you excited about in the future?

Pascale Fung: I would like to see us come up with solutions that let us take advantage of, and control, generative AI.

Today, these large language models are like these powerful wild beasts, right? We need to have algorithms and methods to tame such beasts and then to use them for the benefit of humanity.

In the long term, I hope to see more beneficial AI, in the medical domain for example: healthcare for the elderly, healthcare for disadvantaged people who have no access to advanced medical care. And we can democratize such health care with AI technology.

Today, the road from here to there is unknown. Maybe we can get there within a year. Or maybe we would need another paradigm shift to get there. But that's what's exciting about this field of generative AI. We're almost making scientific discoveries when we work with these models, and we are learning new ways of how to work with them and how to take advantage of them on a daily basis.

Why are people concerned about it?

Francesca Rossi: There were issues in the previous wave of AI – we knew all about the issues around bias, explainability, transparency, robustness, privacy and so on. Those issues are still there, and now there are additional ones related, again, to this ability to generate content.

So, for example, the possible spread of misinformation, and some copyright issues or issues relating to privacy.

It's also true that there can be misuses. The prompt can be anything, but we must embed into those large language models ways to respond appropriately.

I think that in the future we will have to find more effective methods that are not filters after the building of the large language model but are embedded into the building of the model itself.
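
For a concrete picture of the 'filter after the fact' approach Rossi wants to move beyond, here is a minimal sketch that screens a model's output with a separate moderation check before returning it. The OpenAI moderation endpoint is used as one illustrative example, and an API key is assumed.

```python
# Requires: pip install openai, plus an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def safe_reply(model_output: str) -> str:
    """Post-hoc filter: only return the output if a moderation check passes."""
    result = client.moderations.create(input=model_output)
    if result.results[0].flagged:
        return "Sorry, I can't share that response."
    return model_output
```

Rossi's point is that checks bolted on after generation like this are weaker than safeguards embedded in the building of the model itself.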

Pascale Fung: Unintended harm also worries me a great deal. People, out of the goodness of their hearts, are trying to build a system to help patients find cures for different diseases, some sort of WebMD but based on AI. They're thinking this will help people get access to health information and so on. But they don't know that some of the answers given by these generative AI systems are incorrect.

And given there's so much investment today in this area – there are thousands of start-ups coming up using generative AI and ChatGPT alone – I'm afraid that many people do not know the limits of ChatGPT and will build applications they claim do one thing when actually they do something else, not what they're intended to do.

Is there a risk that AI development is moving too fast?

Pascale Fung: The risk is already here. We have already seen that people are using generative AI in ways it's not intended to be used.

I am very encouraged to see the progress we have been making in upstream research, including coming up with ways to mitigate harm in AI.

I am worried the deployment is going too fast because we're deploying systems we don't 100% understand the ramifications of. We don't necessarily have to explain the AI system that we deploy in minute detail to everybody who's going to use it, but we need to have the confidence we can mitigate the harm before we release the system into the wild.

Francesca Rossi: I would distinguish between the different phases in the value chain of building an AI model, and releasing it and deploying it.

I think we want to facilitate and speed up the research and development even more because those are areas that can help us understand how to better mitigate the issues.

And of course, we want to be careful about the later phases in the value chain, such as deployment and uses. And that's why I think that policies and regulations should act more within that part of the value chain rather than at the initial part.

Big data and AI are changing what we need from workers.
Big data and AI are changing what we need from workers. Image: World Economic Forum

Where are we in terms of global or regional governance of AI?

Francesca Rossi: I think the most comprehensive legislative discussion is what is happening in Europe right now around the European AI Act proposal. It is still at the level of a draft, with a lot of different proposals for amendments, but it will soon be approved by the European Parliament.

What I like about that regulation is that it’s risk-based, where the risk is associated with the scenarios in which the AI would be applied. There is a list in the regulation of “high-risk uses”, for example, for human resources applications, deciding who is hired, or who is being promoted and so on. That's one of the high-risk application areas.

Right now there is a lot of discussion around how to make this regulation also include something about generative AI and large language models.

And I hope this will not shift the focus of the risk-based framework from the application area to the technology itself, because some discussions instead argue that these models are risky no matter where you apply them.

I think that would be a big mistake.

How else can we mitigate some of the risks associated with AI?

Pascale Fung: Mitigating risk is a multi-stakeholder job, a multinational job, and we don't talk enough about the people who build the systems.

It starts with us: the research engineers who build these systems must comply with a code of conduct.

Meanwhile, we need to design algorithms such that they can be aligned with human values.

Check out all our podcasts on wef.ch/podcasts.
