
AI: how can governance and regulation catch up? On the Radio Davos AI podcast special

Making AI serve humans. Image: Unsplash/Tara Winstead

Simon Torkington
Senior Writer, Forum Agenda

  • Generative AI is advancing quickly, and regulators around the world are playing catch-up.
  • On Episode 4 of the World Economic Forum’s Radio Davos AI podcast series, we hear opinions on what needs to be done.
  • To listen, visit the special episode landing page, or subscribe on any podcast app.

The arrival of generative AI, with all its opportunities and potential risks, has led to a global debate about how the technology should be regulated.

As part of a special artificial intelligence series on the World Economic Forum’s Radio Davos podcast, we are speaking to experts about how we should approach AI, as well as about its potential to change our lives.

In the fourth episode of this six-part series, we talk to Amir Banifatemi, co-founder and director of the non-profit AI Commons, and Cyrus Hodes, co-founder of AIGC Chain and Stability AI, who also sits on the steering committee of the AI Commons, a global knowledge hub.

Here are some key insights from the conversation.

We must consider AI safeguards now

“This is a critical moment in human civilization,” says Cyrus Hodes.

“There's no doubt that we are at a pivotal moment and we should look at making sure that we have frameworks and safeguards in place.”


How is the World Economic Forum ensuring the responsible use of technology?

The World Economic Forum recently formed the AI Governance Alliance to unite industry leaders, governments, academic institutions and civil society organizations in championing the responsible global design and release of transparent and inclusive AI systems.

The Alliance was formed in recognition that AI has the potential to impact every aspect of human life, and to help ensure the technology is developed responsibly.

AI has the potential to impact almost every aspect of human life. Image: World Economic Forum

We must carefully control the release of new AI models

“There are multiple conversations and debates about slowing down the pursuit of development of language models, their training and the speed at which it is going,” says Amir Banifatemi, but he believes a different approach is more realistic.

“We should not slow down research. This is a discovery; this is ingenuity at work; this is how we progress. But there should definitely be guardrails in terms of the models that are developed and how these models are used to build applications, who these applications are serving and how safe they are. The release of powerful AI models should be carefully controlled.

“We cannot expose the general public – who may not be fully trained or literate in using these tools – to AI models that are in the works. I think regulators are looking at this.”


We must approach AI regulation in new ways

“The core pillar is that AI has to be human-centred,” says Hodes. For example, we need to make sure AI is getting us towards the UN’s Sustainable Development Goals.

“Also fairness, accountability and transparency are high-level topics that have been discussed and adopted by many other countries,” Hodes says.

Amir Banifatemi believes that regulators must urgently grasp the size of AI’s potential impact and increase the pace of the regulatory process to keep up with the speed of development.

“This is a question for us as a species,” says Banifatemi. “For the past thousands of years, we didn’t have a cousin or a brother, and now we may have one. So it is how we understand that and how we deal with it. I think policy-makers and regulators are going to be pushed to accelerate the way they understand, the way they come together and the way they can create frameworks for us to navigate better.”

Hodes is sceptical about whether policy-makers can take a long-term view.

“Most policy-makers are concerned about re-election. They have a short-term view. At the same time, we don't talk about this most important topic, which is the rise of a new intelligence. There's a lot of education to be made for policy-makers worldwide.”


Hodes cites international treaties on atomic energy safety and the non-proliferation of nuclear weapons as models for AI regulation.

“This is the moment in time where policy-makers worldwide should regroup and work towards such an arrangement,” he says.

“Why don't we regulate AI at a world level? I believe, with all my colleagues ringing alarm bells, this is the moment in time where it's going to happen.”


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.




© 2024 World Economic Forum