AMNC23: How do we keep up with the pace of AI development?

AI experts call for guardrails, responsible design and deployment at the World Economic Forum's Annual Meeting of the New Champions in Tianjin, China.

Pascale Fung, Chair Professor, Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong SAR, speaks at the Annual Meeting of the New Champions 2023 in Tianjin, People's Republic of China, 27 June 2023. Image: World Economic Forum/Faruk Pinjo

Robin Pomeroy
Podcast Editor, World Economic Forum
Simon Torkington
Senior Writer, Forum Agenda
This article is part of: Annual Meeting of the New Champions

  • The final episode of our AI podcast series features highlights from the Annual Meeting of the New Champions (AMNC), the World Economic Forum’s ‘summer Davos’, in Tianjin, China.
  • Final decision making in critical fields must remain with a human, they say.
  • And measures to eliminate bias and ensure transparency are also crucial.

Leaders from government, the tech industry and academia have recognized the huge potential of generative AI – but called for a series of protective guardrails to ensure intelligent technology is deployed safely with the intention of bringing benefits to all.

The experts spoke in two sessions at the World Economic Forum's Annual Meeting of the New Champions in Tianjin, China.


Framing the debate, Cathy Li, Head of AI, Data and Metaverse at the Forum and Member of the Executive Committee, said in a recent episode of Radio Davos: “It’s crucial for everybody to understand the enormous potential that we see with this novel technology but also the challenges and responsibilities that come with it.”

Here are the highlights from the two sessions on the future of AI.

Note: This article is based on automated transcripts and some quotes have been edited for length and grammar.

How to regulate a technology that could change human civilization

Speaking at the session Generative AI: Friend or Foe?, Pascale Fung, Chair Professor at the Department of Electronic and Computer Engineering, Hong Kong University of Science & Technology, said the world is facing powerful and creative models that mark "a quantum jump in human civilization".

Fung continued: "We must have a framework for developing responsible generative AI models, not just by keeping a human in the loop but to fundamentally understand these models and align them with what humans want."

While the panellists agreed that guardrails are essential for deploying AI safely, there was also a recognition that concerns over the potential negative consequences of AI should not slow down development.


Emilija Stojmenova Duh, Minister of Digital Transformation, Slovenia, said: “AI can really boost innovation”, adding that fears should not hinder that innovation. "Of course there are fears, but I believe that if we speak all the time about the fears, then we might lose this potential." Stojmenova Duh went on to speak about her biggest concerns with AI, its role in education and bias inherent in AI systems.

"I see huge potential, my concern [is] that I am not quite sure whether the teachers will know how to use AI, whether they understand what generative AI means and how they can use it. But my biggest [concern is] the biases in AI. We already have stereotypes and biases in the world. We need to find a way to eliminate the stereotypes and biases from AI and to make sure that they will not cause additional divisions in our communities."

For Darko Matovski, CEO of CausaLens, the benefits of AI-driven innovation must be backed up with transparency. To build trust, he told the panel, humans must be able to understand how AI systems operate and the decisions they make. "Some 85% of AI projects never leave the lab and there is a fundamental reason for that ... people do not actually trust the algorithms. Generative AI has lots of uses, but when it comes to decision making, people really need to understand what the algorithm is doing. The AI must explain why it made a decision. It must be able to explain what it will do if a data point that it has never seen in the past comes to light."


Who makes the rules to regulate AI?

At the second session, Keeping Up: AI Readiness Amid an AI Revolution, panel members discussed who should take responsibility for regulating AI. Should that fall to governments or should the tech industry be left to regulate itself?

"As governments, we can't regulate what we don't know," said Paula Ingabire, Minister of Information Communication Technology and Innovation, Rwanda. Ingabire went on to describe how a public/private approach to regulation could deliver the best outcome, telling the panel: "It is natural for everyone to think governments should take a lead, and I agree with that. What is very important to understand here is government alone cannot do it themselves. You need a kind of partnership and collaboration and to figure out who plays a stronger role."


Emilija Stojmenova Duh pointed out the dangers of governments rushing into regulation without fully understanding the risks and the potential of AI, telling the panel: "If we want to regulate something, it is good to take some time and see what we want to regulate". The minister warned that the speed of the development of AI could catch legislators off guard and lead to regulation that is either inadequate to control risks or stifles positive innovation.

Joanna Bryson, Professor of Ethics and Technology at the Hertie School, told the panel that it's paramount that AI developers and regulators are asking themselves the right questions about the potential impact of AI. She suggests a greater focus on ensuring people feel secure in a world with AI, rather than trying to convince them to trust it.

"We need to be thinking very much about the people that are displaced [by AI]. When people have an unexpected life event, are they likely to be able to continue paying the rent or the mortgage? Governments have an important role in helping us all deal with change. Part of the reason that we have challenges of trust right now is because we know we're not getting the information we should and we have not had enough transparency. I think it is important that we recognize we are in the information age and we do have to do things that benefit the people."

The World Economic Forum has launched the AI Governance Alliance, a dedicated initiative focused on responsible generative artificial intelligence. This initiative expands on the existing framework and builds on the recommendations from the Responsible AI Leadership: A Global Summit on Generative AI.
