The final episode of our AI series comes from the Annual Meeting of the New Champions (AMNC), the World Economic Forum’s ‘summer Davos’, in Tianjin, China.
Cathy Li, head of AI at the World Economic Forum, says what needs to happen next as the world gets to grips with generative AI, and she introduces the AI Governance Alliance.
And we listen in to discussions at AMNC about AI - the opportunities for business and the implications for fields such as medicine and education.
Follow AMNC here: https://www.weforum.org/events/annual-meeting-of-the-new-champions-2023
Watch the AMNC sessions quoted in this episode:
Generative AI: Friend or Foe?
Keeping Up: AI Readiness Amid an AI Revolution
Check out all our podcasts on wef.ch/podcasts:
Join the World Economic Forum Podcast Club
Join the World Economic Forum Book Club
Podcast transcript
This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Pascale Fung, Chair Professor, Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology: We are seeing a revolution that's beyond the industrial revolution. It is perhaps another quantum jump in human civilisation.
Robin Pomeroy, host, Radio Davos: Welcome to Radio Davos, the podcast from the World Economic Forum that looks at the biggest challenges and how we might solve them.
This week, in the last in our series on generative artificial intelligence, we're at the Forum's 'summer Davos' in China, where AI is on everyone's mind.
Olaf Groth, Professional Faculty, Haas School of Business, University of California, Berkeley: AI, and especially generative AI, has been touted to bring with it the potential for very, very significant economic development, potential growth to the tune of some $15.7-16 trillion around the world.
Robin Pomeroy: The opportunities are big, and not just for Silicon Valley. We hear from the country that wants to lead Africa's progress in AI.
Paula Ingabire, Minister of Information Communication Technology and Innovation, Rwanda: Rwanda sets itself to be a leading technology hub on the African continent.
Robin Pomeroy: The world is changing fast. Can governments and society catch up? We hear how the World Economic Forum is bringing stakeholders together to ensure they can.
Cathy Li, Head, AI, Data and Metaverse, World Economic Forum: AI development is moving so fast, so we must move as fast in order to make any meaningful contribution.
Robin Pomeroy: The head of AI at the World Economic Forum tells us about the new AI Governance Alliance it's set up to tackle the big issues around AI.
Subscribe to Radio Davos wherever you get your podcasts, or visit wef.ch/podcasts. I'm Robin Pomeroy at the World Economic Forum, and with our final episode in this series on generative AI from the Forum's annual meeting in China...
Olaf Groth: Is there hype? Most definitely there is hype. That does not mean that this technology is not groundbreaking and won't change both our economies and our societies.
Robin Pomeroy: This is Radio Davos.
Welcome to the final episode in our special series on generative artificial intelligence, coming to you from the World Economic Forum’s Annual Meeting of the New Champions in China. AMNC is an annual gathering in China, but has not happened for the past three years due to the pandemic. It’s back now and you can follow it on our website weforum.org and across social media using the hashtag #amnc23.
AI is far from the only big issue being discussed by stakeholders from government, business, academia and civil society there in Tianjin, where the global economy, geo-politics and the energy transition are also top of mind. But at a meeting whose theme is entrepreneurship, there are several sessions dedicated to AI that you can watch on catchup.
And in the second half of this episode, we’ll hear some soundbites from AMNC about the opportunities and risks posed by this rapidly growing technology.
But first, before she left for Tianjin, I spoke to Cathy LI, head of AI at the World Economic Forum. She helped organise the Responsible AI Leadership Summit in San Francisco where I recorded the interviews for this series a few weeks ago. I wanted to know from Cathy what the Forum is doing to bring stakeholders together to find ways of making generative AI a force for good for all.
I started by asking Cathy what that summit in San Francisco had aimed to achieve.
Cathy Li: The objective of the summit was to come up with a set of recommendations able to guide technical experts and policymakers on the responsible development and governance of generative AI systems.
It was fantastic because we had more than 100 thought leaders and practitioners in AI gathered over the course of three days to deliberate on aspects related to the design, development, release and societal impact of generative AI.
What emerged from those discussions was the set of 30 action-oriented recommendations for responsible development, open innovation, and social progress.
It's important to remember, Robin, that as we stand on the cusp of an era of transformation driven by novel AI systems it's never been more vital for stakeholders to come together and align on key questions and issues linked to the diffusion and governance of generative AI systems.
These recommendations should be seen as a step forward, building greater consensus and alignment around how to mitigate risks while shaping a more innovative, equitable and prosperous future.
Robin Pomeroy: Those recommendations are available online, right, so people can find that. So that was, you know, an immediate outcome of that summit. But now the World Economic Forum's building on the momentum because last week you announced the launch of this AI Governance Alliance. Can you tell us more about that?
Cathy Li: Yes. The AI Governance Alliance, which we launched last week, is a groundbreaking initiative that aims to champion responsible global design and release of transparent and inclusive AI systems.
This initiative, to your point, was built upon existing frameworks and incorporates the preliminary recommendations from the summit, which have just been published.
The AI Governance Alliance is built on the Forum's more than 50 years of expertise in establishing multi-stakeholder partnerships. It brings together private sector knowledge, public sector governance mechanisms, and civil society objectives, to address the challenges brought by generative AI.
With the support of the World Economic Forum's Centre for the Fourth Industrial Revolution, the Alliance actively engages with various regions while contributing to shaping a global approach to address the transformative nature of generative AI systems.
Robin Pomeroy: Okay, so in a nutshell, what do you hope the alliance will achieve? What areas will it address?
Cathy Li: There are many existing alliances and initiatives out there already, but we do believe that the AI Governance Alliance could be much more action-oriented. And that's really the underlying reason why we decided to launch it at this time.
The alliance will focus on three crucial actions to ensure responsible and safe AI development and deployment.
Firstly, there's an emphasis on prioritising safe systems and technologies by investing in robust and secure systems to mitigate risks and ensure user safety.
Secondly, the Alliance aims to ensure sustainable applications and transformation by aligning generative AI with long-term societal goals, addressing biases, and promoting transparency.
Lastly, the Alliance recognises the importance of resilient governance and regulation. It actively collaborates with regulators, policymakers and other stakeholders to establish ethical frameworks and specific regulatory measures tailored to generative AI.
Robin Pomeroy: And are all stakeholders now agreed that there need to be clear guardrails on AI?
Cathy Li: While there's widespread agreement on the need for regulation, for clear guardrails, it's important to note that various viewpoints exist regarding the best approach.
These perspectives often find themselves navigating between existing laws, progressive proposals like the EU AI Act, and tailored strategies for specific sectors. Additionally, the debate surrounding open source versus closed source solutions and the responsible release of large language models generates a range of opinions and discussions.
Robin Pomeroy: And those are all the kind of things you'll be discussing within this AI Governance Alliance?
Cathy Li: Yes, the strength of the Alliance is that all those different viewpoints will be represented, so we hope to reach some conclusions that will be acceptable for the hugely varied array of stakeholders: big companies, small new ones, governments, civil society, consumers, etc.
Robin Pomeroy: This week, as this podcast goes out, there is the Annual Meeting of the New Champions, the AMNC, which is the World Economic Forum's annual gathering in China. So it's happening now, as people are listening to this. And AI is one of the most important topics under discussion there. What do you hope can be achieved in those sessions and other talks in Tianjin?
Cathy Li: The Annual Meeting of New Champions incorporates open dialogues and interactive sessions, allowing participants to explore the different aspects of AI.
For example, there will be dedicated sessions aimed at understanding AI from a national ecosystem perspective, where representatives from various sectors will share their contributions to fostering a vibrant AI ecosystem that aligns with country priorities.
Moreover, we will delve into the unique value and challenges posed by generative AI, while also hosting curated conversations and workshops that demonstrate the applications of AI in sectors such as healthcare, climate, and advanced manufacturing.
Robin Pomeroy: And people can follow some of what's happening at AMNC and see the livestreamed sessions on AI and lots of other really big issues on our website live and on catch up.
But to follow your work, the work of the Centre for the Fourth Industrial Revolution on generative AI, what should people be looking out for in the coming months?
Cathy Li: Alongside the launch of the AI Governance Alliance, we're excited to announce the formation of working groups focusing on the technical dimension, industry applications, and governance.
We will share initial findings from these groups demonstrating our active commitment to driving progress in various areas related to AI governance.
We owe this momentum to our inclusive project community, which brings together stakeholders from the private sector, the public sector, academia and civil society, fostering collaboration and collective action.
These efforts will contribute to shaping the outputs between our first and second AI summits. By the time of our second summit, which is planned for November this year, we hope that we will already be able to share some of the findings and publications with the world following the three tracks that I outlined earlier.
AI development is moving so fast, so we must move as fast in order to make any meaningful contribution.
Robin Pomeroy: So there's a second AI summit planned in November, a follow-up to the one I was at and you were at in San Francisco a few weeks ago. People should be looking out for that. Great. Thanks, Cathy. Thanks for joining us on Radio Davos.
Cathy Li: Thanks, Robin, for having me.
Robin Pomeroy: Cathy Li. If you're interested in learning more about the AI Governance Alliance, you can find detailed information on the Forum's dedicated website. Search: WEF AI Governance Alliance.
So to China, you can watch several sessions on AI at the AMNC meeting on our website. But here's a flavour of the discussions.
Pascale Fung is Chair Professor at the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology. She co-chaired the San Francisco AI Summit, and you can hear my interview with her in episode one of the series. Here she is at AMNC on a session called Generative AI: Friend or Foe?
Pascale Fung: So a few years ago, we talked about the Fourth Industrial Revolution, and that was when AI was taking off. But today I believe, along with some of my peers, that we are seeing a revolution that's beyond Industrial Revolution. It is perhaps another quantum jump in human civilisation.
Robin Pomeroy: Another session at the meeting in Tianjin was called Keeping Up: AI Readiness Amid an AI Revolution. Let's hear from that session's moderator, Olaf Groth, who, among other things, teaches about AI at the Haas School of Business at the University of California, Berkeley.
Olaf Groth: AI, and especially generative AI, has been touted to bring with it the potential for very, very significant economic development, potential growth to the tune of some $15.7-16 trillion around the world over the next decade or so.
Is there hype? Most definitely there is hype. That does not mean that this technology is not groundbreaking and won't change both our economies and our societies.
Robin Pomeroy: Olaf Groth from the University of California, Berkeley.
One of the many risks associated with the rapid rise and impact of AI is that it could increase the global 'digital divide' and allow wealthier countries and citizens to pull ahead while poorer ones are left behind.
Leaders in Africa are determined that will not happen. Here's Paula Ingabire, Rwanda's Minister of Information Communication Technology and Innovation.
Paula Ingabire: Rwanda sets itself to be a leading technology hub on the African continent. And to do that, our strategy is to be a proof of concept hub.
And one would ask, what does that mean, really? It's, you know, being a space where innovators, start-ups, big corporations can come and experiment with emerging technologies, test them, try them out, and if they are proven successful, then they're able to scale to the rest of the continent from Rwanda.
The role of government is to put in place a regulatory environment that enables innovation.
Just about a month ago, we did put in place our national AI policy, which we put together with the support of the Centre for the Fourth Industrial Revolution in Rwanda and obviously with the World Economic Forum.
The other panellists will talk about AI in healthcare. I think that's one particular area where we're looking to see what AI-enabled healthcare solutions can be deployed. We've already done an economic analysis to see which industries and segments we feel AI has the most potential to disrupt, but also to create impact in.
So healthcare, agriculture, public service are the three top ones that we've already identified with very specific use cases that we will be implementing over the next two years to really look at what the potential of implementing these solutions will look like.
Robin Pomeroy: From Africa to Europe. A country also looking at AI's applications is Slovenia. Here's Minister of Digital Transformation, Emilija Stojmenova Duh.
Emilija Stojmenova Duh, Minister of Digital Transformation, Slovenia: Of course there are fears, but I believe that the fears should not hinder the innovation. That is because AI brings huge potential.
And if we speak all the time about the fears then we might lose this potential. And this is not what I would like to happen, also as somebody coming from the government.
I believe that generative AI can really boost innovation and I would like to see that in the government as well.
So one of my concerns - not fears - is the competences of public servants, and also the competences of citizens. And my particular concern is introducing generative AI into schools. Why? Because I am not quite sure whether the teachers know how to use it, whether they understand what generative AI means, how it can be used, and what the fears behind it are.
The biggest one is the biases, because we already have stereotypes and biases existing in the world, and AI can even increase those biases. So we need to find a way to eliminate the stereotypes and biases and to make sure that they will not cause additional divides.
Robin Pomeroy: From Africa to Europe and back to China. A local company developing AI products in China is Neusoft Corporation. Its chairman, Liu Jiren, explained how his company was using AI in healthcare.
Liu Jiren, Chairman, Neusoft Corporation: So that is a market where we can apply AI.
We use AI to create a kind of digital treatment, so that a doctor in a big city, a small city, or the countryside can offer the same kind of quality.
Robin Pomeroy: Using AI in healthcare has serious implications, as it does in anything that affects people in real lives in the real world, such as law or education.
Neusoft's Liu Jiren again.
Liu Jiren: If we are talking about a healthcare transformation with AI technology, firstly you'll meet a challenge about data: how to capture data, how to collect data, privacy, governance. And also, if you use that data to make a diagnosis, it is not like ChatGPT. ChatGPT is okay - you can write an article, right or not - but if you make a disease diagnosis, you must be precise ... at least at the same level as doctors.
So I think now people are talking about ChatGPT - it's booming, it's happy - because you can draw a picture, you can write an article, you can do anything. It's a kind of entertainment.
But when you come to healthcare, it's very serious. And you need to talk about who pays for these ideas, what is their skill and safety, and how to convert the knowledge of a doctor into a digital clinical pathway.
Robin Pomeroy: So what about the governance, the oversight of AI? Joanna Bryson is Professor of Ethics and Technology at the Hertie School in Berlin, and she favours what's known as AI auditing, evaluating AI systems to make sure they work as expected without bias or discrimination and are aligned with ethical and legal standards.
Joanna Bryson, Professor of Ethics and Technology, Hertie School: We need to at least know what there is to know and what you need to do to be able to find out about that.
So for example, in the EU, we've been talking a lot about audits and some companies are very afraid of that.
But you need to understand other industries. The vast majority of industry now is digitising, and they're not afraid of audits because they already do compliance. It's just that the tech industry isn't used to compliance yet.
So the fact that you will be audited, and that you need to be able to know yourself how your code works - I mean, some AI companies have been very sloppy - shouldn't terrify you. It does not mean all your competitors are going to come through the door and see what you are doing, or that you have to explain every detail, like every weight in the neural network.
No, we audit banks without finding out how the synapses work in the heads of the people. The same way we will audit whether you actually followed good practice, best practice, just like any other industry. Due diligence, best practice. Whether your people did that when they trained the AI.
And if they used machine learning and AI when they trained it, when they tested it, how are they ensuring quality?
So that's the kind of thing that companies can do, and that the EU now looks very likely to be mandating that companies do, especially where AI takes decisions that affect people's lives, like welfare systems or banking or education or medicine.
Robin Pomeroy: As the theme of the AMNC meeting was entrepreneurship, let's hear from an entrepreneur. Darko Matovski is co-founder of AI startup causaLens.
Darko Matovski, Chief Executive Officer, causaLens: I will start by saying first that 85% of AI projects never leave the lab. And there is a fundamental reason for that.
The fundamental reason is that people don't actually trust the algorithms. The way it works is there is usually a bunch of data scientists. They throw a lot of data in a black box and something comes out. And as you mentioned, you know, it can be very entertaining if it's, you know, in the context of generative AI and, you know, generative AI has lots of uses and it's great. But when it comes to decision making, people really need to understand what the algorithm is doing.
The AI must explain why it made a decision. It must be able to explain what it would do if a data point that it has never seen in the past comes to light.
That's real life. We have to have AI that guarantees outcomes even if we haven't seen a data point in the past.
And that's why there is fundamental research in AI, and a lot of new technologies coming out that are able to explain why they make their decisions, even beyond the historical data on which they were trained.
So until we solve this problem of trust, we will have 85% of projects - whether in healthcare, in government, or in any industry - just remaining in the lab.
Robin Pomeroy: Finally, the human impact: generative AI is going to change our lives. But how? How can we prepare for disruption? Should we all be retraining as computer engineers? Professor Pascale Fung has a different idea of the role of education in the AI world.
Pascale Fung: I believe in the future, as I mentioned earlier, machine intelligence can take over a lot of the skills that we possess today.
So what we need to train in humans, the future humans, is to be more human. So to have more critical thinking, more humanity.
So I advocate for revising the curriculum to include more, for example, history, philosophy, ethics, the arts - the creativity side - as well as mathematics and sciences.
So I advocate for an education system where everybody receives this kind of a holistic curriculum.
Today, our education system has been very much divided into silos. I can see this very clearly with our engineers, for example: you ask the engineers developing AI systems to figure out human value alignment, and it's a huge challenge.
Then you ask ethicists to give us feedback on the systems. They do not necessarily understand the algorithms.
So in the future, we cannot have this kind of silo anymore.
So we need to go back to basics and teach our younger generation to be true renaissance men and women. So more humanities, more sciences, more mathematics.
And maybe less of the skills that we are trying to teach them today because those skills will be replaced by machines.
Robin Pomeroy: One final, final word on the real world impact of AI from Professor Joanna Bryson.
Joanna Bryson: A lot of people used to ask me, how can we make people trust AI? And that is the wrong question to ask. We need to be talking about the people who are building AI and the people who are regulating AI.
So I love the question you actually asked, which is how do we make people feel secure about this? I think it's by making people secure. And we need to be thinking very much about the fact, are we worrying about wages? How are we helping the people who are displaced? When people have an unexpected life event are they likely to be able to continue paying their rent or their mortgage?
So governments have an important role in helping us all deal with change, because we're entering a period of change whether we like it or not, with technology or without it.
Robin Pomeroy: Ethics Professor Joanna Bryson speaking at the session called Keeping Up: AI Readiness amid an AI Revolution at the AMNC meeting under way in China.
You can follow many of those sessions live or on catch up on our website weforum.org.
This is the end of our series on generative AI, but in a way it's just the end of the beginning as this is a subject we will no doubt be returning to sooner rather than later on Radio Davos, the podcast that looks at the world's biggest challenges and how we can meet them.
To ensure you don't miss an episode, please subscribe to or follow Radio Davos wherever you get your podcasts. Please leave us a rating or review, and join the conversation on the World Economic Forum Podcast Club on Facebook.
This episode of Radio Davos was presented by me, Robin Pomeroy. Studio production was by Gareth Nolan. Radio Davos will be back next week, but for now, thanks to you for listening and goodbye.