3 of the biggest issues for AI that will set the agenda at Davos 2024

Here's what the experts are saying about the benefits and dangers of our AI-enabled future.

Image: Unsplash/Damian Markutt

Johnny Wood
Writer, Forum Agenda
This article is part of: World Economic Forum Annual Meeting
  • Artificial intelligence has the capacity to redefine how we live, work and interact, but uncontrolled development of the technology comes with risks.
  • The World Economic Forum’s AI Governance Summit (AIGS) held in November highlighted some of AI’s key challenges and talking points.
  • These same issues will likely inform discussion about the transformative impact of AI at the Forum’s Annual Meeting 2024 in Davos.

Artificial intelligence (AI) will change your life – it’s already happening. What is unclear is how.

AI’s potential to reimagine what’s possible was brought into stark focus in 2023 with the emergence of generative AI models like ChatGPT, which can create detailed content at the click of a button.

While disruptive, these models represent just one application of a technology that has the capacity to redefine how we live, work and interact.

Discussions at the World Economic Forum’s AI Governance Summit (AIGS), held in San Francisco in November, point to some of the key AI themes that will be addressed at the Forum’s upcoming Annual Meeting 2024 in Davos.

So, what are the experts saying about the benefits and dangers of our AI-enabled future?

Unleashing AI’s limitless potential comes with unknown risks

“AI promises to allow us to augment our human capital, and it seems to be able to solve many of the challenges of our time,” Gan Kim Yong, Minister for Trade and Industry, Singapore, told the AIGS audience.

But it also raises risks, he warned: “Its capability of generating content at speed and at scale will create new risk elements, including, for example, the higher risk of misinformation or disinformation.

“We will need to think about how we can encourage appropriate use of this technology. But if we use it properly it has the ability to uplift our economies, our societies and empower our workforce to do even better.”

But to fully harness the potential of generative AI we will need to help businesses transform, and equip them with the necessary digital tools and skills to embrace this technology, he said.

Investment in AI systems is increasing exponentially. Image: Our World in Data

Over the past decade, investment in AI technologies increased exponentially before dipping during the COVID-19 pandemic.

The unprecedented rate of AI innovation means a one-size-fits-all approach to regulation is not fit for purpose.

“We will need to adopt a nimble, flexible and practical approach to governing AI development,” said Gan.

“It will be important for us to come together as a global community, to collectively steer the development of the technology to advance the public good and do so in a safe and secure way.”

AI models must be built on collaboration and trust

Discussions at AIGS focused on iterative, gradual deployment of AI, prioritizing experiential learning, and not pursuing “speed for speed’s sake,” as Sebastian Niles, President and Chief Legal Officer, Salesforce, argued.

“Trust has to come first,” he said.

“Asking the right questions now will enable us to create the future that we want to have rather than the future we may end up with.”

This means mitigating, monitoring and fundamentally understanding the risks associated with AI development, he said.

“We need to have systems that are even more multilateral, that are even more multi-stakeholder.

“The transformation opportunity that AI brings for all of society, for governments, business, communities and just human beings, can only be achieved if we have one strong public and private sector collaboration.

“If we lead with trust and inclusion, think about equality, sustainability and innovation and really embrace stakeholder success as we look at AI, I think we can raise the floor and improve business outcomes, human outcomes, societal outcomes, civil society outcomes and achieve the really powerful moonshot goals, too,” said Niles.

Accountability is essential so AI benefits all

AIGS delegates explored how AI could be developed and deployed to increase inclusivity rather than exacerbate existing social and economic inequalities. Developed correctly, AI could lead to a fairer, more equal society.

Alexandra Reeve Givens, Chief Executive Officer, Center for Democracy and Technology, said accountability must be built into the way AI systems are developed. She emphasized the need to take action to prevent the technology from deepening existing socio-economic divides that threaten individual freedoms and rights, or from negatively impacting how people access information and communicate with one another.

“It’s nice to think about countries being a testbed for innovation, but governments have an obligation to think about the rights of the people living within their borders and around the world,” Givens said.

“When you have a technology that functions by learning from existing data sets and identifying patterns and then making decisions based upon the patterns it sees, that is a recipe for replicating existing social inequality.


“We need to ensure that AI also works for the communities that are not well represented in the existing data sets, and have that embedded from the very beginning,” she said. “This includes ensuring the information and suggestions an AI system makes do not include existing biases and are representative of different communities and cultures.”

Similar concerns exist around the use of AI for government surveillance, such as face-recognition systems or predictive policing, and the capacity to use the technology to infringe on people’s individual freedoms and rights, she said.

Increasingly sophisticated AI-enabled systems mean misinformation, disinformation and deepfakes can be generated at the click of a button.

“The solution isn’t to ban the technology,” said Givens. Governance is very important “at the developer level, the companies creating these tools, at the deployer level, the social media platforms and others that are allowing this information out into the ecosystem, and for governments themselves to step up and act.”

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
