
The UN has moved to close the gap in AI governance. Here's what to know

United Nations Secretary-General António Guterres addresses the 80th UN General Assembly at UN headquarters in New York City, 23 September 2025, where a high-level meeting sought to address AI governance. Image: REUTERS/Mike Segar

David Elliott
Senior Writer, Forum Stories
This article is part of: Centre for AI Excellence
  • The United Nations has launched two AI governance bodies: the Global Dialogue on AI Governance and the Independent International Scientific Panel on AI.
  • The new architecture is intended to usher in a more inclusive form of international governance for AI.
  • Trustworthy AI ecosystems will be a critical differentiator that allows AI to scale safely, sustainably and inclusively, according to the World Economic Forum report Advancing Responsible AI Innovation: A Playbook.

Artificial intelligence is not science fiction – it's a fact of modern life. Variations of this message were repeatedly driven home at the recent meeting of the United Nations General Assembly in New York. Speakers across many sessions highlighted the risks and opportunities of AI and attempted to rally governments to ensure the technology benefits all people.

It’s a timely call. As the capabilities and deployment of AI continue to accelerate worldwide, 118 countries are not party to any significant international AI governance initiative, according to a UN report. The growth of AI tools has yet to be matched by effective, internationally agreed rules on how to govern the technology, the organization says.

The UN General Assembly sought to address this issue with a high-level meeting on AI governance that gathered diplomats, scientists, and representatives of the tech community, private sector and civil society.

The event was the first time that all 193 UN Member States were given a say in how international AI governance is developed – and marked the launch of two new bodies on AI: the Global Dialogue on AI Governance and the Independent International Scientific Panel on AI.


A big gap in responsible AI adoption

AI, “the fastest-moving technology in human history”, is already transforming the world, as UN Secretary-General António Guterres underscored in a speech covering the new initiatives.

From helping healthcare professionals speed up diagnoses and the search for cures, to driving efficiencies in manufacturing and improving climate monitoring and prediction, the technology is already proving its enormous potential.

But to fully take advantage, widespread adoption of responsible AI – the practice of building and managing AI systems to maximize benefits while minimizing risks to people, society and the environment – is vital.

There is, however, a huge gap in the adoption of responsible AI.

Industry plays an important role in governing AI by implementing responsible practices. But less than 1% of organizations have fully operationalized responsible AI in a comprehensive and anticipatory way, according to Advancing Responsible AI Innovation: A Playbook, a report from the World Economic Forum.

Fragmented approaches to regulation are one of the roadblocks businesses face in tackling this gap, which spans sectors and regions and, if left unaddressed, could erode confidence in AI investment, compliance and public trust, according to the Forum's report.

An ‘early-warning’ system on AI governance

Just seven countries – all from the developed world – are parties to all the current significant global AI governance initiatives, according to the UN.

The bodies launched by the organization at the General Assembly – which grew out of recommendations made by experts following the UN’s 2024 report Governing AI for Humanity – aim to “kickstart a much more inclusive form of international governance”.

The Global Dialogue on AI Governance is designed to be a forum for governments, industry, civil society and scientists to share best practices and common approaches on how to govern AI.

By promoting interoperability between different strands of governance and encouraging open innovation that makes tools and resources available to all, it aims to complement existing efforts and provide an “inclusive, stable home” for coordinating AI governance, helping to build safe, secure and trustworthy AI systems.

The Independent International Scientific Panel on AI, meanwhile – which has been likened by some to an “IPCC for AI” – will comprise 40 expert members who will provide evidence-based insights into the opportunities, risks and impacts of AI.

Its findings will inform the Global Dialogue and, as the UN describes it, it will be “the world’s early-warning system and evidence engine – helping us separate signal from noise, and foresight from fear”.

Allowing innovation to scale safely

Experts have called the new UN bodies “the world’s most globally inclusive approach to governing AI”. And many heads of state and corporate leaders at the General Assembly backed the need for urgent collaboration on the issue, despite some criticism from the US of the role of international bodies in doing so.

Regardless of implementation approach, government initiatives to build trustworthy AI ecosystems are essential to advancing responsible AI. The Hiroshima AI Process International Guiding Principles, established by G7 leaders to promote safe, secure, and trustworthy AI development globally, emphasize that responsible governance frameworks are fundamental to realizing AI's benefits while managing its risks. This perspective is reinforced by the Forum's Advancing Responsible AI Innovation playbook, which demonstrates that responsible AI is not a constraint on innovation but rather a critical enabler that allows AI to scale safely, sustainably and inclusively across society.

The report – produced by the Forum’s AI Governance Alliance in collaboration with Accenture – makes a series of recommendations for how organizations, in cooperation with governments, can operationalize responsible AI principles across three dimensions: aligning corporate strategy and responsible AI innovation; increasing and incentivizing organizational capacity for responsible AI; and overseeing the life cycle of responsible AI development, acquisition and use.

It emphasizes the importance of a strong, collaborative ecosystem that fosters public-private partnerships and international cooperation.

As the Prime Minister of Spain noted at the UN General Assembly: "The rise of AI is unstoppable, but it cannot be ungovernable."

Ensuring everyone has a voice in how the technology is governed will be crucial to its success.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
