The Hiroshima AI Process: a third way towards common ground on AI governance

Japan's Hiroshima AI Process could help foster harmonized global AI governance.

Agustina Callegari
Initiatives Lead, Technology Governance, Safety and International Cooperation, World Economic Forum
Khalid Alaamer
Lead, Digital Trade and AI, World Economic Forum
This article is part of: Centre for AI Excellence
  • Approaches to AI governance currently differ widely across the world, each reflecting unique balances of innovation, trust and authority.
  • The Global South risks having its AI regulation shaped by external standards, diminishing its agency and deepening digital inequality.
  • Japan’s Hiroshima AI Process fosters international alignment by offering a flexible framework that can connect diverse national systems and promote interoperability.

Whether you are scrolling through social media, applying for a mortgage, or getting a diagnosis from your doctor, artificial intelligence (AI) is already shaping the choices you make every day. Beyond everyday convenience, AI is transforming economies, shifting global influence and challenging existing rules of governance. The real question now is not whether to regulate it, but how.

Governments worldwide are racing to shape the rules of AI, but they’re not all taking the same approach. Some are building comprehensive, risk-based regimes; others rely on principle-driven oversight or state-led coordination to align innovation with strategic priorities. Each model reflects a distinct balance between innovation and accountability, flexibility and protection – and understanding these differences is essential to finding areas of cooperation.

That search for common ground is already taking shape. From Europe’s detailed rulebooks to Japan’s consensus-based model, countries are learning to govern technology that transcends borders. Initiatives like Japan’s Hiroshima AI Process are helping connect these diverse approaches through shared transparency and collaboration, showing how nations with different regulatory logics can still work toward a trustworthy global AI ecosystem.

Diverse paths to governance

Governments increasingly agree that AI must be transparent, accountable and safe – but their paths to achieving those goals diverge.

The European Union's AI Act prioritizes risk management, classifying systems by potential harm and imposing stricter rules on those affecting rights, health or safety. The United Kingdom applies a principle-based model, embedding fairness and accountability across regulators with flexibility for experimentation. The United States follows a market-driven, security-oriented approach, combining voluntary frameworks such as NIST's AI Risk Management Framework with state and federal initiatives. China, by contrast, adopts a directive model emphasizing registration, security reviews and content oversight to align innovation with national goals and social stability.

Each of these approaches reflects a distinct balance between innovation, oversight and public trust. Yet as these models evolve, they expose deeper challenges: how to ensure that governance keeps pace with technology without constraining it, and how to translate national priorities into globally compatible rules. Ultimately, as highlighted in the World Economic Forum's 360° framework for Governance in the Age of Generative AI, whether national approaches converge around trust, safety and human-centric innovation will determine whether AI governance strengthens global cooperation or deepens regulatory fragmentation.

The Global South and the new digital divide

This question of convergence is especially urgent for the Global South. As AI governance frameworks mature across advanced economies, many developing nations risk being shaped by external standards rather than defining their own. While digital transformation offers vast potential for inclusive growth, persistent gaps in data access, infrastructure and skills threaten to widen inequalities.

Influence in the digital economy remains concentrated among a few actors who set technical standards, govern data flows and shape norms for AI development. Their advantages in computing power, proprietary data and research capacity have driven global efficiency – but often at the cost of local adaptability and agency. As major powers compete, smaller economies risk being absorbed into regulatory and technological systems that only partly reflect their linguistic, social and developmental priorities.

Bridging this divide requires more than investment; it demands agency and coordination. Strengthening infrastructure, skills and research ecosystems, alongside regional cooperation, can help countries move from technology adopters to active contributors. By engaging in standard-setting and advancing context-sensitive governance, developing economies can ensure that the AI age reflects a richer plurality of voices, values and visions.

Cooperation in action: the Hiroshima AI Process

As global approaches to AI governance diverge and capacities remain uneven, new initiatives are emerging to bridge these divides. Japan's Hiroshima AI Process, launched during its 2023 G7 presidency, introduces a comprehensive framework composed of the Hiroshima Process International Guiding Principles, the Hiroshima Process International Code of Conduct and a voluntary Reporting Framework for companies and governments.

These instruments enable organizations to demonstrate accountability and a commitment to responsible AI practices, even in the absence of binding regulation. By promoting transparency, the Reporting Framework offers a practical mechanism for large companies to communicate how they manage AI risks and align with global expectations.

Building on this foundation, the World Economic Forum’s Advancing Responsible AI Innovation: A Playbook shows how such voluntary reporting can turn transparency into a driver of trust, competitiveness and reputational value. The Hiroshima AI Process thus exemplifies what the study Japan’s Hiroshima AI Process: A Third Way in Global AI Governance describes as a “third way” approach built on soft law, emphasizing stewardship and openness rather than strict enforcement.

Moreover, the process offers a flexible pathway for countries still developing AI institutions to align with shared principles before adopting formal regulation. Its reach continues to grow through the Hiroshima AI Process Friends Group, which now spans more than 50 countries and regions. In ASEAN, for instance, discussions at the World Economic Forum's 2025 AI Stakeholder Dialogue in Kuala Lumpur highlighted how the process supports the ASEAN Responsible AI Roadmap, helping to harmonize governance, enable trusted data flows and safely test innovation through regulatory sandboxes.

Policy-makers increasingly agree that AI must be transparent, accountable and aligned with human well-being. The routes differ, but the goal is shared: to build a trustworthy ecosystem where innovation and responsibility reinforce each other.

Japan’s Hiroshima AI Process offers something new: a collaborative fabric that connects these systems through shared transparency and cooperation. It does not replace national strategies or legal regimes, but it helps create the conditions for them to coexist and evolve together.

If policy-makers, businesses and standards bodies continue refining this living interoperability layer, Hiroshima’s soft-law experiment could evolve into the backbone of practical, trusted and globally inclusive AI governance.
