
Shared infrastructure can enable sovereign AI – if we can make it trustworthy

The inside of a data centre: Sovereign AI based on shared infrastructure can succeed if we can get trust right. Image: Unsplash/Getty

Cathy Li
Head, Centre for AI Excellence; Member of the Executive Committee, World Economic Forum
Florian Mueller
Senior Partner and Head, AI, Insights and Solutions for Europe, Middle East and Africa, Bain & Company
This article is part of: Centre for AI Excellence
  • Artificial Intelligence (AI) debates increasingly focus on infrastructure realities: limited access to compute, power and secure connectivity, and financing models that can further entrench dependencies and narrow participation.
  • Shared AI infrastructure can expand global access to compute and data capabilities, helping more economies, especially developing ones, on their path to building AI capacity while retaining meaningful control.
  • Trust will be a deciding factor: legal clarity, robust data management mechanisms and technical and operational assurances will determine whether shared infrastructure creates global inclusivity or new dependencies.

Artificial intelligence (AI) infrastructure is increasingly critical, shaping productivity, public services and national security. Economies want control and resilience in how critical AI capabilities are built, accessed and governed.

It was no surprise to see that achieving AI sovereignty was a key topic of conversation at the recently concluded World Economic Forum Annual Meeting 2026 in Davos, Switzerland. However, this ambition often collapses into a single proxy: which economy can build domestic hyperscale compute fastest.

In January 2026, the Forum and management consultancy Bain & Company published Rethinking AI Sovereignty: Pathways to Competitiveness through Strategic Investments, a white paper arguing for a more workable lens: sovereignty as strategic interdependence.

This matters because frontier infrastructure is already highly concentrated. A small number of economies are pulling ahead in access to advanced chips, reliable power and high-assurance data centre capacity, with the US and China alone capturing around 65% of aggregate global AI investment.

Without new infrastructure models, many economies, especially developing ones, risk missing out on the full benefits of AI. Shared infrastructure can widen access but only if it is designed with and for trust.


Why AI sovereignty is turning into an infrastructure crunch

The paper forecasts investment in AI-dedicated infrastructure, such as data centre capacity equipped to host advanced AI workloads, to grow at 10-15% annually, reaching more than $400 billion per year by 2030.

AI deployment is accelerating and access to compute is now a strategic constraint: the shortfall is not just in graphics processing units but also in grid capacity, high-assurance secure facilities and the high-speed networks that connect workloads to data and users.

For many economies, the bottleneck is infrastructure lead times – data centres can be planned in months but power and land permitting can take years.

In the UK, for example, the wait time for a grid connection has been reported as approximately eight to 10 years, with the government introducing “AI Growth Zones” to fast-track planning approval for data centre construction.

Advanced chips remain supply-constrained and land, water, talent and capital are unevenly distributed across regions and markets. Additionally, capital availability is increasingly being shaped by balance-sheet capacity and long-term energy commitments.

Even when prerequisites exist, the trade-offs are stark. Moving fast often means deeper dependence on a small set of vendors and platforms. Building at scale improves unit economics but can concentrate operational and cyber risk. Moreover, long-lived infrastructure bets can reduce optionality as technology and geopolitics evolve.

In short, achieving AI sovereignty entails making infrastructure choices about what to anchor locally, what to access through trusted partners and how to keep those choices resilient over time.

These choices will be explored further in an upcoming publication to be launched at the Forum Global Collaboration and Growth Meeting in Jeddah, Saudi Arabia, in April 2026.


What shared infrastructure means and what it doesn’t

Shared infrastructure covers arrangements that extend access to compute, storage and connectivity under enforceable safeguards — so economies can scale capabilities without surrendering control over how critical data and workloads are governed.

It can take different forms, such as pooled regional capacity and trusted partner capacity delivered with contractual and technical controls. It also includes digital embassies that allow data and, increasingly, workloads to be hosted abroad under agreed legal protections and security requirements (e.g. the digital embassy agreement between Estonia and Luxembourg).

What is new in the AI era is the need to govern not only where data sits but where it is processed.

Notably, shared infrastructure is not automatically cheaper or safer. It can accelerate access and reduce upfront capital needs but it can also introduce legal complexity, operational risk and new forms of lock-in.

The practical question, therefore, is how economies can benefit from shared infrastructure models while maintaining trust.

To help answer this question, the Forum used a January session in Davos, "Digital Embassies for Sovereign AI," to announce a multi-stakeholder effort to draft a global framework for innovative and trusted digital embassies, identifying baseline principles and challenges for creating and using them in the AI era.

3 questions that shape trust in shared infrastructure

Shared AI infrastructure can preserve control only if trust is embedded by design – legally, technically and operationally – and if it holds under evolving technological, social and geopolitical conditions.

Before entering into or scaling shared infrastructure agreements, economies should work through critical questions that foster trust. Three stand out:

1. Which rules apply and what happens when laws conflict?

Be explicit on jurisdiction, dispute resolution and how the arrangement works if regulations diverge or emergency powers are invoked or reinterpreted.

2. What makes protections enforceable in practice?

Define concrete controls, such as who has physical access to the facility and how sensitive data and workloads are protected throughout processing, not just at rest or on paper.

3. How is trust continuously proven?

Require ongoing assurance, such as audits, incident response protocols and workable exit options, to avoid regional or vendor lock-in.

Inclusion is the goal and 2026 is the inflection point

Getting these trust questions right matters because shared infrastructure is ultimately about fostering inclusivity – widening participation in AI capabilities rather than concentrating them.

This is why the India AI Impact Summit 2026, starting today in New Delhi, matters. It puts the focus on impact and inclusive AI and, consequently, on the choices needed to make AI’s benefits available to all. Decisions made this year will shape whether AI sovereignty becomes a widening divide or a shared foundation.

If AI sovereignty is rising, shared AI infrastructure can help keep it inclusive, but only if we design it for trust.
