What carbon markets can teach us about governing frontier AI


Joel Christoph
  • AI governance could mirror carbon markets by regulating physical compute instead of subjective model capabilities.
  • Risk-weighted permits would provide financial incentives for developers to prioritize safety and independent evaluations.
  • Permit auction revenues could fund AI infrastructure and audit capacity in emerging global economies.

In 2005, the European Union launched the world’s largest carbon trading system. The concept was simple: set a cap on total emissions, distribute permits and let polluters trade them. Those who cut emissions cheaply could sell their surplus; those who could not had to pay. The system was imperfect at first. Permits were over-allocated, prices crashed and critics declared the experiment dead. Then regulators tightened the cap. Today, emissions covered by the EU scheme have fallen by roughly 47% from 2005 levels.

Artificial intelligence (AI) governance is stuck where climate policy was before cap-and-trade. More than 60 countries have published national AI strategies, and hundreds of companies have signed voluntary safety commitments. Yet frontier AI development continues to concentrate in a handful of firms, safety investment remains largely discretionary, and most countries have no practical leverage over the systems that will reshape their economies. Good intentions have not changed the underlying incentives.


Carbon markets solved this problem by identifying a measurable unit to regulate: the tonne of carbon dioxide. AI governance can do the same, and it has an even better measuring stick.

The input you can count

The most powerful AI systems are no longer limited by ideas alone. They are limited by chips, data centres, electricity, water and supply chains. The International Energy Agency estimates that global data centre electricity consumption reached roughly 415 terawatt-hours in 2024 and could surpass 945 terawatt-hours by 2030, comparable to Japan’s total electricity consumption. This is a shift in industrial scale, not a niche technology story.

This physical intensity is an opportunity for governance. Compute is metered, logged and billed. Cloud providers already track usage to the GPU-hour. While regulating the content or capabilities of AI models involves subjective judgements, regulating compute means working with a physical quantity that can be independently verified, much as carbon emissions can be measured at the smokestack.
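
To see why compute lends itself to verification, consider a minimal sketch of how a regulator or auditor might estimate a training run's total compute from billed GPU-hours. The peak throughput and utilization figures below are illustrative assumptions, not the specifications of any particular chip or cloud provider:

```python
# Hypothetical sketch: estimating total training compute from billed GPU-hours.
# Peak throughput and utilization figures are illustrative assumptions.

SECONDS_PER_HOUR = 3600

def estimated_training_flop(gpu_hours: float,
                            peak_flop_per_second: float,
                            utilization: float) -> float:
    """Estimate training compute (FLOP) from metered GPU-hours.

    gpu_hours: total accelerator-hours billed for the run
    peak_flop_per_second: accelerator peak throughput
    utilization: fraction of peak actually achieved during training
    """
    return gpu_hours * SECONDS_PER_HOUR * peak_flop_per_second * utilization

# Illustration: 10,000 GPUs running for 90 days, at an assumed peak of
# 1e15 FLOP/s and 40% utilization, comes to roughly 3e25 FLOP.
gpu_hours = 10_000 * 90 * 24
print(f"{estimated_training_flop(gpu_hours, 1e15, 0.40):.1e}")  # ~3.1e+25
```

Because billing records, chip shipments and electricity use can all be cross-checked against estimates like this, large training runs are hard to hide.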

How risk-weighted compute permits could work

Imagine a framework modelled on cap-and-trade. A regulator defines which training runs are high-stakes, using compute thresholds and related signals of capability. It sets an aggregate cap on the compute available for those runs within a given period. Developers obtain permits denominated in compute units.

Crucially, permits are risk-weighted. Independent evaluations inform a risk multiplier that scales the permit requirement up or down. Developers that demonstrate stronger safeguards and lower misuse risk face a lower effective obligation per unit of compute; those that do not face a higher one. This converts safety effort from a public-relations claim into a priced input: firms can cut their permit bill by investing in evaluation, security and deployment safeguards.
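
To make the arithmetic concrete, here is a minimal sketch in Python. The compute threshold, risk tiers and multipliers are invented for illustration; none of these figures come from any existing or proposed regulation:

```python
# Hypothetical sketch of a risk-weighted compute permit calculation.
# The threshold, risk tiers and multipliers below are illustrative
# assumptions, not figures from any real regulatory framework.

HIGH_STAKES_THRESHOLD_FLOP = 1e26  # illustrative trigger for "high-stakes" runs

# Illustrative risk multipliers informed by independent evaluations:
# stronger safeguards mean a lower effective obligation per unit of compute.
RISK_MULTIPLIERS = {
    "low": 0.8,     # robust evaluations, security and deployment safeguards
    "medium": 1.0,  # baseline obligation
    "high": 1.5,    # weak safeguards or elevated misuse risk
}

def permit_obligation(training_flop: float, risk_tier: str) -> float:
    """Return permits required (in compute units) for a training run.

    Runs below the high-stakes threshold need no permits; above it,
    the obligation scales with compute and the evaluated risk tier.
    """
    if training_flop < HIGH_STAKES_THRESHOLD_FLOP:
        return 0.0
    return training_flop * RISK_MULTIPLIERS[risk_tier]

# For the same 3e26 FLOP run, a developer with strong safeguards needs
# 20% fewer permits than baseline; one with weak safeguards needs 50% more.
print(permit_obligation(3e26, "low"))   # 2.4e+26
print(permit_obligation(3e26, "high"))  # 4.5e+26
```

The design choice echoes risk-weighted capital in banking: the riskier the activity, the more of the scarce, priced resource it consumes.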

Because permits are tradable, the market allocates scarce frontier compute to the highest-value uses and discovers the lowest-cost path to compliance, avoiding the rigidity of prescriptive rules that become outdated as technology evolves. Enforcement relies on stochastic audits and escalating penalties. The objective is not to police every run in real time. It is to make compliance cheaper than evasion.
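
The enforcement logic reduces to a simple expected-value condition, sketched below with invented audit probabilities, penalties and permit prices:

```python
# Hypothetical sketch of the deterrence condition behind stochastic audits:
# evasion is deterred when its expected cost exceeds the cost of complying.
# The audit probability, penalty and permit cost are purely illustrative.

def evasion_is_deterred(permit_cost: float,
                        audit_probability: float,
                        penalty: float) -> bool:
    """True if the expected penalty from evading exceeds the permit cost."""
    expected_penalty = audit_probability * penalty
    return expected_penalty > permit_cost

# With a 10% chance of being audited, the penalty must exceed 10x the
# permit cost for compliance to be the cheaper option:
print(evasion_is_deterred(permit_cost=1_000_000,
                          audit_probability=0.10,
                          penalty=15_000_000))  # True: deterred
print(evasion_is_deterred(permit_cost=1_000_000,
                          audit_probability=0.10,
                          penalty=5_000_000))   # False: evasion pays
```

In practice a regulator would tune audit rates and penalties jointly, but the inequality captures why every run need not be policed in real time.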

A tool for global inclusion, not just control

The World Economic Forum’s AI Governance Alliance has highlighted that meaningful global AI governance requires broad participation. Yet most governance discussions remain confined to a small group of wealthy nations and large technology firms. Much of the world is currently a price taker in AI, dependent on a handful of jurisdictions for chips and cloud access.

Permit auctions generate revenue. That revenue can fund audit capacity, evaluation infrastructure and safety-relevant public goods in countries that currently lack them. Coalitions can agree on mutual recognition of audits, lowering compliance friction for responsible developers. For middle powers and emerging economies in Southeast Asia, Africa and Latin America, this offers a way out of the false choice between ungoverned dependence and the costly, unrealistic pursuit of self-sufficiency.

Honest limits

The analogy between carbon and compute should not be pushed too far. Compute is not harm, and a permit system would be one layer in a broader governance stack that includes evaluations, incident reporting and application-specific rules. Carbon markets took years to calibrate, suffering from over-allocation and questionable offsets. A compute permit system would face its own challenges: defining which training runs qualify as high-stakes, preventing developers from fragmenting workloads to stay below thresholds, and ensuring that monitoring infrastructure does not become a tool for government surveillance of private innovation.

These are serious design problems, not reasons to abandon the approach. Financial regulators already manage analogous tensions. Risk-weighted capital requirements in banking involve the same trade-offs between measurability, gaming and enforcement. The lesson from both carbon markets and financial regulation is that imperfect pricing still outperforms no pricing at all.

A window that will not stay open

The next two to three years represent a narrow window. Compute supply chains are being reshaped by export controls, industrial policy and record capital expenditure. Governance frameworks established during this period will set the terms for a generation. If the international community waits until frontier capability is even more concentrated, the political economy of reform becomes far harder.

Carbon markets taught policy-makers a lesson that still holds: you do not need perfect control to create meaningful incentives. You need a measurable unit, clear monitoring rules and a system that makes compliance easier than evasion. AI governance can adapt that lesson. Risk-weighted compute permits will not solve every problem. But they can help the world move from slogans to scalable incentives at exactly the moment when AI capability and infrastructure are accelerating together.

