Artificial Intelligence

Where should governments start with agentic AI? A practical guide to getting it right


Making AI Work For Government: A Readiness Framework is a new report that demystifies agentic AI Image: Pexels/Jep Gambardella

Kelly Ommundsen
Head, Digital Inclusion, Member of the Executive Committee, World Economic Forum
This article is part of: Centre for AI Excellence
  • Governments are increasingly adopting agentic artificial intelligence (AI) for public services but still grapple with how to implement it responsibly and effectively.
  • A new report titled Making AI Work For Government: A Readiness Framework introduces the first systematic framework for assessing where governments should start with agentic AI.
  • Successful agentic AI in government depends on a strategic, workflow-focused approach that starts with manageable, high-value use cases and is guided by strong governance and realistic local readiness.

Imagine: a citizen applies for housing benefits at 9.00 pm on a Sunday. By Monday morning, their documents have been verified, their eligibility assessed by three agencies and their application approved, all without a single government employee logging in.

It may sound like science fiction, but all of this is possible with agentic artificial intelligence (AI) and could apply across multiple government departments. The prospect is ever more appealing as tighter budgets, shrinking workforces and rising expectations of seamless digital public services converge.

Against that backdrop, agentic AI is attracting serious attention. Unlike earlier automation tools that handled single tasks in isolation, agentic AI systems can coordinate entire workflows: gathering information, making decisions, routing cases and delivering outcomes across organizational boundaries.

For governments, that is a meaningful leap forward.

However, enthusiasm alone won’t guarantee good implementation. A recent Capgemini survey of 350 public-sector organizations found that 90% plan to explore or deploy agentic AI within two to three years. That momentum is exciting, but it is also a signal to tread carefully: Gartner predicts that over 40% of agentic AI projects could be cancelled by 2027, often because organizations moved without a clear sense of where the real value lies.

A new report titled Making AI Work For Government: A Readiness Framework, from the World Economic Forum, the Global Government Technology Centre Berlin and Capgemini, aims to close that gap by introducing the first systematic framework for assessing where governments should start with agentic AI.

With agentic AI, governments can move from automating individual tasks to delivering entire outcomes.

Manuel Kilian, Managing Director, Global Government Technology Centre, Berlin

Starting with the right question

The question most governments are asking is: where do we begin? After deciding to adopt agentic AI, there are still decisions on how to prioritize, sequence and deploy it in ways that actually deliver.

The answer requires a different way of thinking about government work. Agentic AI does not map neatly onto organizational charts or departmental structures. It works across workflows: the recurring, end-to-end processes that cut across ministries and agencies.

Think eligibility assessment, fraud detection, permit issuance and document processing. These are the units that matter for agentic AI rather than the organizational chart.

“With agentic AI, governments can move from automating individual tasks to delivering entire outcomes,” said Manuel Kilian, managing director for the Global Government Technology Centre in Berlin. “Those that act strategically now – mapping their workflows, building the right foundations – will be in a fundamentally stronger position than those that don’t.”

What the framework actually shows

The report maps 70 core government functions along two dimensions: the potential for agentic AI to add public value and the complexity of deploying it responsibly.

The result is a readiness map showing where governments can move with confidence, where more preparation is needed and where caution is warranted for now.

The findings are encouraging. Half of the 70 functions assessed fall into the high or medium readiness categories. Public service activities, such as appointment management, document validation and public information provision, come out particularly well.

These are high-volume, rule-based functions where agentic AI can make a visible difference to citizens relatively quickly and without excessive risk.

That matters because early wins build the institutional confidence needed to tackle harder problems later.

This report will help public sector organizations progress from ambition to implementation with agentic AI.

Marc Reinhardt, Global Public Sector Leader, Capgemini

Global framework, local decisions

No global framework can tell a government exactly what to do. Local context, such as the state of digital infrastructure, workforce capabilities, regulatory environment and public trust in AI, shapes what is actually feasible.

A function that scores low globally might be entirely achievable in a jurisdiction with strong data governance and political will. The same function might be a stretch elsewhere.

The report offers six practical steps to bridge that gap:

  • Assess local conditions honestly.
  • Develop risk strategies before deployment.
  • Adjust global scores with local knowledge.
  • Sequence from high-readiness functions first.
  • Test through small pilots before scaling.
  • Revisit the assessment regularly because the landscape will keep shifting.
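Two of the steps above, adjusting global scores with local knowledge and sequencing from high-readiness functions first, can be sketched in a few lines of code. This is a purely illustrative model, not the report's methodology: the function names, scores, weights and the value-minus-complexity formula are all hypothetical assumptions chosen to show the shape of the prioritization, not figures from the framework.

```python
# Hypothetical sketch of the prioritization logic: each government function
# carries a global public-value and deployment-complexity score; a local
# modifier shifts the net readiness, and functions are sequenced from
# highest readiness down. All names and numbers below are illustrative.

from dataclasses import dataclass


@dataclass
class FunctionScore:
    name: str
    public_value: float       # global estimate, 0-10 scale (assumed)
    deploy_complexity: float  # global estimate, 0-10; higher = harder


def local_readiness(fn: FunctionScore, local_modifier: float) -> float:
    """Net readiness = value minus complexity, shifted by local conditions.

    A positive modifier stands in for strong local foundations (data
    governance, skills, political will); a negative one for weaker ones.
    """
    return fn.public_value - fn.deploy_complexity + local_modifier


def sequence(functions, local_modifiers):
    """Order functions from highest to lowest local readiness."""
    scored = [
        (local_readiness(fn, local_modifiers.get(fn.name, 0.0)), fn.name)
        for fn in functions
    ]
    return [name for _, name in sorted(scored, reverse=True)]


functions = [
    FunctionScore("appointment management", 7.0, 2.0),
    FunctionScore("document validation", 8.0, 4.0),
    FunctionScore("fraud detection", 9.0, 8.0),
]

# A jurisdiction with weak data governance might lower fraud detection's
# readiness; one with strong foundations would raise it instead.
order = sequence(functions, {"fraud detection": -1.0})
print(order)
# → ['appointment management', 'document validation', 'fraud detection']
```

The same structure also covers the "revisit regularly" step: re-running the sequencing with updated scores or modifiers yields a fresh priority order as conditions change.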

Running through all of this is the idea of bounded autonomy – being deliberate about what AI agents are allowed to do, keeping humans meaningfully in the loop and being transparent about how decisions get made.

It is what makes the difference between agentic AI that builds trust and agentic AI that erodes it.

Marc Reinhardt, global public sector leader at Capgemini said: “This report will help public sector organizations progress from ambition to implementation with agentic AI.

“Using this framework, they can identify where the balance between risk and reward is right, and learn as they go, expanding to more complex areas when ready.”

The cost of getting this wrong

Doing nothing is not a neutral choice. Governments that delay action risk becoming dependent on solutions built elsewhere, for contexts very different from their own.

However, rushing in without a strategy carries its own costs – fragmented pilots, wasted resources and a loss of institutional confidence that can set back AI adoption for years.

The governments that get the most from agentic AI are not necessarily the most technologically advanced. They are the ones that are honest about where they stand, clear about where they want to go and disciplined about how they get there.

That is what this framework is designed to support.

