Where should governments start with agentic AI? A practical guide to getting it right

Making AI Work For Government: A Readiness Framework is a new report that demystifies agentic AI.
- Governments are increasingly adopting agentic artificial intelligence (AI) for public services, but still grapple with how to implement it responsibly and effectively.
- A new report titled Making AI Work For Government: A Readiness Framework introduces the first systematic framework for assessing where governments should start with agentic AI.
- Successful agentic AI in government depends on a strategic, workflow-focused approach that starts with manageable, high-value use cases and is guided by strong governance and realistic local readiness.
Imagine: a citizen applies for housing benefits at 9.00 pm on a Sunday. By Monday morning, their documents have been verified, their eligibility assessed by three agencies and their application approved, all without a single government employee logging in.
It may sound like science fiction, but all of this could be possible with agentic artificial intelligence (AI), applied across multiple government departments. The prospect grows ever more appealing as budgets tighten, workforces shrink and citizens increasingly expect a seamless digital experience from public services.
Against that backdrop, agentic AI is attracting serious attention. Unlike earlier automation tools that handled single tasks in isolation, agentic AI systems can coordinate entire workflows: gathering information, making decisions, routing cases and delivering outcomes across organizational boundaries.
For governments, that is a meaningful leap forward.
However, enthusiasm won’t necessarily translate into good implementation. A recent Capgemini survey of 350 public-sector organizations found that 90% plan to explore or deploy agentic AI within two to three years. That level of momentum is exciting, but it is also a signal to tread carefully. Gartner predicts that over 40% of agentic AI projects could be cancelled by 2027, often because organizations move without a clear sense of where the real value lies.
A new report titled Making AI Work For Government: A Readiness Framework, from the World Economic Forum, the Global Government Technology Centre Berlin and Capgemini, tries to close that gap by introducing the first systematic framework for assessing where governments should start with agentic AI.
Starting with the right question
The question most governments are asking is: where do we begin? After deciding to adopt agentic AI, there are still decisions on how to prioritize, sequence and deploy it in ways that actually deliver.
The answer requires a different way of thinking about government work. Agentic AI does not map neatly onto organizational charts or departmental structures. It works across workflows: the recurring, end-to-end processes that cut across ministries and agencies.
Think eligibility assessment, fraud detection, permit issuance and document processing. These are the units that matter for agentic AI rather than the organizational chart.
“With agentic AI, governments can move from automating individual tasks to delivering entire outcomes,” said Manuel Kilian, managing director for the Global Government Technology Centre in Berlin. “Those that act strategically now – mapping their workflows, building the right foundations – will be in a fundamentally stronger position than those that don’t.”
What the framework actually shows
The report maps 70 core government functions along two dimensions: the potential for agentic AI to add public value and the complexity of deploying it responsibly.
The result is a readiness map showing where governments can move with confidence, where more preparation is needed and where caution is warranted for now.
The findings are encouraging. Half of the 70 functions assessed fall into the high or medium readiness categories. Public service activities such as appointment management, document validation and public information provision come out particularly well.
These are high-volume, rule-based functions where agentic AI can make a visible difference to citizens relatively quickly and without excessive risk.
That matters because early wins build the institutional confidence needed to tackle harder problems later.
Global framework, local decisions
No global framework can tell a government exactly what to do. Local context, such as the state of digital infrastructure, workforce capabilities, regulatory environment and public trust in AI, shapes what is actually feasible.
A function that scores low globally might be entirely achievable in a jurisdiction with strong data governance and political will. The same function might be a stretch elsewhere.
The report offers six practical steps to bridge that gap:
- Assess local conditions honestly.
- Develop risk strategies before deployment.
- Adjust global scores with local knowledge.
- Sequence from high-readiness functions first.
- Test through small pilots before scaling.
- Revisit the assessment regularly because the landscape will keep shifting.
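The logic behind these steps can be made concrete with a small sketch. The function names, scores and scoring formula below are purely illustrative assumptions, not data or methodology from the report; they simply show how a government might combine global value and complexity scores with a local readiness adjustment, then sequence from high-readiness functions first.

```python
# Hypothetical sketch of the prioritization logic in the six steps above.
# All names, scores and the formula are illustrative, not from the report.

def readiness(value: float, complexity: float, local_factor: float) -> float:
    """Combine global scores with a local adjustment (step 3).

    value, complexity: global scores on a 0-1 scale (higher value = more
    public benefit; higher complexity = harder to deploy responsibly).
    local_factor: 0-1 multiplier reflecting local conditions (step 1),
    such as infrastructure, skills, regulation and public trust.
    """
    return value * (1.0 - complexity) * local_factor

# Illustrative global (value, complexity) scores for three workflows.
functions = {
    "appointment management": (0.8, 0.2),  # high volume, rule-based
    "fraud detection":        (0.9, 0.7),  # high value, but high risk
    "permit issuance":        (0.7, 0.4),
}

local_factor = 0.9  # e.g. a jurisdiction with strong data governance

# Step 4: sequence from high-readiness functions first.
ranked = sorted(
    functions.items(),
    key=lambda kv: readiness(*kv[1], local_factor),
    reverse=True,
)

for name, (value, complexity) in ranked:
    print(f"{name}: readiness {readiness(value, complexity, local_factor):.2f}")
```

Under these made-up numbers, the rule-based appointment workflow ranks first and high-risk fraud detection last, mirroring the report's advice to bank early wins before tackling harder problems.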
Running through all of this is the idea of bounded autonomy – being deliberate about what AI agents are allowed to do, keeping humans meaningfully in the loop and being transparent about how decisions get made.
It is what makes the difference between agentic AI that builds trust and agentic AI that erodes it.
Marc Reinhardt, global public sector leader at Capgemini said: “This report will help public sector organizations progress from ambition to implementation with agentic AI.
“Using this framework, they can identify where the balance between risk and reward is right, and learn as they go, expanding to more complex areas when ready.”
The cost of getting this wrong
Doing nothing is not a neutral choice. Governments that delay action risk becoming dependent on solutions built elsewhere, for contexts very different from their own.
However, rushing in without a strategy carries its own costs – fragmented pilots, wasted resources and a loss of institutional confidence that can set back AI adoption for years.
The governments that get the most from agentic AI are not necessarily the most technologically advanced. They are the ones that are honest about where they stand, clear about where they want to go and disciplined about how they get there.
That is what this framework is designed to support.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.