Artificial Intelligence

Agile AI governance: How can we ensure regulation catches up with technology?

Governance must evolve from static to dynamic, from retrospective to real-time, from compliance to continuous assurance. Image: REUTERS/Isabel Infantes

Amir Banifatemi
Chief Responsible AI Officer, Cognizant
Karla Yee Amezaga
Initiatives Lead, AI and Data Governance, Centre for AI Excellence, World Economic Forum
This article is part of: World Economic Forum Annual Meeting
  • Artificial intelligence (AI) requires governance that adapts continuously, not periodically. Real-time monitoring mechanisms can help detect risks early and strengthen public and investor confidence.
  • Agile pilots and sandboxes show how policy can evolve as fast as technology.
  • Public-private collaboration can help ensure the benefits of innovation are fully realized, responsibly developed and sustainably invested in.

The continuously evolving infrastructure of artificial intelligence (AI) is shaping economies, societies and public services. The rapid scaling of generative AI, multimodal models, autonomous agents, robotics and other frontier technologies has introduced systems that update, coordinate and behave in ways that shift rapidly in real-world environments.

Across international initiatives such as the Global Partnership on Artificial Intelligence and the AI Global Alliance, one lesson is clear: the most serious operational risks do not emerge at deployment but down the line, as systems adapt or interact with other models and infrastructures. However, existing governance timelines cannot capture these shifts.

At the same time, organizations face strong pressure to adopt AI safely and competitively while new regulatory frameworks, including the European Union’s AI Act, take effect. A governance model designed for periodic compliance cannot keep pace with the speed or complexity of learning AI systems.

What is needed is agile, iterative oversight that can update as systems evolve and new evidence emerges.

How can we achieve real-time AI governance?

Generative and agentic systems no longer behave as fixed-function tools. They adapt through reinforcement, respond to user interactions, integrate new information and can coordinate with other systems. These characteristics require governance that functions more like a living system than a periodic audit.

The path is clear: governance must evolve from static to dynamic, from retrospective to real-time, from compliance to continuous assurance. Some countries and organizations are already leading the way here.

1. From point-in-time audits to continuous monitoring

As in modern cybersecurity, the centre of gravity is moving toward always-on observability. Continuous monitoring systems, such as automated red-teaming, real-time anomaly detection, behavioural analytics and monitoring APIs, can evaluate model behaviour as it evolves, not just in controlled testing.
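
To make always-on observability concrete, the sketch below shows one simple pattern behind such monitors: scoring each live response on a behavioural metric and alerting when scores drift from a rolling baseline. It is a minimal illustration with assumed metrics, window sizes and thresholds, not any vendor's implementation.

```python
# Minimal sketch of an always-on behavioural monitor (illustrative only).
# The metric, window size and z-score threshold are assumptions; a real
# deployment would feed the platform's own telemetry and alerting stack.
import random
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    """Flags responses whose behavioural score drifts from a rolling baseline."""

    def __init__(self, window: int = 500, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling baseline of recent scores
        self.z_threshold = z_threshold       # std-devs from baseline counted as drift

    def observe(self, score: float) -> bool:
        """Record one per-response score (e.g. toxicity) and flag anomalies."""
        anomalous = False
        if len(self.history) >= 30:          # wait for a minimal baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            anomalous = sigma > 0 and abs(score - mu) / sigma > self.z_threshold
        self.history.append(score)
        return anomalous

# Simulated telemetry: a stable baseline, then a sudden behavioural shift.
monitor = DriftMonitor()
stream = [random.gauss(0.10, 0.02) for _ in range(300)] + \
         [random.gauss(0.45, 0.02) for _ in range(20)]
for i, score in enumerate(stream):
    if monitor.observe(score):
        print(f"ALERT: behavioural drift at response {i}, score={score:.2f}")
```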

As outlined in the publication Advancing Responsible AI Innovation: A Playbook, new “control planes” and AI agents can provide ongoing risk assessments, enabling organizations to detect harmful drift, hallucinations, self-preserving behaviour or fairness deviations as they occur.

For example, enterprise platforms such as Cognizant’s TRUST Framework provide on-demand continuous risk detection, monitoring and metrics on trust, safety and performance across AI systems, enabling real-time governance visibility and rapid, data-driven interventions.

National initiatives such as Singapore’s AI Verify toolkit integrate robustness, factuality, bias and toxicity testing into structured evaluation cycles for production systems, demonstrating that continuous and standardized assessments are feasible at a national scale.

2. From static safeguards to live, adaptive policies

Traditional guardrails assume systems behave consistently. However, today’s models may shift due to updates, user interactions or exposure to new data. This requires policies that adapt to system behaviour, through dynamic content filtering, context-aware safety constraints or adaptive access controls.
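
As a simplified sketch of what an adaptive guardrail might look like, the example below tightens a content-filtering threshold automatically when the recent rate of flagged outputs rises. The thresholds, window and upstream risk scorer are illustrative assumptions rather than a production design.

```python
# Illustrative sketch of an adaptive guardrail: the filtering threshold
# tightens automatically when the recent rate of flagged outputs rises.
# All thresholds and rates here are assumptions chosen for illustration.
from collections import deque

class AdaptiveFilter:
    def __init__(self, base_threshold: float = 0.8, strict_threshold: float = 0.5,
                 window: int = 200, escalation_rate: float = 0.05):
        self.base = base_threshold      # permissive cut-off in normal conditions
        self.strict = strict_threshold  # tightened cut-off under elevated risk
        self.escalation_rate = escalation_rate
        self.recent_flags = deque(maxlen=window)

    @property
    def threshold(self) -> float:
        """Tighten the cut-off whenever the recent flag rate is elevated."""
        if self.recent_flags:
            flag_rate = sum(self.recent_flags) / len(self.recent_flags)
            if flag_rate > self.escalation_rate:
                return self.strict
        return self.base

    def allow(self, risk_score: float) -> bool:
        """risk_score in [0, 1] from any upstream classifier; higher = riskier."""
        blocked = risk_score >= self.threshold
        self.recent_flags.append(1 if blocked else 0)
        return not blocked

guard = AdaptiveFilter()
print(guard.allow(0.6))   # True: permitted under the permissive baseline
for _ in range(50):
    guard.allow(0.9)      # a burst of risky outputs raises the flag rate
print(guard.allow(0.6))   # False: the same output is now blocked
```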

A recent report offering a 360° Approach for Resilient Policy and Regulation highlights that adaptive regulations can adjust based on observed system impacts and predefined thresholds, much as financial risk models and public health surveillance systems already do.

3. From fragmented oversight to sector-wide assurance systems

Governments are beginning to create shared infrastructure for AI oversight, including national safety institutes, model evaluation centres and cross-sector sandboxes.

The Hiroshima AI Process, Singapore’s Global AI Assurance Pilot and the International Network of AI Safety Institutes reflect the growing recognition that no single company or government can evaluate AI risks alone.

Collaboration in this area allows for progress in defining common risks, standardized reporting, shared testing protocols and coordinated incident disclosure. These aspects are essential for global interoperability – without them, businesses operating across countries face a compliance maze and governments risk regulatory blind spots.
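
A concrete building block of that interoperability is a shared incident schema, so that reports from different organizations are comparable. The sketch below shows how a standardized, machine-readable incident record might look; the fields and severity scale are hypothetical assumptions, not any institute's published format.

```python
# Hypothetical sketch of a standardized AI incident record. The field names
# and severity scale are illustrative assumptions, not a published schema.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AIIncidentReport:
    system_id: str      # which deployed system was involved
    category: str       # e.g. "bias", "toxicity", "data_leakage"
    severity: int       # shared 1-5 scale keeps reports comparable
    description: str
    mitigations: list[str] = field(default_factory=list)
    detected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

report = AIIncidentReport(
    system_id="support-assistant-v3",   # hypothetical system name
    category="toxicity",
    severity=3,
    description="Toxicity spike after a model update; rolled back in 20 minutes.",
    mitigations=["rollback", "regression test added to red-team suite"],
)
print(json.dumps(asdict(report), indent=2))  # machine-readable, shareable record
```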

Recommendations for decision makers

Agile AI governance is not about speed for its own sake. It is about creating the conditions for systems that learn, adapt and interact to be supervised effectively, enabling both innovation and safety.

Evidence across sectors shows that organizations with systematic monitoring and transparent reporting experience fewer deployment delays, smoother engagement with supervisors and faster time-to-scale for high-risk applications.

Real-time oversight can also prevent harms before they propagate, identifying biased outputs, toxicity spikes, data leakage patterns or unexpected autonomous behaviour early in the lifecycle.

And by incorporating continuous feedback from civil society and affected communities, agile governance helps ensure that AI systems remain aligned with societal expectations and can adapt as those expectations evolve. But translating these benefits into institutional practice requires coordinated action.

Recommendations for policymakers include:

  • Build national AI observatories and model evaluation centres that aggregate test results, incident data and systemic indicators across sectors.
  • Adopt risk-tiered, adaptive regulatory frameworks that protect without slowing innovation.
  • Standardize transparency and incident reporting, paired with safe-harbour provisions that incentivize early disclosure and collective learning rather than punitive response.
  • Strengthen international cooperation to avoid fragmented rules and uneven risks.

Recommendations for industry leaders include:

  • Deploy continuous monitoring across the full AI lifecycle.
  • Embed responsible AI into development pipelines with automated assessments and real-time alerts.
  • Implement adaptive guardrails and modernize human oversight for agentic AI.
  • Invest in AI literacy and governance tech while treating trust as a strategic capability, not a checkbox.

Future-ready governance starts now

As AI systems become more dynamic, autonomous and deeply embedded in critical functions, governance must transition from periodic verification to continuous assurance.

This shift echoes the focus of the World Economic Forum Annual Meeting 2026 in Davos, Switzerland, on deploying innovation at scale and responsibly, calling for regulatory approaches appropriate to frontier technologies that safeguard human agency and enable growth through trust.

The transformation starts with a simple recognition: in a world of adaptive, autonomous AI, governance must be equally adaptive, continuous and intelligent. Anything less is not only insufficient, it's a competitive disadvantage we can't afford.
