From systems of record to systems of trust: A board-level playbook for governing agentic AI

Rohan Sharma
  • Boards are increasingly reallocating decision rights to autonomous systems, while retaining governance models built for human judgement.
  • The real risk of this mismatch lies not in system behaviour, but in how boards define the boundary of delegation.
  • To govern systems of trust, boards must therefore abandon the illusion of total control; the new mandate is the architecture of constraint.

For decades, boards have governed technology the same way: buy the system, hire the team, set policy, audit outcomes. But the recent shift is different.

The modern boardroom is not adopting a faster class of software. It is reallocating decision rights to autonomous systems such as AI agents, while retaining governance models built for human judgement.

The real risk of this mismatch lies not in system behaviour, but in how boards define the boundary of delegation. Boards that govern these agents, not just buy them, will quietly compound an advantage their competitors won’t even know how to measure.

The World Economic Forum’s recent AI Agents in Action white paper, alongside its Advancing Responsible AI Innovation playbook, reflects a reality already visible in boardrooms: governance must now be encoded, not discussed.


As agents move from conversational novelties to core operational engines, the latency between a strategic directive and a catastrophic execution shrinks to zero. In practice, this means boards are now voting, often blindly, on where human judgement ends and machine authority begins.

Where governance fails in practice is not where boards are looking. Boards assume the risk is a technical glitch, an agent hallucinating or crashing. But the true failure mode of an autonomous agent is rarely a breakdown. It is hyper-competence applied to a flawed metric.

Consider a financial agent optimizing procurement. It does not fail; it executes perfectly, renegotiating at scale to extract marginal gains, collapsing a critical supplier and disrupting the supply chain. The system worked exactly as designed. That is the failure, and it is entirely invisible to a standard risk matrix.

Traditional compliance is often post-mortem, but systems operating at machine velocity cannot be audited retroactively. Relying on static audits for an autonomous agent is like analyzing the trajectory of a bullet after it has struck the wall.

The AI Policy Observatory of the Organisation for Economic Co-operation and Development (OECD) notes that while national AI strategies proliferate, functional frameworks for real-time agentic oversight remain absent. We are building engines without brakes, expecting legacy seatbelts to save us.

Governing systems of trust in the agentic AI era

The implication for governance is clear. To govern systems of trust, boards must abandon the illusion of total control. The new mandate is the architecture of constraint.

Board directive 1: Design for legible friction

The defining tradeoff is between maximum system yield and legal defensibility. Absolute efficiency eliminates human legibility. If you cannot explain how a decision was made, you cannot defend it.

Governance therefore requires intentionally constraining speed. Boards must engineer “legible friction”: defined pause points where high-stakes actions require human authorization. What appears as inefficiency is, in practice, operational control.

The US AI Safety Institute Consortium reflects this shift towards standardized guardrails. As Kathleen Hicks, a former US Deputy Secretary of Defense, has emphasized: “When we say ‘human in the loop’, we mean that someone in the chain of command must ultimately take responsibility.”

Board directive 2: Audit the reward, not the route

Making thousands of micro-decisions in real time is infeasible. Boards must shift from auditing execution to governing the underlying reward function. If you incentivize an agent to maximize engagement, it will find the most extreme path to achieve it. The objective defines the outcome.
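The point can be made concrete with a toy comparison: the same agent, choosing among the same options, picks a different path depending solely on how its objective is written. The strategies, numbers and penalty weight below are invented for illustration only.

```python
# Candidate procurement strategies the agent can choose among (illustrative data).
strategies = [
    {"name": "modest renegotiation", "savings": 2.0, "supplier_failure_risk": 0.05},
    {"name": "aggressive squeeze",   "savings": 9.0, "supplier_failure_risk": 0.60},
]

def naive_reward(s):
    # The objective as commonly written: maximize savings, and nothing else.
    return s["savings"]

def governed_reward(s, risk_penalty=20.0):
    # A board-audited objective: savings minus an explicit price on supplier failure.
    return s["savings"] - risk_penalty * s["supplier_failure_risk"]

def pick(reward):
    # The agent simply maximizes whatever objective it is given.
    return max(strategies, key=reward)["name"]

print(pick(naive_reward))     # the extreme path
print(pick(governed_reward))  # the constrained path
```

Nothing about the agent changed between the two runs; only the reward did. That is why the reward function, not the execution trace, is where board-level audit effort belongs.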


This cannot remain a Western construct. Systems trained on narrow data will reshape global outcomes. Efforts such as Research ICT Africa’s Africa Just AI project underscore the need to embed regional realities into system objectives to avoid reinforcing structural inequities.

Board directive 3: Internalize the liability perimeter

Boards still treat risk transfer as contractual; in practice, accountability is non-transferable. Infrastructure can be outsourced, but liability remains anchored to the institution. When an autonomous procurement agent executes a discriminatory vendor-selection practice, it does so under the authority of the board, whether that authority is explicitly understood or not.

Global institutions now treat AI as a structural driver of trade and growth, not a side experiment. Capturing that upside requires internalizing the downside. You cannot buy an indemnity clause for a synthetic actor acting on your behalf. If the agent acts, a named executive must own the consequence.

These shifts are no longer scenarios on a risk register; they are showing up in board minutes and litigation dockets, and they now demand an institutional response measured in quarters, not years.

Building transparency and trust in AI

Stakeholders across sectors should take the following actions to help build transparency and trust in agentic AI.

  • Governments: Regulate the interfaces where agents interact with markets, critical infrastructure and physical systems, not the underlying models. Align these control points with United Nations Sustainable Development Goal 16 on Peace, Justice and Strong Institutions to ensure transparent and accountable institutions.
  • Business leaders: Abandon the assumption that compliance can be layered on after deployment. Safety is a design decision that requires sacrificing a portion of peak optimization. Encode risk tolerance directly into system architecture, aligning with emerging baselines such as ISO/IEC 42001 for AI management systems. Pay the margin for defensibility or absorb the cost of uncontained failure.
  • Civil society: Shift scrutiny from training data to optimization objectives. Demand transparency not just in what systems ingest, but in what they are incentivized to achieve. The reward function is the new battleground for human rights.

Boards should initiate three immediate directives:

  • Demand a shadow agent audit: Identify informal automation built within the organization. The primary risk is not vendor AI; it is unsupervised internal deployment.
  • Force a D&O liability stress-test: Review directors & officers (D&O) insurance coverage for exposure to autonomous AI negligence. Many policies do not account for agent-driven risk. Right now, most boards are likely materially underinsured.
  • Execute a ‘synthetic subpoena’ drill: Require management to formally defend a single high-stakes decision made by an autonomous agent, as if under legal scrutiny. If leadership cannot clearly trace the decision back to defined objectives and human intent, the system should not be operating. Governance begins where explainability fails.

The defining liability of the next decade is not the code you failed to write, but the decisions you allowed machines to execute. You can outsource execution to a synthetic system, but not fiduciary duty.

When an AI agent acts, it extends the board’s decision-making perimeter, and with it, its liability. The system is no longer processing the ledger; it is writing the history.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.

© 2026 World Economic Forum