In the AI era, business governance means safeguarding trust

- AI governance must focus on outcomes like integrity, accountability, transparency and resilience.
- Boards must converge AI transformation and cybersecurity, ensuring teams co-design systems from the start rather than validating after the fact.
- Trustworthy governance requires independent leadership, accurate AI/IT inventories and genuine literacy at the board level to steward long-term value.
AI is transforming business, cyber risks are escalating, and trust in institutions is under pressure. With these developments in mind, what does "good governance" look like in 2025? Governance refers to the structures, systems and practices an organization has in place to assign decision-making authority, define how decisions are to be made, and most importantly, make progress towards the organization’s strategic direction. It also includes service delivery, performance management, monitoring and mitigation of key risks.
Yet governance is often caricatured as red tape that slows innovation. The remedy is not more procedures, but a return to first principles: Defining the outcomes that governance exists to protect – integrity, accountability, transparency and resilience – then working backwards to the minimum mechanisms required to achieve them. Good governance doesn’t mean never failing; it means, even in failure, being transparent, resilient and ready to adapt.
What if real transformation comes not from tweaking the old, but from focusing on outcomes and proving the value of the new?
Two board imperatives that must converge
This reframing of governance connects directly to the challenges boards face today. Boards are the guardians of a corporation’s longevity; over the past two years, two items have dominated their agenda: transforming with AI and defending against cyber risk. We are at the beginning of a new industrial revolution, and companies that miss the AI transformation will not survive.
A common mistake is treating these two agenda items as separate. However, the constant evolution of the threat landscape demonstrates their intersection: AI is used to amplify cyberattacks, and cyberattacks target AI systems. Therefore, trustworthy AI systems depend on many of the same disciplines cybersecurity has defined: policy, risk, controls, testing and red-teaming. Convergence of these disciplines is not optional.
'By design' only works if teams design together
This shift cannot happen while teams remain siloed. Yet it is rare to see cybersecurity leaders working side by side with a corporation’s transformation agenda. Instead, they are often cast as validators of outcomes, stepping in at the last minute to check the security state. This “after-the-fact” model is costly and slows innovation. What if true speed and effectiveness came from cybersecurity and AI transformation teams co-designing from the very start?
Public vs. private: the duty of resilience
In the private sector, consumers can vote with their feet (or wallet) if they lose confidence or trust in a provider after a breach. In the public sector, citizens have no such choice: They cannot change where they get passports, healthcare or essential services. That makes procurement standards, transparency, accountability and resilience even more vital. Governments must show that – even when mistakes occur – processes are ethical, transparent and resilient enough to maintain citizens’ trust.
AI governance: the evolving CISO mandate?
An increasing number of organizations are actively exploring where AI governance should sit. A few have already made the choice to expand the role of the chief information security officer (CISO), who brings a level of independence from day-to-day operations and is already accountable for several risk and control disciplines. However, organizational structure matters; where the CISO sits in the hierarchy and who they report to can impact their ability to influence and deliver, as well as their perceived independence.
The inventory problem
Effective governance starts with knowing what you’re governing. However, creating an AI inventory routinely collides with the reality that many organizations’ IT asset inventories are incomplete or of low quality. A systems-thinking approach helps by providing a more holistic view of how people, processes, data and machines interact. It also assigns accountable ownership for maintaining accurate maps of these interactions.
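To make this concrete, the catalogue idea above can be sketched as a small data structure. This is a minimal illustration, not a standard schema: the record fields (`owner`, `data_sources`, `depends_on`) and the example asset names are assumptions chosen to show how an inventory can link assets to accountable owners and surface gaps.

```python
from dataclasses import dataclass, field

# Hypothetical entry in an AI/IT asset catalogue. Field names are
# illustrative assumptions, not an established inventory standard.
@dataclass
class AssetRecord:
    asset_id: str
    kind: str                                        # e.g. "ai-model", "dataset", "service"
    owner: str                                       # accountable person or team ("" = unowned)
    data_sources: list = field(default_factory=list) # data this asset consumes
    depends_on: list = field(default_factory=list)   # asset_ids this asset relies on

def unowned(inventory):
    """Governance check: flag assets with no accountable owner."""
    return [a.asset_id for a in inventory if not a.owner]

inventory = [
    AssetRecord("fraud-model-v2", "ai-model", "risk-analytics",
                data_sources=["payments-db"], depends_on=["feature-store"]),
    AssetRecord("feature-store", "service", ""),     # ownership gap
]
print(unowned(inventory))  # → ['feature-store']
```

Even a sketch this small captures the systems-thinking point: the inventory records interactions (dependencies and data flows), not just a flat list of assets, so ownership gaps and hidden couplings become queryable.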
Literacy is necessary
While AI and cybersecurity literacy at the board and executive level is rising, jargon still blocks collaboration. Leaders need clarity, not complexity: Cut through the technical detail and speak in terms of transformational impact. Only then can governance decisions be both informed and resilient.
In summary, five opportunities stand out:
- Mandate independent convergence: a single senior manager empowered across AI, security and data risk, reporting to the board with independence from delivery pressures.
- Institutionalize co-creation: cross-functional design reviews that pair cybersecurity experts with product, data and AI teams at the inception of projects.
- Operationalize transparency: record decision logs for high-risk systems, communicate broadly and rigorously rehearse incident communication and recovery.
- Fix the inventory: establish an authoritative AI/IT catalogue with clear ownership and quality targets.
- Grow “real” literacy: replace checkbox training with scenario workshops where leaders practice making and defending trade-offs under uncertain conditions.
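The decision-log opportunity above lends itself to a brief sketch. This is a hypothetical illustration, not a prescribed design: the field names, the example systems and the hash-chaining choice are all assumptions, used here to show how a log can be made tamper-evident so that transparency claims can be verified after the fact.

```python
import datetime
import hashlib
import json

# Illustrative decision log for a high-risk AI system: each entry
# embeds the hash of the previous entry, so retroactive edits break
# the chain and are detectable. All names/values are hypothetical.
def append_decision(log, system, decision, rationale, approver):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,
        "decision": decision,
        "rationale": rationale,
        "approver": approver,
        "prev_hash": prev_hash,
    }
    # Hash the entry (including prev_hash) to chain it to its predecessor.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

log = []
append_decision(log, "credit-scoring-v3", "approve model release",
                "bias audit passed; rollback plan rehearsed", "cro@example.com")
append_decision(log, "credit-scoring-v3", "restrict to pilot cohort",
                "drift detected in week-2 monitoring", "ciso@example.com")
print(len(log), log[1]["prev_hash"] == log[0]["hash"])  # → 2 True
```

The design choice worth noting is the chaining: a plain list of decisions records what was decided, but linking entries by hash also supports the rehearsal and audit discipline the bullet calls for, since any gap or alteration in the record is mechanically visible.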
If governance is ultimately about stewarding decision-making that protects value over time, then in the age of AI, the board must act as the guardians of trust. That begins by collapsing the walls between AI transformation and cybersecurity, and by measuring governance, not by procedures, but by outcomes.
The views expressed in this article are those of the author alone and not the World Economic Forum.