Businesses aren't prioritizing AI governance. Here's why that holds them back

Leaders should prioritize governance and trust in AI as they build out their AI stacks. Image: Getty Images/iStockphoto
- Fewer than one in 100 organizations has fully operationalized responsible AI practices.
- Delivering on the promise of AI requires trust, something many current AI strategies lack.
- The technology is increasingly ready to implement effective AI governance – but leaders must prioritize it.
Artificial intelligence (AI) holds enormous promise, but its future depends on trust. And when it comes to making the most of this technology, the data tells a story: fewer than 1% of organizations have fully operationalized responsible AI practices. That gap is not just technical; it is structural.
Without governance built in from the start, AI risks repeating the failures of past technologies, from poor data quality to opaque decision-making and weak accountability. The World Economic Forum's Advancing Responsible AI Innovation: A Playbook report delves into what this means and how innovators and those using AI can deliver on its potential.
Governance is crucial at the point where policy meets product. When governance shows up late, it’s like pouring concrete after the residents move in: hairline cracks today, structural problems tomorrow. Build it into the blueprint and you don’t slow the work; you steady it, scale it and make it last.
Trust in AI starts at the data layer
The Advancing Responsible AI Innovation: A Playbook report underscores a simple truth: the success of modern AI depends on modern data governance. Yet many organizations still struggle with siloed systems, uneven data quality and approval processes that slow progress and erode trust.
Distributed ledger technology is starting to change that. EQTY Lab, working with NVIDIA, employs ‘Verifiable Compute’ and anchors cryptographic receipts on Hedera: tamper-proof records of how models are trained and how they infer. ProveAI covers the other flank, documenting who touched which training set, when and under what policy, aligned with emerging rules like the EU AI Act. That’s real-time accountability, not a post-mortem.
These approaches show what happens when governance is built in from the start. Trust is not added later as a safeguard; it becomes part of the system itself – continuous, transparent and resilient by design.
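To make the pattern concrete, here is a minimal, hypothetical sketch of a hash-chained provenance receipt. It illustrates the general technique, not EQTY Lab’s Verifiable Compute or ProveAI’s actual implementation, and every field name is invented. Each training or inference event is hashed together with the previous receipt, so altering any earlier record breaks the chain; in production, the latest hash would be anchored to a public ledger such as Hedera.

```python
import hashlib
import json
import time

def make_receipt(prev_hash: str, event: dict) -> dict:
    """Create a tamper-evident receipt chained to the previous one."""
    record = {
        "timestamp": time.time(),
        "event": event,          # e.g. dataset used, policy applied, approver
        "prev_hash": prev_hash,  # links this receipt to the one before it
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_chain(receipts: list[dict]) -> bool:
    """Recompute every hash; editing any earlier receipt breaks the chain."""
    prev = "genesis"
    for r in receipts:
        if r["prev_hash"] != prev:
            return False
        body = {k: r[k] for k in ("timestamp", "event", "prev_hash")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != r["hash"]:
            return False
        prev = r["hash"]
    return True

# Hypothetical usage: one training event, then one inference event.
chain = [make_receipt("genesis", {"action": "train", "dataset": "customer_v2"})]
chain.append(make_receipt(chain[-1]["hash"], {"action": "infer", "model": "scorer-1.3"}))
print(verify_chain(chain))  # True; flips to False if any record is edited
```

Anchoring only the chain head on a public ledger keeps sensitive training details off-chain while still making the full history verifiable.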
Pour the foundation first
Data isn’t the only beam that matters. Organizations also need clear owners for AI. The World Economic Forum’s playbook calls for named AI stewards, cross-functional councils and a phased path that starts centralized and matures into federated oversight as capability grows. That avoids both chaos and bureaucracy.
Decentralized systems also offer useful lessons. In decentralized finance (DeFi), token-holder voting and governance councils help balance speed, transparency and resilience. Open-source communities push accountability even further, distributing oversight across developers and users who audit code and safeguard integrity. These models are not perfect, but they show that when governance is embedded in design, accountability becomes a built-in strength rather than an afterthought.
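As a rough illustration of the mechanism, here is a minimal, hypothetical sketch of token-weighted voting with a quorum threshold. Real DeFi protocols add delegation, timelocks and on-chain execution; this shows only the core accounting, and all names are invented.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    description: str
    votes_for: int = 0
    votes_against: int = 0

def cast_vote(proposal: Proposal, balances: dict, voter: str, support: bool) -> None:
    """Voting power equals token balance, as in simple token-holder voting."""
    weight = balances.get(voter, 0)
    if support:
        proposal.votes_for += weight
    else:
        proposal.votes_against += weight

def passes(proposal: Proposal, total_supply: int, quorum: float = 0.2) -> bool:
    """Pass if turnout meets the quorum and a majority of cast votes approve."""
    turnout = proposal.votes_for + proposal.votes_against
    return turnout >= quorum * total_supply and proposal.votes_for > proposal.votes_against

balances = {"alice": 600, "bob": 300, "carol": 100}  # token holdings
p = Proposal("Adopt model-audit policy v2")
cast_vote(p, balances, "alice", True)
cast_vote(p, balances, "bob", False)
print(passes(p, total_supply=1000))  # True: 90% turnout, 600 for vs 300 against
```

The quorum rule is the key design choice: it stops a small, motivated minority from passing proposals when most token holders abstain.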
Taking this a step further, a council of enterprises, nonprofits and universities that share equal responsibility can embed governance into the system itself, leaving no single actor with unchecked power. That architecture has yielded durable trust and responsible scale precisely because authority is distributed, not hoarded.
AI needs that kind of discipline. Governance must be visible, intentional and continuous, guiding design, implementation and growth. That is how resilience is built and how trust compounds.
Fusing progress and principles
Governments themselves need to bring clarity to the AI value chain, especially as generative AI blurs the lines between creators, deployers and users. Without clear accountability and shared standards, you invite systemic risk. International coordination matters too. Just as financial markets rely on common rules and oversight, AI will need guardrails that cross borders if it is to inspire confidence.
We are already seeing early steps. In the UK, the reintroduced AI Regulation Bill proposes an AI Authority and mandatory AI Officers to oversee responsible deployment. The EU is taking a different approach by enforcing compliance across the bloc under the AI Act. These are concrete examples of different models to address AI governance.
The task now is to improve on these models: define who is accountable, empower senior governance roles, embed oversight throughout deployment and work toward global alignment. Trust, safety and innovation will all depend on it.
Build AI on solid ground
The Forum’s playbook calls this a defining opportunity. AI can either become a technology people fear or one they trust to drive progress while protecting rights. The outcome will depend on whether governance is treated as a foundation or an afterthought.
As with any structure, once the foundation is set, everything built on top can stand taller and last longer. When governance is designed-in from the start, innovation becomes more resilient and transparent. Trust grows alongside adoption, giving AI the chance to scale not only quickly but responsibly, with accountability and inclusivity at its core.
Progress will not come from fenced-off efforts. It takes open ecosystems and serious collaboration among policymakers, builders and researchers. Let governance be the catalyst, not the brake, for trust and growth.