The $4.8 trillion AI trust crisis: Why public-private partnerships are key for equitable innovation

- AI adoption has surged but trust remains precarious, with only 62% of business leaders believing it is deployed responsibly in their organizations.
- Tackling the global AI trust deficit requires coordinated solutions that only public–private partnerships (PPPs) can deliver.
- PPPs combine government legitimacy, industry capability and civic oversight to turn 'trust' in AI into measurable controls, audits and redress.
Artificial intelligence (AI) adoption in enterprises surged an unprecedented 115% from 2023 to 2024, yet trust remains precarious – only 62% of business leaders and 52% of employees believe AI is deployed responsibly within their organizations.
This growing trust deficit urgently demands coordinated solutions that no single entity can provide on its own – but public–private partnerships (PPPs) can. PPPs combine government legitimacy, industry capability and civic oversight to turn "trust" into measurable controls, audits and redress.
The arithmetic is unforgiving: without trustworthy AI governance, the global economy forfeits not merely growth, but an estimated $4.8 trillion in unrealized economic upside by 2033 – largely the value lost to a widening digital divide between countries and communities with AI access and those without.
Distrust hinders the development of AI worldwide
Across sectors, distrust has visibly hindered the development of AI. Even within companies, a KPMG study found only 35% of decision-makers trusted AI and analytics in their own operations. This is precisely where PPPs can co-design standards, transparency, audits and accountability, so that adoption is "safe by default".
These trust gaps translate to missed opportunities: projects are shelved, efficiencies are unrealized and innovations are left on the table. According to a recent MIT study, 95% of AI pilots fail to deliver the returns expected of them, due to model output inaccuracy, security concerns and ineffective change management.
World Trade Organization research shows that universal AI adoption could boost global trade by an additional 14 percentage points by 2040 – double what's possible under current fragmented approaches. In simpler terms, everyone benefits more when AI tools are trusted and used worldwide.
However, International Monetary Fund (IMF) analysis reveals a troubling pattern: currently, wealthy nations capture twice the productivity benefits from AI compared to developing economies, widening rather than narrowing global economic disparities.
Targeted PPPs – combining multilateral financing, national regulators and industry consortia – are the fastest mechanism to diffuse capabilities, data access and skills to the Global South, narrowing the productivity gap the IMF highlights.
The very reasons there is a trust deficit in AI – i.e. concerns about bias, privacy, security, safety, and accountability – span technical, social and regulatory domains. PPPs connect public guardrails with private innovation and bring academia/nongovernmental organizations to stress-test real-world impact.
When these stakeholders work in concert, their combined credibility creates systems worthy of confidence. For example, Estonia’s 99% online tax filing and EU-leading collection efficiency show how trust unlocks digital uptake.
How PPPs operationalize AI trust
Three-quarters (75%) of CEOs recognize that trusted AI requires effective governance, yet only 39% report having adequate frameworks in place. Leaders need a repeatable stack that turns policy into controls, controls into evidence and evidence into incentives.
The TrustworthyAI Index's assessment methodology – which builds on established frameworks including the Organization for Economic Co-operation and Development (OECD) AI Principles, Stanford Human-Centered Artificial Intelligence (HAI) benchmarks and National Institute of Standards and Technology (NIST) Risk Management Framework (RMF) standards – identifies that only 20% of leading models meet exemplary transparency and accountability thresholds.
Governments are catching up too: only 30% of countries have used AI in policy-making, signalling missing linkages between principles and line-of-business execution. This gap is structural, not attitudinal. As Microsoft CEO Satya Nadella noted at the World Economic Forum Annual Meeting 2024 in Davos: “I don’t think the world will put up anymore with something not thought through on safety, equity and trust.”
PPPs operationalize trust by turning principles into sector protocols, building assurance through certification/audits/incident reporting, and enabling responsible data-sharing for robust, fair models. PPPs deliver through three levers – with clear owners and artefacts:
- Governance (public lead): risk tiering, procurement clauses, model registries, regulatory sandboxes. Why it works: integrates democratic authority with technical feasibility, collapsing the policy-to-execution gap.
- Assurance (shared): third-party testing labs, certification/audits, incident reporting, post-market monitoring. Why it works: shifts trust from claims to evidence and enables regulatory recognition and cross-border comparability.
- Inclusion and data (shared/civic): data trusts and privacy-enhancing technologies for safe sharing; targeted skills and access programs for high-exposure sectors/regions. Why it works: balances scale with responsibility and hard-wires equitable value distribution.
The Partnership on AI (PAI), for example, convenes 129 technology companies, media organizations and civil society to establish concrete AI governance frameworks. Its Responsible Practices for Synthetic Media set provenance norms and are supported by organizations, including Adobe, BBC, OpenAI, TikTok and WITNESS.
Meanwhile, PAI’s Safe Foundation Model Deployment guidance and the AI Incident Database provide shared risk infrastructure, complementing PPPs by turning principles into verifiable practice.
Economic benefits of closing the digital divide
The $4.8 trillion upside materializes only when trust closes the digital divide. As United Nations Secretary-General António Guterres argued at the UN Security Council, governments must urgently collaborate with technology companies on risk management frameworks, while systematically expanding access to ensure developing economies capture AI's transformative potential.
The Group of Twenty (G20) has already called for interoperable governance, safety assurance, and inclusive digital infrastructure. PPPs are the delivery vehicle – aligning standards, audits and deployment across borders.
Within 12 months, every G20 economy should:
- Establish a national public–private task force on trustworthy AI
- Adopt a common assurance baseline (independent audits, incident reporting, provenance)
- Pilot an AI-dividend for high-exposure workers with industry co-funding
Following that AI roadmap advances Sustainable Development Goals (SDGs) 9 (Industry, Innovation and Infrastructure) and 16 (Peace, Justice and Strong Institutions) and turns principles into growth – fast and fair. The opportunity is $4.8 trillion; the path is clear; execution must now be collective.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.
Alexander Tsado and Robin Miller
December 3, 2025