From regulation to innovation: How certification can build trusted AI for a sustainable future
Frameworks around AI are driving the need for certification, which can help build user trust in AI systems. Image: REUTERS/Isabel Infantes
- Frameworks such as the European Union’s AI Act are turning compliance into a competitive edge.
- Certification proves trustworthiness in practice: AI systems that meet reliability and explainability standards earn user confidence.
- Investors, governments and procurement teams now reward companies that can demonstrate governance and certification.
For over a decade, the narrative surrounding artificial intelligence (AI) has been one of speed: deploy quickly, scale rapidly and gain a first-mover advantage. Now, 2025 marks a turning point for AI. The European Union’s (EU) AI Act has entered into force; meanwhile, the United States and Asia are advancing their own frameworks.
For many leaders, regulation is instinctively viewed as a cost or a hindrance to innovation. In fact, the opposite is true. Regulation is becoming a catalyst for trusted adoption, offering companies that embrace compliance early not just protection against fines but a competitive advantage in credibility, access and market share.
Just as the EU’s General Data Protection Regulation (GDPR) reshaped global cloud adoption, the AI Act and its international counterparts will define who earns trust – and who is left behind.
The trust challenge in AI
Multiple surveys show that compliance and risk concerns are stalling AI adoption: 77% of executives say regulatory uncertainty affects their decisions, and 74% paused at least one AI project in the past year due to risk.
The EU AI Act classifies systems into unacceptable (prohibited), high-risk (subject to assessment), limited-risk (transparency obligations) and minimal-risk (no obligations).
High-risk systems in sectors such as healthcare, transport, energy and education must undergo a conformity assessment before entering the market. Without this assurance, adoption stalls. With it, buyers, from hospitals to governments, can adopt AI solutions with confidence.
Compliance, from hurdle to loop
Too often, compliance is treated as a late-stage hurdle, bolted on after innovation. However, leaders who flip the model can make compliance a design driver. We call this the compliance-driven innovation loop:
- Detect: Map AI projects against emerging legal frameworks (EU AI Act, NIST AI RMF, ISO/IEC 42001).
- Design: Translate regulatory principles into features and practices. Use model cards, datasheets and impact assessments as living documents, not paperwork.
- Deploy: Involve independent validators early. Build machine learning operations (MLOps) pipelines with traceability and auditability, ensuring each release is “trust-ready” (see the sketch after this list).
- Differentiate: Market trust. Procurement teams in healthcare, infrastructure and government increasingly demand certification evidence as a condition for contracts.
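To make the “living document” idea concrete, here is a minimal, hypothetical sketch of how a model card and release metadata might be captured in code so that every release carries its own audit trail. The class name, fields and values are illustrative assumptions, not requirements prescribed by the EU AI Act, NIST AI RMF or ISO/IEC 42001.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass
class ModelCard:
    """Illustrative 'living' model card kept alongside each release."""
    model_name: str
    version: str
    intended_use: str
    risk_tier: str                      # e.g. "high-risk" in the EU AI Act taxonomy
    training_data_summary: str
    known_limitations: list[str] = field(default_factory=list)
    evaluation_metrics: dict[str, float] = field(default_factory=dict)


def release_record(card: ModelCard, model_bytes: bytes) -> dict:
    """Attach traceability metadata so auditors can tie the card to an exact artefact."""
    return {
        "model_card": asdict(card),
        "released_at": datetime.now(timezone.utc).isoformat(),
        "artefact_sha256": hashlib.sha256(model_bytes).hexdigest(),
    }


if __name__ == "__main__":
    # All values below are hypothetical placeholders.
    card = ModelCard(
        model_name="lesion-classifier",
        version="1.4.2",
        intended_use="Decision support for dermatologists; not standalone diagnosis.",
        risk_tier="high-risk",
        training_data_summary="Dermoscopic images collected from three hospital sites.",
        known_limitations=["Lower sensitivity on under-represented skin tones."],
        evaluation_metrics={"auroc": 0.94, "brier_score": 0.08},
    )
    print(json.dumps(release_record(card, model_bytes=b"model-weights"), indent=2))
```

Recording a hash of the released artefact alongside the card gives auditors and validators a way to verify that the documentation matches the exact model that shipped, which is the kind of traceability procurement teams increasingly ask for.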
Rather than slowing things down, this approach should accelerate adoption by eliminating the friction of uncertainty.
Certification in practice
Several use cases across different industries show how AI applications can be validated and certified in practice:
Digital healthcare: Trustworthy skin cancer AI
Recent studies demonstrate how explainability tools enable physicians to understand why AI models classify skin lesions as malignant or benign. Meanwhile, reliability audits assess how consistently these systems perform under real-world conditions using metrics such as the Brier score.
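The Brier score mentioned above is straightforward to compute: it is the mean squared difference between predicted probabilities and actual outcomes. The sketch below, using made-up data and an illustrative function, shows how a reliability audit might calculate it for a binary malignant/benign classifier.

```python
import numpy as np


def brier_score(y_true: np.ndarray, y_prob: np.ndarray) -> float:
    """Mean squared difference between predicted probabilities and outcomes.

    0.0 is a perfect score; 0.25 matches a model that always predicts 0.5,
    so lower values indicate better-calibrated, more reliable outputs.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    return float(np.mean((y_prob - y_true) ** 2))


# Hypothetical audit data: 1 = malignant, 0 = benign; probabilities from the model.
y_true = np.array([1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.1, 0.7, 0.4])
print(f"Brier score: {brier_score(y_true, y_prob):.3f}")  # lower is better
```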
Together, such methods demonstrate how certification frameworks can transform medical AI into solutions that doctors can trust, regulators can approve and patients can rely on.
Mobility: Autonomous driving
Mercedes-Benz applied “compliance-by-design” in developing its Drive Pilot system. By embedding explainability and human-in-the-loop safeguards from the start and working with German regulators early, the company secured approval for Level 3 automated driving at 95 km/h.
This positions it ahead of competitors and opens procurement opportunities with fleet buyers who prioritize certification readiness.
Digital infrastructure: Safer construction
Industrial projects such as ZeroDefectWeld show that AI can detect and classify weld defects on radiographs, reducing manual inspection error in industrial environments.
Grounding these systems in the EU AI Act – meeting Article 15 requirements on accuracy, robustness and cybersecurity and applying high-risk controls when the AI serves as a safety component – creates a clear, auditable path to compliant, AI-enabled Non-Destructive Testing across infrastructure projects.
The result: safer builds, faster delivery and more reliable assets, which is direct progress towards Sustainable Development Goal (SDG) 9 on infrastructure.
Generative AI: Trusted cloud adoption
Microsoft is adapting its products and contracts to comply with the EU AI Act, updating policies to ban prohibited uses such as social scoring and signing the EU AI Pact.
It supports customers with Trust Centre documentation, transparency notes and governance tools such as Purview Compliance Manager and Azure AI Content Safety. By combining internal standards with regulatory engagement in Europe, Microsoft aims to help enterprises innovate with AI while staying compliant.
Across these cases, certification transforms regulation from a constraint into an enabler of scale.
Why this matters now
Economically, investors are applying a “trust premium” to companies with strong governance. Procurement teams in government and critical infrastructure now demand conformity assessments upfront.
Socially, certification safeguards fundamental rights and helps AI align with the SDGs:
- SDG 3 (Health): Safer medical diagnostics.
- SDG 9 (Infrastructure): More resilient industry and construction.
- SDG 11 (Sustainable cities): Trusted mobility and smart city applications.
Politically, certification bridges high-level regulation with technical methods, enabling governments to harmonize standards across borders, thereby reducing fragmentation and facilitating global AI trade.
What leaders should do
For executives, policy-makers and innovators, the agenda is clear:
- Establish clear leadership for AI trust: For example, by appointing a chief trust officer or creating a cross-functional AI-trust steering committee that brings together compliance, legal, product and technical expertise.
- Conduct AI project audits: These should be assessed against the EU AI Act, the NIST AI Risk Management Framework and emerging standards from the International Organization for Standardization (ISO) to ensure early compliance and market readiness.
- Engage with certification bodies early: Engagement shouldn’t just happen at the end of development.
- Treat compliance artefacts as market assets: Your model cards, data governance frameworks and audit trails are becoming your passport to global buyers.
Trust is the new frontier of innovation
Regulation clarifies the rules of the game and certification translates those rules into practice. Together, they make AI not only powerful but trustworthy.
The leaders of tomorrow will not simply deploy advanced AI. They will deploy trusted AI by design, earning both market access and societal license to operate.