Scaling trustworthy AI: How to turn ethical principles into global practice

ETH Zurich has a strong presence at the World Economic Forum Annual Meeting in Davos, leading dialogues on global challenges such as emerging technologies, climate change and sustainability. Image: ETH Zurich/Andreas Eggenberger

Joël Mesot
President, ETH Zürich
This article is part of: World Economic Forum Annual Meeting
  • Trustworthy artificial intelligence (AI) is a global priority, as societies shift from broad ethical principles to the more challenging work of putting them into practice.
  • Universities are emerging as key actors in responsible AI, embedding fairness, privacy and accountability through ethics-by-design methods and interdisciplinary governance.
  • Collaboration and education in academia are shaping sustainable AI for the future, advancing innovation while safeguarding the social values that underpin stable economies and societies.

Trust serves as a foundation for stable economies; it sustains democratic institutions and underpins international cooperation. Generative artificial intelligence (AI) accelerates scientific discovery, economic growth and societal change – transforming how we innovate, make decisions and solve global challenges.

Now more than ever, I see AI as a strategic priority, one that requires international cooperation, institutional capacity and culturally adaptable approaches. While we have defined what trustworthy AI means, articulating principles is only the first step in the far more challenging task of putting those principles into practice at scale.

Implementing ethical principles

Trustworthy AI has evolved from an abstract aspiration to an operational necessity. As generative AI systems shape health, finance and public services, societies recognize that ethical principles such as fairness, transparency and accountability must become practical frameworks guiding real-world technologies.

This shift from articulating values to implementing them marks a pivotal moment. It demands standards, methodologies and governance mechanisms that can scale worldwide, yet remain flexible enough for diverse cultural and economic contexts. I see this as a global priority in which institutional and technical capacity turns ethical intent into practice.

In 2024, the United Nations General Assembly adopted a landmark resolution promoting "safe, secure and trustworthy" AI. It called for international cooperation to ensure generative AI development respects human rights, reduces digital divides and advances sustainable development, underscoring the shared responsibility of governing AI technologies. The ETH Zurich community contributes to this goal by connecting scientific insights with policy action. Computer scientist Andreas Krause, for example, serves on the UN’s Global AI Advisory Body, offering expertise on how AI can be governed for the common good.

Global collaboration is central to implementing trustworthy AI. To advance this goal, Meta, IBM and more than 50 organizations – including ETH Zurich and CERN – founded the AI Alliance, which grew to more than 140 members in 23 countries in its first year. In 2025, it launched several initiatives, including the development of a roadmap for Responsible and Strategic Open-Source AI Innovation in Europe and Beyond in collaboration with ETH Zurich’s AI Ethics and Policy Network.

Honest brokers for ethics-by-design AI

Universities play a unique role in the evolution and governance of AI. Their independence, scientific rigour, and public mission enable them to serve as honest brokers – stewards of both knowledge and public trust – advancing trustworthy AI without commercial or political pressure.

Across disciplines, momentum is growing for ethics-by-design approaches that embed fairness, privacy and accountability into algorithms and datasets from the start. This reflects a broader understanding that responsible AI will need to be built into the architecture of innovation. At the nexus of research and governance, universities bring together diverse experts to assess risk, develop safeguards and propose evidence-based regulatory frameworks. In doing so, academia increasingly serves as part of the world’s ethical infrastructure, shaping norms for responsible AI development and deployment.

Legal frameworks also benefit from academic innovation. While the European Union’s AI Act (2024) set the first comprehensive regulatory standard, it was LatticeFlow – an ETH Zurich spinoff – that built a platform for robust, compliant AI to help machine-learning teams implement the Act. LatticeFlow validates AI systems in healthcare, finance and manufacturing, serving as a bridge between complex models and real-world deployment.

In French, we say, "joindre le geste à la parole" – roughly equivalent to "walk the talk." ETH Zurich, EPFL and the Swiss National Supercomputing Centre (CSCS) recently did just that with Apertus, Switzerland’s first large-scale, open-source multilingual language model. True to its name, Latin for "open," Apertus offers fully documented architecture, model weights, training data and methods – a milestone for transparency and diversity in generative AI.

Shaping sustainable AI for the future

To scale trustworthy AI globally, collaboration and education are essential. Partnerships between academia and civil society help translate research insights into actionable policies and practical tools. These collaborations are most effective when conducted with transparency and mutual respect, ensuring that innovation remains aligned with democratic values and societal well-being.

At the same time, universities prepare the next generation of scientists, engineers and leaders to navigate the societal dimensions of AI. Embedding ethical reflection and interdisciplinary training into curricula creates professionals who are not only technically capable, but also attuned to the broader impacts of technology. This combination – international collaboration and responsible education – is vital for building AI systems that support stable economies, resilient institutions and inclusive societies.

Universities also serve as neutral conveners, fostering dialogue across sectors and borders.

Two examples emerging in Switzerland are the Swiss National AI Institute (SNAI) and the Albert Einstein School of Public Policy at ETH Zurich. SNAI, founded by ETH Zurich and EPFL – the country’s two federal universities – leverages their broad scientific expertise and the country’s highly ranked Alps supercomputer to accelerate research, education and innovation, while also addressing the inherent challenges AI poses for research. The Albert Einstein School of Public Policy is an interdisciplinary centre addressing societal challenges in public policy, ethics and governance in science and technology; its focus on “Digitalization and AI” offers public officials opportunities for fact-based dialogue and network support for their initiatives.

Finally, sustaining trustworthy AI will depend on deep collaboration and long-term investment in education. Universities, industry, governments and civil society must continue working together to create governance models, technical safeguards and open resources that reflect democratic values.

Equally important is preparing future scientists and leaders to understand the societal implications of AI, not just its engineering challenges. Through initiatives such as national AI institutes, interdisciplinary policy schools, and global alliances, institutions can help build resilient economies, inclusive societies, and a technological ecosystem grounded in accountability and public trust.
