
Restoring trust online: what can we learn from cybersecurity's zero trust models?

Zero trust models can transform how we trust

Could a zero trust model restore broken trust online? Image: Flickr/Aqua Mechanical

Tomas Okmanas
Co-Founder, Nord Security
This article is part of: World Economic Forum Annual Meeting
  • In recent years, cybersecurity engineers have been shifting to a new security model called “zero trust”: the idea that no user or device should be granted trust by default.
  • Digital societies have yet to adopt this stance. Online users often skip the information verification step, as is evident from the global rise of false and misleading information.
  • By adopting principles from zero trust models, online users could combine emerging AI technology, human moderation and blockchain technology to become better at verifying information.

To trust or not to trust? In recent years, a surge in misinformation, disinformation and fake news suggests that this Shakespeare-esque question is, in fact, not the question; nor is it whom to trust.

People tend to rely on sources they recognize as authoritative, regardless of whether those sources can actually be trusted – something seen on social media or TV, or even an answer to a prompt on a large language model (LLM) site. When information is abundant but not always reliable, the real question online users should be asking is how we should trust.


While this trust challenge is not limited to any medium or region, it would be hard to overlook the role of technology in its growth. Technological advancement allows platforms to both inform and misinform at scale. Social media algorithms boost engagement without always verifying facts, LLMs generate text that can seamlessly blend truth and hallucinations, and even historically reputable news sources struggle to maintain authority over a flood of competing claims.

However, a potential solution to this crisis in trust may also lie in the technology world if we borrow one of the principles of zero trust. In cybersecurity, zero trust means that no user, device or transaction is granted trust by default; instead, each request is authenticated and continuously verified.
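To make the borrowed principle concrete, here is a minimal sketch of per-request verification. The names (Request, verify_identity, verify_device) are hypothetical and stand in for no particular vendor's implementation; the point is simply that nothing is trusted because of who sent the request or where it came from – every request is checked every time.

```python
# Minimal sketch of the zero trust idea: every request is re-verified,
# nothing is trusted by default. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    user_token: str
    device_id: str
    resource: str

def verify_identity(token: str) -> bool:
    # A real system would validate a signed, short-lived credential.
    return token in {"valid-token-alice", "valid-token-bob"}

def verify_device(device_id: str) -> bool:
    # A real system would check device health/posture attestations.
    return device_id in {"laptop-001", "phone-042"}

def authorize(request: Request) -> bool:
    # Zero trust: every request must pass every check, every time.
    return verify_identity(request.user_token) and verify_device(request.device_id)

print(authorize(Request("valid-token-alice", "laptop-001", "/reports")))  # True
print(authorize(Request("stolen-token", "laptop-001", "/reports")))       # False
```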

The core principle of zero trust contrasts with how online users typically consume digital information. Today, people often rely on a platform’s perceived authority or familiarity, skipping the information verification step entirely. And as the volume of content grows, upholding standards becomes an ever greater struggle.

Are current verification initiatives enough?

Some platforms have recognized this erosion of trust and attempted to rebuild credibility through community-driven verification tools. For example, the social platform Twitter (now X) launched a community-driven content moderation programme called “Birdwatch” in 2021, which later became “Community Notes”.

This approach invites users to collectively annotate posts that may contain misleading information. On paper, this decentralized fact-checking model sounds promising: the crowd can offer additional context and highlight inaccuracies. In practice, however, Community Notes has struggled to become a definitive solution, with limited transparency about who contributes and how specific notes are elevated.

Another example of collective vetting is Wikipedia's open-editing model, where volunteer contributors evaluate and refine articles for accuracy. Wikipedia demonstrates that collaborative moderation can help maintain a certain quality baseline: blatantly false information is often corrected swiftly, and the platform has developed strict rules, citation requirements and editorial discussions. But Wikipedia is not immune to challenges either, from harassment within the editor community to disagreements over interpretations, sources and editorial biases. The site's reliance on volunteers also means that lesser-known topics may not receive the same level of attention or expertise as more popular subjects.

These two prominent examples show that although such measures help to some degree, they often lack scalability and transparency. If zero trust frameworks have proved a potent answer to cybersecurity problems, can we apply a similar concept to digital information and trust?

A blueprint for change

Drawing inspiration from zero trust models, we can imagine an unbiased, third-party knowledge base containing validated sources of truth, facts and information – a standardized system that could be implemented on any platform dealing with information, from news outlets to social media.

Such a knowledge base could integrate three elements, taking advantage of their strengths: human moderation, blockchain technology and AI-driven analysis. Each element would operate as an additional verification step, reducing the likelihood of misinformation slipping through and improving the overall quality of digital discourse.

How human moderation, blockchain technology and AI-driven analysis could work together on verification

Human moderators, preferably organized as a transparent and diverse body of experts, can offer nuanced judgement, oversee critical factual claims and provide guidance on establishing standards. They would not judge every piece of content – an increasingly impossible task – but rather help set the rules and boundaries.

Only in recent years has blockchain technology started to shed its near-exclusive association with cryptocurrencies. In this system, it could serve as an immutable ledger that records the verification steps each piece of information undergoes: any claim’s fact-checking history, sources and editorial decisions would be stored in a decentralized manner. Users and platforms alike could trace the lineage of a claim, seeing when it was verified, by whom, and according to which standards.
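A real deployment would sit on an actual distributed ledger. The sketch below only illustrates the underlying idea, using ordinary Python and invented field names: each verification record carries a hash of the previous one, so any later edit to the history is detectable.

```python
# Loose sketch of an append-only, hash-chained log of verification events.
# A production system would use a real distributed ledger; this only shows
# why tampering with earlier history becomes detectable.
import hashlib, json, time

def record_verification(ledger, claim_id, verdict, checker, sources):
    prev_hash = ledger[-1]["hash"] if ledger else "genesis"
    entry = {
        "claim_id": claim_id,
        "verdict": verdict,        # e.g. "supported", "disputed"
        "checker": checker,        # who or what performed the check
        "sources": sources,        # citations used in the check
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return entry

def is_intact(ledger):
    # Recompute each hash; any edit to an earlier entry breaks the chain.
    prev = "genesis"
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

ledger = []
record_verification(ledger, "claim-123", "supported", "moderator-7", ["https://example.org/report"])
print(is_intact(ledger))           # True
ledger[0]["verdict"] = "disputed"  # tampering with history...
print(is_intact(ledger))           # ...is detected: False
```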

Alongside these elements, AI and machine-learning models would operate as the layer of automated analysis and scrutiny. Trained on data from vetted sources and already approved factual statements, these models could flag inconsistencies, highlight suspicious patterns and even suggest higher-quality sources. Transparent verification histories would add another level of review. While not infallible, these tools would reduce the burden on human moderators by filtering out large volumes of questionable content.
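As a deliberately simple stand-in for that analysis layer – a production system would use trained language models, not string similarity – the sketch below scores an incoming claim against a small set of vetted statements and routes low-confidence items to human review rather than blocking them outright. The statements and threshold are invented for illustration.

```python
# Toy stand-in for the automated-scrutiny layer: score an incoming claim
# against a small base of vetted statements and route low-confidence items
# to human moderators. Real systems would use trained language models.
from difflib import SequenceMatcher

VETTED_STATEMENTS = [
    "the 2024 report was published by the national statistics office",
    "the vaccine was approved after phase three trials",
]

def closest_match(claim: str) -> float:
    claim = claim.lower()
    return max(SequenceMatcher(None, claim, s).ratio() for s in VETTED_STATEMENTS)

def triage(claim: str, threshold: float = 0.6) -> str:
    score = closest_match(claim)
    # High similarity to vetted material: pass through with a citation.
    # Low similarity: flag for human review instead of auto-blocking.
    return "auto-verified" if score >= threshold else "flag for human review"

print(triage("The 2024 report was published by the national statistics office."))
print(triage("Secret documents prove the moon landing was staged."))
```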

Such a theoretical model is just one way to move forward. Platforms could also implement anonymous authentication to show that the account sharing information belongs to a real user and not a bot, helping to reduce the automated spread of misinformation. Network analysis could help determine where misinformation originates and which pathways it commonly follows; finding the source of misinformation is vital to controlling it.
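The network-analysis idea can be sketched in a few lines: model reshares as a directed graph and walk back along it to find where a claim first entered the network. The account names and reshare map below are invented for illustration; real platforms would work with far larger and noisier data.

```python
# Rough sketch of tracing misinformation back to its entry point:
# each account maps to the account it reshared the claim from.
RESHARED_FROM = {
    "user_d": "user_b",
    "user_c": "user_b",
    "user_b": "user_a",
    "user_e": "user_c",
}  # user_a has no entry: they are an origin point

def trace_origin(account: str) -> str:
    seen = set()
    # Walk backwards until we reach an account with no known source.
    while account in RESHARED_FROM and account not in seen:
        seen.add(account)
        account = RESHARED_FROM[account]
    return account

def spread_path(account: str) -> list:
    # Reconstruct the pathway the claim followed to reach this account.
    path = [account]
    while path[-1] in RESHARED_FROM:
        path.append(RESHARED_FROM[path[-1]])
    return path

print(trace_origin("user_e"))  # user_a
print(spread_path("user_e"))   # ['user_e', 'user_c', 'user_b', 'user_a']
```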

Rather than accepting misinformation as an inevitable by-product of connectivity, we can build systems that demand and demonstrate credibility at every turn. By embracing this shift, we take a step closer to a future where facts and trust stand on firmer ground.



The views expressed in this article are those of the author alone and not the World Economic Forum.
