Why humanity must learn to trust differently in the GenAI era

Verification literacy is required to respond appropriately to GenAI. (Image: Karla Rivera/Unsplash)
- It used to be the case that if something appeared authentic, society accepted it as true; generative artificial intelligence (GenAI) is changing that relationship.
- Verification literacy is the ability to understand how information becomes trustworthy, rather than judging truth based only on appearance.
- The camera once taught humanity to trust what it could see; GenAI is teaching humanity to understand how truth is constructed.
For more than a century, trust depended on what people could see; photographs documented history, video confirmed events and recorded voices served as proof of presence. From journalism to courts to social media, visual media was the shared foundation of modern reality. If something appeared authentic, society generally accepted it as true, but generative artificial intelligence (GenAI) is quietly changing that relationship.
Controlled experiments show human accuracy in identifying AI-generated faces averages only about 62%, with confidence levels unrelated to correctness, suggesting perception is no longer a dependable indicator of authenticity.
Today, AI systems can produce images, voices and videos that closely resemble real recordings, with humans often unable to reliably distinguish between synthetic and authentic media. The result is not simply a misinformation problem: it is a learning moment for humanity.
What is verification literacy?
Verification literacy is the ability to understand how information becomes trustworthy, rather than judging truth based only on appearance. UNESCO supports the development of media and information literacy, which includes the knowledge, attitudes, skills and practices required to access, analyze, critically evaluate and validate information ethically.
It shifts the central question from "Does this look real?" to "How has this been verified?"
Verification literacy includes these practical habits:
- Checking whether multiple independent sources confirm a claim.
- Understanding where digital content originates.
- Recognizing how algorithms amplify emotional or sensational material.
- Allowing time for verification before accepting or sharing information.
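The first habit, convergence across independent sources, can be sketched as a toy check: a claim is treated as provisionally verified only when it is confirmed by several sources from distinct publishers. Everything here is an illustrative assumption, not an established standard: the `Source` record, the independence rule based on distinct publishers and the threshold of three are all hypothetical.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Source:
    """A hypothetical record of one report about a claim."""
    publisher: str       # e.g. a news outlet or eyewitness account
    confirms_claim: bool


def is_provisionally_verified(sources, min_independent=3):
    """Toy convergence rule: the claim passes only when at least
    `min_independent` sources from *distinct* publishers confirm it."""
    confirming_publishers = {s.publisher for s in sources if s.confirms_claim}
    return len(confirming_publishers) >= min_independent


# Three confirmations arrive, but two share a publisher, so only
# two are independent and the claim is not yet verified.
reports = [
    Source("Outlet A", True),
    Source("Outlet A", True),   # duplicate publisher: not independent
    Source("Outlet B", True),
    Source("Outlet C", False),
]
print(is_provisionally_verified(reports))  # → False
```

A real verification workflow would weigh source reliability and check whether "independent" outlets actually share an upstream wire report; the point of the sketch is only that trust accrues from convergence, not from any single item.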
Just as digital literacy became essential during the internet era, verification literacy is becoming necessary in an AI-mediated world. GenAI is creating synthetic media, forcing humanity to upgrade how it evaluates reality.
Why is AI a catalyst, not a crisis?
Each major communication technology has expanded human literacy. The printing press required reading literacy, broadcast media required media literacy, the internet introduced digital literacy and GenAI is catalyzing verification literacy.
While AI lowers the cost of producing convincing content, it simultaneously exposes the limits of perception-based trust. Studies show people consistently overestimate their ability to detect deepfakes, revealing that intuitive judgment alone is no longer sufficient.
According to Stanford University’s AI Index Report, organizational AI adoption rose from 55% to 78% in a year, while global GenAI investment reached $33.9 billion, signalling rapid integration into everyday information systems.
Rather than signalling the collapse of truth, this moment encourages societies to develop more resilient ways of establishing it. AI becomes a pressure that accelerates cognitive adaptation.
From visual proof to verification systems
Historically, trust was attached to objects: photographs, documents and recordings. In the verification age, credibility increasingly comes from processes.
Journalists, researchers and open-source investigators already verify events by combining independent signals, such as geolocation data, timestamps and multiple eyewitness sources. Trust emerges through convergence, rather than from any single piece of evidence.
Technology is beginning to support this shift. The Coalition for Content Provenance and Authenticity (C2PA) embeds secure metadata into digital media so audiences can trace how content was created or edited.
A United Nations report has stated that increasingly realistic deepfakes pose risks to political activities, financial security and public trust, calling for global verification standards and digital provenance systems.
These systems do not ask users to distrust media. They help users understand context, and verification literacy allows individuals to interpret these signals meaningfully.
How communication and social media can evolve
Social media platforms were designed during a period of visual certainty. Engagement rewarded speed and emotional reaction because images were assumed to represent reality. As synthetic media spreads, those communication norms are likely to change.
Platforms are experimenting with contextual indicators that provide additional information about content origins and authenticity. Instead of simply viewing posts, users may increasingly interpret layers of verification surrounding them.
Scholars note that digital platforms have fundamentally shifted communication from one-way broadcasting towards multi-directional interaction, where individuals, institutions and governments can engage in continuous dialogue, rather than traditional top-down messaging.
Communication becomes less about instant belief and more about informed interpretation. This new form of literacy, therefore, reshapes individual behaviour and platform design.
Why verification literacy matters now
Experimental research from the MIT Media Lab, published in Nature Communications, shows that even large groups of participants struggle to reliably distinguish authentic political speech from AI-generated deepfakes across audio and video formats.
Verification literacy offers a constructive response. It reframes AI not as an existential threat, but as an educational turning point, one that strengthens collective resilience. It does not eliminate trust; it restructures it by moving away from appearances and towards systems of validation shared across institutions, technologies and communities. Credibility becomes something built gradually through transparency and convergence.
In this environment, citizens are not passive consumers of information, but active participants in verification. Verification literacy becomes a civic capability and may become a determining factor in how industries and institutions worldwide maintain trust.
Learning to trust differently is the strength of modern communication
The camera once taught humanity to trust what it could see. GenAI is teaching humanity to understand how truth is constructed.
This transition may ultimately strengthen communication, rather than weaken it. By encouraging societies to question, cross-check and contextualize information, AI pushes humanity towards more deliberate forms of understanding.
A systematic review of 177 scientific studies in the Journal of Business Research finds that social media platforms are increasingly functioning as innovation ecosystems, enabling rapid information exchange and collaborative knowledge creation across industries.
It is ultimately less about scepticism and more about participation. Today, individuals are no longer passive recipients of information; they are active contributors to how reality is confirmed and shared. Trust, in this sense, evolves from something granted automatically to something maintained collaboratively, a shared responsibility between citizens, platforms and institutions navigating an increasingly synthetic information environment.
The success of this transition will depend not on whether AI becomes more powerful, but on whether humans learn how to verify.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.