
Here's Why You Can't Tell If ChatGPT Is Telling The Truth

ChatGPT is an AI chatbot that generates natural-language text. It was trained on a vast dataset of written sources, and it uses that data to spot patterns and relationships between words and sentences. That’s how it’s able to sound human. But ChatGPT’s output is generated statistically: it produces whichever words and sentences are most likely to come next, based on the patterns in its training data. In other words, it’s a highly complex autocorrect machine, and statistically likely sentences are not the same as verifiable truths. Yet because ChatGPT can sound like a journalist or a researcher, it’s easy to assume, mistakenly, that its output is just as rigorous. Watch the video to learn more.
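
To make the "autocorrect machine" idea concrete, here is a deliberately tiny Python sketch of statistical next-word prediction. It is not ChatGPT's actual mechanism (ChatGPT relies on a large neural network rather than simple word counts, and the corpus and function names here are invented for illustration), but it shows the core point: each word is chosen because it is statistically likely to follow the previous one, not because anything has checked that the result is true.

# A toy sketch of "statistical autocomplete": count which word tends to
# follow which, then always pick the most likely continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate text by repeatedly choosing the most likely continuation.
word, output = "the", ["the"]
for _ in range(5):
    word = most_likely_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # plausible-sounding, but nothing here verifies facts

Run on this toy corpus, the sketch produces fluent-looking fragments such as "the cat sat on the cat": grammatically plausible, factually meaningless, which is exactly the gap the video describes.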
