
Here's Why You Can't Tell If ChatGPT Is Telling The Truth

ChatGPT is an AI bot, trained on a vast dataset of written sources, that can generate natural-language text. It uses this data to spot patterns and relationships between words and sentences, which is how it manages to sound human. But ChatGPT’s output is generated statistically: it strings together whatever words and sentences are most likely to follow one another, based on patterns in its training data. In other words, it’s a highly complex autocorrect machine, and statistically likely sentences are not the same as verifiable truths. Yet because ChatGPT can sound like a journalist or a researcher, it’s easy to assume, mistakenly, that its output is just as rigorous. Watch the video to learn more.
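To make the “autocorrect machine” point concrete, here is a minimal sketch of statistical next-word prediction using a toy bigram model. This is an illustration only, not how ChatGPT is actually built: real large language models use neural networks trained on enormous datasets, and the tiny corpus, function names and probabilities below are invented for the example.

```python
import random
from collections import Counter, defaultdict

# Toy corpus, invented for illustration (real models train on vast datasets).
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of rock . "
    "the moon orbits the earth . "
).split()

# Count which word tends to follow which: a crude statistical model of the text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=8):
    """Repeatedly pick a statistically likely next word -- no notion of truth involved."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        next_words, counts = zip(*candidates.items())
        words.append(random.choices(next_words, weights=counts)[0])
    return " ".join(words)

print(generate("the"))
# Might print "the moon is made of rock ." -- or "of cheese",
# because the model only tracks what is likely in its data, not what is true.
```

Scaling this idea up by many orders of magnitude produces far more fluent text, but the underlying limitation in the sketch remains: likelihood is learned from the data, and truth is not part of the calculation.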

