Does fake news create online echo chambers? New study has surprising results

97% of online readers have a diverse news intake and people who read the most fake news also tend to read more news in general, says the report.

Angie Basiouny
Writer / Editor, Wharton School of the University of Pennsylvania

  • People who read fake news online aren’t doomed to fall into an echo chamber, according to a new study, with only 2.8% of online readers consuming fake news exclusively.
  • An echo chamber is where online users only see, hear and read content that aligns with their own ideology.
  • 97% of online readers have a diverse news intake and people who read the most fake news also tend to read more news in general, says the report.
  • The authors recommend that platforms target the households that are most susceptible to falling into echo chambers, as opposed to forming blanket policies to protect all users from fake news.


People who read fake news online aren’t doomed to fall into a deep echo chamber where the only sound they hear is their own ideology, according to a revealing new study from Wharton.

Surprisingly, readers who regularly browse fake news stories served up by social media algorithms are more likely to diversify their news diet by seeking out mainstream sources. These well-rounded news junkies make up more than 97% of online readers, compared with the scant 2.8% who consume online fake news exclusively.

“We find that these echo chambers that people worry about are very shallow. This idea that the internet is creating an echo chamber is just not holding out to be true,” said Senthil Veeraraghavan, a Wharton professor of operations, information and decisions.

Veeraraghavan is co-author of the paper, “Does Fake News Create Echo Chambers?” It was also written by Ken Moon, Wharton professor of operations, information and decisions, and Jiding Zhang, an assistant operations management professor at New York University Shanghai who earned her doctorate at Wharton.

The study, which examined the browsing activity of nearly 31,000 households during 2017, offers empirical evidence that runs counter to popular beliefs about echo chambers. While echo chambers certainly are dark and dangerous places, they aren’t metaphorical black holes that suck in every person who reads an article about, say, Obama birther conspiracies or COVID-19 vaccine conspiracies. The study found that households exposed to fake news actually increase their exposure to mainstream news by 9.1%.

“We were surprised, although we were very aware going in that there was much that we did not know,” Moon said. “One thing we wanted to see is how much fake news is out there. How do we figure out what’s fake and what’s not, and who is producing the fake news and why? The economic structure of that matters from a business perspective.”

The professors found that relatively few sites account for most of the fake news, so rather than rating individual articles as true or false based on content, they parsed the data by source. Among the news sources in the study identified as purveyors of fake information, including Occupy Democrats and The Federalist Papers, about 1 article in 1,000 was fact-checked and determined to be false. By comparison, mainstream sites, including The New York Times and Bloomberg, published incorrect information in only 3 out of every 100,000 articles.

The data yielded few demographic differences between households browsing predominantly mainstream news and the 10% identified as “avid readers” of fake news, meaning they spent more time than average browsing fake news sources while still consuming mainstream news. Avid fake news readers tend to be slightly older, live in smaller households, and are less likely to have children, the paper stated. “Contrary to some popular beliefs, they are neither poor nor less educated. In fact, they average slightly higher education levels.”

Moon and Veeraraghavan said these demographic similarities show the hazard in stereotyping people who read fake news. There’s no singular profile; virtually everyone is at least a casual reader of information that’s dubious or downright wrong.

“One interesting thing in the data was that the outliers, the people who read the most fake news, also tend to read more news in general,” Moon said. “These news-loving consumers seek all the information that’s out there, so they consume a healthy amount of fake news. But if you’re looking for folks who read only fake news, they are actually hard to find.”

“People are complicated,” Veeraraghavan added. “I think the straw man that has been built about who the fake news consumers are doesn’t quite add up. That’s one thing the paper tries to address: Who are these consumers?”

Locking readers out of echo chambers

The professors make a specific recommendation on how platforms such as Facebook can better moderate fake news content: Rather than having blanket policies designed to protect all users from fake news, target the tiny percentage of households that are most susceptible to falling into echo chambers.

The recommendation comes from a pattern they found in the data after August 2017, when Facebook began flagging questionable content to discourage users from sharing it. Pages that repeatedly shared false information were also banned from advertising on the platform, giving them a strong incentive to stop the viral spread of fake news.

The professors analyzed household news consumption before and after the policy among both Facebook and non-Facebook users. Before the policy, Facebook and non-Facebook users alike browsed real and fake online news at roughly the same rate. After the policy, Facebook users consumed less fake news, which was the policy intention, but they also consumed significantly less mainstream news compared with those off the platform.

Blanket policies are expensive and inefficient for social media companies, and they have an unintended consequence of choking off legitimate news access, the professors argue in their paper. Instead, Facebook and other platforms ought to use their vast consumer data to zero in on the most vulnerable users with “harm-based interventions” that would stop those individuals from accessing fake news sources.

Moon and Veeraraghavan concede that the recommendation may not solve all the problems associated with fake news, but it’s a suggestion that companies and policymakers should consider in the complicated efforts to combat the problem.

“This recommendation comes with the caveat that we should implement it carefully,” Moon said. “There is always a question of what’s ethical. Should we really censor content for one particular group of people? But if you understand that the problem boils down to the vulnerability of just a few, what safeguards can there be? It suggests a more informative way to look at solutions that are ethical or palatable, and to evaluate their efficacy for that vulnerable group.”

Veeraraghavan pointed out that fake news will never be eradicated; it has been around since the dawn of storytelling. From ancient times to yellow journalism, from grocery store tabloids to deepfakes on the internet, history is rife with examples. The goal, he said, is to find ways to make it less influential and less dangerous.

“Fake news is always going to be there,” he said. “You’re not going to eliminate fake news or make people uninterested in it, so we have to understand how fake news is consumed rather than judging people for consuming it.”

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.

© 2024 World Economic Forum