Emerging Technologies

This is how AI can help identify biases in news media


Artificial intelligence can help identify biases in news reporting. Image: Unsplash/tetrakiss

Shirley Cardenas
Researcher and Writer, McGill University
  • Artificial intelligence can help identify biases in news reporting, according to researchers.
  • A study compared simulated news coverage of COVID-19 with the actual reporting and found a marked difference.
  • Researchers say the approach opens new avenues of study in which AI could be used, for instance, to model US Supreme Court decision-making.

Artificial intelligence can help identify biases in news reporting that we wouldn’t otherwise see, researchers report.

For a new study, researchers got a computer program to generate news coverage of COVID-19 using headlines from Canadian Broadcasting Corporation (CBC) articles as prompts. They then compared the simulated news coverage to the actual reporting at the time.

The findings show that CBC coverage was less focused on the medical emergency and more positively focused on personalities and geo-politics.
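The comparison at the heart of the study can be illustrated with a toy sketch. The term lexicons, corpora, and scoring below are hypothetical stand-ins, not the researchers' actual method; the idea is simply to quantify how much each body of text dwells on medical versus person-centered vocabulary.

```python
from collections import Counter
import re

# Hypothetical term lexicons -- illustrative only, not the study's actual lists.
MEDICAL_TERMS = {"virus", "infection", "vaccine", "symptoms", "hospital", "cases"}
PERSON_TERMS = {"minister", "premier", "official", "spokesperson", "leader"}

def focus_scores(articles):
    """Return the share of words in a corpus that fall into each lexicon."""
    words = Counter()
    for text in articles:
        words.update(re.findall(r"[a-z]+", text.lower()))
    total = sum(words.values()) or 1
    return {
        "medical": sum(words[w] for w in MEDICAL_TERMS) / total,
        "person": sum(words[w] for w in PERSON_TERMS) / total,
    }

# Toy corpora standing in for AI-generated vs. actual coverage.
simulated = ["The virus spread as hospital cases rose and vaccine trials began."]
actual = ["The premier and a spokesperson praised the minister's leadership."]

print(focus_scores(simulated))  # "medical" share dominates
print(focus_scores(actual))     # "person" share dominates
```

A real analysis would use far larger corpora and more sophisticated topic or entity models, but the same contrast between two word-share profiles captures the disease-centered versus person-centered gap the researchers describe.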

“Reporting on real-world events requires complex choices, including decisions about which events and players take center stage,” says Andrew Piper, professor of languages, literatures, and cultures at McGill University. “By comparing what was reported with what could have been reported, our study provides perspective on the editorial choices made by news agencies.”

Evaluating these alternatives is critical given the close relationship between media framing, public opinion, and government policy, according to the researchers.


“The AI saw COVID-19 primarily as a health emergency and interpreted the events in more bio-medical terms, whereas the CBC coverage tended to focus on person- rather than disease-centered reporting.

“The CBC coverage was also more positive than expected given that it was a major health crisis—producing a sort of ‘rally round the flag’ effect. This positivity works to downplay public fear,” Piper says.
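The positivity Piper describes can be measured in many ways; one simple approach is lexicon-based scoring. The word lists and scoring rule below are hypothetical illustrations, not the study's actual sentiment model.

```python
import re

# Hypothetical sentiment lexicons -- illustrative only.
POSITIVE = {"hope", "recovery", "support", "together", "praise", "resilient"}
NEGATIVE = {"death", "fear", "crisis", "outbreak", "lockdown", "panic"}

def positivity(text):
    """Score in [-1, 1]: (positive - negative) / total matched sentiment words."""
    words = re.findall(r"[a-z]+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    matched = pos + neg
    return 0.0 if matched == 0 else (pos - neg) / matched

print(positivity("Communities came together in hope and support."))  # 1.0
print(positivity("Fear of the outbreak grew during lockdown."))      # -1.0
```

Comparing the average score of actual coverage against the AI-generated baseline would show whether the human reporting skews more positive than the events alone would suggest.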

While a lot of studies seek to understand the biases inherent in AI, there’s also an opportunity to harness it as a tool to reveal the biases of human expression, say the researchers. “The goal is to help us see things we might otherwise miss,” Piper says.

“We’re not suggesting that the AI itself is unbiased. But rather than eliminating bias, as many researchers try to do, we want to understand how and why the bias comes to be,” says Sil Hamilton, a research assistant and student working under Piper’s supervision.

For the researchers, this work is just the tip of the iceberg, opening new avenues of study in which AI can be used not only to examine past human behavior but also to anticipate future actions, for example by forecasting potential political or judicial outcomes.

Hamilton is currently leading a team working on a project using AI to model US Supreme Court decision-making.

“Given past judicial behavior, how might justices respond to future pivotal cases or older cases that are being re-litigated? We hope new developments in AI can help,” he says.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.

© 2024 World Economic Forum