When false information goes viral

Farida Vis

Every new communications technology experiences a phase when people make assumptions about its powers and problems, so it’s important to remember that the spread of misinformation is not a uniquely digital issue. You only have to look at Orson Welles’s War of the Worlds – when it was first broadcast on the radio in 1938, people fled their homes believing the Earth was being invaded by aliens.

Any online information is part of a larger and more complex ecology, with many interconnected factors. It’s therefore very difficult to fully map the processes involved in the rapid spread of misinformation, or to identify where this information originates. Moreover, we should endeavour to look beyond the specific medium and consider the political-cultural setting in which misinformation spreads and is interpreted.

During the 2011 summer riots in the UK, for example, a rumour spread on Twitter that a children’s hospital had been attacked by looters. The story fitted with people’s preconceptions of who the rioters were and what they might be capable of, and it caught the public’s imagination. But interestingly, it was the Twitter community that swiftly debunked the rumour, killing it off well ahead of official confirmation from the hospital and media.

Misinformation of a different kind occurred in the US during the December 2012 Newtown shootings and the April 2013 Boston bombings. In the Newtown case, online and mainstream media misidentified a Facebook page as that of the shooter. After the Boston bombings, various social media users engaged in online detective work, examining images taken at the scene and wrongly claiming that a missing student was one of the bombers. But in this case, mainstream media outlets also played a part in perpetuating and validating the misinformation by publishing images of the wrong suspects.

In another recent example, again at the intersection between social and mainstream media, hoaxes emerged during the Turkish protests that began in response to plans to redevelop Taksim Square. Twitter “provocateurs” were condemned as responsible for spreading misinformation, including a photograph of crowds at the Eurasia marathon, which was presented as “A march from the Bosphorus Bridge to Taksim”. But blaming Twitter ignores the context; the country’s mainstream news media had been slow to respond to the protests, creating a vacuum in which misinformation spread easily, especially when referenced by foreign media outlets.

It can also be difficult to establish what “fake” actually means. One popular image shared during Hurricane Sandy in 2012 showed soldiers standing guard at the Tomb of the Unknown Soldier in Arlington Cemetery, braving the approaching storm. Unlike the pictures of the marathon on the Bosphorus Bridge, the framing of the image did not place a radically different meaning on its subject, but it also didn’t show what people thought they were looking at. The image had been taken during an earlier storm and was undoubtedly real, but it had no relevance to Hurricane Sandy.

It’s now common practice for news organizations to source images online, so we must get better at understanding how these images can be verified. Storyful, which describes itself as “the first news agency of the social media age”, is developing invaluable guidelines and techniques that can help with this essential verification process. An appreciation of the ways in which media influence each other, as well as broader cultural and social issues, may help us understand the content of such images.
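
One concrete check that newsrooms commonly apply is inspecting the metadata that cameras embed in image files. The sketch below, in Python, is an illustration of that idea rather than anything drawn from Storyful’s own guidelines; the Pillow library, the file name and the reliance on EXIF data are my assumptions. Metadata can be missing or edited, so a check like this supports, rather than replaces, human judgement, but a capture date that predates the event an image supposedly shows, as with the Hurricane Sandy photograph above, is a strong warning sign.

```python
# A minimal sketch of one automated check on a sourced photograph:
# read the capture date recorded in its EXIF metadata.
# Assumes the Pillow library (pip install Pillow); the file name is hypothetical.
from PIL import ExifTags, Image

def capture_date(path):
    """Return the EXIF DateTime string if the image carries one, else None."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        if ExifTags.TAGS.get(tag_id) == "DateTime":
            return value
    return None

date = capture_date("sourced_image.jpg")  # hypothetical file
if date is None:
    print("No capture date in metadata; verify by other means.")
else:
    print(f"Metadata says the photo was taken at {date}")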

It’s also imperative to highlight the volume and rapid dissemination of online misinformation. When you are dealing with social media, you are dealing with Big Data. It’s simply not possible to read the 1 billion tweets produced every two-and-a-half days. To properly understand this data, we need to make use of computer-assisted processing and combine this with human evaluation to put information into context.
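
To make that scale concrete: 1 billion tweets every two-and-a-half days is roughly 400 million a day, or about 4,600 every second, far beyond what any human team could read. The sketch below illustrates what combining computer-assisted processing with human evaluation can look like in its simplest form: machines narrow the stream to a small queue that people then assess in context. The keyword list and tweet structure are illustrative assumptions, not a description of any particular newsroom’s system.

```python
# A minimal sketch of computer-assisted triage: automated filtering narrows
# a firehose of posts to a short queue that human editors then evaluate.
# The keyword list and tweet format are illustrative assumptions.
RUMOUR_KEYWORDS = {"hospital", "attacked", "looters", "explosion"}

def flag_for_review(tweets, keywords=RUMOUR_KEYWORDS):
    """Yield only the tweets that mention any watched keyword."""
    for tweet in tweets:
        words = set(tweet["text"].lower().split())
        if words & keywords:
            yield tweet

sample_stream = [
    {"user": "a", "text": "Lovely weather in Sheffield today"},
    {"user": "b", "text": "Hearing the children's hospital was attacked by looters"},
]

for tweet in flag_for_review(sample_stream):
    # In practice this queue would go to human editors for verification in context.
    print(f"Review: @{tweet['user']}: {tweet['text']}")
```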

Finally, we should remember that every case of misinformation is unique and should be considered independently, paying attention to the complexities of the ecosystem within which it circulates. In terms of interpreting misinformation, human evaluation will remain essential to put information into context, and context is ultimately what this is all about.

This is an extract from the Outlook on the Global Agenda 2014, published this week.

Read a blog on the top 10 trends facing the world in 2014.

Author: Farida Vis is a Research Fellow in Social Sciences at the University of Sheffield, United Kingdom, and Member of the Global Agenda Council on Social Media.

Image: People surf the web at an Internet cafe. REUTERS/Stringer
