Why we need a moderate approach to moderating online content

Making platforms entirely liable for what's published on their channels won't work - what's the alternative? Image: REUTERS/Regis Duvignau

Harold Feld
Senior vice president, Public Knowledge

The debate over what liability platforms have for content is divided into two diametrically opposed camps. One camp argues that we can only address the very real problems of fraud, harassment, incitement to violence, radicalization and political manipulation by making platforms strictly liable for the content that appears on their services; the other says that appointing platforms as police creates an unaccountable class of powerful gatekeepers and eliminates the very thing that has made social media so empowering for civic engagement and democratic movements.

Each side’s argument has substance and a host of examples. The platform liability camp points to Russian manipulation of elections, the explosion of “fake news” and the use of Facebook by the perpetrators of the Rohingya genocide. Opponents of platform liability point to the critical role of social media in popular democratic uprisings, such as the Arab Spring and the Black Lives Matter movement; the damage that “false positives”, encouraged by over-inclusive and poorly designed algorithms, do to legitimate (even necessary) controversial speech; and the ineffectiveness of trying to take down harmful content.

It is perhaps emblematic of how social media has trained us to think in extremes that the only options under discussion are strict liability regimes (generally with “notice and take down” provisions) or a completely hands-off approach that argues for total deregulation and immunity for any third-party content.

The problems with current intermediary efforts

One of the problems with developing a balanced and effective approach is a lack of real data. Platforms may issue “transparency reports” (either voluntarily or as required by law) but these focus exclusively on quantitative metrics that tell us little about the effectiveness or impact of the steps they’ve taken.

For example, the recently released first transparency report under the German Network Enforcement Law (NetzDG) provides a table of how many complaints each reporting platform received and how many resulted in takedowns. This could indicate that platforms are not being over-inclusive, as one expert has suggested, or, when the report is combined with other data, that legal content has been blocked, as Reporters Without Borders argues. Either way, it tells us nothing about impact.

By prioritizing quantitative rather than qualitative metrics, these reports focus on the wrong question entirely. They should be asking: does more aggressive content moderation actually mitigate the very real social problems we seek to address, and if so, at what cost?

Existing evidence suggests that platforms are simply not very good at moderating content. For example, we have known for years that mandatory platform liability for third-party material that infringes copyright leads to significant false positives and takedowns of legitimate content, with apparently little impact on overall online infringement. The most successful copyright content moderation system, Google’s Content ID, has cost Google more than $60 million to develop and maintain while satisfying nobody.

Deciding whether third-party content infringes another’s copyright is the easiest case for international content moderation by platforms. Copyright law is reasonably (but not completely) consistent around the globe, and enforcement involves comparing one work with another and determining a match. If the comparatively modest variations between countries in exceptions and limitations, or in the application of infringement standards, confound expensive and sophisticated systems such as Content ID, it is unreasonable to expect that platforms and algorithms can handle the far more difficult and sensitive issues of moderating hate speech, incitement to violence and indecency.
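To make that gap concrete, here is a minimal, hypothetical Python sketch – emphatically not Content ID’s actual design; the fingerprint sets, the threshold and the function names are invented for illustration. It shows why determining a match is the mechanically tractable part of the problem, while the legal judgment the match is supposed to settle is not.

```python
# A schematic sketch, not Content ID's actual implementation: the fingerprint
# sets, the 0.8 threshold and these function names are hypothetical.

def similarity(upload_segments: set[str], reference_segments: set[str]) -> float:
    """Mechanical overlap (Jaccard index) between hashed content segments."""
    if not upload_segments or not reference_segments:
        return 0.0
    shared = upload_segments & reference_segments
    combined = upload_segments | reference_segments
    return len(shared) / len(combined)

def flag_upload(upload_segments: set[str],
                reference_segments: set[str],
                threshold: float = 0.8) -> bool:
    # The system can answer "is this the same work?" reasonably well...
    is_match = similarity(upload_segments, reference_segments) >= threshold
    # ...but it cannot answer "is this use lawful?": fair use, quotation,
    # parody and other exceptions vary by country and turn on context
    # (purpose, amount used, market effect) that no fingerprint encodes.
    # Treating the two questions as one is where false positives come from.
    return is_match
```

A lawful critical review that quotes a work and a wholesale pirated copy can produce the same overlap score; that is precisely the variation in exceptions and limitations that confounds even sophisticated systems.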

Anecdotal evidence suggests platforms handle this poorly, with significant potential costs to both the speakers and the broader community. Some of the examples seem trivial at first but have significant implications. Take Facebook’s new political advertising ID system, which has blocked advertising by ordinary businesses with names that include “Bush” or “Clinton”. This is not just an inconvenience but a very real concern for innocent businesses that find themselves unable to advertise on the world’s largest social media platform.
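The mechanics of such collateral blocking are easy to see. The sketch below is a hypothetical illustration only, not Facebook’s actual screening logic: a naive blocklist of politician surnames, applied as a substring match, sweeps in any advertiser whose name happens to contain one of them.

```python
# Hypothetical illustration, not Facebook's actual political-ad screening:
# a crude surname blocklist applied as a substring match.

POLITICAL_SURNAMES = {"bush", "clinton"}  # assumed blocklist for illustration

def looks_political(advertiser_name: str) -> bool:
    """Flag an advertiser whose name contains a listed surname."""
    name = advertiser_name.lower()
    return any(surname in name for surname in POLITICAL_SURNAMES)

# Each of these ordinary businesses would be barred from advertising:
for business in ("Bush's Garden Centre", "Clinton Street Bakery", "Bushwick Plumbing"):
    print(business, "->", "blocked" if looks_political(business) else "allowed")
```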

Meanwhile, Facebook’s accidental blocking of newspapers promoting their political reporting (while failing to block many political advertisements) is not merely an inconvenience or a matter of lost income for newspapers: it deprives the public of news reporting on pressing political issues. The inability to promote an important story in the days before an election can have very real and deleterious consequences that are simply not captured in a transparency report focused on metrics.

More troubling is the distressing tendency of content moderation efforts to affect vulnerable or traditionally marginalized communities in unintended ways. Facebook’s political ID system has effectively prohibited undocumented immigrants in the US from buying political advertisements. Citing its community standards, Facebook has previously and accidentally taken down notices of civil rights protests. If the objective is to prevent violence against vulnerable communities and individuals, then the accidental suppression of efforts to report on and organize against such violence needs to be treated seriously when evaluating the effectiveness of content moderation.

The push to make online platforms responsible for content moderation has a corrosive effect on society as a whole given the opaque and apparently arbitrary manner in which unaccountable giant corporations carry out their policing. In the US, President Trump has repeatedly accused Google Search and other social media platforms of bias. Congressional Republicans and conservative activists have repeatedly made similar accusations. At the same time, progressives and communities of colour have argued that social media platforms consistently discriminate against their content.

This further erodes our confidence in news reporting, undermines the utility of social media platforms for civic discourse in the public sphere, and reinforces social divisions between rival political camps, each convinced that the platforms give an unfair advantage to the other side. Even more worryingly, it encourages platforms to respond to threats of greater regulation from those in power by more actively promoting content designed to curry favour with those politicians.

We need to explore other options

In her recently republished book, The Internet of Garbage, Sarah Jeong (herself a victim of coordinated online harassment) argues that the early internet’s emphasis on detection and deletion has narrowly channelled all efforts to protect vulnerable users into those two options. But, as Jeong explains, “any harassment cycle that focuses on detection and deletion is bound to spin in circles.” This does not, however, mean we must stand down and look on helplessly. “Deletion should be thought of [as] one tool in the toolbox, not the end goal,” Jeong concludes. “Because deletion isn’t victory, or freedom or liberation from fear. It’s just deletion.”

Every new means of electronic mass communication has required a rethink of how to prevent the tool from being used by hatemongers, demagogues and criminals. Countries have developed laws to prevent people from using the telephone to conduct fraud or threaten individuals, for example. These laws do not focus on simple detection and deletion, nor do they abdicate responsibility entirely to the phone company or broadcaster. At the same time, they are designed to be sensitive to concerns about free speech and due process. Generally, these solutions involve criminalizing certain uses of the new technology, some form of cooperation and reporting requirement on the broadcaster or network operator, some form of judicial due process, and exceptions for cases of imminent harm.

These solutions aren’t perfect, but they represent a pragmatic and workable balance struck by society between arbitrary private gatekeepers and helplessness. The time has come for countries to seriously explore a similarly pragmatic and workable balance for social media. No one expects that the solutions for regulating past communications technologies are sufficient in and of themselves, but we should begin by studying them.

In short, the time has come to move away from the two extremes of telling platforms to “take down” or telling governments to “stand down”. Stakeholders must come together and recognize two fundamental principles: that the status quo is intolerably dangerous to individuals and to democratic norms; and that platforms cannot somehow magically make bad content go away without imposing very real costs on individuals and society as a whole. Only when governments and stakeholders have accepted this reality can the real work of effectively addressing dangerous online content begin.
