Global Cooperation

Tech companies, the media and regulators must come together to prevent online harm

Online harm can take various forms, including images of violence, terrorism or child sexual abuse material. It must be prevented.

Image: Kacper Pempel/Reuters

Noam Schwartz
CEO & Co-Founder, ActiveFence

This article is part of: World Economic Forum Annual Meeting

  • To protect internet users, particularly children, the media, tech companies and regulators must cooperate to develop an effective solution.
  • Online harm is currently handled almost exclusively by technology companies, so they have much to teach regulators about the reality of confronting it.
  • Only through effective cooperation can online harm be prevented and the internet made a safer place for all.

Recent years have brought about an increase in how seriously individuals, governments and the media view trust and safety.

In fact, if Google Trends is any indicator, public interest in content moderation, the core function of trust and safety, has risen twenty-five-fold over the last 10 years.

But in this field, there are too many opinions and too little cooperation.

Public interest in content moderation over the past decade. Image: Google Trends

Big tech, media and regulators: a three-way stand-off

Until recently, the US’s Section 230 was the “law of the land” for online safety. Enacted in the mid-1990s, the law limited the liability of technology companies for the content hosted on their platforms.

More than 25 years later, technology platforms have been used to share child sexual abuse material (CSAM), make calls for violence, spread hate speech, disseminate disinformation that damages the fabric of our societies and live-broadcast terror attacks and beheadings. While not always legally liable, they have been handling complex societal issues with little guidance from legislative bodies, facing significant scrutiny as they do so.

The lack of legislation and cooperation has led to a growing perception that technology platforms, governments and the media sit on opposing ends of the harmful content debate. Platforms are accused of limiting free speech by some, and of profiting from the proliferation of online harm by others. Legislators are perceived as overbearing by some and overextending their reach by others, while the media is seen as stirring the pot and driving public scrutiny.

However, new laws that aim to provide specific guidelines on online safety have been introduced. The EU’s Digital Services Act and the UK’s Online Safety Bill aim to make online interactions safer, but they still fail to take into account the unique perspective and expertise that technology platforms have gained over the years. This may result in a missed opportunity for a holistic solution to online safety.

Collaboration is key to preventing online harm

A collaborative approach is possible and, in fact, essential. Take the UK’s Age Appropriate Design Code. Launched in September 2021, the code involved an iterative process, during which its enforcer, the Information Commissioner’s Office, issued guidance and clarifications based on direct communication with dozens of technology platforms.

Moreover, we have seen some constructive collaborations involving civil groups and both technology platforms and government bodies. Groups like the Family Online Safety Institute and the National Center for Missing and Exploited Children act as mediators between tech companies and the government on issues related to child safety. The 5Rights Foundation has supported British regulators in building out child safety codes like the Age Appropriate Design Code.

This collaborative approach can be applied to content moderation. Take the apparently simple directive that when harmful content is detected on a platform, action should be taken. When these previously voluntary actions are made law, it is important to understand the details and limitations — something which can be achieved by tapping into the wealth of knowledge that technology platforms have acquired over the years.

What is harmful content?

How does one decide what counts as harmful? Is it only graphic CSAM, or do textual descriptions of harm against children also count? What about disinformation? Where does one draw the line between harmless lies and dangerous narratives that can harm public health?

What constitutes ethical detection?

When is it enough to wait for content to reach moderators via flagging, and when are more proactive measures needed? If links shared on a platform lead users to harmful content, should these be detected too, or should detection only cover direct, on-platform violations? What limitations of privacy and encryption may challenge proactive detection? Who teaches artificial intelligence algorithms what to detect, and how do those algorithms understand the context of a knife in a cooking show versus one in an attack? And in human-based detection by content moderators, how does one balance the need for safe platforms with moderator wellbeing?
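
To make these trade-offs concrete, the following is a minimal, hypothetical sketch, in Python, of how a platform might combine reactive (user-flagged) and proactive (classifier-driven) detection with escalation to human review. Every name and threshold here is invented for illustration rather than drawn from any real platform’s system, and a production pipeline would also have to respect the privacy, encryption and moderator-wellbeing constraints raised above.

# A minimal, illustrative sketch of a hybrid detection pipeline.
# All names and thresholds are hypothetical; they do not describe any real platform.
from dataclasses import dataclass


@dataclass
class Item:
    content: str
    flagged_by_users: bool = False   # reactive signal: a user reported this item
    is_external_link: bool = False   # the item points to off-platform content


def score_harm(item: Item) -> float:
    """Placeholder for a machine-learning classifier returning a harm probability in [0, 1]."""
    return 0.0  # a real model would inspect text, media and surrounding context here


def route(item: Item, auto_threshold: float = 0.95, review_threshold: float = 0.6) -> str:
    score = score_harm(item)
    if score >= auto_threshold:
        return "auto_action"    # high confidence: act proactively, without waiting for a flag
    # User reports, off-platform links and borderline scores all go to human moderators,
    # whose workload and wellbeing then become part of the policy question.
    if item.flagged_by_users or item.is_external_link or score >= review_threshold:
        return "human_review"
    return "no_action"


print(route(Item(content="http://example.com/shared-link", is_external_link=True)))  # human_review

The point of the sketch is only that thresholds, escalation rules and the treatment of off-platform links are explicit policy choices, which is exactly where regulators and platforms need a shared vocabulary.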

What actions should be taken?

How do platforms decide when to remove content, label it or remove the user entirely? How do they handle questions of freedom of speech versus freedom from harm?

Considerations should also include the cultural aspects of harmful content and the potential legal ramifications of cross-border content sharing. Content that is harmful in one culture may not be in another, and platforms need to weigh the consequences of taking action in one country against inaction in another for a single piece of content.

Enforcement actions are a double-edged sword: platforms are judged for taking action yet denigrated when they don’t.
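
As a purely illustrative example of that tension, here is a short, hypothetical Python sketch of jurisdiction-aware enforcement, in which the same category of content can be removed globally, geo-blocked, labelled or left up depending on where it is viewed. The categories, countries and mappings are invented for illustration and are not legal guidance.

# An illustrative, jurisdiction-aware enforcement policy.
# Categories, countries and mappings are invented for illustration only.
from enum import Enum


class Action(Enum):
    REMOVE_GLOBALLY = "remove_globally"  # taken down everywhere
    GEO_BLOCK = "geo_block"              # hidden only in specific countries
    LABEL = "label"                      # kept up with a warning or context label
    NO_ACTION = "no_action"


# Country-specific rules take precedence over the "*" (global default) rule.
POLICY = {
    ("csam", "*"): Action.REMOVE_GLOBALLY,
    ("graphic_violence", "country_a"): Action.GEO_BLOCK,
    ("graphic_violence", "country_b"): Action.LABEL,
    ("misinformation", "*"): Action.LABEL,
}


def decide(category: str, country: str) -> Action:
    return POLICY.get((category, country), POLICY.get((category, "*"), Action.NO_ACTION))


print(decide("graphic_violence", "country_a"))  # Action.GEO_BLOCK
print(decide("graphic_violence", "country_c"))  # Action.NO_ACTION: no rule defined for this country

Even this toy example shows why a single global rule is rarely enough: choosing the action is as much a legal and cultural judgement as a technical one.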

Only by taking this complete picture into account can legislators make sound decisions that mandate the processes some technology platforms have been carrying out voluntarily for years.

The content moderation balance

Content moderation requires a balance between individual freedoms, the needs of multiple stakeholders, technological constraints and desired outcomes. This can only be achieved through collaboration.

Governments, technology platforms and civil groups can and must work together to:

1. Understand the harms lurking in online spaces. Beyond the obvious threats, bad actors from all abuse areas take advantage of digital spaces to cause harm. Before taking action, a thorough understanding of their motivations, techniques and tools is critical.

2. Analyse the complex challenges. Many challenges come with harmful content detection from technological and ethical perspectives. It is essential to understand what technology companies can actually do against these online harms, and what this means for user privacy.

3. Design new, more comprehensive solutions. The impact of this problem extends far beyond the digital spaces we take part in. With that in mind, action against some of these online harms cannot be left to technology platforms alone.

It is only when technology platforms and regulators approach this problem with an openness to communicate, a desire to truly collaborate and a sense of true partnership that an optimal solution can be reached and the victims of online harm protected.
