Is patience waning with efforts to moderate content on social media?

Some countries are making more concerted efforts to protect social media users from harmful content. Image: REUTERS/Elizabeth Frantz

John Letzing
Digital Editor, World Economic Forum
  • The arrest of Telegram’s founder and CEO has heightened scrutiny of the ways content is currently moderated on social media platforms.
  • It will be ‘a tricky few years ahead’ for balancing stronger safeguards with free-speech demands, an expert says.
  • Several countries are zeroing in on ways to boost related transparency and increase protections.

Shortly after Hamas launched its surprise attack on Israel last October, channels used by the terrorist group on Telegram lit up with images of the ensuing atrocities.

The response? A restriction of those channels that reportedly didn't do much restricting.

Nearly a year later, Telegram’s steady cultivation of a reputation as a free-speech bastion, sometimes to an extreme degree, has hit a glitch with the arrest of its founder and CEO in Paris.

Pavel Durov has been compelled to remain in France, where he’s under investigation for alleged complicity in criminal activity on his popular platform. “I got interviewed by police for 4 days,” he posted on his channel. “No innovator will ever build new tools if they know they can be personally held responsible for potential abuse of those tools.”

The sudden jolt of legal scrutiny raises the question: How are these services normally moderated? That is, how are they expected to be moderated?

As with so many other aspects of our increasingly automated existence, humans still play a central role. Office cubicles filled with contractors staring at screens and making snap decisions on content may have given way to artificial intelligence and machine learning, but frequently overworked and underappreciated people remain an essential last line of defense.

Patience with the current way of doing things is waning, however, according to Institute for Strategic Dialogue CEO Sasha Havlicek. “You have consistent, systemic failures in content moderation across the platforms,” she said.

Content moderators do not have an easy job. Image: World Economic Forum

Content moderation has been called the hardest problem on the internet for a reason.

In many ways it’s a neglected backstop for businesses designed to draw in big audiences and extract the strongest possible reactions from them in the process. Moderators may be relied on to instantly referee right-to-expression issues that philosophers have pondered for centuries.

Of course, some calls are easier than others.

There’s the “sharp tip” of the problem, as Havlicek calls it, in the form of clearly illegal and harmful activity, which shades into an “awful but lawful” grey area that’s more difficult to police. Still, in much of the world, particularly the non-English-speaking parts, even the sharp tip frequently gets a pass, Havlicek said.

“You now have an interesting situation where you’ve got the Five Eyes and the EU, minus the US, essentially moving towards regulation,” she said, referring to the five anglophone countries (Australia, Canada, New Zealand, the UK, and the US) that coordinate intelligence gathering.

Yet, “there’s a danger in over-stoking” free-speech absolutists who already equate efforts to regulate social media with repressive censorship, Havlicek warned. “So it’s going to be a tricky few years ahead.”

One readily addressable issue seems to be a dearth of skilled content moderators.

Disclosures made by the biggest platforms to comply with the EU’s Digital Services Act (nascent but potentially “game-changing,” according to Havlicek) include tallies of moderators fluent in local languages. X's most recent report, for example, shows 1,535 moderators listed as fluent in English, 29 in Spanish, two in Italian, and one in Polish.

“Even 20 people per country feels a little bit slim,” Havlicek said. “These are the best-served markets in terms of content moderation that exist, so you can only imagine what that means elsewhere.”

A prime candidate for automation?

The psychological toll often paid by content moderators is now widely understood. It’s even provided the basis of a Broadway play – the protagonist of JOB is a troubled woman who “must eliminate some of the most incomprehensibly egregious content from the internet,” according to Playbill.

“People doing this work now are not getting the pay or protections needed,” Havlicek said.

Moderators have filed related lawsuits in multiple countries. Eye-opening accounts in news reports have been published for at least a decade. Havlicek’s organization, which works with governments and platforms to chart a safer and more stable way forward, has its own team performing related research; strict rules are in place governing the hours anyone can spend on that research, and counselling is mandatory.

All of that seems to make content moderation a prime candidate for automation. But skilled people are still necessary for more complex matters – judgements on the use of irony, for example, or scrutiny of in-depth takes on a particular political situation.

Bigger picture, Havlicek said, it’s not just about removing or labelling problematic pieces of content. It’s really about curation systems. That is, how an environment can be distorted by algorithms designed to keep users engaged and ad revenue flowing, pushing people into more “extreme spaces.”
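
For illustration only, here is a minimal, hypothetical sketch in Python of the dynamic Havlicek describes. It is not any platform’s actual algorithm, and all of the names (Post, predicted_engagement, outrage_score, rank_feed, rank_feed_with_guardrail) are invented for this example. A ranker that simply maximizes predicted engagement tends to surface the most provocative posts, while a curation-level guardrail changes what gets amplified without removing or labelling anything.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # hypothetical output of an engagement-prediction model
    outrage_score: float         # hypothetical proxy for how provocative a post is

def rank_feed(posts: list[Post]) -> list[Post]:
    # Pure engagement maximization: the most provocative posts usually rise to the top.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def rank_feed_with_guardrail(posts: list[Post], penalty: float = 1.0) -> list[Post]:
    # A curation-level intervention: down-weight provocation instead of
    # removing or labelling individual posts.
    return sorted(
        posts,
        key=lambda p: p.predicted_engagement - penalty * p.outrage_score,
        reverse=True,
    )

feed = [
    Post("Local park reopens after renovation", 0.12, 0.05),
    Post("THEY are lying to you about everything", 0.87, 0.93),
    Post("New recipe: lentil soup", 0.08, 0.02),
]

print([p.text for p in rank_feed(feed)])                 # provocative post ranks first
print([p.text for p in rank_feed_with_guardrail(feed)])  # provocative post drops to last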

That’s a particular concern as geopolitical intrigue mounts and elections hang in the balance. What if all it takes to spread disinformation is to simply amplify voices that would exist anyway – as the US Department of Justice alleges one company did in a recent indictment?

By this point, platforms have a lot of practice identifying “state actor information operations,” Havlicek said, though they struggle to keep pace with constantly evolving tactics.

Telegram's CEO has said social media apps are 'easy targets for criticism.' Image: REUTERS/Albert Gea

Havlicek said measures like a fully fledged EU Digital Services Act could be immensely helpful. A key aspect of the legislation is enabling access to platforms’ data for independent researchers, which is meant to increase transparency and allow better assessment of risk.

“We’re going to need to show that something like the DSA can work,” she said.

Only online platforms deemed “very large” have obligations under the act. Services should be accountable for user safety “especially at the point that they’ve got user bases big enough to have impacts on society writ large,” Havlicek said.

Telegram isn't quite there yet, but it’s getting close.

The service has been one of many to argue that when moderation works well, no one notices. “The claims in some media that Telegram is some sort of anarchic paradise are absolutely untrue,” Durov has written on his channel. “We take down millions of harmful posts and channels every day.”

A recent posting for a content-moderator job at Telegram seeks applicants with strong analytical skills and “quick reaction.”

About five months prior to his arrest, Durov had mused that “all large social media apps are easy targets for criticism” of their moderation efforts. Telegram, he pledged, would approach the problem with “efficiency, innovation and respect for privacy and freedom of speech.”

Following the arrest, he wrote that “establishing the right balance between privacy and security is not easy,” though his company would stand by its principles.

The following day, accompanied by a celebratory emoji, he announced that Telegram had reached 10 million paid subscribers.

More reading on content moderation and current threats

For more context, here are links to further reading from the World Economic Forum's Strategic Intelligence platform:

  • Content moderation just isn’t a long-term solution, according to this veteran researcher. Investing more in local journalism, boosting media literacy, and providing better tools to factcheck will all be necessary. (Cornell University)
  • Politicians long complained about how Telegram was run, according to this piece – what’s changed is their response. (Wired)
  • Democracies have to proactively protect themselves from the “hybrid” threats posed by content on social media services with more stringent regulation, according to this analysis. (Australian Strategic Policy Institute)
  • “Since the coup, pro-junta actors have taken advantage of Telegram’s non-restrictive approach to content moderation.” This analysis goes deep on the highs and lows in the history of self-policing at popular services. (Carnegie Endowment for International Peace)
  • “The distinction between proper and improper speech is often obscure.” The US Supreme Court recently heard a case observers hoped would shed light on how far the government can go to press platforms to squelch content. They were disappointed. (EFF)
  • “Subversion through the exploitation of free-flowing information is the weapon of choice short of war.” This piece explores the role of “foreign actors” in the recent, social media-fuelled unrest in the UK. (RUSI)
  • When does free political expression become “radicalization?” This analysis ties France’s current political crisis to the ease with which echo chambers of extreme-right content can appear on social media. (LSE)

On the Strategic Intelligence platform, you can find feeds of expert analysis related to Disinformation, Media, Justice, and hundreds of additional topics. You’ll need to register to view.
