Here's how to boost digital safety now and in the future

The invasion of Ukraine has resulted in numerous digital safety challenges. Image: Unsplash / @markusspiske

Farah Lalani
Global Vice President, Trust and Safety Policy, Teleperformance
Cathy Li
Head, AI, Data and Metaverse; Member of the Executive Committee, World Economic Forum Geneva

This article is part of: World Economic Forum Annual Meeting

  • The invasion of Ukraine has created and amplified numerous online safety issues.
  • Major digital platforms have been changing their policies and practices to address these issues, which include the exploitation of people, mis- and disinformation, and the ongoing rise of extremism globally.
  • Taking these issues into account, new regulations aim to address platform accountability, transparency and safety of users online.

The digital safety challenges that have grown out of the invasion of Ukraine are numerous, including a rise in human trafficking and exploitation, a proliferation of mis- and disinformation, and an influx of violent extremist and terrorist activity.

Human trafficking and exploitation

An unprecedented number of people have been fleeing armed violence in Ukraine and heading west. Bad actors are using this as an opening to target women and children for trafficking and exploitation. Global search traffic for “Ukrainian porn” has increased 600% since the start of the humanitarian crisis, according to data from Thomson Reuters Special Services, while searches for “Ukrainian escorts” increased 200%. Such demand provides a financial motive for traffickers to recruit and exploit Ukrainian women at scale, at a time when the trafficking business model has largely shifted online, according to the Office of the OSCE Special Representative and Co-ordinator for Combating Trafficking in Human Beings (OSR/CTHB).

Mis- and disinformation

False information about the reasons for the invasion of Ukraine – such as claims that it aims to “denazify” a “failed state” – has been spreading. Much of the misinformation about the war in Ukraine has been generated and spread as images and videos, as opposed to the text-based content that predominated during the COVID-19 pandemic. Moderating visual content is much more challenging because moderators and automated systems need additional time to evaluate it. Visual content is also less reliant on a shared language and can provoke a stronger emotional response than text, making it easier and more likely to be shared across regions.

Digital safety amidst extremism, hate, and violence

Even before the war in Ukraine, there was a documented rise in white supremacist violence globally, in tandem with growth in online extremist communities. The invasion has created a complex set of implications for white supremacist and extremist groups around the world.

Proactive trust and safety start-up ActiveFence has found that neo-Nazi and white supremacist extremists around the world have been pushing their agendas under the guise of supporting Ukraine. Counter-terrorism experts have noted that a key component of the Kremlin’s campaign to exploit fissures in the West is the use of transnational white supremacists to promote racially and ethnically motivated violent extremism.

Unfortunately, digital channels have made it easier to recruit, fundraise and drive activities tied to violent extremist and terrorist causes, especially through alt-tech platforms. A report by the RAND Corporation describes how the growth of digital channels has coincided with an increase in far-right extremist activity online. Recent events show that publicising attacks and exploiting social media have become the new normal for terrorism.

The response of digital platforms

Major digital platforms have taken an overarching stance against the invasion and have been changing policies and practices to counter Russian disinformation and state media. These platforms have generally rallied to protect Ukrainian users and solidify their ties to democratic governments, rather than remain neutral. Human Rights Watch and others are closely monitoring social media companies’ responses, including actions that go beyond blocking Russian state-affiliated and state-sponsored media.

These decisions set precedents for the future and carry implications for past and present conflicts alike. They also highlight key questions that must be answered:

1. How should broader wartime protocols and practices be developed to reduce bias in decisions that platforms take upon themselves?

Experts suggest that wartime processes and protocols – loosely akin to those developed after the 2019 Christchurch massacre in New Zealand – could be applied flexibly and contextually when fighting breaks out, helping to ensure consistent responses in wartime.

When removing or otherwise moderating content in a conflict, platforms should act on an established set of practices, both to avoid bias and arbitrary decisions and to drive a proactive, systematic approach to platform governance.

2. How should metrics and expectations around content moderation speed, accuracy and other quality measures be adjusted during crisis periods?

Getting digital safety decisions right is even more critical during significant conflicts, yet frequent policy changes and large volumes of content make it harder for content moderators and automated systems to maintain accurate enforcement. Currently, there are no enforceable standards around “sufficient” remedies, response times and other key metrics for how platforms moderate content and conduct on their sites.

3. How should violent content be archived and saved by platforms for evidentiary purposes?

Content that depicts violence, along with other media that violates platform rules, is rightly being removed by companies under their stated policies. But how should such content be archived, given its potential evidentiary value in future prosecutions of war crimes?

Research by Human Rights Watch, as well as by the Berkeley Human Rights Center, Mnemonic, and WITNESS, has shown that potential evidence of serious crimes is disappearing, sometimes without anyone’s knowledge. Consistent practices and rules for archiving and otherwise storing such content should be established across major digital platforms to ensure it remains accessible if needed in the future.

4. How should policy and other digital safety decisions be shared for transparency, and what accountability mechanisms should be in place to ensure consistency across conflict regions and languages?

While many platforms have justifiably taken a strong approach to the Russian invasion, there are currently no enforceable rules or guidelines for transparency covering such decisions. It has also been reported that the language capacities of moderation teams employed by many platforms are inadequate to address the proliferation of harmful content in non-English-speaking regions.

Sufficient language coverage, and the transparency metrics needed to assess whether moderation is adequate and equal across languages and regions, are not currently available, making platform accountability difficult. This issue becomes even more consequential in times of war.

Forthcoming regulations – including the Digital Services Act in Europe, the Online Safety Bill in the UK and Australia’s Basic Online Safety Expectations – aim to improve platform accountability, transparency and the safety of users online in ways that address some of these areas. Initiatives like the Global Coalition for Digital Safety have also brought together public- and private-sector leaders to discuss cross-jurisdictional approaches to digital safety that are systematic, consistent and future-proof.

While the long-term implications of the war in Ukraine are yet to be seen, it is clear the impact of actions taken by major digital platforms to protect users online will reverberate for years to come.
