Safer Internet Day 2023 – bolstering the fight to protect children online

Image: Safer Internet Day 2023 highlights the need for greater online safety for children. (Photo by Annie Spratt on Unsplash)

Akash Pugalia
Global President, Media, Entertainment, Gaming and Trust and Safety, Teleperformance
Farah Lalani
Global Vice President, Trust and Safety Policy, Teleperformance

  • With over 175,000 children coming online every day worldwide, a greater number of kids will experience the benefits and risks of navigating the web.
  • There are major gaps when it comes to addressing Child Sexual Abuse Material on digital platforms.
  • On Safer Internet Day 2023 (7 February), we take a look inside the company practices that could improve children's online safety.

With over 175,000 children coming online every day across the world, a greater number of children will experience both the benefits and risks of navigating the web. A study by Common Sense Media found that children as young as eight are using social media more than ever. The rise in internet usage amongst children has been accompanied by a parallel rise in the abuse and exploitation of children online; shockingly, there has been a 360% increase in ‘self-generated’ abusive images since March 2020, according to the Internet Watch Foundation (IWF). With this trend unlikely to be reversed unless further steps are taken by both industry and government, there is an urgent need for accountability, proactiveness and transparency in the practices used to address this abuse.

As part of Australia’s Online Safety Act 2021, online service providers are required to report on how they are implementing the Basic Online Safety Expectations (BOSE) as and when these reports are requested by the eSafety Commissioner. Based on the first set of responses from industry, published by eSafety in December 2022, we now have better knowledge of what steps platforms are taking to protect children from abuse and exploitation on the internet.

On Safer Internet Day, February 7, 2023, we are highlighting some of the major gaps when it comes to addressing Child Sexual Abuse Material (CSAM) on digital platforms and some of the potential solutions for closing these gaps:


Detection of new CSAM

While many companies use hash-matching technologies to detect existing CSAM, fewer providers are detecting new material. Hash-matching tools can only prevent the continued sharing of previously identified and confirmed content. Artificial intelligence (‘classifiers’), on the other hand, can be used to identify content that is likely to depict abuse of a child by looking for key markers. In this way, AI can help prevent the dissemination of CSAM when it is first created, before it has been categorised or logged in any database. The value of this is immense.

Thorn's classifier (which Thorn, a non-profit focused on child protection, reports has a 99% precision rate) is one example of technology that can detect new CSAM. Classifiers such as these can help prioritise cases for human review and verification as part of the overall process to combat such illegal material.
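To make the distinction concrete, the sketch below (in Python) shows how a moderation pipeline might combine the two approaches: a hash match against previously confirmed material, followed by a classifier score used to queue new material for human review. The hash function, classifier, thresholds and result labels are illustrative assumptions, not the workings of PhotoDNA, Thorn's classifier or any other specific tool.

```python
# Illustrative sketch only: combines hash matching (known content) with a
# classifier score (new content). The hash function, model and thresholds
# are hypothetical stand-ins, not any real vendor's tooling.
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class ModerationResult:
    action: str   # "block", "human_review" or "allow"
    reason: str

def moderate_image(
    image_bytes: bytes,
    perceptual_hash: Callable[[bytes], str],    # e.g. a pHash-style function
    known_hashes: Set[str],                     # previously confirmed material
    classifier_score: Callable[[bytes], float], # probability the image is violative
    review_threshold: float = 0.7,
) -> ModerationResult:
    # Step 1: hash matching only catches material that has already been
    # identified, confirmed and added to the hash database.
    if perceptual_hash(image_bytes) in known_hashes:
        return ModerationResult("block", "matched known hash")

    # Step 2: a classifier can flag *new*, previously unseen material and
    # prioritise it for human review rather than auto-blocking.
    score = classifier_score(image_bytes)
    if score >= review_threshold:
        return ModerationResult("human_review", f"classifier score {score:.2f}")

    return ModerationResult("allow", "no match, low classifier score")
```

In practice, material confirmed as CSAM through human review would typically then be hashed and added to the known-hash database, so the matching step catches it immediately the next time it is shared.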

Detection of CSAM on live streaming and video content

Online service providers are still developing the processes, tools and measures needed to detect child sexual abuse and exploitation in live streams and video calls or conferences. Research from the WeProtect Global Alliance and the Technology Coalition has found that only 30% of companies surveyed use video-based CSAM classifiers, and only 22% use classifiers to detect CSAM in live-stream contexts.

For adult sexual content, innovative technology is already in use to help detect violative videos. Teleperformance, for example, uses a tool that automatically measures the time and extent of nudity in a video clip, helping moderators make more effective decisions about whether the level of exposure violates stated platform policies.
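Teleperformance's tool is proprietary, but the general idea of measuring exposure duration against a policy threshold can be sketched roughly as follows; the per-frame scorer, sampling interval and limits are hypothetical placeholders.

```python
# Illustrative sketch: estimate how long nudity appears in a clip by sampling
# frames, scoring each one, and comparing total exposed time to a policy limit.
# The frame-level scorer and the thresholds are hypothetical placeholders.
from typing import Callable, Iterable

def exposed_seconds(
    frames: Iterable[bytes],
    nudity_score: Callable[[bytes], float],  # per-frame probability of nudity
    frame_interval_s: float = 1.0,           # sampling interval in seconds
    frame_threshold: float = 0.8,
) -> float:
    """Total time (in seconds) during which sampled frames exceed the threshold."""
    return sum(frame_interval_s for f in frames if nudity_score(f) >= frame_threshold)

def violates_policy(
    total_exposed_s: float,
    clip_length_s: float,
    max_exposed_s: float = 3.0,
    max_fraction: float = 0.05,
) -> bool:
    # A clip is escalated if exposure exceeds either an absolute time limit or
    # a fraction of the clip's length; real platform policies are more nuanced.
    return total_exposed_s > max_exposed_s or (
        clip_length_s > 0 and total_exposed_s / clip_length_s > max_fraction
    )
```

A moderator would then review the flagged clip and timestamps rather than watching the whole video, which is where the efficiency gain comes from.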

The adoption of moderation technologies for audio and video content on live streams and video conferences can have a huge impact on the fight against CSAM and other illegal material, given the explosive rise of this content format.

Detection of grooming

Online grooming refers to the “tactics abusers deploy through the internet to sexually exploit children.” In a blog post, Thorn describes how “Grooming relies on exploiting insecurities and trust, and in an online setting trust can be built through a variety of methods.” It goes on to explain how “any content produced as a result of grooming can then be used to threaten and blackmail a child, playing on a child’s fear of getting in trouble, to force the victim into performing more acts, which can become increasingly explicit.”

When it comes to detecting predatory behaviour, including grooming, only 37% of surveyed companies use AI classifiers to proactively detect this activity on their platforms. Given the various types and severity of grooming practices, it may be more difficult to define thresholds for flagging and reviewing such content in a scalable manner.

Figure: Global Threat Assessment 2021, survey of tech companies by the WeProtect Global Alliance and Technology Coalition. Image: WeProtect Global Alliance

Detection could be bolstered through clear reporting mechanisms for users to flag such violative content. “Fundamental to safety by design and the Basic Online Safety Expectations are easily discoverable ways to report abuse. If it isn’t being detected and it cannot be reported, then we can never really understand the true scale of the problem,” says Julie Inman Grant, Australian eSafety Commissioner.

Beyond detection technologies for grooming and/or CSAM, providers could also leverage age-assurance technology to help prevent unwanted contact between adults and children. In addition, safety prompts that provide pathways to help for those who may be seeking out this material are a prevention-driven way to tackle this behaviour; a chatbot recently developed by the IWF and The Lucy Faithfull Foundation seeks to do just that. According to reporting in WIRED, there is “some evidence that this kind of technical intervention can make a difference in diverting people away from potential child sexual abuse material and reduce the number of searches for CSAM online.” Such innovations are critical in driving behavioural reform and embedding approaches centred on safety by design. These interventions, focused on addressing the root cause of such behaviour, are crucial to preventing the victimisation and re-victimisation of children.

The importance of proactively addressing harms online cannot be overstated: the more effectively and quickly content that violates child safety or other platform policies can be identified and actioned correctly, the less negative impact it is likely to have on the victim. Recent thought leadership produced by MIT Technology Review Insights in association with Teleperformance, “Humans at the center of effective digital defense,” highlights the growing importance of trust and safety, while Gartner research suggests that nearly one-third (30%) of large companies will consider content moderation services for user-generated content a top priority by 2024.


To innovate in this space, it is critical that technology and people work more closely together. In the context of content moderation, the MIT Technology Review Insights paper highlights how people help close the “machine learning feedback loop” by flagging content that escapes the algorithm; the AI then uses that data to make more accurate decisions in the future. Julie Owono, Executive Director of Internet Sans Frontières (Internet Without Borders) and Affiliate at the Berkman Klein Center for Internet & Society at Harvard, predicts in the paper: “Content moderation rules and practices, which I include under the umbrella of content governance, will evolve. Under increasing pressure from users, advertisers, and governments for safe online spaces, we may see the emergence of more common standards and, perhaps, common procedures. This will require a multistakeholder approach through which industry, civil society, governments, and academia collaborate.”
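That feedback loop can be sketched simply: human decisions on content the model missed become labelled examples for the next training cycle. The function names and the retraining threshold below are illustrative assumptions, not any particular platform's pipeline.

```python
# Illustrative sketch of the "machine learning feedback loop": items the model
# missed or got wrong are labelled by human moderators and folded back into
# the training set before the next retraining cycle. All names are hypothetical.
from typing import Callable, List, Tuple

LabelledExample = Tuple[bytes, int]  # (content, human-assigned label)

def update_training_set(
    training_set: List[LabelledExample],
    moderator_decisions: List[LabelledExample],
) -> List[LabelledExample]:
    # Human reviews of flagged (or user-reported) content become new labels.
    return training_set + moderator_decisions

def retrain_if_needed(
    training_set: List[LabelledExample],
    new_examples_since_last_train: int,
    train: Callable[[List[LabelledExample]], object],
    min_new_examples: int = 1_000,
):
    # Retrain once enough fresh human-labelled examples have accumulated, so
    # the model makes more accurate decisions on similar content next time.
    if new_examples_since_last_train >= min_new_examples:
        return train(training_set)
    return None
```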

Protecting children online is where such collaboration can start to forge the path towards improved content governance and a proactive approach to trust and safety.

On Safer Internet Day 2023, Teleperformance joins the growing list of stakeholders working towards improved digital safety, with the power of its people, technology and process excellence, because we know that each interaction matters.
