Cybersecurity

Online dangers for children are rife. We must both pre-empt them and treat the consequences

An end-to-end approach to child safety online could both help victims and disincentivize harmful behaviour in the first place.


Louis-Victor de Franssu
Managing Director, Tremau
Theos Evgeniou
Professor of Decision Sciences and Technology Management, INSEAD; WEF Academic Partner on AI; co-founder, Tremau; external expert, BCG Henderson Institute
Jacqueline Beauchere
Global Head of Platform Safety, Snap Inc

  • A majority of child internet users experience some form of online sexual harm.
  • Most child online safety efforts focus on safeguarding the 'midstream' user experience.
  • A more end-to-end approach, incorporating the 'downstream' consequences, is needed.

With one in three internet users under the age of 18, online child sexual abuse has become a global public safety issue, producing a generation of victims. The WeProtect Global Alliance estimates that a staggering 54% of those who regularly used the internet as a child (now aged 18-20) were the victims of at least one online sexual harm. The stigma that still surrounds child sexual exploitation and abuse makes it likely that what we know is only the tip of the iceberg, and that these statistics underestimate the prevalence of the problem.

Though highly alarming, sexual exploitation and abuse are just one form of illegal or harmful content or conduct affecting young people online. Cyberbullying, impersonation, trolling, harassment, exposure to hate speech, content encouraging self-harm, identity theft and phishing aimed at children are also on the rise. Consequences range from cautionary tales to harrowing tragedies. Italy, for example, ordered TikTok to block any user whose age could not be confirmed, following the death of a 10-year-old who attempted a dangerous challenge. We are also just learning that young people, regardless of gender, are susceptible to eating disorder trends that can be amplified by social media.

Child-proofing the web

Never before have safety and the prevalence of online dangers been more visibly front and centre on the world stage – for governments, the technology industry, law enforcement agencies and civil society. From tragedies befalling young people and families across the globe, to new or anticipated laws and regulations in multiple jurisdictions designed to prevent such events, safety online has reached a tipping point. Technology companies as a collective are operating in an era of heightened safety expectations from all sectors and stakeholders.


Australia pioneered the world’s first “eSafety” government agency; the UK (Online Safety Bill) and the EU (Digital Services Act and Digital Markets Act) have already proposed sweeping legislative reforms, while bipartisan momentum is growing in the US behind various proposals to further safeguard children’s online safety and privacy. Regulations aimed at protecting minors online are already in place in the US: the Children’s Online Privacy Protection Act (COPPA) was signed into law decades ago by President Bill Clinton, but is now widely seen as insufficient. France recently adopted a protocol designed to protect minors from exposure to pornographic content. The UK has implemented its Age Appropriate Design Code, with many jurisdictions around the world looking to emulate at least parts of it. German lawmakers have made substantial changes to the country’s Jugendschutzgesetz (Youth Protection Law). Across many of these laws and proposals, uncooperative platforms risk fines running to millions of euros, and even jail time for executives.

Some regulatory measures and principles, including safety by design, have already compelled large market players like Snapchat, YouTube and TikTok to implement youth-friendly controls, with Instagram and Apple following suit. These are laudable steps, but children can still easily evade such safeguards, for example by entering an earlier birth date at registration to appear older.

Machine learning and AI-based detection, coupled with the necessary human moderation, are critical to protecting young people from harmful content and activity online. Nevertheless, any euphoria about the immediate promise of these tools is likely misplaced. Content upload filtering and automated content detection, for example, face serious ethical challenges – including threats to freedom of expression. The metaverse will undoubtedly create new safety challenges, too.

It is high time to design a smarter approach – with minor users’ online journeys at its heart. So far, regulators and the tech industry have focused largely on the “midstream” stages of young people’s online experiences: content upload, access and detection. But one key to creating a safer environment for children may also lie further downstream: What happens after kids have encountered harmful online content?

Reports of child sexual abuse imagery, both received from external sources and proactively sought by the UK's IWF. Image: Internet Watch Foundation

Safety from end to end

One approach: an end-to-end child user safety process. In addition to helping those victimized and in danger, this could disincentivize harmful behaviour online in the first place. To get there, we must recognize that assisting young people who have fallen victim to online harms is a shared responsibility among a variety of stakeholders, including tech platforms, parents, caregivers, government, law enforcement, educators and civil society, as well as teens and young people themselves. All sectors have distinct roles to play and, together – through collaboration and partnership – we can evolve the internet into a safer environment.

A first step should be to provide children with tools and resources that help them report and denounce cyberbullying or harmful content they have been subjected to, while protecting their privacy and ensuring they receive appropriate assistance. Young people should be aware of, and know how to contact, hotlines and helplines in their respective countries that can work on their behalf to remove violating content from platforms and services, and provide them with other resources. These include the National Center for Missing and Exploited Children in the US, the Internet Watch Foundation in the UK and Telefono Azzurro in Italy. In France, 3018 is the national number against online violence and cyberbullying, managed by the NGO e-Enfance, which recently launched an innovative phone application powered by a dedicated trust and safety platform, specifically designed to facilitate reporting by children and victims. Global NGOs like WeProtect and INHOPE could champion making such services available in various countries.

Second, we must keep in mind that younger users often benefit from the support of their families, parents, caregivers, guardians, friends and educators. Fortunately, a wealth of printed and online materials, from colouring books to teaching plans and e-learning resources, is available to support minors’ online activities. Software, including content filtering and privacy-conscious real-time monitoring tools such as SafeToNet, is available for home use; schools and teachers can consider tools like CyberSafeKids. But this is not enough. It is also essential to invest in educating young people to develop the critical thinking and analytical skills they need to evaluate online risks and opportunities effectively.

Third, the role of law enforcement is critical. The Canadian services NeedHelpNow and CyberTip are already in place, and the UK has launched initiatives, too. All help to protect children and young people as they grow up online.

Finally, online harms cannot be fought online alone. Real-life human connection can help survivors immensely. Initiatives like NCMEC’s Team HOPE – connecting victims with volunteers who have themselves experienced the trauma of having, or being, a missing or sexually exploited child – can make a difference.


Integrating already available approaches into an end-to-end child user safety process has significant potential. Collaborative efforts across business, government and civil society can help to mitigate online harms, care for those affected, prompt user engagement, and unite us all in a singular mission: protecting and raising a healthier, safer and more equitable future generation.
