'Adolescence' has sparked a digital safety debate: A new report offers a fresh approach to tackling online harm

Netflix's Adolescence has put online harm in the spotlight: The Forum's Roadmap to Effective Digital Safety Measures offers solutions. Image: Unsplash/Gilles Lambert
- Netflix drama Adolescence has broken records and fuelled the global debate around children, smartphones and social media.
- The World Economic Forum’s Roadmap to Effective Digital Safety Measures report recommends a holistic, collaborative approach that goes beyond policy.
- Modelled on the public health intervention journey, the report urges a shift to prevention, with online harm treated like a disease.
Digital safety – and digital harm – came into sharp focus in March 2025. The Netflix drama Adolescence broke viewing records and ignited the global conversation around children and social media after it premiered on 13 March.
Four days later, the UK government’s Online Safety Act came into effect, and the following week the World Economic Forum’s Global Coalition for Digital Safety published its Roadmap to Effective Digital Safety Measures.
As the month came to a close, British Prime Minister Sir Keir Starmer hosted Adolescence co-writer Jack Thorne and producer Jo Johnson for a roundtable discussion with children’s charities in Downing Street.
Netflix announced that, from 1 April, the series would be free in the UK to stream in schools, alongside guides for teachers, parents and carers to help navigate conversations around its themes.
How Adolescence ‘hit home’
Adolescence tells the story of a 13-year-old boy who is arrested for murdering a female student after exposure to misogynistic content online. Actor and series co-writer Stephen Graham said he was shocked by news of knife crime committed by boys and wanted to “shine a light on it”.
Sir Keir admitted watching the show with his teenage children had “hit home hard”, emphasizing the importance of open dialogue: “As I see from my own children, openly talking about … the content they’re seeing, and exploring the conversations they’re having with their peers is vital if we are to properly support them in navigating contemporary challenges, and deal with malign influences.”
But he also warned: “This isn’t a challenge politicians can simply legislate for. Believe me, if I could pull a lever to solve it, I would. Only by listening and learning from the experiences of young people and charities can we tackle the issues this groundbreaking show raises.”
Addressing digital safety in the UK
The UK now has legislation for keeping children and adults safe online in the form of the Online Safety Act, which makes social media companies and search engines responsible and accountable for users’ safety.
Platforms must now remove illegal content, including content relating to child sexual abuse, controlling or coercive behaviour, extreme sexual violence and extreme pornography. The Act also requires companies to prevent children from accessing harmful or age-inappropriate content, including pornography and content depicting self-harm, bullying, violence and dangerous stunts.
The Act stipulates that social media platforms must assess the risk of their algorithms exposing children to harmful content – and take steps to mitigate those risks – as well as address disinformation and misinformation online.
The laws apply to companies based anywhere in the world, and online regulator Ofcom can hand out fines of up to £18 million or 10% of a company’s qualifying worldwide revenue, whichever is greater. By the end of the year, Ofcom will also have produced guidance on measures to tackle abuse against women and girls online.

Smartphones and kids
Before meeting the Prime Minister, Adolescence writer Thorne, who has an eight-year-old son, called for a ban on smartphones in UK schools, as well as a “digital age of consent”, similar to the Australian government’s ban on social media for children under 16.
"I think we should be doing what Australia is doing, and separating our children from this pernicious disease of thought that is infecting them," Thorne told the BBC.
It’s a view shared by Jonathan Haidt, author of The Anxious Generation, who spoke at the Forum’s Annual Meeting in Davos in January in a session called Collapsing Youth? He told the Radio Davos podcast:
“The biggest thing [countries] can do for teen mental health is follow Australia's lead: raise the age to 16 for social media. Brazil is going phone-free [in schools]. Indonesia say they’re going to follow Australia's lead [and] raise the age [for social-media use]. They haven't said what age, but I'm hoping 16.”
Experts writing in the British Medical Journal said banning smartphones and restricting social media alone would not help children learn to use technology in a healthy way.
They called for a rights-based approach, similar to the United Nations Convention on the Rights of the Child, to protect children from harm while also enabling them to develop a healthy relationship with smartphones and social media.
Beyond policy: The intervention journey
The Forum’s Global Coalition for Digital Safety brings together leaders from politics, business, academia and civil society, including Australia’s eSafety Commissioner, Julie Inman Grant, to tackle harmful content online and promote digital media literacy.
Its latest report, The Intervention Journey: A Roadmap to Effective Digital Safety Measures, takes a holistic approach to digital safety, advocating interventions that combine technological, behavioural, educational and policy-related measures, following the principles of Safety by Design.
Modelled on the public health intervention journey, the report urges a shift to prevention, with online harm treated like a disease.
“Digital harm lacks a definitive solution,” asserts the report, written in collaboration with Inman Grant. “The focus should be on mitigating risks before harms occur, while recognizing that some level of risk will always persist.
“Each intervention must be carefully evaluated, weighing its costs against its benefits while also considering its effectiveness.”
The four key phases on the intervention journey are:
1. Identification: understanding and assessing risks
2. Design: developing tailored interventions
3. Implementation: deploying and monitoring interventions
4. Feedback, measurement and transparency: evaluating impact.
The four intervention categories are:
- Technical: These use technologies, engineering solutions or technical approaches for measurable and tangible changes. For example, hybrid content moderation systems where AI flags content and human moderators make the final decision, improving moderation accuracy for nuanced or context-sensitive content.
- Educational: Where knowledge is shared and skills are enhanced through structured learning programmes or other informational resources, such as parent guides and community workshops.
- Behavioural: This centres on promoting safe online behaviours and discouraging risky or harmful activities, including identifying repeat offenders and detecting banned users trying to rejoin a platform.
- Policy-related: These focus on developing, modifying or enforcing policies and regulations at both a company level and governmental level.
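The hybrid moderation model mentioned in the technical category can be sketched in a few lines: an automated classifier handles clear-cut cases, while borderline content is escalated to a human for the final decision. This is a minimal, hypothetical illustration; the scoring function, thresholds and names are placeholders, not details from the report.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    action: str       # "allow", "remove" or "escalate"
    reviewed_by: str  # "ai" or "human"


def ai_risk_score(text: str) -> float:
    """Stand-in for an ML classifier: returns a harm score in [0, 1].

    A real system would use a trained model; keyword matching is used
    here only to keep the sketch self-contained.
    """
    flagged_terms = {"violence", "abuse"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)


def moderate(text: str, allow_below: float = 0.3,
             remove_above: float = 0.8) -> Decision:
    """AI decides the obvious cases; the grey zone goes to a human."""
    score = ai_risk_score(text)
    if score < allow_below:
        return Decision("allow", "ai")
    if score > remove_above:
        return Decision("remove", "ai")
    # Nuanced or context-sensitive content: human makes the final call.
    return Decision("escalate", "human")
```

The design point is the middle band: rather than forcing the model to a binary verdict, uncertain scores are routed to human moderators, which is where accuracy gains on nuanced content come from.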
While digital safety interventions tend to focus on the biggest organizations, the report notes, small and medium-sized enterprises (SMEs) often face distinct challenges due to limited resources and expertise.
“Ensuring that SMEs have access to effective, affordable digital safety resources is vital to protecting their user bases and maintaining the broader safety of the digital ecosystem,” it says.
Digital safety recommendations
While it’s clear there is no “silver bullet” response, as Sir Keir says, nor a “one-size-fits-all” approach to digital safety, as the Coalition notes, the report advocates for holistic, transparent and inclusive interventions tailored to each platform and its user demographics.
It recommends accessible, user-friendly reporting tools that allow people to flag harmful content or behaviours, with special response protocols in place for vulnerable groups, such as children, marginalized communities or victims of gender-based violence.
Digital literacy is also key to tackling online harm: “Equipping individuals with the knowledge and skills to navigate safety tools effectively can help users to better protect themselves and make informed decisions,” says the report.
With AI and emerging technologies making the future of digital safety ever more complex, monitoring evolving trends to anticipate new risks is crucial. Strengthening collaboration with NGOs, academics and industry peers also helps to develop best practices and create a unified approach to digital safety.