The UK’s Online Safety Bill could transform the internet. Here's how

The Online Safety Bill has generated concern among internet advocacy and civil society organizations over privacy, censorship and government intrusion.

Image: REUTERS/Henry Nicholls

Spencer Feingold
Digital Editor, Public Engagement, World Economic Forum

  • The UK is debating vast new regulations around online safety.
  • The legislation would make social media companies and search engines more responsible for the content on their platforms.
  • Ofcom, the UK's regulatory agency, says it is “prioritising action on illegal content.”

The United Kingdom is expected to implement some of the world's most far-reaching online safety regulations through a sprawling piece of legislation that is poised to upend internet liability rules.

The Online Safety Bill—which was first introduced in parliament in March 2022 and is still under debate—is aimed at protecting internet users, particularly children, from illegal and harmful content. This includes child sexual abuse, hate crimes, fraud and incitement to violence and terrorism, among other harms.

“The internet can be a powerful force for good, but illegal and harmful content and activity is widespread online,” the UK government's Online Safety Data Initiative notes. In a report published last year, Ofcom, the UK’s communications regulatory body, found that six in ten internet users said they had encountered at least one potential harm online in the last four weeks.

“This is a once in a generation opportunity to change the digital world so that it benefits the children of today and tomorrow,” a coalition of child welfare organizations including The Diana Award and Save the Children said in a statement.

Shifting the onus

The bill seeks to limit online harm by making internet companies more responsible for the content on their platforms.

Companies impacted would include social media platforms that host user-generated content, such as Twitter, Facebook and TikTok, as well as search engines such as Google. Other smaller sites that facilitate user-to-user sharing would be affected, too.

The law would require companies to put systems and processes in place to identify risks and harms. Platforms would then be required to prevent and remove harmful content. The law would also guarantee users explicit ways to report harm and require companies to maintain and enforce stricter age verification systems.

The onus for keeping young people safe online will sit squarely on the tech companies’ shoulders.

Michelle Donelan, Secretary of State for Digital, Culture, Media and Sport.

Once enacted, the implementation and enforcement of the Online Safety Bill will be the responsibility of Ofcom. Internet companies that fail to comply could be fined up to £18 million or 10% of their annual turnover, whichever is greater. Ofcom will also be empowered to seek court rulings to stop payment platforms and internet service providers from working with harmful sites.
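The penalty regime is a simple maximum-of-two-terms rule. A minimal sketch, assuming (as the legislation specifies) that the higher of the fixed cap and the turnover-based cap applies:

```python
def max_fine_gbp(annual_turnover_gbp: float) -> float:
    """Maximum penalty under the bill: the greater of a fixed
    £18 million cap or 10% of annual turnover."""
    return max(18_000_000.0, 0.10 * annual_turnover_gbp)

# For a large platform, the turnover-based cap dominates:
assert max_fine_gbp(1_000_000_000) == 100_000_000  # £100m on £1bn turnover
# For a smaller firm, the fixed £18m cap applies:
assert max_fine_gbp(50_000_000) == 18_000_000
```

The function name and inputs are illustrative; the two thresholds come from the bill itself.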

Already, Ofcom has received funding to prepare regulatory mechanisms that can be implemented once the legislation is in place. “A huge effort is underway in Ofcom to mobilise the regime as quickly as possible, to create a safer life online for UK citizens while protecting freedom of expression,” Melanie Dawes, the CEO of Ofcom, said in a statement in April.

Dawes added that Ofcom is “prioritising action on illegal content” and aims to “produce regulation that does not falter in the face of legal challenge and which services and users can trust.”

A threat to privacy?

The Online Safety Bill has generated concern among internet advocacy and civil society organizations over privacy, censorship and government intrusion.

Critics have warned that the law may force companies to break end-to-end encryption to monitor content for illicit material. End-to-end encryption—which ensures that no third party, not even the platforms themselves, can see the content being shared—is used by many secure messaging platforms such as WhatsApp.
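The property at stake can be illustrated with a deliberately simplified toy (a one-time pad, not the actual protocols WhatsApp or Signal use): only the two endpoints hold the key, so the relaying platform handles nothing but unreadable ciphertext.

```python
import secrets

def xor_bytes(key: bytes, data: bytes) -> bytes:
    """Toy one-time pad: XOR each byte with the key (XOR is its own inverse)."""
    return bytes(k ^ b for k, b in zip(key, data))

message = b"meet at noon"
key = secrets.token_bytes(len(message))   # shared only by sender and recipient

ciphertext = xor_bytes(key, message)      # all the platform ever sees or relays
assert xor_bytes(key, ciphertext) == message  # only key holders recover the text
```

Scanning messages for illicit material would require the platform (or a client-side scanner) to access the plaintext, which is exactly what this design rules out.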

“The bill could break end-to-end encryption, opening the door to routine, general and indiscriminate surveillance of personal messages of friends, family members, employees, executives, journalists, human rights activists and even politicians themselves,” WhatsApp and other messaging platforms including Signal and Viber said in an open letter published last month.

Messaging platforms and cyber experts stress that end-to-end encryption helps protect internet infrastructure and shields users from online scams and data theft.

Last year, a coalition of 70 civil society groups, companies and cybersecurity experts said that undermining end-to-end encryption “opens a backdoor for cyber criminals,” making the internet less safe—especially for children and vulnerable users. The group also warned that the bill could hurt UK businesses, which could be left with “less protection for their data flows than their counterparts in the United States or European Union.”

The government, however, has insisted that the bill does not threaten encryption and that systems are in place to prevent government intrusion. In a question-and-answer video, UK Digital Secretary Michelle Donelan said that the bill will “not be used to encroach” on private messages.

The Online Safety Bill is being debated in the House of Lords, parliament’s second chamber. It is expected to be passed into law in some form later this year.

'Fostering best practices'

The UK is far from the only country working to boost online safety.

The European Union has been working closely with major social media platforms to curb the proliferation of disinformation and illegal content online. Last year, lawmakers in Brussels approved the Digital Services Act, which includes a number of measures ranging from advertising transparency to content removal requirements.

The World Economic Forum has also been advocating for public and private cooperation as well as international coordination around online safety.

In January, the Forum’s Global Coalition for Digital Safety released a set of guiding principles that seek to integrate international human rights standards into digital safety efforts. The coalition also released a report in May that assesses digital safety risks and outlines a comprehensive methodology for how stakeholders can evaluate risk factors within the digital ecosystem.

“By fostering best practices in online safety and taking coordinated action against online harms, we strive for a safer digital landscape,” said Minos Bantourakis, the Forum's Head of Media, Entertainment and Sport Industry. “Together, we accelerate progress and build a brighter digital future for all.”


