• Governments across the world have started designing new frameworks to regulate online platforms.
  • The coming era of regulated internet governance will require tech companies to adapt and develop new risk-management processes and tools.
  • Ultimately, complying with the new regulations will require a shift towards a culture of responsibility.

Almost two decades after the rise of online platforms, governments across the world, including in the European Union, the United Kingdom, Canada and Australia, have started designing new frameworks to regulate these services. Debate and controversy have been rife, including the recent Congressional hearing of the Facebook whistle-blower Frances Haugen and Apple’s reversal of its plan to scan images and videos on its devices to detect child sexual abuse material (CSAM). Increased regulation will have important implications both for these businesses and for wider society.

Democratic governments face numerous challenges in attempting to regulate the online space. They grapple with sometimes contradictory objectives as they attempt to strike a balance between keeping the internet safe and protecting fundamental rights, including freedom of speech.

Challenges for regulators

1. The sheer volume of content posted online means that it is practically impossible to ensure judicial review prior to every removal of content. Governments will need to rely on setting out obligations for the private sector to moderate illegal content based on specific regulatory principles. The more stringent the rules, the higher the risk of content over-removal – the more lenient the rules, the higher the risks of illegal or harmful content spreading.

2. Legislators must define what constitutes “illegal content” in a way broad enough to cover the targeted harms, but specific enough to avoid the risk of 'censorship creep'. Overly broad or vague definitions could leave substantial 'grey zones', forcing companies to decide on content removal at their sole discretion. An ambiguous definition of illegal content increases the risk of over-censorship, with potentially serious repercussions for freedom of expression.

3. Regulators need to define minimum new obligations for online platforms without creating barriers to innovation or market entry.

The regulatory approaches taken by policy-makers can be divided into two broad categories:

A content-specific approach

Legislation is designed to target one specific type of online harm such as copyright infringements, terrorist content, CSAM, illegal hate speech or illegal products, and focuses on its timely removal. Examples of such regulations include the EU Terrorist Content Online Regulation, the French law on disinformation, the German Network Enforcement Act (NetzDG) as well as the Directive on Copyright in the Digital Single Market.

A systemic approach

This aims to provide a cross-harm legal framework, in which Intermediary Service Providers (ISPs) must demonstrate that their policies and processes are designed and implemented to counter abuses of their services. This is the direction proposed by the EU in its Digital Services Act (DSA), which does not define what constitutes illegal content. Instead it sets new due diligence obligations for ISPs to effectively act on content when they become aware of its illegality. Failure to comply may result in penalties of up to 6 percent of global turnover.

Challenges for businesses

Complying with complex, and sometimes contradictory, cross-jurisdictional demands while maintaining customer trust will pose new challenges for all ISPs. It will require them to develop new risk-management frameworks and create new roles and responsibilities, ultimately leading to organizational and cultural change. Lessons could usefully be drawn from other sectors, such as the financial industry.

For example, according to the first draft of the DSA, ISPs will need to put in place adequate processes to address illegal content notified to them by a range of different sources. To track these notices, the subsequent decisions, and communication with users, ISPs will need to develop robust content moderation management processes. These will also need to fulfil obligations such as “statements of reasons”, complaint handling and transparency reporting. Data collection across this complex network of stakeholders will require dedicated tools, which need to be secure and up-to-date to ensure compliance across different jurisdictions.

Online platforms will also need to abide by new transparency obligations for online advertising, requiring them to provide information to their users concerning the advertisers and their target audience. In addition, online marketplaces will be required to set up 'Know Your Business Customer' policies to collect identification information from sellers operating on their platform – an obligation also largely inspired by the financial industry. This will also require new processes and tools.
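In practice, a 'Know Your Business Customer' check amounts to verifying that a seller has supplied a defined set of identification details before being allowed to trade. A minimal sketch, assuming a hypothetical set of required fields (the names below are illustrative, not quoted from any regulation):

```python
# Hypothetical KYBC check: before a seller can list products, the marketplace
# verifies that the required identification fields are present and non-empty.
REQUIRED_FIELDS = ("legal_name", "address", "email",
                   "payment_account", "trade_register_id")


def kybc_missing(seller: dict) -> list[str]:
    """Return the identification fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not seller.get(f)]


seller = {
    "legal_name": "Example Retail GmbH",
    "address": "Musterstrasse 1, Berlin",
    "email": "contact@example.com",
    "payment_account": "DE00 0000 0000 0000 0000 00",
    # "trade_register_id" not yet provided -> seller cannot trade
}

missing = kybc_missing(seller)
print(missing)  # ['trade_register_id']
```

A real implementation would also need to verify the supplied information against authoritative sources, much as financial institutions do for KYC, rather than merely check for its presence.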

The largest online platforms will be subject to further requirements, including the obligation to conduct annual assessments of systemic risks. Again, these companies may benefit from compliance risk assessments methodologies and best practices developed by the financial sector.

All these changes will unavoidably add costs for businesses: according to estimates by the European Commission, annual costs could run into the tens of millions of euros for the larger players.



A new culture

Beyond these processes and tools, effective risk management will require a well-balanced enterprise organization and risk-management culture, aligning regulatory obligations, business models and reputational risk management – and possibly setting up independent compliance functions, much like in the financial sector.

Organizational changes will not be sufficient, however. As they grow, online platforms will need to adapt to a different culture, in which the regulatory risks are understood by all, and employees are empowered to make responsible decisions. This will pave the way towards a new and more responsible use of technology.