- The amount of harmful content online is increasing rapidly.
- Obfuscation and lack of a common approach are barriers to progress.
- A new initiative aims to foster collaboration across the media sector.
The internet has made our lives easier in many ways. We are able to purchase items online and have them delivered almost immediately. We can find people who like the same rare dog breed as we do, and share an endless number of photos with them on Instagram. We can react to content – be it funny memes or breaking news – in real time.
But the same frictionless experience most of us enjoy for education, entertainment or connection with others has also been leveraged by those looking to do harm - and the internet does not discriminate in the speed, reach, access and efficiency it provides to all users.
Content moves freely and abundantly online. Every minute, 500 hours of video are posted to YouTube and 243,000 photos are uploaded to Facebook. Unfortunately, this proliferation extends to harmful content. In the past year, the number of reports of child exploitation images circulating online has doubled to 45 million. On Facebook alone, 11.6 million pieces of content depicting child nudity and sexual exploitation of children were removed in Q3 of 2019, a substantial increase on the previous quarter. Harassment and bullying, terrorist propaganda and the use of fake accounts to spam or defraud are also spreading across many sites.
It is hard to determine how much of the increase in harmful content reflects greater circulation of such material versus improvements in detecting it and enforcing action against it. Regardless, spaces online are being used by predators and other bad actors to accelerate illegal and harmful activity in an unprecedented way. Many have argued that this type of activity has always existed, and that the open web is simply exposing it. However, digital disruption, with its frictionless user experience, and the shift toward advertising-based business models built on maximizing engagement have made it quicker and easier for all types of content to reach massive scale. But with so much technology and knowledge at our fingertips, why haven’t we been more successful in ‘cleaning up’ spaces online?
One reason is that the problem itself has been obfuscated. Some tech executives frame the debate as a stark choice between freedom of speech and a censored internet, in a way that leads people to take an absolutist stance on the topic. Accepting the argument that noxious content online must simply be weathered as a test of our willingness to uphold free speech means that those who are responsible can avoid taking action and continue with business as usual. As Berkeley law professor John A. Powell stated in a New York Times article, “We need to protect the rights of speakers, but what about the rights of everyone else?”
While private companies are not bound by the First Amendment, most of us still agree on the importance of upholding free expression in public digital spheres. But this is where the conversation should begin rather than end. To advance this important dialogue, we also need to recognize that people may not infringe on another's human rights in the name of free speech. As just one example of the real-world implications of digital harm, a report by the US Department of Justice identified sextortion as the most significantly growing threat to children, and quoted an FBI study that found more than a quarter of sextortion cases led to suicide or attempted suicide. The way the problem has been framed so far presents a misleading and dangerous false choice; this is not about curtailing free speech, but about balancing it against other protections so that people are not subjected to harm in its name. This will help ensure that everyone can feel safe and have a voice in the long run.
The other major difficulty in addressing this problem is the lack of a common approach, terminology, and understanding of the trade-offs when it comes to harmful content online. Terms like hate speech and fake news have homogenized content problems with stark differences. The responsibilities across the public and private sectors, the expected timeline for taking action, and the risk posed to the public vary substantially by type of content. Each platform has its own categories and transparency metrics; each consumer brand that advertises on a platform has its own risk settings governing which content it is willing to have its products advertised alongside. While consistent usage of terminology and reporting across the media ecosystem will be challenging, areas where there is more consensus, such as child exploitation and extremist content, can be starting points from which to tackle harmful content more collaboratively.
The problem at the user level is also complex. Given that the vast majority of adults are concerned about how companies use the data collected about them, most would likely prefer to use services with greater privacy and encryption. However, estimates show that the number of child sexual abuse reports made through CyberTipline would halve with end-to-end encryption. This trade-off between privacy and the detection of harmful content has not been made explicit to consumers. In any case, users do not have the spending power to influence decisions on platforms where most of the revenue is driven by advertisers.
The current media business models are not inherently bad from a market perspective. In fact, it is a win all round when all goes well for consumers (who get a free service), brand advertisers (who get reach), platforms (who earn revenue) and content creators (who get funded). However, when this engagement is exploited by those looking to create or share harmful content, the risk to society far outweighs the market-efficiency benefits. An initiative called the Global Alliance for Responsible Media has been created to help drive uncommon collaboration across the media industry, recognizing that brands’ advertising dollars are the primary funders of content creators on platforms – and that with this comes both the responsibility and power to help drive change.
Whether it be the horrific terror attacks livestreamed in Christchurch or the depraved murder of a student uploaded online a decade ago, the implications of harmful content on society are significant. As our digital and physical worlds continue to collide, our online safety - based on the content we create, see, and share - will become our personal safety, full stop.