The latest technology shaping the future of digital safety

Farah Lalani
Global Vice President, Trust and Safety Policy, Teleperformance
Cathy Li
Head, AI, Data and Metaverse; Member of the Executive Committee, World Economic Forum Geneva

  • Over 250,000 URLs containing or advertising child sexual abuse material were found in 2021, a 64% increase from 2020.
  • A number of technologies have been developed to combat online harm.
  • By adopting these latest technologies, companies can prioritise the trust and safety of their users online.

The scale of online harm is growing and bad actors are becoming more sophisticated in perpetrating it. The Internet Watch Foundation (IWF), which works to tackle child sexual abuse material (CSAM) online, found 252,194 URLs containing or advertising CSAM in 2021, a 64% increase from 2020.

When it comes to terrorist content, platforms have been weaponised to live-stream attacks, from the Christchurch massacre and the Halle synagogue shooting to, more recently, the Buffalo rampage. There has also been concerning growth in cyberbullying, with the U.S. having the highest rate of racially motivated bullying online.

At the same time, time spent online is growing, leading to potentially greater exposure to digital safety risks. UK regulator Ofcom, for example, found that 67% of people aged between 13 and 24 had seen potentially harmful content online, although only 17% reported it. Technology to proactively detect these harms and prevent exposure is becoming increasingly important given the significant gap between people exposed to this content and people reporting it on platforms. Below are some of the latest tech trends shaping the future of digital safety:

1. Client-side scanning

Client-side scanning (CSS) broadly refers to systems that scan message content (e.g. pictures, text or video) for matches to a database of illegal or objectionable content before the message is sent to the intended recipient over an encrypted channel. A familiar analogue is anti-virus software, which scans files on your device to stop malware before it spreads.

In recent discussions about tackling CSAM and other illegal material, CSS has become a hot topic, as some see it as a way to find this material without breaking the technology behind end-to-end encryption (E2EE). There are two main approaches to CSS: matching content on the device itself, or matching via a remote server.
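
To make the on-device flow concrete, below is a minimal sketch that assumes a vetted list of hash digests shipped to the client. Real deployments use perceptual hashes (see section 3) and privacy-preserving matching protocols rather than the plain SHA-256 used here; all names and values are illustrative.

```python
# Minimal sketch of on-device client-side scanning: content is checked
# against a hash list of known illegal material *before* it is handed to
# the end-to-end encrypted transport. Hashes and helpers are illustrative.
import hashlib

# Placeholder digest standing in for a vetted hash list (e.g. one
# maintained by a body such as the IWF).
KNOWN_BAD_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def safe_to_send(attachment: bytes) -> bool:
    """Return True if the content does not match the known-bad hash list."""
    return hashlib.sha256(attachment).hexdigest() not in KNOWN_BAD_HASHES

def encrypt_and_send(attachment: bytes) -> None:
    # Stand-in for the untouched E2EE layer; CSS never modifies this step.
    print(f"sent {len(attachment)} encrypted bytes")

def send_message(attachment: bytes) -> None:
    if safe_to_send(attachment):
        encrypt_and_send(attachment)
    else:
        print("blocked: matched known illegal content")  # policy decision

send_message(b"holiday photo")
```

Because the check runs entirely on the user's device, before encryption, proponents argue it leaves the protections of E2EE itself intact.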

Ian Stevenson, CEO of Cyacomb and Chair of the Online Safety Tech Industry Association (OSTIA), states: “Over the past few months we have seen some really good quality exploration of what is possible with client-side and split or multi-party compute technologies. This technology doesn’t tamper with the encrypted part of the system, leaving all of its protections intact. There is no ‘back door.’ Instead, these technologies act as a border check for content entering and leaving the encrypted domain and do so in a way that maintains the privacy of the user. The content they are sending or receiving cannot be identified, tracked or matched by any third party.”

However, many organizations, such as Access Now and the Internet Society, have voiced concerns about CSS. Access Now wrote a letter to the European Commission highlighting the risks to privacy, security and expression. Stevenson and other experts (including the heads of GCHQ and NCSC) do not share these concerns. Stevenson says that these technical capabilities for matching and blocking known CSAM provide excellent privacy protection for users and suggests that metadata leakage from mainstream E2EE apps is far more of a threat to privacy than these newly developed systems.

In addressing fears that certain governments could use CSS technologies to suppress free speech or identify dissidents, he suggests that these should be considered in the context of existing threats, rather than in isolation. “Autocratic governments intent on blocking particular content are unlikely to be very concerned about protecting privacy and, therefore, could easily mandate application of various solutions that exist today. The additional risk arising from deploying these new technologies is very small, with huge potential benefit to society,” he says.

Australia’s eSafety Commissioner, Julie Inman Grant, argues that there is a need to look at safety, privacy and security as the three pillars of digital trust. “It is important to balance safety with privacy and security – they are not mutually exclusive and healthy tensions amongst these imperatives can lead to much better outcomes... But, to continue to pit privacy against safety or as values that are mutually exclusive is totally missing the point – in many cases reported to us, particularly in the area of image-based abuse, privacy and safety are mutually reinforcing,” she says.

2. Artificial intelligence and natural language processing (NLP) models

Artificial intelligence (AI) systems can help increase the speed and scalability of content moderation by automating moderation processes and detecting a range of harmful content through natural language processing (NLP) models. One of the big challenges to advancing these systems, however, according to Bertie Vidgen, CEO and Co-Founder of Rewire, is that every platform is different: different users, different kinds of content, different media, different hazards and different norms. This is a huge problem for developers because the traditional one-size-fits-all approach to software development just doesn't work.

“Over the past two years, we’ve seen the emergence of incredibly powerful models that can do 'zero shot' and 'few shot' learning. Practically, these advances mean that software can achieve very high performance with relatively little data. We have a way to go, but this has opened up exciting new possibilities to create scalable AI that is fully customised to each platform, without the huge costs and development timelines that would otherwise be needed,” Vidgen says.
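
As an illustration of the zero-shot approach Vidgen describes, the sketch below scores a post against policy labels using an open-source model, assuming the Hugging Face transformers library is installed. The post and labels are invented; a real platform would tailor both to its own policies.

```python
# Zero-shot content classification: the model was never trained on these
# specific labels, yet can score text against them directly.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

post = "You people don't belong in this game. Log off or else."
labels = ["harassment", "hate speech", "spam", "benign"]

# multi_label=True scores each label independently, since a post can
# violate more than one policy at once.
result = classifier(post, candidate_labels=labels, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```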

There is still scepticism and distrust amongst some around the use of AI for online safety given a lot of software has struggled to handle issues such as nuance, intent, context and jokes. Increasing reliability, flexibility, cost-efficiency and accuracy of AI – together with human supervision to create effective feedback loops – will help increase its uptake.

Justin Davis, CEO and Co-Founder of Spectrum Labs, highlights how better detection of toxic behaviour paves the way for measurement and transparency tools that help online platforms make better policy decisions and create better user experiences. “When that's combined with the ability to identify and encourage healthy behaviour, trust and safety teams can align with the customer experience and product teams in a data-driven way to reinforce #SafetyByDesign principles," Davis says.

He believes investing in NLP and AI tools today will help the industry stay ahead of the curve against emerging threats, and drive the growth of healthier communities online.

3. Image and video recognition

Image hashing essentially creates a digital fingerprint for an image: an algorithm assigns a unique hash value to it, so duplicates can be found. Since replicas of a picture share the same hash value, this enables the detection and removal of known CSAM without requiring further human assessment.

When this technology first came about, any small change to an image, such as cropping or colour alteration, gave each edited version a different hash value, reducing its effectiveness. In 2009, however, Microsoft collaborated with Dr. Hany Farid of Dartmouth College to develop PhotoDNA. This is based on hash technology, but it can recognise when an image has been edited and still assign it the same hash value, making it harder for criminals to evade detection when distributing CSAM.
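
PhotoDNA itself is proprietary, but the open-source imagehash library demonstrates the same edit-tolerant principle: small changes leave the perceptual hash close enough to the original to still match. The file paths and matching threshold below are illustrative.

```python
# Perceptual hashing: near-duplicate images produce near-identical hashes,
# so edited copies of a known image can still be detected.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original.jpg"))
edited = imagehash.phash(Image.open("cropped_recoloured.jpg"))

# Subtracting two hashes gives the Hamming distance between them;
# a small distance means the images are near-duplicates.
distance = original - edited
if distance <= 8:  # threshold chosen for illustration
    print(f"match (distance {distance}): treat as the same known image")
else:
    print(f"no match (distance {distance})")
```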

Cloud-based hash lists, such as those maintained by the IWF, can prevent CSAM from being uploaded, and the hashes cannot be reverse-engineered back to the images. New technology from the IWF allows contextual metadata to be added to hashes and enables compatibility with multiple legal jurisdictions worldwide.

4. Age and identity verification: biometrics and facial analysis

The ability to authenticate users safely, securely and accurately, in cases where an individual's identity must be verified to access certain products, services or experiences online, is key to online safety. A growing trend is using biometrics, such as voice and iris scans, to verify identity; Apple's FaceID was a massive step forward in their adoption.

In the future, as many companies begin to think about safety and security in the metaverse, decentralising the identity verification stack, as opposed to centralising it on the big tech platforms, will give users more control through self-sovereign identity (SSI). Users can then choose which set of unique identifying information to share with a company, minimising the amount and sensitivity of the data needed to meet access requirements. Given concerns around bots, impersonators, fraud and misrepresentation, verifying identity, or at least signalling when an avatar has been verified to belong to a real human, will be key to safe experiences online.
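
One way to picture selective disclosure under SSI is a salted hash commitment: an issuer commits to each claim separately, and the holder reveals only the claim a service needs. The sketch below is a deliberately simplified, hypothetical illustration using only the standard library; real systems use signed verifiable credentials (e.g. the W3C VC model) rather than a bare commitment list.

```python
# Hypothetical sketch of selective disclosure: the verifier checks one
# disclosed claim against the issuer's commitments without learning the rest.
import hashlib
import os

def commit(claim: str, salt: bytes) -> str:
    """Salted hash commitment binding a claim without revealing it."""
    return hashlib.sha256(salt + claim.encode()).hexdigest()

# Issuer: commits to every claim; in reality the commitment list would be
# digitally signed so the verifier can trust its origin.
claims = {"name": "Alice Example", "age_over_18": "true", "nationality": "NZ"}
salts = {k: os.urandom(16) for k in claims}
commitments = {k: commit(f"{k}={v}", salts[k]) for k, v in claims.items()}

# Holder: discloses only the single claim the service needs.
disclosed = {"age_over_18": ("true", salts["age_over_18"])}

# Verifier: recomputes the commitment; name and nationality stay private.
for key, (value, salt) in disclosed.items():
    assert commit(f"{key}={value}", salt) == commitments[key]
    print(f"verified {key} = {value}")
```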

In the online dating world, Tinder and Bumble have already added options for profile verification to build trust and safety. In gaming, the ability to verify a user’s accomplishments and digital assets would build trust in avatars and in the overall environment.

In addition, age verification is crucial to ensuring age-appropriate content and experiences. However, many current models for age verification, such as entering a date of birth, are easily bypassed. Facial analysis technology for age verification has been growing in popularity. This software finds a face within an image and analyses features that indicate age, such as wrinkles, sunspots and grey hair, to estimate the age of the user. This is distinct from facial recognition, which aims to identify and collect information on the person in the photo. Yoti, one of the companies using facial analysis for age verification, highlights how its AI reads each pixel and analyses individual facial features that indicate age, emphasising that users cannot be identified or recognised by the model.
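
As a hypothetical illustration of how an estimated age might feed an age gate, the logic below allows clear cases through and falls back to a stronger check for borderline ones. The buffer value is invented for illustration and is not any vendor's actual policy.

```python
# Illustrative age-gating decision on top of facial age estimation.
# "estimated_age" would come from an estimation model; the buffer absorbs
# the model's uncertainty near the threshold.
def gate_access(estimated_age: float, required_age: int, buffer: int = 3) -> str:
    if estimated_age >= required_age + buffer:
        return "allow"      # clearly above the threshold
    if estimated_age < required_age - buffer:
        return "deny"       # clearly below the threshold
    return "fallback"       # borderline: ask for an ID document instead

print(gate_access(estimated_age=24.0, required_age=18))  # allow
print(gate_access(estimated_age=18.5, required_age=18))  # fallback
```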

Julie Dawson, Chief Policy and Regulatory Officer at Yoti, says: “Age verification will play a pivotal role in safeguarding young people and providing age-appropriate experiences online. Once you know the age of a child, then you can meet the requirements of the Children’s Code or Age Appropriate Design Codes. You can provide age-appropriate content, prevent children from stumbling across explicit content or accessing age-restricted goods or services, be certain the online community is within the same age threshold and turn off excessive notifications.”

Yoti's partnerships with Yubo and Instagram show the growing demand for social media platforms to verify users' age more accurately. In future evolutions of the internet, age-gating will grow in importance as new experiences in the metaverse, such as gambling or watching a movie in a virtual cinema, will need to be age-restricted.

Inman Grant says the time to implement safety by design practices and protections for these new applications is now, while they are in development. “It is easier and more effective to build in safety protections at the start, rather than trying to bolt on or retrofit solutions after harm occurs. In addition to minimising harm and building trust, consistently applying a safety-by-design lens should be seen as an enabler that can help companies lift overall safety standards and improve their compliance with the Australian and other online safety regulatory frameworks.”

Where to next?

There are questions as to whether technological advancements are being adopted quickly enough to deal with these harms. The trust in – and effectiveness of – these technologies will be one of the key tools in the ongoing work to increase safety online. Justin Davis asserts: "Executive leadership must prioritise trust and safety so companies can drive the adoption of technology and sufficiently allocate resources for online safety. The most effective way to align executives behind trust and safety is by demonstrating its impact on revenue and growth through quantitative measurement.”

Similarly, Vidgen highlights that: “The reality is that trust and safety are always going to face substantial budget pressures, at least until platforms start seeing it as a revenue centre – i.e. an integral part of how they attract and retain their users – rather than a cost centre.” In the meantime, reducing the cost of these technologies whilst increasing their effectiveness – and public trust in them – will be instrumental to their adoption.

“Safety innovation is happening all the time, but it doesn’t occur in a vacuum. The right policy and regulatory settings need to be in place and the right balance struck between a range of imperatives involved in ensuring digital trust, including security and privacy,” Inman Grant says.

The Global Coalition for Digital Safety is working with key stakeholders to advance a range of principles, technologies, tools and policy frameworks that provide a holistic approach to improving safety online. Technological advancements are a core, tangible way to begin seeing progress in improving digital spaces for users worldwide.
