Only a global response can tackle the rise of online harms. Here's why
Non-consensual sharing of intimate images is on the rise. Image: Unsplash.
- Reporting of online harms increased significantly during the COVID-19 pandemic.
- Only a few governments around the world have started to seriously tackle online abuse.
- What more can be done to legislate against online harms and educate wider society?
The “new normal” of working, learning, and socialising from home during COVID-19 saw a significant rise in reports of online harms. According to Australia’s eSafety Commissioner, in 2020 reports of illegal online content increased by 90%, reports of non-consensual sharing of intimate images rose by 114%, and reports of online harassment and cyberbullying increased by approximately 40% and 30%, respectively.
While the global pandemic significantly increased the prevalence and reporting of online harms, many of these problems are not new. A world-first international study published in 2020 on the non-consensual sharing of intimate images across Australia, New Zealand, and the UK found that, among 6,109 respondents, 1 in 3 had experienced image-based sexual abuse, with ethnically and sexuality diverse groups experiencing higher rates of victimisation.
A 2019 report on deepfakes (non-consensual computer-generated fake videos) found that 96% of deepfakes are pornographic, and that 100% of those pornographic deepfakes depict women. According to research by the Pew Research Center, approximately 1 in 4 US adults have experienced online harassment, with more severe encounters reported since 2017.
These countries are responding to online harms
For over 20 years, social media companies have remained largely unregulated by governments, and only recently have countries around the world begun to seriously tackle online harms such as cyber-abuse, cyberbullying, image-based sexual abuse (the non-consensual sharing of intimate images) and, more recently, deepfake abuse.
In the UK, the US, and Australia, governments have recently made landmark, world-first legal efforts to tackle online harms. So, how are they responding?
United Kingdom
In the UK, a recent Online Safety Bill was published proposing a new statutory duty of care for social media companies toward their users, including a duty to undertake an “illegal content risk assessment” to assess the level of risk of users encountering terrorism content, child sexual exploitation and abuse content, and illegal content, among other things. This duty of care would also require social media companies to take “proportionate steps to mitigate and effectively manage the risks of harm to individuals.”
The UK’s Online Safety Bill also proposes to empower OFCOM, the UK’s communications regulator, with online safety regulatory powers to impose on certain regulated services, including social media companies, a maximum penalty of the “greater of £18 million and 10% of the person’s qualifying worldwide revenue” for failing to comply with such duties.
Australia
In Australia, taxpayers fund a dedicated online safety regulator tasked with tackling online harms. Australia’s Office of the eSafety Commissioner is responsible for responding to reports of the non-consensual sharing of intimate images. Under Australia’s intimate image abuse regime, the eSafety Commissioner has the power to issue fines of AUD$105,000 to individuals who share intimate images without consent, while companies that fail to comply with removal requests can be fined AUD$525,000.
In 2021, new online safety legislation was passed in Australia to “create a new complaints-based, removal notice scheme for cyber-abuse” perpetrated against Australian adults. The Australian eSafety Commissioner will have the power to ensure that social media companies “take all reasonable steps” to remove the cyber-abuse material within 24 hours. Failure to comply will be subject to a civil penalty.
United States
In the US, the Violence Against Women Reauthorization Act of 2021 passed the House and has been received in the Senate. The proposed federal legislation includes the SHIELD Act (Stopping Harmful Image Exploitation and Limiting Distribution Act) as an amendment, which proposes to criminalise distributing, and intentionally threatening to distribute, non-consensual intimate visual depictions of an individual, punishable by imprisonment of up to two years and a fine.
Are these responses enough?
Countries around the world are making significant strides to enact and implement legal, regulatory and policy frameworks and infrastructure to keep their citizens safe online, as well as hold tech companies accountable for the online harms that occur on their platforms. But we are living in an increasingly globalised and interconnected world, and one of the most significant challenges the global community must reckon with is the urgent need for a global and collaborative response to online harms. Governments, law enforcement and global industries must tackle these issues together, not only to hold perpetrators accountable, but to provide access to justice for victims and survivors.
Introducing domestic legislation to tackle online harms is a great start, but its effectiveness is limited by jurisdictional issues. Unless those issues are overcome, domestic laws will always fall short, because tackling a borderless, global problem requires a borderless, global response. In 2021, it won’t be enough for countries to criminalise online harms, remove illegal content, and require social media companies to comply with removal requests without also holding perpetrators accountable across national borders. Addressing accountability and enforcement across borders will remain a significant hurdle as countries continue to respond to online harms.
What more needs to be done?
Aside from the need for a global, collaborative response, domestic legislative, regulatory, and policy efforts will only go so far in the fight against online harms. We also need a multi-faceted, whole-of-society approach: education in schools and bystander education initiatives, specialist training for law enforcement, trauma-informed counselling services for victims and survivors, employment policies and practices to assist victims and survivors, and compensation for victims. A whole raft of solutions ought to be implemented domestically and internationally, and based on recent movements in this area to create a safer online and offline world for all, the future is looking up.
The World Economic Forum’s Global Future Council on Data Policy is leading a multistakeholder initiative aimed at exploring these issues, Pathways to Digital Justice, in collaboration with the Global Future Council on Media, Entertainment and Sport and the Global Future Council on AI for Humanity. To learn more, contact Evîn Cheikosman at evin.cheikosman@weforum.org.