- A growing body of evidence illustrates the ways social media has radicalized users and resulted in violence.
- There is growing debate about how to balance the need to check extremism with the desire to enable freedom of expression.
According to a pair of reports published recently by a media watchdog, TikTok can swiftly channel young users from relatively benign interests to more troubling topics – and even into the arms of some of the extremist movements involved in the deadly attack on the US Capitol in January.
A steadily building body of evidence that radicalization by way of the internet is a very real and dangerous phenomenon appears to be reaching critical mass. So, what should be done about it?
Twitter CEO Jack Dorsey acknowledged during a congressional hearing last month that the social media service played a role in the white supremacist attack on the US Capitol. Dorsey told lawmakers that Twitter is working to address extremism and misinformation.
Earlier this month, YouTube made its first public disclosure of the percentage of views coming from videos later removed for rules violations including promoting violent extremism. But it stopped short of sharing what would likely be the “eye-popping” total number of views these videos garner before they disappear.
"As our digital and physical worlds continue to collide, our online safety - based on the content we create, see, and share - will become our personal safety, full stop."

— Farah Lalani, Community Curator, Media, Entertainment and Information Industries, World Economic Forum, and Cathy Li, Head of Media, Entertainment and Sport Industries, World Economic Forum
Social media’s algorithmic tentacles require surprisingly few prompts to pull people into a cascade of xenophobic, racist, anti-Semitic, and religious extremist messaging. The real-world results have piled up, and while some content has had only an indirect impact, it has been no less deadly – like the anti-science propaganda blamed for killing thousands.
According to a report published in December, social media platforms including YouTube helped radicalize the perpetrator of a 2019 terrorist attack on mosques in Christchurch, New Zealand that left 51 people dead. The report noted the assailant’s belief in the “Great Replacement” theory, which holds that white populations are being disempowered and replaced by people of colour – a theory also popular among rioters at the US Capitol.
A French writer is credited with popularizing the Great Replacement theory roughly a decade ago. Since then it has been heavily promoted online by groups like Generation Identity (Génération identitaire), which was banned by the French government last month.
In India, Facebook has struggled with its response to violent religious extremists; it refrained from banning them due to a fear of endangering the company’s staff and business prospects. In Australia, a government official recently drew a parallel between ways right-wing extremists there are recruiting online, and methods used by the Islamic State.
(While the Islamic State has suffered defeats in the Middle East, recent efforts to bolster its profile online have involved talk on forums of establishing a new caliphate in Africa.)
In Germany, one study found a direct link between anti-refugee sentiment online and violent attacks. It suggested that a right-wing political party's social media posts have likely pushed “some potential perpetrators over the edge.”
One way to try to curb online extremism is by deplatforming the most popular and troublesome instigators. However, they can often simply migrate to seedier corners of the internet and bring their followers with them.
Stiffer rules and regulations may therefore be in the works for an industry that's mostly been left to its own devices. One American lawmaker opened last month’s congressional hearing on social media’s role in promoting extremism by declaring that “self-regulation has come to the end of its road.”
The Global Alliance for Responsible Media, in partnership with the World Economic Forum, is working to improve the safety of digital environments, address harmful and misleading media and protect consumers and brands.
For more context, here are links to further reading from the World Economic Forum’s Strategic Intelligence platform:
- This US study based on interviews with white supremacists and Islamic extremists showed that in more than two-thirds of cases, exposure to propaganda via the internet and other media played a role in their radicalization. (RAND)
- Pakistan’s Shia community is being targeted, according to this report; between August and September 2020 nearly half of all social media mentions of Shias in the country were negative, and the most frequently used term was the Urdu word for “infidel.” (The Diplomat)
- We’re going to have to create a vocabulary to talk about how friends fell down the wrong YouTube hole and came out speaking another language, according to this piece – which argues for more explanation of how social networks prey on hopelessness and fear. (NiemanLab)
- India has banned dozens of Chinese apps amid tensions between the countries, and launched indigenously developed alternatives. According to this piece, it’s unclear whether these services are equipped to resist co-opting by extremist groups. (Observer Research Foundation)
- “No one wants to believe that they’ve created something terrible.” A reporter covering online disinformation and conspiracies delves into tech companies’ lack of accountability for amplifying our worst instincts. (NiemanLab)
- COVID-19 has left young Australians isolated and vulnerable to far-right hate messaging, according to this piece. It notes that all of the country’s “Five Eyes” intelligence-sharing partners have sought to list right-wing extremist organizations as terrorist groups, and argues that Australia should follow suit with the same rigour. (ASPI)
- When government efforts to clamp down draw criticism: India’s “draconian” new rules about what can be said online, ostensibly designed to combat misinformation, will have profound implications for privacy and freedom of expression, according to this analysis. (EFF)