Artificial Intelligence

YouTube's AI is so good at finding offensive content it needs more staff to keep up


YouTube is using algorithms to flag and remove questionable content. Image: REUTERS/Dado Ruvic/Illustration

David Meyer

YouTube has for the first time published a report detailing how many videos it takes down for violating the platform's policies—and it's a really big number.

The Alphabet-owned site removed more than 8 million videos during the last quarter of 2017. But how did it decide to take them down? Machine learning technology played a big role.

According to YouTube, machines rather than humans flagged more than 83% of the now-deleted videos for review, and more than three quarters of those videos were taken down before they received any views. The majority were spam or porn.

Machine learning—or AI, as the tech industry often likes to call it—involves training algorithms on data so that they can spot patterns and take actions by themselves, without human intervention. In this case, YouTube uses the technology to automatically spot objectionable content.
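The principle can be sketched with a toy text classifier: train on a handful of hand-labeled video titles, then let the model score new uploads on its own. Everything below—the titles, labels, and function names—is hypothetical, and production systems are vastly more sophisticated, but the learn-patterns-from-data idea is the same.

```python
# Toy sketch of learning to flag content from labeled examples.
# All training data and names here are hypothetical; this illustrates
# the general principle, not YouTube's actual system.
import math
from collections import Counter

training = [
    ("win free money click now", "spam"),
    ("cheap pills buy here today", "spam"),
    ("free prize claim your reward", "spam"),
    ("how to bake sourdough bread", "ok"),
    ("guitar lesson for beginners", "ok"),
    ("my trip to the mountains vlog", "ok"),
]

def train(data):
    """Count words per class -- the 'patterns' the model learns."""
    word_counts = {label: Counter() for _, label in data}
    class_counts = Counter()
    vocab = set()
    for text, label in data:
        class_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Naive Bayes with add-one smoothing: pick the likeliest class."""
    total = sum(class_counts.values())
    best_label, best_score = None, -math.inf
    for label in class_counts:
        score = math.log(class_counts[label] / total)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, class_counts, vocab = train(training)
print(classify("claim your free money now", word_counts, class_counts, vocab))
print(classify("sourdough bread baking video", word_counts, class_counts, vocab))
```

Once trained, such a model scores every new title with no human in the loop, which is why these systems scale to millions of uploads—though, as the article notes, borderline cases still need human judgment.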

In a blog post, the YouTube team said the use of the technique had a big effect.

Take videos containing “violent extremism,” which is banned on the platform: in early 2017, only 8% of such videos were flagged and removed before they reached 10 views. After YouTube started using machine learning for flagging in the middle of the year, “more than half of the videos we remove for violent extremism have fewer than 10 views,” the team said.

However, the use of machine learning does raise serious questions about content being taken down that should stay up—some depictions of violent extremism, for example, may be satire or just reportage.

Several news organizations, such as Middle East Eye and Bellingcat, found late last year that YouTube was taking down videos they had shared, depicting war crimes in Syria. Bellingcat, which played a key citizen-journalist role in investigating the downing of Malaysia Airlines Flight 17 over Ukraine in 2014, found its entire channel suspended.


“With the massive volume of videos on our site, sometimes we make the wrong call. When it’s brought to our attention that a video or channel has been removed mistakenly, we act quickly to reinstate it,” YouTube said at the time.

In its Monday blog post, YouTube said its machine learning systems still require humans to review potential content policy violations, and the volume of videos being flagged by the technology has actually increased staffing requirements.

“Last year we committed to bringing the total number of people working to address violative content to 10,000 across Google by the end of 2018,” the team said. “At YouTube, we’ve staffed the majority of additional roles needed to reach our contribution to meeting that goal. We’ve also hired full-time specialists with expertise in violent extremism, counterterrorism, and human rights, and we’ve expanded regional expert teams.”


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.



© 2024 World Economic Forum