
YouTube puts human content moderators back to work


The humans are tagging in.

YouTube is reassigning the work of content moderation to more actual humans, the company's chief product officer, Neal Mohan, told the Financial Times.

At the start of the pandemic, YouTube had to reduce the staff and workload of its in-office human moderators. So rather than relying on that 10,000-person workforce, the company gave broader content moderation power to automated systems able to recognize videos with harmful content and remove them immediately.

That led to the removal of 11 million videos between April and June, a higher number than usual. However, YouTube's AI systems erred on the side of caution, which meant they also took down many videos that broke no rules.

According to the FT, YouTube reversed its content moderation decisions on 160,000 of those videos. Usually, YouTube reverses its rulings on fewer than 25 percent of appeals; under AI moderation, about half of all appeals were successful.

“One of the decisions we made [at the beginning of the pandemic] when it came to machines who couldn’t be as precise as humans, we were going to err on the side of making sure that our users were protected, even though that might have resulted in a slightly higher number of videos coming down,” Mohan said.

Now, the company is able to reassign some of that work back to humans, who can make more nuanced decisions. Since the coronavirus pandemic is still raging, Mashable has reached out to YouTube to learn how this is possible and what has changed for in-person staff. We've also asked how many more human moderators are getting back to work, and how they will work with AI systems (typically, humans review videos that AI has initially flagged).

We’ll update this story when and if we hear back.
