Popular video-sharing website YouTube removed more than 8 million inappropriate videos during the last three months of 2017.
YouTube released a transparency report showing a high volume of inappropriate content being uploaded; however, automated flagging is speeding up the removal process.
It’s easy for the internet to get cluttered with spam and inappropriate content, which means major clean-up for big internet companies who receive massive amounts of uploads and traffic.
Seeking more transparency and less spam, Google, which purchased YouTube in 2006, has published an update regarding the ongoing removal of content that violates its policy. The company has released astonishing figures, along with a quarterly report on how Community Guidelines are being enforced.
According to Google, the eight million videos removed from the popular video-sharing platform were “mostly spam or people attempting to upload adult content,” and “represent a fraction of a percent of YouTube’s total views during this time period.”
Machines were the first to flag 6.7 million of these videos, and of those, 76% were removed before they received a single view. Automated flagging allows the company to act at scale, and it says the technology is paying off with faster removals in both high-risk, low-volume areas (like violent extremism) and high-volume areas (like spam).
More than half of the “violent extremism” videos removed had fewer than 10 views, whereas at the beginning of 2017 only eight percent did.
Although the deployment of machines may suggest a lesser need for humans, that has not been the case for YouTube. Its systems still rely on human review, and the company has been hiring accordingly.
“At YouTube, we’ve staffed the majority of additional roles needed to reach our contribution to meeting that goal. We’ve also hired full-time specialists with expertise in violent extremism, counterterrorism, and human rights, and we’ve expanded regional expert teams,” stated the company’s official blog.
As for this year’s goals, the company is committed to bringing the total number of people working on addressing violent content to 10,000 across Google. There are also plans to refine reporting systems and add more data, including data on comments, speed of removal and policy removal reasons.
Edited by Neo Sesinye