YouTube has announced that its team of human content reviewers is working with machine learning technology to tackle violent extremist content and comments uploaded to the platform. This is part of a wider effort to ensure that content which doesn’t meet its guidelines is either prevented from being uploaded in the first place or removed as quickly as possible once it is flagged.
Machine Learning is key to tackling extremism on YouTube
YouTube CEO Susan Wojcicki said in a post on the official YouTube blog that since June, nearly 2 million videos had been reviewed by their Trust and Safety teams alongside machine learning technology. The human members of the team have been teaching the artificial intelligence technology how to identify extremist content.
This has been coupled with an increase in the number of human reviewers. The blog post indicated that Google aims to have 10,000 people working to address content that doesn’t meet its guidelines by the end of 2018, across Google’s products including YouTube.
Here are some key figures illustrating the impact machine learning has had at YouTube…
Since machine learning was adopted to tackle violent extremist content…
YouTube has removed over 150,000 videos featuring violent extremism
Human reviewers can review nearly 5 times as many videos as they could without the help of machine learning
Machine-learning algorithms flagged 98% of the videos removed for violent extremism
Nearly 70% of videos containing violent extremist content are taken down within 8 hours of upload, nearly 50% within 2 hours, and these speeds continue to improve.
The productivity of flagging and reviewing content has increased dramatically. Flagging the same amount of content that machine learning has helped identify since June would have required 180,000 people working 40 hours a week.
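The human-plus-machine workflow described above can be sketched with a toy example: human reviewers label a handful of items, a simple model learns from those labels, and new uploads are pre-flagged so reviewers can prioritise them. This is purely illustrative — the data and the `tokenize`/`train`/`classify` helpers below are hypothetical, and a toy Naive Bayes text classifier bears no relation to YouTube’s actual systems.

```python
from collections import Counter
import math

def tokenize(text):
    # Naive whitespace tokeniser; real systems use far richer features.
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs, label in {"flag", "ok"}.
    The labels stand in for human reviewers' decisions."""
    word_counts = {"flag": Counter(), "ok": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def score(model, text, label):
    # Log-probability of the text under the given label,
    # with add-one (Laplace) smoothing for unseen words.
    word_counts, label_counts = model
    total = sum(label_counts.values())
    log_p = math.log(label_counts[label] / total)
    vocab = set(word_counts["flag"]) | set(word_counts["ok"])
    denom = sum(word_counts[label].values()) + len(vocab)
    for word in tokenize(text):
        log_p += math.log((word_counts[label][word] + 1) / denom)
    return log_p

def classify(model, text):
    # Pre-flag the item with whichever label scores higher;
    # a flagged item would then go to a human reviewer.
    return max(("flag", "ok"), key=lambda lbl: score(model, text, lbl))

# Hypothetical reviewer-labeled training data.
labeled = [
    ("graphic violence propaganda", "flag"),
    ("violent extremist recruitment", "flag"),
    ("cute cat compilation", "ok"),
    ("cooking tutorial pasta", "ok"),
]
model = train(labeled)
print(classify(model, "extremist propaganda video"))  # prints "flag" on this toy data
```

The design mirrors the division of labour the article describes: humans supply the judgment (the labels), while the model scales that judgment across volumes no human team could review unaided.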
Increasing productivity & appeasing stakeholders
The platform’s responsibility to viewers, creators and advertisers is at the forefront of this effort to tackle violent extremism, and machine learning will also be used in the future to tackle issues including hate speech and child safety.
Wojcicki is refreshingly open about YouTube’s desire to ensure that their advertising partners are included in the conversation as well as the content creators, recognising that they cannot succeed without each other. Advertising allows content creators to make money from their videos, and successful creators enable advertisers to reach engaged audiences.
Stricter rules mean that some creators may see their monetisation opportunities diminish if their content doesn’t fit with advertisers’ brand values. On the other hand, more manual reviews of adverts will help ensure that ads appear alongside relevant and appropriate content.
Artificial Intelligence and the fight against extremism
It isn’t just YouTube that is using machine learning and artificial intelligence to tackle extremist content, fake profiles and hate speech.
Mounting pressure from European and US governments has led all of the major social media networks to invest in anti-extremism technology.
Twitter uses AI to discover and remove accounts that propagate terror-related content. In September 2017 it announced that 300,000 accounts had been taken down between January and June 2017, and that 95% of those accounts were flagged by its AI technology. Impressively, 75% of the flagged accounts were taken down before they could even send their first tweet.
At Facebook, Monika Bickert, head of global policy management, and Brian Fishman, head of counter-terrorism policy, wrote about the platform’s use of artificial intelligence to detect terror-related content.
“Today, 99 percent of the ISIS and al Qaeda-related terror content we remove from Facebook is content we detect before anyone in our community has flagged it to us, and in some cases, before it goes live on the site.”
These platforms will continue to face pressure from governments and their citizens to detect and remove extremist, violent, hateful and dangerous content from their sites. As artificial intelligence and machine learning develop and improve, flagging and detection will become faster and more accurate, leading to a safer online environment for the platforms’ users.
Human reviewers will remain essential to teaching the technology how to detect this content, which is why YouTube and other organisations are increasing their human resources as well as investing in their technology.
Date published: 24/04/2018