Google has a problem with YouTube. After many of its largest advertisers left the video platform upon seeing their brands associated with content promoting hate and racism, the service initiated a long series of changes to avoid losing even more money in the future.
One of the first measures was YouTube’s Restricted Mode, and more recently the company published new policies explaining which types of inappropriate content cannot be monetized. Now it has announced four more steps to combat online terror and eradicate extremist content on YouTube.
A combination of artificial intelligence and human labor
First, Google will devote more engineering resources to its machine learning research to detect and remove videos containing extremist and terrorism-related content. According to Google, its current models can already detect more than 50% of these videos, and it will now train more “content classifiers” to identify and remove inappropriate content faster.
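To make the idea of a “content classifier” concrete, here is a minimal keyword-matching sketch in Python. To be clear, YouTube’s actual classifiers are proprietary machine-learning models, not keyword filters; the function name, watchlist, and example metadata below are all invented for illustration.

```python
# Illustrative toy "content classifier": flags a video when its
# metadata matches a watchlist term. Real systems learn these
# signals from training data instead of using a hand-made list.

FLAGGED_TERMS = {"extremist", "terror", "violence"}  # hypothetical watchlist


def classify_metadata(title: str, description: str) -> bool:
    """Return True if the video metadata contains any flagged term."""
    text = f"{title} {description}".lower()
    return any(term in text for term in FLAGGED_TERMS)


# Example usage with made-up metadata:
print(classify_metadata("Cute cats compilation", "funny pets"))      # False
print(classify_metadata("Join the cause", "extremist recruitment"))  # True
```

A trained model would replace the fixed term list with learned features, but the interface is similar: metadata (and video frames or audio) in, a flag-or-not decision out.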
Second, YouTube will increase the number of independent experts who flag videos. The program will expand by adding 50 expert non-governmental organizations to the 63 organizations already participating, which work on issues such as hate speech, self-harm, and terrorism.
Third, YouTube will be tougher on videos that do not clearly violate its policies but contain inflammatory religious or supremacist content. In the future, such videos will appear behind a warning before they can be played, cannot be monetized, and will be harder to find on the site.
Finally, YouTube will expand its role in efforts to combat radicalization through its “Creators for Change” program, mainly in Europe, perhaps because the continent has pushed hardest to force social networks to eradicate hate speech. In addition, YouTube has committed to working with other big companies such as Facebook, Microsoft, and Twitter to fight terrorism online.