YouTube CEO Susan Wojcicki published a blog post this evening, addressing some of the ongoing controversies around trust and safety that have roiled the video platform over the last year. Wojcicki began by emphasizing all the ways in which she has seen YouTube become a force for good over the last decade. But the YouTube chief exec also acknowledged that she’s “seen up-close that there can be another, more troubling, side of YouTube’s openness.”
Wojcicki is referencing the variety of extremist videos, child exploitation schemes, and other disturbing content on YouTube that has resulted in massive backlash and pulled advertising campaigns. “I’ve seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm,” she added.
Wojcicki says that YouTube has learned some valuable lessons from its efforts to stamp out extremist content. She writes that YouTube has removed over 150,000 videos for violent extremism since June of this year, when it began using machine learning techniques to help identify extremist videos. The company says that 98 percent of the videos it removes are flagged by its algorithms and that 70 percent of violent extremist content is taken down within eight hours of being posted.
The plan is to take those same machine learning techniques and turn them toward the issues surrounding child safety and hate speech. Of course, there is a clear database of extremist content that machine learning systems can work from. Deciding when a children’s video crosses the line from strange to inappropriate to exploitative, or when a video moves from angry opinion to hate speech, may be much harder for an algorithm. So the company is promising to add more humans to the mix as well. Wojcicki writes that the company is “bringing the total number of people across Google working to address content that might violate our policies to over 10,000 in 2018.”
As Google has worked to tighten its policies and tweak the algorithms that police content on the platform, many creators have seen safe, inoffensive videos lose the ability to earn money from advertising. Wojcicki promised to provide creators with more transparency and tools to help get their business restored if they believe their videos have been flagged in error.
Finally, Google has seen big advertisers leave the platform twice this year, once following a Wall Street Journal report about marketing from major brands playing next to hate speech and extremism, and a second time after a report highlighted how ads were running beside videos that were rife with creepy comments from pedophiles.
Wojcicki says YouTube will narrow the group of videos that are eligible for advertising. “We are planning to apply stricter criteria, conduct more manual curation, while also significantly ramping up our team of ad reviewers to ensure ads are only running where they should,” she wrote. “This will also help vetted creators see more stability around their revenue. It’s important we get this right for both advertisers and creators, and over the next few weeks, we’ll be speaking with both to hone this approach.”
Because YouTube’s finances aren’t broken out in detail when Google parent company Alphabet reports its quarterly earnings, it’s hard to know how big these advertiser boycotts have been, or whether they have had any meaningful impact on the company’s bottom line. So far the hit, if there was one at all, appears minimal. And reports from this summer indicated that almost all the big brands that had spoken out have since returned.
Still, two major crises in one year may sour some marketers on YouTube for good. A lengthy op-ed from YouTube’s CEO is a clear sign that the company is concerned about the issue, its impact, and its optics.