Timeline: Google’s path to tackling YouTube brand safety problems

Addressing brand safety took on a new urgency after an advertiser revolt in the spring of 2017. Here's the rundown of what the company has done to address advertiser concerns.

This post has been updated to include new efforts and updates on previously announced initiatives.

As adoption of programmatic and retargeting ad buying has grown, brand safety, while certainly a concern, has often taken a back seat to reach and the ability to follow customers and site visitors no matter where they went on the web. In March 2017, however, brand safety took on a new sense of urgency in the advertising community after ads were reported appearing next to extremist videos on YouTube, precipitating a boycott by more than 250 advertisers.

The boycott occurred at a time when Google and Facebook, in particular, had been under fire for months for facilitating the proliferation of fake news and extreme hyper-partisan content in the wake of the US presidential election cycle. Groups like Sleeping Giants had been publicly calling on advertisers to stop running their Google network ads on Breitbart.com, for example.

Rick Summers, Google’s global lead for publisher policies, told Marketing Land in April that in the summer of 2016, his team had noticed an increasingly aggressive tone online, with people feeling freer to lodge personal attacks and express hateful views. In November, Summers’ group updated the Misrepresentative Content policy to address the growing number of fake news sites popping up on the AdSense network with domains that mimic those of legitimate news outlets.

Additionally, advertisers had long been calling on Google (and Facebook) to provide greater transparency and third-party auditing of ad campaigns. In February, YouTube said it had initiated a Media Rating Council (MRC) audit of the data collection and measurement practices of DoubleVerify, Integral Ad Science and Moat, the third-party measurement firms already integrated with YouTube.

Since the advertiser revolt in mid-March, Google has taken several steps to improve brand safety controls and keep ads from appearing on offensive content on YouTube and sites in its ad networks. To help keep track of what happened and when, we’ve compiled the following timeline of events and actions that Google has taken since the spring of 2017.

The timeline

March 16: The Guardian reports that it pulled Google and YouTube advertising after its ads were spotted alongside extremist content, and that the British government found similar extremist ad adjacencies and summoned Google to address the problem.

March 17: Google’s UK managing director, Ronan Harris, responds in a blog post that the company “will be making changes in the coming weeks to give brands more control over where their ads appear across YouTube and the Google Display Network.”

March 20: As more UK brands report pausing ads on Google platforms, Google’s EMEA head, Matt Brittin, apologizes at an industry conference to advertisers that had been affected.

March 21: Google’s chief business officer, Philip Schindler, says in a blog post that the company would be “taking a tougher stance on hateful, offensive and derogatory content that is attacking or harassing people based on their race, religion, gender or similar categories. This change will enable us to take action, where appropriate, on a larger set of ads and sites” and implementing more controls to shore up advertiser confidence, including:

  • new default settings that exclude potentially objectionable content.
  • account-level site placement and YouTube channel exclusions (see the sketch after this list).
  • more fine-tuned controls.
  • new machine learning algorithms that can now find five times more non-brand-safe videos than before.
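For a rough sense of how an advertiser could apply the account-level placement and channel exclusions described above, here is a minimal sketch using the googleads Python client library for the AdWords API. The service name, API version and all placement and channel values are our assumptions for illustration, not code Google published, so treat it as a sketch rather than a definitive implementation.

```python
# Illustrative sketch: adding account-level placement and YouTube channel
# exclusions via the AdWords API's CustomerNegativeCriterionService.
# The API version and the placement/channel values below are assumptions
# for illustration, not details from Google's announcement.
from googleads import adwords

client = adwords.AdWordsClient.LoadFromStorage()  # reads googleads.yaml
service = client.GetService('CustomerNegativeCriterionService',
                            version='v201708')

operations = [
    {  # Exclude a website placement across the whole account.
        'operator': 'ADD',
        'operand': {
            'criterion': {'xsi_type': 'Placement',
                          'url': 'example-unsafe-site.com'}
        }
    },
    {  # Exclude a YouTube channel across the whole account
       # (hypothetical channel ID).
        'operator': 'ADD',
        'operand': {
            'criterion': {'xsi_type': 'YouTubeChannel',
                          'channelId': 'UC_EXAMPLE_CHANNEL_ID'}
        }
    },
]

result = service.mutate(operations)
print('Added %d account-level exclusions.' % len(result['value']))
```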

March 23: In response to a report by The Times of London, many large US brands follow UK brands’ lead, including Starbucks, Dish, AT&T and Pepsi. General Motors says it will advertise only on the YouTube home page, while Walmart and Johnson & Johnson say they’ll continue buying ads on YouTube Preferred channels.

April 6: YouTube updates its monetization eligibility rules. Channels must accrue 10,000 lifetime views before creators can be eligible for the YouTube Partner Program and their videos can be monetized. Once a channel reaches the threshold, it undergoes a new review process.

April 26: Google expands the scope of its so-called Hate Speech policy for AdSense and launches page-level actions for publishers. The policy now applies to dangerous and derogatory content, as well as content that promotes discrimination or disparages an individual or group based on any characteristic “associated with systemic discrimination or marginalization.”

Early May: The exact date isn’t clear, and it may have been somewhat earlier, but YouTube paused ads in its search results, known as TrueView discovery ads, in order to implement brand safety controls and give advertisers visibility into where their video ads appear. The ads are expected to come back online in Q3 2017.

May 15: Page-level actions, first applied to hate speech violations, can now be used for all AdSense policy violations. Google says it started working on the technology in 2015 and began testing it with publishers in the fall of 2016.

June 18: Google’s general counsel, Kent Walker, outlines four steps Google is taking to address extremist-related content on YouTube. Notably, videos no longer have to expressly violate a policy to be ineligible for ads. For example, videos that contain inflammatory religious or supremacist content may not violate the hate speech policy but will appear behind an interstitial warning. Videos that carry this type of warning are not eligible for advertising, user comments or endorsements.

July 21: YouTube’s partnership with Google tech incubator Jigsaw begins rolling out, bringing Jigsaw’s Redirect Method technology to the platform. When people search for keywords deemed to suggest positive sentiment toward an extremist organization such as ISIS, the Redirect Method has YouTube surface a playlist of videos aimed at “changing the minds” of people who might be at risk of recruitment.
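To make the mechanism concrete, here is a conceptual sketch of that keyword-triggered redirect flow in Python. YouTube’s actual implementation is not public, so every name, trigger term and video ID below is a hypothetical placeholder.

```python
# Conceptual sketch of the Redirect Method described above. All names and
# data are hypothetical placeholders; the real implementation is not public.

# Curated query terms that researchers associate with sympathy toward an
# extremist group (placeholder strings only).
REDIRECT_TRIGGER_TERMS = {"example recruitment phrase", "example slogan"}

# Counter-narrative playlist assembled by counter-extremism partners
# (placeholder video IDs).
COUNTER_NARRATIVE_PLAYLIST = ["vid_a1B2c3", "vid_d4E5f6", "vid_g7H8i9"]


def resolve_search(query, regular_search):
    """Return the counter-narrative playlist when the query matches a
    curated trigger term; otherwise fall back to normal search results."""
    normalized = " ".join(query.lower().split())
    if normalized in REDIRECT_TRIGGER_TERMS:
        return COUNTER_NARRATIVE_PLAYLIST
    return regular_search(query)


# A matching query surfaces the curated playlist instead of normal results.
print(resolve_search("Example Recruitment Phrase",
                     regular_search=lambda q: ["normal", "results"]))
```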

August 1: YouTube says its program to restrict the visibility of, and remove ads from, videos that may not expressly violate YouTube policies but are found to contain supremacist or inflammatory religious content will roll out in the coming weeks on desktop, followed by mobile. In other updates on the steps announced in June, YouTube says its machine learning systems are catching extremist content on the platform at twice the previous rate and volume, and that it is working with more than a dozen organizations, including the Anti-Defamation League, the No Hate Speech Movement and the Institute for Strategic Dialogue, to inform its policies and efforts to identify extremist content.

September 20: Google launches a $5 million innovation fund to support efforts to counter extremism.

October 18: YouTube says it has manually reviewed more than a million videos to improve its flagging technology for monitoring content. Over the past month, it says, more than 83 percent of the videos removed for violent content were taken down without human intervention.

We will continue to update this timeline as needed.


About the author

Ginny Marvin
Contributor
Ginny Marvin was formerly Third Door Media’s Editor-in-Chief, running the day-to-day editorial operations across all publications and overseeing paid media coverage. She wrote about paid digital advertising and analytics news and trends for Search Engine Land, Marketing Land and MarTech Today. With more than 15 years of marketing experience, she has held both in-house and agency management positions. She can be found on Twitter as @ginnymarvin.
