Expanded safeguards for advertisers
The web has opened a door for new communities and platforms that help people find diverse views and have a voice. Today, anyone with a smartphone can be a content creator, app developer or entrepreneur. And Google has enabled millions of content creators and publishers to be heard, find an audience, earn a living, or even build a business. Much of this is made possible through advertising. Thousands of sites are added every day to our ad network, and more than 400 hours of video are uploaded to YouTube every minute. We have a responsibility to protect this vibrant, creative world—from emerging creators to established publishers—even when we don’t always agree with the views being expressed.
But we also have a responsibility to our advertisers who help these publishers and creators thrive. We have strict policies that define where Google ads should appear, and in the vast majority of cases, our policies and tools work as intended. But at times we don’t get it right.
Recently, we had a number of cases where brands’ ads appeared on content that was not aligned with their values. For this, we deeply apologize. We know that this is unacceptable to the advertisers and agencies who put their trust in us. That’s why we've been conducting an extensive review of our advertising policies and tools, and why we made a public commitment last week to put in place changes that would give brands more control over where their ads appear.
I wanted to share that we've already begun ramping up changes around three areas: our ad policies, our enforcement of these policies and new controls for advertisers.
Raising the bar for our ad policies
We know advertisers don't want their ads next to content that doesn’t align with their values. So starting today, we’re taking a tougher stance on hateful, offensive and derogatory content. This includes removing ads more effectively from content that is attacking or harassing people based on their race, religion, gender or similar categories. This change will enable us to take action, where appropriate, on a larger set of ads and sites.
We’ll also tighten safeguards to ensure that ads show up only against legitimate creators in our YouTube Partner Program—as opposed to those who impersonate other channels or violate our community guidelines. Finally, we won’t stop at taking down ads. The YouTube team is taking a hard look at our existing community guidelines to determine what content is allowed on the platform—not just what content can be monetized.
Increased brand safety levels and controls for advertisers
Every company has brand guidelines that inform where and when they want their ads to appear. We already offer some controls for advertisers that respond to these needs. In the coming days and months, we’re introducing new tools for advertisers to more easily and consistently manage where their ads appear across YouTube and the web:
- Safer default for brands. We’re changing the default settings for ads so that they show on content that meets a higher level of brand safety and excludes potentially objectionable content that advertisers may prefer not to advertise against. Brands can opt in to advertise on broader types of content if they choose.
- Simplified management of exclusions. We’ll introduce new account-level controls to make it easier for advertisers to exclude specific sites and channels from all of their AdWords for Video and Google Display Network campaigns, and manage brand safety settings across all their campaigns with a push of a button.
- More fine-tuned controls. In addition, we’ll introduce new controls to make it easier for brands to exclude higher risk content and fine-tune where they want their ads to appear.
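To give a concrete picture of what an account-level exclusion looks like in practice, here is a minimal, hypothetical sketch of blocking a single site across every campaign in an account. It is written against the Google Ads API Python client, an interface that postdates this announcement; the customer ID, URL and configuration file are placeholders, not details from the original post.

```python
# Minimal sketch, not an official example: an account-wide placement exclusion
# using the google-ads Python client. The Google Ads API postdates this
# announcement; the customer ID and URL below are placeholder values.
from google.ads.googleads.client import GoogleAdsClient


def exclude_placement_account_wide(client: GoogleAdsClient, customer_id: str, url: str) -> str:
    """Block ads from serving on one site across all campaigns in the account."""
    service = client.get_service("CustomerNegativeCriterionService")
    operation = client.get_type("CustomerNegativeCriterionOperation")
    criterion = operation.create
    criterion.placement.url = url  # the site (placement) to exclude account-wide
    response = service.mutate_customer_negative_criteria(
        customer_id=customer_id, operations=[operation]
    )
    return response.results[0].resource_name


if __name__ == "__main__":
    # Credentials are loaded from a local google-ads.yaml configuration file.
    ads_client = GoogleAdsClient.load_from_storage("google-ads.yaml")
    print(exclude_placement_account_wide(ads_client, "1234567890", "http://example.com"))
```

The same pattern would extend to channel-level exclusions by setting a YouTube channel criterion instead of a placement URL, where the account's API version supports it.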
We’ll offer advertisers and agencies more transparency and visibility on where their ads are running, and in the coming months we’ll expand availability of video-level reporting to all advertisers.
Increasing resources, accelerating reviews and improving transparency
We'll be hiring significant numbers of people and developing new tools powered by our latest advancements in AI and machine learning to increase our capacity to review questionable content for advertising. In cases where advertisers find their ads were served where they shouldn’t have been, we plan to offer a new escalation path to make it easier for them to raise issues. In addition, we’ll soon be able to resolve these cases within a few hours.
We believe the combination of these new policies and controls will significantly strengthen our ability to help advertisers reach audiences at scale, while respecting their values. We will continue to act swiftly to put these new policies and processes in place across our ad network and YouTube. But we also intend to act carefully, preserving the value we currently provide to advertisers, publishers and creators of all sizes. In the end, there’s nothing more important to Google than the trust we’ve built amongst our users, advertisers, creators and publishers. Brand safety is an ongoing commitment for us, and we’ll continue to listen to your feedback.