Google and Facebook must do more for brand safety
September 21, 2017 | By Mark Glaser, Founder and Publisher – MediaShift (@mediatwit)

With the great power that Facebook and Google have accumulated in online advertising comes great responsibility. For years, the two companies tried to stay neutral, casting themselves as platforms that don’t make editorial decisions. That has now changed dramatically, as they have been forced to act on many fronts, from fake news and spammy websites to Russian interference in the U.S. election. But have they done enough?
The explosion of fake news on Facebook, and its potential influence on the U.S. election last November, initially put the social media giant on the defensive. Facebook has since admitted that it found about $100,000 in ad spending on the platform from June 2015 to May 2017 connected to inauthentic accounts that likely operated out of Russia. Google, too, faced heavy backlash after brands realized their advertising was appearing alongside racist and extremist videos on YouTube and other sites, effectively marking them as supporters of hate.
It’s no wonder, then, that Facebook and Google are making bolder moves to restrict algorithmic ad targeting. But the ongoing revelations of how their advertising can backfire cast a harsh light on the pitfalls of programmatic and self-service ads, and on the checks needed to keep brands safe.
Targeting racists with ads
ProPublica’s damning report on how Facebook enabled advertisers to reach audiences who had expressed interest in “Jew hater,” “How to burn jews,” or “History of ‘why jews ruin the world’” cast an international spotlight on a company already under intense scrutiny. Acting on a tip, ProPublica reporters spent $30 on Facebook’s automated advertising platform to target these audiences, which were admittedly tiny, though Facebook did suggest additional categories that might boost the audience size. Facebook’s automated platform approved the targeted ads within 15 minutes. Only after ProPublica informed the company did Facebook remove the anti-Semitic categories.
BuzzFeed’s Alex Kantrowitz then piled on, discovering that Google allowed advertisers to target people who had typed racist, bigoted, and derogatory terms into its search bar. Google’s ad-buying tool would also suggest similarly loaded terms.
And because Google’s AdSense monitors content at the page level rather than the site level, brands run the risk of their advertising appearing on the ostensibly “safe” pages of extremist sites, despite Google’s efforts to monitor hate speech.
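To see why that page-level scope matters, here is a minimal sketch of the gap between the two kinds of checks. Everything in it is hypothetical: the term list, the function names, and the toy keyword filter are illustrations of the idea only, and bear no resemblance to AdSense’s actual review systems.

```python
# Hypothetical sketch: why page-level brand-safety checks can miss
# site-level context. These names are invented for illustration;
# they are not Google's AdSense.

UNSAFE_TERMS = {"slur_a", "slur_b"}  # placeholder stand-ins for hateful terms

def page_is_safe(page_text: str) -> bool:
    """Page-level check: looks only at the text of a single page."""
    words = set(page_text.lower().split())
    return not (words & UNSAFE_TERMS)

def site_is_safe(pages: list[str]) -> bool:
    """Site-level check: one unsafe page taints the whole site."""
    return all(page_is_safe(p) for p in pages)

site_pages = [
    "a harmless recipe page",          # the "safe" page a brand's ad lands on
    "a page full of slur_a rhetoric",  # extremist content elsewhere on the same site
]

# A page-level filter happily approves the first page for ads...
print(page_is_safe(site_pages[0]))  # True
# ...even though a site-level view would block the whole domain.
print(site_is_safe(site_pages))     # False
```

The point of the toy example is the scope of the check, not the filtering technique: as long as each page is evaluated in isolation, an extremist site can still earn revenue through its innocuous pages.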
Are current restrictions enough?
The tech giants have taken action, including removing those search terms from ad targeting after the stories ran. But they are in a Catch-22: the more they restrict, the more they become arbiters of free speech versus hate speech, a role not even they want.
Facebook, for one, said it was adding new standards, enforced by a combination of human and automated review, to keep advertising away from fake news videos and objectionable content, and to keep that content from being monetized. Facebook has 5 million advertisers on the platform, and its newest ad opportunities will come through Watch, its new video section, as the company pivots further toward video and in-stream video advertising.
Facebook has also announced “monetization eligibility standards” to give clearer guidance on which content, publishers, and video creators can profit from advertising. And it says it will start releasing “post-campaign reports” to advertisers, outlining where their ads actually appeared, as part of a broader effort to monitor monetization on the platform.
Google, for its part, now requires that a YouTube channel reach 10,000 lifetime views before it can make money from ads, a threshold it hopes will better police extremist and hateful content and calm advertisers’ fears. After BuzzFeed’s report, Google senior vice president of advertising Sridhar Ramaswamy admitted the company had to step up. “In this instance, ads didn’t run against the vast majority of these keywords, but we didn’t catch all the offensive suggestions. That’s not good enough and we’re not making excuses. We’ve already turned off these suggestions, and any ads that make it through, and will work harder to stop this from happening again.”
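As a rough illustration, that view threshold amounts to a simple eligibility gate placed in front of monetization. The sketch below uses made-up field names and folds policy review into a single boolean; YouTube’s real partner review is far more involved than this.

```python
# Hypothetical sketch of a monetization gate like the one described above:
# a channel must clear a lifetime-view threshold before ads can run.
# All names here are illustrative, not YouTube's actual API.

from dataclasses import dataclass

MONETIZATION_VIEW_THRESHOLD = 10_000  # the threshold cited in the article

@dataclass
class Channel:
    name: str
    lifetime_views: int
    passed_policy_review: bool  # stand-in for human/automated content review

def is_monetizable(channel: Channel) -> bool:
    """Ads run only after the channel clears views AND policy review."""
    return (channel.lifetime_views >= MONETIZATION_VIEW_THRESHOLD
            and channel.passed_policy_review)

print(is_monetizable(Channel("new_creator", 4_200, True)))     # False: too few views
print(is_monetizable(Channel("flagged_channel", 90_000, False)))  # False: fails review
print(is_monetizable(Channel("established", 25_000, True)))    # True
```

The design idea is that the view threshold buys reviewers time: a channel cannot earn anything until it has enough of a track record to be evaluated.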
Not to be outdone, especially after news reports revealed ad campaigns on its platform built around derogatory terms, Twitter has announced that it has fixed what it calls the “bug” that allowed some advertisers to target audiences using racial epithets and offensive terms.
Finding the right guide
But the elephant in the room is whether these companies should be doing the policing themselves, or whether a third-party group (or even the government) needs to step in to ensure accountability. More human oversight is an obvious answer to the problems of automated advertising. But consider what might have happened had ProPublica, BuzzFeed, and other news organizations not stepped in to test what kind of ad targeting was possible: those targeting options would, in all likelihood, still be available.
Google, Facebook, and the other major advertising platforms have an incentive to clean things up only when their own brand safety comes into question. Creating more collaborations with third parties, as the platforms have done in the fight against fake news, seems all the more necessary to ensure brand safety for everyone.