When it comes to cheating advertisers out of their ad spend and stealing revenue from publishers, fraudsters are upping the ante. By monitoring invalid traffic (IVT) associated with digital advertising, our team tracked an unrelenting, sophisticated operation first publicly reported as 404bot.
By using a broad range of tools, our researchers were able to pinpoint the actual proxy software that had been installed on consumer PCs. This key discovery allowed us to closely track the fraud as it morphed. We found this scheme to be particularly devious for two reasons:
The offending program was typically installed unknowingly through various seemingly safe entry points. (This is often referred to as a PUP, or potentially unwanted program.) It would then turn the downloader’s computer into a botnet node carrying out ad fraud in the background.
Much like software companies roll out bug fixes and updates to their code, the scheme similarly evolved its mechanisms for fraud. Each adjustment was made with the intent to elude detection.
The following is a debrief on the uncovering of the scheme. We outline what tipped us off to foul play. We also discuss the steps we are taking to mitigate the activity and why it’s so important to monitor invalid traffic to protect all sides of the programmatic supply chain.
The deep dive into botnet activity
In mid-2019, our team observed suspicious botnet activity. We spent weeks investigating and tracking various suspicious identifiers surrounding this activity. Finally, we were able to pinpoint a unique signature. We then matched it to a binary of a desktop application called NotToTrack. This free VPN, readily available for any consumer to download, masked itself as software meant to secure the installer’s computer.
Our team was able to obtain the VPN’s actual malware binary. We ran it in our clean room, where we can safely download malware-ridden software and de-obfuscate code. This allowed us to:
Record its web activity and thus reconstruct the behavior seen in impressions from the relevant time period
Validate that it is based on Chrome Embedded Framework (CEF), a popular framework for web based applications that can be used both legitimately and by bots
Develop new detection methods based on these observations
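The last step above, turning observed behavior into detection, can be sketched as a simple signature match over impression logs. The following is a minimal, hypothetical illustration only: the marker names and record layout are invented for this example and are not the actual indicators or detection logic used in the investigation.

```python
# Hypothetical sketch: flag impressions whose observed markers contain
# every element of a known bot signature. Marker names are placeholders,
# not real indicators from the investigation.

SIGNATURE = {
    "cef_renderer_quirk",       # placeholder: a CEF-specific rendering artifact
    "fixed_viewport_1024x768",  # placeholder: an invariant, unrealistic screen size
}

def looks_like_bot(impression: dict) -> bool:
    """Return True when the impression exhibits the full signature."""
    return SIGNATURE.issubset(set(impression.get("markers", [])))

# Toy impression log: only the first record matches the whole signature.
impressions = [
    {"id": 1, "markers": ["cef_renderer_quirk", "fixed_viewport_1024x768"]},
    {"id": 2, "markers": ["fixed_viewport_1024x768"]},
]
flagged = [imp["id"] for imp in impressions if looks_like_bot(imp)]  # [1]
```

Real systems combine many such weak signals probabilistically rather than requiring one exact signature, but the core idea, matching traffic against behavior recorded from a known binary, is the same.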
The discovery of this VPN and its executable file allowed us to see the full breadth of the operation. What we found was very persistent, targeted ad fraud.
Summary of the bot’s evolution
The VPN front was just one point of entry for the operation’s fraud. It was probably not even the most common distribution vector.
Following this newly discovered evidence, we found forum complaints indicating that this operation has been underway since as early as November 2016. Based on the evidence we’ve gathered, we also believe the origins of this bot are the same as 404bot, a recent fraud scheme made public by another verification provider.
However, our findings reveal that this bot has not ceased its fraudulent activity. And as it attempts to elude detection, it has moved from its initial domain spoofing practice onto other mechanisms.
In the graph, we see the subset of spoofed domains tracked by Integral Ad Science (blue) alongside the rest of the malware-originated activity that Moat has been tracking (orange). Where the domain spoofing ceases (blue activity falls flat), we see the fraudulent impression activity actually has continued as the perpetrators evolved their tactics.
What is the impact that we are seeing?
The question of impact always comes up when discussing the discovery of an ad fraud scheme. This is usually in relation to stolen impressions and ultimately, lost ad spend. But even as we bring the pieces of the puzzle together, the true size of any botnet is hard to measure. So, we caution against assigning too much value to a market impact analysis.
Even more important is how this bot illustrates the ability for these schemes to quickly mutate. So when the conversation turns to impact, we firmly believe it should not be about size, especially as it relates to protecting our clients and doing what is right for the industry. A botnet that is “small” by our measurement today could easily evolve into a much larger threat tomorrow.
What we do know, from the portion of traffic we are able to measure, is that at its peak, this bot was clocking in at around 11 million impressions per day. And, as recently as March, it was still showing activity at just under 2 million impressions.
Further, the way this bot operates—targeting the computers of unknowing consumers through shifty mechanisms—is particularly invasive. The distribution of malware can have severe consequences for the affected machines. The complaints of users who unknowingly downloaded the offending proxy software mention various flag-raising behaviors. For example, at the Bleeping Computer forum a user describes, “I have a bunch of programs running that I did not allow to start… My computer in-turn slows down due to all these programs running their bleep [sic]. Earlier today, I had 3 virus programs scanning my computer. I never downloaded, let alone installed them.”
How to protect your investments from IVT and ad fraud
Unfortunately, ad fraud is a pervasive and pesky byproduct of programmatic advertising. This is thanks to an environment that affords ample anonymity as well as produces a significant amount of profit. Having meaningful data is the first step in enabling us all to ask the right questions about these schemes that will continue to occur.
Moat has numerous methods for detecting various forms of IVT. As we identify and track fraudulent botnets, we use our findings to understand how they evolve. We continuously develop detection mechanisms that can identify and separate bot activity from human activity. We believe everyone in the ecosystem has a role to play in eliminating ad fraud and addressing IVT. It is through continued collaboration, information sharing, and advancing our technology that we will build trust and, ultimately, stop the cheaters from lining their pockets.
On January 29th, at the 2019 DCN Next: Summit, Rappler CEO Maria Ressa outlined the role social media and concerted, well-orchestrated disinformation campaigns played in perpetuating false information and media distrust in the Philippines, as well as attacks aimed at Rappler.
She then went on to have a wide-ranging discussion examining the various pressures on media credibility (and safety) worldwide with interviewer extraordinaire Kara Swisher, Co-founder of Recode.
Less than two weeks later, on Wednesday, February 13 at 5 p.m. local time in Manila, plainclothes officers from the National Bureau of Investigation, an agency within the Department of Justice, arrested Ressa on charges of cyber libel. As Ressa wrote in a statement: “We are not intimidated. No amount of legal cases, black propaganda, and lies can silence Filipino journalists who continue to hold the line. These legal acrobatics show how far the government will go to silence journalists, including the pettiness of forcing me to spend the night in jail.”
The Board of Directors of Digital Content Next (DCN), a trade association representing nearly 80 high-quality media companies, said, “The arrest of Maria Ressa is deeply troubling. Maria traveled to the U.S. to share her developing story with our members only two weeks ago. It is vital we value and protect the independence of media organizations and journalists around the world. Any effort to silence journalists or use intimidation to reduce their reporting is an affront to freedom. We encourage global leaders and the press community to make it clear this cannot be tolerated.”
In light of Ressa’s arrest, and to reinforce our support of a free press everywhere, DCN is pleased to share the video of Ressa and Swisher’s interview (full transcript below):
And, for those who would like to show support for Rappler and Ressa’s work, she has provided a link to their crowdfunding page.
Below, we’ve shared a full transcript of Ressa’s conversation with Swisher.
Alexandra Roman: [00:00:00] I am truly honored to introduce this next conversation: interviewer extraordinaire, Recode’s Kara Swisher. She’ll be speaking with a very special person in our world these days. Named Time magazine’s Person of the Year as one of the guardians of journalism, please welcome the CEO of Rappler, Maria Ressa.
Kara Swisher: [00:00:33] So we’re going to start… First, Maria is going to make a presentation, then we’re gonna have a full, fantastic discussion. Maria was on my podcast recently. It was an amazing experience for me, and I’m so glad she’s here and safe in the United States right now. We’ll be talking about that more. But first, Maria, go ahead.
Maria Ressa: [00:00:52] So I like that Jason [Kint, CEO of DCN] talked about trust. And this is stuff I’ll show to you from our perspective in the Philippines, because it’s got the data to prove the thesis. And then I think you guys are not quite… I think you’re not seeing the termites eating at the credibility that you have as news organizations, and those termites are coming from geopolitical power plays. We go back to: information is power. And with that, let me show you what’s happened in the Philippines.
January last year, there were two surveys that came out at exactly the same time, but with almost completely opposite results. The top is the real-world Pew Global Attitudes Survey: How do Filipinos look at traditional media? And they came back and said 86 percent think traditional media is, and the exact quote is, “fair and accurate.” But the Philippine Trust Index, which is part of the Edelman Trust survey, came out with a survey that same month a year ago. They asked people on social media, and they came out with 83 percent who “distrust traditional media.” Right. So how did that happen? We tried to figure out: why is the world upside down? That’s really the question, right? Why is the world upside down?
We have a database that we started gathering in July of 2016, when the drug war began in the Philippines, because the attacks all came on social media. In our case it’s Facebook. But this is a timeline of attacks on traditional media, and on Rappler, because we were the main focal point for a period of time, [which] started January 2015 and then moving to April 2017. January 2016 was when the campaigns began, and the social media machine of then-Mayor Duterte. He was elected to office May 2016. You see that one? And you can see the fracture line. Bayaran means corrupt. Bias. So Bayaran is the one in the middle, the first long line, and bias is the last one. If you look at that, it’s a fracture line of society, right?
There were mentions before, but it was constantly pounded until it became a straight line after President Duterte was elected. The weaponization of social media happened after he was elected, because it was repeatedly pounded until it became fact. A lie told a million times becomes truth.
Right. So, then what happened? Here: This is the database I was telling you about, right? We call it The Shark Tank. The one on your left is the URLs that are spreading fake news in the Philippines. The middle column is the Facebook pages that are spreading that content. And I always look at the average reposting time, which is the one all the way to your… my right, sorry, it’s flipped.
I want to show you when the real attacks began against Rappler, and it was after we came out with a three-part series on the weaponization of social media. It was October 2016. I went to Facebook with the data August 2016. So, October 2016, this is what it looked like. In October 2016, if it’s more than 10 times reposting, it turns red. You can see how it turned red. This Facebook page, Sally Might Die, accomplished its goals by April 2017. It’s been deleted from Facebook, but you can see… This was something we created for our social media team so that you can see it’s a cut-and-paste account. And they post; look at how many times they post in one day! Each one of those squares is just one day. And this is where they post: the groups. They posted to go viral in the campaign pages of Duterte and Marcos, the son of former President Ferdinand Marcos.
I’m going to just show you the last thing, which is how we can figure out who’s attacking us. Well, you can gather the data and it looks like this, but if you put it in a network map, it looks like this. This is the network that was attacking Vice President Leni Robredo about a year ago, and it is the same network that constantly attacks me, Rappler, and every traditional media. It is so systematic that the content creators of the network are broken down by demographic. For the Motherland is pseudo-intellectual and tries to target the one percent. The middle class is targeted by Thinking Pinoy, and the mass base is the Mocha Uson blog; she is a former singer-dancer. The way they first built her Facebook page was she has a singing group called the Mocha Girls, and they do pillow fights every Sunday. That was how they first built her Facebook page. Then she became the head of social media for the presidential palace, and it became a whole other thing.
Anyway, you can see this is what attacks journalists systematically. And it happens so many times. I just want to show you one last thing, which is something we did for Rappler: Natural Language Processing to pull out… So, we looked at the entire LexisNexis, right, to try to figure out: What do we need to learn? What is the data telling us about the articles that were written about us at the time when I was about to come home to file bail? Yeah, I had an arrest warrant then.
Right. So, the Philippines wrote 34 percent of the stories. The United States wrote 27 percent. You guys are a potent force for us. But what was most interesting is that part of the reason the Filipino stories are in a line like this is because they essentially just regurgitated the press release of the Department of Justice. It was the American news organizations that talked about it as a Duterte rights crackdown. That wrote about it in context. That was an amazing thing. I want to leave you with… sorry, I don’t know, wrong thing, I think. I want to move forward. I want to leave you with this: information warfare. Yeah, I guess this is the right one.
So, with information warfare, I’m going to bring it to Russia. Dezinformatsiya. This was really interesting because… For Duterte to end the drug war… Sorry about that, my slides were… OK so… I don’t know if you remember Yuri Andropov. He was the former KGB Chairman. This quote stuck with me because it fit the Philippines: “Dezinformatsiya works like cocaine. If you sniff once or twice, it may not change your life. If you use it every day, though, it will make you into an addict. A different man.” I think this is the impact on our democracies, and we’ve seen it.
The first reports came out in November of 2017, saying that cheap armies on social media are rolling back democracies all around the world. And at that point it was something like 28 countries. By last year, it was 48 countries. It’s doubling. We started looking at Ukraine to try to understand how we can use the data the way Ukraine started fighting back. It is information warfare. It is political. It is about power, and the money part of it… for the people who are actually catering to the politicians. Russia-backed Facebook posts—this was November of 2017—this was the first time that I saw Americans really starting to look at it. But even when I saw this, OK, they reached 126 million Americans, I think what people missed is that it happens all the time. It wasn’t just ads, it was all the time…
I talk about termites. This bot is interesting to me because it tweeted about U.S. elections. First, remember, the Philippine election of Duterte was one month before Brexit. After Brexit, there were the U.S. elections and then the Catalan elections. This little bot, Ivan, tweeted about all of those. So, we found him from the Catalan elections. And when I looked at his account, it was specifically only tweeting about the Philippines. When we posted this story, within 24 hours Twitter took his network down.
On Facebook, this is the last part I want to show you, the most recent thing that I found fascinating. In December, two groups came out with reports based on data that was given to the US Senate Intelligence Committee. This is the chart that is from New Knowledge. And this thing at the bottom, I want to show you the connection between the Philippines and that chart. It’s this: So we tend to map the networks around us. Let me just… try to get this so that you can see it. There. This is the attack network. Not connecting.
OK. So, this attack network was from November. Sorry, it’s frozen. There. Yay. OK. November 8 to December 7th. This network. And what I used to map the network is this free tool called Flourish. It’s a startup. This is little Rappler. And what’s so interesting, and this is where I will make the pitch, is that I don’t think we have any other choice but to actually collaborate together. Rappler is here. This, all of this, is a disinformation network that’s attacking us, and you can literally see it, right?
But what’s so interesting is that in the Philippines this overshadows the information landscape. The traditional media groups are so set aside, they’re desperate. I’ve been trying for the last two years to get our top television networks and our newspapers to work together, to retweet and re-share each other, so that we can rise up together in the algorithms. We refuse to do it because people think it’s competitive. But you know what? You’re competing against disinformation, not against each other now.
I want to show you this because, and I’ll end with this one… so this disinformation network is so interesting, right? But this is the most fascinating one. When we saw this, I was surprised, because this was created a year ago. It’s only one year old, dailysentry.net. And yet the larger the circle, the larger the eigenvector centrality, the more powerful the account is. There are exponential pushes behind it. What’s interesting about it is that this is the first time we saw a direct connection to the Russian disinformation landscape, because dailysentry.net uses “experts,” in quotes, from this network. Sorry, I can’t do the thing, but on that chart there is an American man who’s often interviewed by RT, by Sputnik, by Iranian television… His name is Adam Garrie. He is now an expert who’s popping into the Philippine ecosystem. He came in through the Daily Sentry, and from there he jumped into traditional newspapers. There’s a direct link to him because he writes for globalresearch.ca, a group in Canada, connected to two other groups: one is eurasianaffairs.net; another site; both of which come from a Russian IP address.
All that data in the chart came from the data that was given to the Senate Intelligence Committee and published last December. This is what’s happening in my country. I think you’re finding out what’s happening in yours. But I think we’re only a small case study of what is happening globally, and that scares me.
Kara Swisher: [00:14:13] OK. All right. So, how was prison? [laughter] No really. How was prison?
Maria Ressa: [00:14:20] I, oh, I hope I won’t get there but you know…
Kara Swisher: [00:14:25] You were arrested. Explain what happened to you. We did a podcast and I said you should not go back to the Philippines because you will be arrested. And what happened?
Maria Ressa: [00:14:32] Of course I went back. Right. But I wasn’t arrested. OK, I thought I would be, so our lawyers told me… My flight arrived on Sunday night at 9:30 p.m. The court, which is supposed to be an all-night court, well, it closes at 9:00 p.m. So, if they had picked me up that night, I couldn’t have filed bail until Monday morning when courts opened. In the Philippines, if you have an arrest warrant, you’re not told you have an arrest warrant. They just come get you. I came home, and I wasn’t going to change anything, and it went OK. I filed bail. I posted bail five times, actually, in that…
Kara Swisher: [00:15:16] But to be… And you weren’t actually arrested.
Maria Ressa: [00:15:18] No, I wasn’t arrested. I wasn’t arrested.
Kara Swisher: [00:15:20] Please explain to everyone here who doesn’t know why they [are going to] arrest you. What are the charges?
Maria Ressa: [00:15:26] Well, the charges are ludicrous. Tax evasion. It’s really one event, the same event, that I have four other cases of. They’re alleging, the government is alleging, that I am working for, well, that Rappler is owned by Americans, one, and that I am essentially working for them to take down the government. Very Putin-esque. None of that is true. And then on top of that, the arrest warrant came from taking that same charge: the investment instrument that we used, which was constitutional. They then decided that we didn’t pay the right taxes. And the reason why they said we didn’t pay the right taxes was because they reclassified Rappler into a stock brokerage agency.
Kara Swisher: [00:16:16] Rather than a journalist.
Maria Ressa: [00:16:17] Rather than a newsgroup.
Kara Swisher: [00:16:18] Right.
Maria Ressa: [00:16:19] And that’s what I have to post bail for.
Kara Swisher: [00:16:21] The reason I’m asking about this is I want people to understand how people can use social media to create trumped-up charges and then arrest you for them.
Maria Ressa: [00:16:32] Well, it is interesting that you said that, because all of these charges, I laughed off, because they first appeared on social media. And they were thrown at me: CIA, you’re a foreigner. I am a dual citizen. But, all of that. Like termites, you know, they just came at it, and then a year and a half later it comes out of President Duterte’s mouth during the State of the Nation address. There I am, a journalist covering the State of the Nation address, and then President Duterte says, look at Rappler: they are American. So, then I just tweeted back: President, no, we’re not owned by Americans.
Kara Swisher: [00:17:12] Right, right. So, let’s talk about the state of this. Well, last we talked, you made a very passionate plea to Facebook to do something about what’s happening. What you’re showing here is essentially organized disinformation campaigns to pull you down because you’re doing critical coverage of the president in the Philippines. And so they’re employing a very slow-moving but powerful network to do so. And in the Philippines, as you said, and not just the Philippines but across the world, most people get their news from Facebook. This is the purveyor of news. And these malevolent forces have created pages and news organizations and fake organizations to try to battle that. Talk a little bit about that, about where you are right now, because at the time you were sort of subject to the biggest news organization being used to attack you.
Maria Ressa: [00:18:06] OK. So I think that there’s a whole information ecosystem that has been manufactured, and it is manufactured reality. And we went down to a point where we were looking at, you know, how powerful is it really. We manually counted the impact of 26 fake accounts. 26 fake accounts can actually reach up to three million other accounts in the Philippines. And we were the first targets because we exposed them. I was so naive.
You know, I thought, wow, we can just do a hashtag no-place-for-hate campaign and people will come back, because you think these are real people. They are not. And after we did that, we became the target. And as you saw in the first slide, it’s not just us, it is traditional media, because the main goal is to kill any trust in any institution that can push back.
All we have done is challenge impunity. Impunity here in information warfare, and impunity in the drug war. You don’t know how many people have been killed in the Philippines during this drug war because they keep changing the numbers. At most recent count, the Philippine police will admit to killing 5,000 people. Even that number alone is huge compared to the fact that 3,200 were killed in nine years of Marcos rule. Right. But there’s this other number they never roll out. It’s the homicide cases under investigation, and there are 30,000 people who’ve been killed there. So, if you think about it, since July 2016 it is tens of thousands. Thirty-five thousand. I know the way they parse the numbers, and I’m even cautious in the way I tell you how many people have been killed.
Kara Swisher: [00:19:58] So what they’re doing is trying to use social media to stop you from writing about…
Maria Ressa: [00:20:03] Not just trying to use it; they’ve used it effectively. I think this is the first weapon, it’s a new tool, against journalists and against truth. And part of the reason we’re having a crisis of trust is because this is global.
Kara Swisher: [00:20:17] Right. So, talk a little bit about your efforts with Facebook to do this initially. You ran into Mark Zuckerberg and told him about this.
Maria Ressa: [00:20:28] F8, April 2017. There was a small group of us who had lunch together. It was founders of companies that were working with Facebook, and I invited him to come to the Philippines because I said, you know, you have no idea how powerful Facebook is. Ninety-seven percent of Filipinos who are on the Internet are on Facebook. We’re 100 million people. And he was frowning, and I was going, so why are you frowning? And he just said, “Maria, what are the other three percent doing?” [laughter] We laughed: huh.
Kara Swisher: [00:21:05] Ah. Ha. Ha. That’s how the board talks. But go ahead.
Maria Ressa: [00:21:09] But that’s when you realize that they didn’t understand their impact. What they understood was their goal. And so I think now that’s changed.
Kara Swisher: [00:21:21] Right. So they did that, and then you brought this information to them. What happened?
Maria Ressa: [00:21:27] Nothing. You know, by the time Mark Zuckerberg was in Congress, for me, everything that you guys were finding out here was, you know, “been there, done that.” We’ve talked about this. I feel like Cassandra, you know. I’ve talked to maybe more than 50 different officers and friends inside Facebook.
But we’re the Philippines, and maybe people think, you know, you’re out there. But, when he appeared in Congress, he said it would take five years to fix this with AI. I was like, you can’t do five years. Because in the global South, in countries like Myanmar, Sri Lanka, and the Philippines, every day that it isn’t fixed means people die… I think they’re getting it. I think partly your coverage, you know, in 2018 has spotlighted this, but I don’t think enough, because it’s still being used.
The good thing is there have been takedowns: takedowns of Russian networks, Iranian networks; there have been takedowns in the Philippines. The most recent takedown was about three weeks ago, of a network we identified and did a story on 13 months earlier. You know, so it’s too little, too late, but you know what? I will take everything, because at least it cleans it up. But the fundamental problem is our gatekeeping power…
So, we used to create [and] distribute the news, and when we distributed the news, we were the gatekeepers. Now that power has gone to the social media platforms. Facebook is now the world’s largest distributor of news, and yet it has refused to be the gatekeeper. And when it does that, when you allow lies to actually get on the same playing field as facts, it taints the entire public sphere. It’s like introducing toxic sludge into the mix. And I think that’s the fundamental problem. They have to actually, at some point, take down the lies instead of allowing them to spread.
Kara Swisher: [00:23:33] So what do you face when you go there and say you need to take down these lies? Tell me what happens, or how are they now working with you?
Maria Ressa: [00:23:41] It’s significantly different now. And that’s part of the reason.
Kara Swisher: [00:23:46] Well, they’re very sorry now. But they’re very, very sorry and also very, very, very…
Maria Ressa: [00:23:53] I think they’re starting to understand what they’ve done. And I think they’ve started to hire the right people. In January of 2017, Nathaniel Gleicher, who was in charge of counterterrorism in the Obama White House, you know, he was hired. And shortly after that, well, it took a while, because this is a manual effort, right? Tracking these networks down, like counterterrorism, requires somebody like a law enforcement official to go look for them. And so that’s part of the reason you see the takedowns starting to happen. The main thing that they have to do is to go to the content moderation system that they’ve put in place.
Kara Swisher: [00:24:39] Right.
Maria Ressa: [00:24:40] As journalists, we have values and principles. We call it the standards and ethics manual. As tech people, they tried to atomize it into a checklist, and then this checklist goes to content moderators in… you know, the two largest for a long period of time were in Warsaw and Manila.
Kara Swisher: [00:25:01] Right.
Maria Ressa: [00:25:02] And in Manila… I don’t know if you saw the movie, it was done by…
Kara Swisher: [00:25:06] The…
Maria Ressa: [00:25:08] The Cleaners, right. And in that one you can see that these content moderators, who barely make, you know, minimum wage here in the States, they have seconds to decide whether to delete or whether to let content stay. And if they just go by a prescriptive checklist, they’ll just go delete, delete, and let it stay. And the guy who took down Napalm Girl was a Filipino, and he took down Napalm Girl because the checklist said: naked.
Kara Swisher: [00:25:35] So that’s the famous photograph of the girl running from napalm in Vietnam. A Pulitzer Prize-winning photograph. It was news.
Maria Ressa: [00:25:44] So these Filipinos who were in a call center in the Philippines are taking down potential terrorist content, are taking down supposed hate speech, without any cultural context, without understanding the content.
Kara Swisher: [00:25:59] So what is your solution for them? I’m using Facebook as a broad thing, but they really are the game. Twitter is sort of… Do you have the same problems with Twitter and other social networks?
Maria Ressa: [00:26:10] Twitter is only at 7 percent penetration in the Philippines.
Kara Swisher: [00:26:13] So it’s an unpopular service. So yeah.
Maria Ressa: [00:26:18] No, but it’s the same, right? The same content moderation policy as YouTube. YouTube is huge, also, in the Philippines. And you know what? This disinformation cuts across all of them. So, I mean, you saw it in our Shark Tank. We had the URLs. I would love to give that to Google and have them downrank some of that. Right. Because…
Kara Swisher: [00:26:39] This is just you doing their work for them. Correct?
Maria Ressa: [00:26:43] You know, I… I guess for me, when you’re dealing with this stuff and you’re breathing it, it’s like toxic fumes every day. You just want a solution. And it takes… Imagine if somebody from America comes to the Philippines and tries to figure this out. It would take them a year. I already know it. Here, take it. Do something with it. I don’t look at it as their work. I think, OK, this is where I’ll be really generous: I know that they didn’t mean to do it. It is an extremely powerful tool, and the reason why I continue to work with Facebook is because I think if they had the political will and the economic will to do this… It is a game changer for the Philippines. Rappler couldn’t exist without Facebook. We zoomed; we grew 100 to 300 percent year on year because of Facebook at the beginning, in the good times. And I think they made a crucial error in 2015, and that was Instant Articles, when they brought all the news groups in, and then all of a sudden we’re in the same algorithms as the joke that you heard or what you had for dinner. And when facts became determined by mob rule, then it changed the ecosystem of democracy in the world.
Kara Swisher: [00:28:03] And what do you propose now that these… So, YouTube is a problem.
Maria Ressa: [00:28:09] YouTube…
Kara Swisher: [00:28:09] A huge problem. Are you getting the same responses from them: so sorry, they're really, really sorry? [laughter] No, they really are. But they're not in any way…
Maria Ressa: [00:28:22] So yeah. Tell me, do you think they will act on it?
Kara Swisher: [00:28:27] You know I have an expression that was from one of my grandparents: You’re so poor all you have is money. I think they like their billions. I think they think they’re doing good for the world. And I think they’re careless. It’s sort of like from The Great Gatsby. They were careless people and they moved, they did damage and moved on.
Maria Ressa: [00:28:47] But they now know they’re not. And they’re killing people. They know that now.
Kara Swisher: [00:28:52] I think, what I'm getting now from a lot of people is: you're so mean to us.
Maria Ressa: [00:28:59] Because
I do see them see this.
Kara Swisher: [00:29:01] When they say that I'm like, fuck you. [laughter, applause] You know what I mean. So it's very hard for me to… But there's a lot of victimy there.
Maria Ressa: [00:29:11] I
mean until now. But you don’t know.
Kara Swisher: [00:29:14] No, I think they literally get angry when people say, "Hey, now, you know you hacked democracy, you really need to fix it." And they… I think one of the things that I find interesting is, when there is money to be made or whatever, it's their company. Yes.
And when there's problems to be solved, it's we all together have to solve it as a group. You know, I mean, and I'm like, we didn't get 64 billion dollars that I looked at. You know, I have real old shoes. I don't know. I mean, we didn't share in the upswing. And so I think, again, I joke, I'm so sorry, but they feel badly, but then I think they are actually incapable in any way of taking care of it. I think they don't have the mentality. They don't have the talent. I think they're incompetent to the task. That's what I think.
Maria Ressa: [00:30:02] But if that's the case, they will die. I mean, it's going to be a slow, painful death. But, you know what I mean, I guess for me I'm taking almost the opposite view: there's this phrase, enlightened self-interest, that is…
Kara Swisher: [00:30:17] One would think. One would think. No, because this will eventually… the product will become terrible to use.
Maria Ressa: [00:30:24] Right.
Kara Swisher: [00:30:25] Or it will become very addictive to use. And then what's the difference? Like you said with cocaine, I think. So, how do you… what are you wanting? What would you like from them? You'd like them to become gatekeepers, in other words.
Maria Ressa: [00:30:37] I don't think they have a choice. I think they have to be. Otherwise we will leave, right? Or again, they'll be broken up by regulation, or people will leave. In the Philippines, look at the immediate reaction. Alexa ranking of all the websites: where do Filipinos go? From 2012 to 2016: number one, Facebook. Undisputed. But then when the toxic sludge began mid-2016, by January 2017 on Alexa ranking Facebook dropped from number one to number eight. And then by January 2018, it went back to number five. In January 2019, right now, if you look at Alexa ranking in the Philippines, it's number four.
So slowly they're rising back up, but there's no way… So, I mean, my thing is: if they don't fix it, we will leave. We will leave. So that's why I think it is in their best interest; they have no choice. But they are going to have to suck it up, and they are going to have to hire real people. Machines can't do this. But those real people will train the A.I., and it will get better over time, and they will have to lose money because they will have to hire real people.
Kara Swisher: [00:31:52] So talk to me a little bit about that business, because you're trying to create a…
Maria Ressa: [00:31:57] Yeah. 2019, I'm trying to be a good CEO.
Kara Swisher: [00:32:00] Being arrested, attacked, and essentially they're trying to put you out of business.
Maria Ressa: [00:32:07] The…
Kara Swisher: [00:32:07] Talk
about the actual business. Because it’s hard enough to do a digital effort. You
know that. I know that.
Maria Ressa: [00:32:14] Yeah. So, in the Philippines and in many other parts of the world, good journalism is really bad business, and I wear both an executive editor hat and I'm the CEO, so it's my job to make sure our business survives. In 2017, when the attacks started happening, we realized that, and we had a big board battle. You know, "you journalists, you gotta tone it down," from the businessmen. And then, from the journalists, because we were the largest group of shareholders in Rappler, we had 3 percent more votes. So we pushed forward, and 2018 was mission and a lot of anger management issues. But 2019, I have to be a good CEO, and we need to build the business. So, what we've decided… So, when you're under attack by the government, your advertisers get scared almost immediately; they don't want to be associated with the brand. They always say, you know, "Maria, we're behind you," but they're very, very far behind. [laughter]
Kara Swisher: [00:33:18] And
nice Time cover!
Maria Ressa: [00:33:23] So I found out about it on Twitter. And I had to check whether it was real! But the Time cover is the first time I saw the ecosystem come up, like real people who were afraid. Fear is very real in the Philippines. And, I'm sorry, before I talk about the fear I just want to finish on the part about the business. So, businessmen, the businesses… they're not the protectors of democracy. And even if their values say that they want to do that, they just don't, because the money isn't there. So, you can't attack Facebook in the same way, or, if you're run by businesses, your values, sorry, they follow after the money. So, well, what we did is: we were forced to be agile. And a lot of the things that you saw, the mapping, trying to understand unstructured big data, all of these things, we came up and pivoted and became a consultant. Like, I essentially carved out another team that can do the same things we do for Rappler for other companies.
Kara Swisher: [00:34:37] So
your business… so, in that environment what do you do? Because good
journalism like you said is bad business.
Maria Ressa: [00:34:44] Rappler continues doing good journalism. And we've taken the business and pushed it away, and we actually found a new business. The two things that we did: we're the first in the Philippines… The crowdfunding part, actually, I didn't think it would work in the Philippines. But when our legal fees became like a quarter of the entire monthly spend, we asked our community and they helped. And that helped pay for some of the legal fees. And then, just December, we began a membership program; we called it Rappler Plus. I didn't think it would work in the Philippines because, unlike the United States or Europe, unlike the more developed countries, we don't have a history of that, not even subscriptions. People don't want to pay for news, especially in a country where you struggle to put food on your table three times a day. So Rappler Plus took off much faster than I had expected, and I think it is because of the fear. People are afraid, and by standing up… by being the kid telling the emperor he has no clothes, by telling him he cannot do this with impunity.
This is the most powerful man that we have had since… I think he's more powerful than Marcos was. He controls the executive. He owns the legislative, and by the time he leaves office he will have appointed 11 of 13 Supreme Court justices. You guys in the States worry about one Supreme Court justice; he'll have appointed 11 of 13. This is our next generation. And it's extremely worrisome, especially with this information warfare, with the young men in our country who are sucking up these fumes. You know, the levels of misogyny… according to our data, women are attacked at least 10 times more than…
Kara Swisher: [00:36:40] Alright,
we have questions from the audience and then we are going to end. Are there
questions from the audience? Yes, you over here. Right here. Put your hand up.
Question: [00:36:51] Hi. Krishan Bhatia from NBCUniversal. Thank you for sharing this story and the insights and everything that you're doing to uncover this. My question for you: in the US market, as we sit here today as premium publishers, most of whom have some sort of news business, and we serve large-cap marketers in the US, what should we be doing differently with respect to Facebook in particular, but platforms in general, that we're not…
Maria Ressa: [00:37:20] I think we have [to address the issue]: who is the gatekeeper right now? But I think the idea is very simple to me. Information is power, and the gatekeeping determines what information is taken by everyone. And we all focus… the debate in the US focuses on all of these different demographics and the polarization. The polarization happens because we don't have the same facts. So it goes down to that. Please push. I think Kara asked; the solution for me is, when you have something like Facebook or YouTube, moving beyond prescriptive to where we used to be, which is: what are the values? What are the principles, like standards and ethics for journalism, right? It can't be prescriptive because, ironically, they keep saying they defend free speech, but free speech in this case is being used to stifle free speech. So, you've got to take the toxic sludge out of the body politic, because that is killing us, and everything else is organ failure, you know, because you're not getting the oxygen that you need.
So please push; you have far more power than little Rappler does in terms of pushing for action. In my part of the world, I guess, you know, maybe I'm happy with little because it's been so long. We have elections in May, and these takedowns will do a lot. I've seen the reactions of the people running those Facebook pages. But please, also do the investigations here in the United States. The data is coming out now. I think that our credibility, and I mean "our" for traditional media and the new ones coming up, I think we're getting eaten up by termites without realizing that the floorboards are about to crack. That's why I think there's a crisis of trust.
Kara Swisher: [00:39:19] Yes, I would agree with that. Finish on this question of fear, because I think it's a really important thing: fear of not speaking up, of rocking the boat, of all kinds of stuff. Or people are just exhausted by it, because you're not doing journalism; you're spending time dealing with lawyers, you're spending time moving businesses around. You're not doing the actual job which you used to do.
Maria Ressa: [00:39:43] Yeah, that's also true. It just means I'm not sleeping that much. But, you know, I find that the journalism… So look, Rappler has been mission-driven, and all of the friction of a normal organization is gone, because everyone who stayed with us, and everyone did stay with us on the journalism side; we lost sales and tech, strangely. But the mission is so clear and the purpose is so clear, and I think the challenge for all of our news groups is to be able to maintain that.
In a society, what fear does, what this stuff does, is normal people will not… When you get attacked like this, and I didn't show you any of the attacks, but when you're attacked so viscerally, when you're threatened with rape, with murder, you just shut up. And that's exactly it: it's meant to pound you into silence. But our community realizes this. So, in a strange way, we're not just journalists anymore. Also, that's weird.
Like, when I'm at the airport sometimes… a family came in and hugged me, and I hugged them back. I didn't know who they were, but it was because they are also afraid to speak. So when you speak for them, you fulfill a role; I think that's the mission of journalism. I think I have a natural tendency to be more positive. I should hang out with you a little bit more. [laughter]
But you know when you’re in my place, I put one foot in
front of the other. The mission is clear. We’re going to have to deal with
this. And I think this is what Facebook has to realize. They have to get
through this because it’s not just us. We’re just the canary in the coal mine.
It’s here it’s happening here. Your problems are because of stuff like this. I
think. I think it’s global.
Kara Swisher: [00:41:37] Are you afraid?
Maria Ressa: [00:41:39] No, because there's too much to do. Not right now. You know, there are times when I think it was far worse when no one was paying attention, because the attacks were so personal. The first two weeks… I got 90 hate messages per hour. Not one-nine. Nine-zero hate messages per hour. And when I got that, it took me two weeks to just figure out how am I going to deal with this, and what's real and what's not, and then, do I need security? You know, all of that stuff. So no, I'm not afraid, because now I know what it is. And the data helps me understand it. So that's the certainty. That's why I know it's important to have the facts. You cannot fight back if you don't have the facts.
Kara Swisher: [00:42:24] All
right. On that note Maria Ressa. [applause]
Ensuring high-quality inventory across the PubMatic platform, as it flows from sellers to buyers, requires strong policy that standardizes compliance enforcement and operational coordination across account teams, so that we spot issues early and often. It also requires a strategic focus on identifying what lies ahead for quality.
I'd like to share my thoughts on a few growing trends I expect to see in inventory quality. These views come from an amalgamation of inputs: my 10 years of experience managing inventory quality, the signals and other clues arising from my daily quality operations work, and deep-dive investigations. Buyers have shifted their emphasis to quality and are focused on working with other quality-centric professionals across the industry.
Here are three major inventory quality trends:
Over-Reliance on Fraud Detection Technology
The industry has clearly spoken – third-party fraud detection is now considered “table stakes” for any large player, buyer or seller, in the digital advertising ecosystem. Though fraud detection vendors play an important role in helping to identify and avoid invalid traffic (IVT), I would advise treating this service as one tool among others to help improve quality.
No vendor measures quality the same way, yet many of them share the same MRC certification for Sophisticated IVT (SIVT). PubMatic uses a combination of IAS and White Ops to monitor invalid traffic rates and identify problematic pockets of inventory. However, many buyers use different fraud detection vendors, and those vendors may report very different results for the same inventory. These variances can be explained by each vendor's proprietary methodology and by differences in sampling, where one vendor may look at a completely different subset of the same inventory than another vendor.
For example, if "Buyer A" reports that their inventory is 100% IVT while PubMatic's White Ops reporting shows 1%, the promise of fraud detection technology as a standalone method of identifying non-human traffic breaks down. Yet, when stepping back and viewing fraud detection as a starting point, buyers and sellers in conflict are more likely to come to an agreement. I believe this because both parties can recognize that, even with big differences in fraud reporting, a deep-dive investigation will uncover other signals that likely support one report or the other.
In this specific example, I may find other evidence supporting the buyer’s claim and could come to a mutually agreeable conclusion (e.g. refund, blacklisting, termination of supplier, etc.). However, as often as not, my investigation might raise no other red flags to indicate poor inventory, and thus I would propose limiting access to that inventory for this buyer.
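To make the vendor-variance problem concrete, here is a minimal sketch of how a quality team might flag inventory where two fraud-detection vendors disagree sharply on IVT rates. The vendor data, domain names, and the 10-point gap threshold are all illustrative assumptions, not PubMatic's actual methodology.

```python
# Hypothetical reconciliation of per-domain IVT rates (0.0-1.0)
# reported by two different fraud-detection vendors.

def flag_discrepancies(reports_a, reports_b, gap_threshold=0.10):
    """Return domains where the two vendors' IVT rates differ by more
    than gap_threshold, mapped to the pair of reported rates."""
    flagged = {}
    for domain in reports_a.keys() & reports_b.keys():
        gap = abs(reports_a[domain] - reports_b[domain])
        if gap > gap_threshold:
            flagged[domain] = (reports_a[domain], reports_b[domain])
    return flagged

vendor_a = {"example.com": 0.01, "cashout.site": 0.95}  # made-up data
vendor_b = {"example.com": 0.02, "cashout.site": 0.12}
print(flag_discrepancies(vendor_a, vendor_b))
# → {'cashout.site': (0.95, 0.12)}
```

Flagged domains would then go to the kind of deep-dive investigation described above, rather than being treated as automatically fraudulent or automatically clean.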
Growing Importance of Content and Audience
GDPR has impacted the ability of marketers to fully utilize the targeting potential of cookies and audience profiles in EMEA, due to the regulatory changes in consent and privacy. One could argue GDPR is a direct consequence of the rise of ad tech and the lack of self-regulation concerning how consumer data is used to target advertising. Therefore, it is not unlikely that similar consumer data privacy and consent laws will spread around the globe. This will further reduce the efficacy of cookies and precipitate the return of contextual targeting for online advertising.
What does this mean? Expect an increased focus on the quality of content, addressing fake news and brand safety concerns, as well as on the quality of the audiences who consume this content, as important quality trends in the marketplace.
Recognizing the importance of content and audience to buyers, PubMatic evaluates domains and apps not only on the level of IVT but also on the value of the audience and the originality of the content. For instance, an organic, loyal audience is preferred to consumers acquired from other sources. We also avoid content farms and look-alike sites that exist only as a necessary backdrop to sell ad impressions.
While ads.txt is a valuable tool to combat domain spoofing, it provides no inherent protection against IVT (bots and fraud). Further, it does not guarantee the quality of a domain’s content and audience. For example, a domain created solely for the purpose of driving bot and/or acquired traffic through pages filled with content stripped from other sources can have an ads.txt file, but still be a bad source of inventory.
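Since ads.txt files are plain text, checking what a domain declares is straightforward. The sketch below parses the standard record format (ad system domain, seller account ID, DIRECT/RESELLER relationship, optional certification authority ID) while skipping comments and variable declarations; the sample content is hypothetical. Note that, as the paragraph above says, a valid file proves nothing about IVT or content quality.

```python
def parse_ads_txt(text):
    """Parse ads.txt content into (ad_system, seller_id, relationship)
    tuples, ignoring comments, blank lines, and variables like CONTACT=."""
    entries = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip trailing comments
        if not line or "=" in line.split(",", 1)[0]:
            continue  # skip blanks and variable declarations
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            entries.append((fields[0].lower(), fields[1], fields[2].upper()))
    return entries

sample = """# hypothetical ads.txt
adsystem.example, pub-42, DIRECT
contact=ops@publisher.example
exchange.example, 77, reseller, f08c47fec0942fa0
"""
print(parse_ads_txt(sample))
```

A buyer-side check would fetch `https://<domain>/ads.txt` and verify that the seller ID in the bid request appears in the parsed entries; that confirms authorization, not quality.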
Thus, working with a whitelist of trusted domains is the single best practice for both buyers and resellers of inventory. Being familiar with the domains on which ads run, and avoiding all other domains, is the best prevention.
I strongly suspect that most of the behavior leading to poor quality inventory comes from the point where money changes hands—the domains and apps where advertising is consumed. By wisely choosing which domains to work with and working with only whitelisted domains, many quality issues will be avoided entirely. Alternatively, when a small group of domains isn’t enough to meet inventory requirements, working with trusted partners can also provide improved inventory quality and improved brand safety.
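A whitelist check itself is simple; the subtlety is matching subdomains without letting look-alike domains through. This sketch (with made-up domain names) allows a whitelisted domain and its subdomains, but rejects domains that merely end with a similar string:

```python
def is_whitelisted(domain, whitelist):
    """True if domain is a whitelisted entry or a subdomain of one.
    'evil-example.com' must NOT match a whitelist entry 'example.com'."""
    domain = domain.lower().rstrip(".")
    return any(domain == w or domain.endswith("." + w) for w in whitelist)

TRUSTED = {"example.com", "news.example.org"}  # illustrative whitelist
print(is_whitelisted("www.example.com", TRUSTED))   # subdomain: allowed
print(is_whitelisted("evil-example.com", TRUSTED))  # look-alike: rejected
```

The `endswith("." + w)` comparison is the important detail: naive substring or suffix matching is exactly what look-alike registrations are designed to exploit.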
Bots are software applications that typically run repetitive and easily-automated tasks. You can find bots posting content or interacting with users online without any human involvement. They are a valuable tool, often employed to answer questions and post updates to articles. However, there’s also a dark side to bots. Their malicious behaviors include the ability to spread fake news and influence online rating and review systems.
According to the Pew Research Center's study, Bots in the Twittersphere, bots (be they "good" or "bad") create 66% of all tweeted links to popular websites, including news and current-events sites. Bot percentages are even higher (89%) among popular aggregation sites that compile stories from around the web. Interestingly, a small number of highly active bots are responsible for a significant share of links to known news and media sites.
The Pew study analyzed 2,315 of the most popular websites, examining a random sample of 1.2 million tweets over six weeks in the summer of 2017. To identify the bots, Pew used the Botometer, a tool created at Indiana University and the University of Southern California. The Botometer estimates the likelihood that any given account is automated.
It's not surprising that automated accounts link to a higher-than-average number of sites without a public contact page. Lacking a contact page omits the opportunity for reader feedback that could provide corrections or additional reporting. This can sometimes be an indicator of malicious bots directing users to fake content sites.
Intriguingly, automated Twitter accounts share a larger volume of links, 57% to 66%, from sites that are more centrist and ideologically mixed. The analysis does not show that automated accounts are politically biased toward liberal or conservative in their overall link-sharing. Bot accounts share approximately 41% of links to political sites with liberal audiences and 44% to political sites with conservative audiences.
Social media makes it easy for automated accounts to create content and interact with users with no direct human involvement. It's seamless and usually undetectable to the user. Bot accounts play an important part in the social media ecosystem: they provide automated news updates, answer questions, offer immediate feedback, and more. Nevertheless, what impact do bots have on the user experience? Additional Twitter research is needed that focuses on bot engagement. Understanding to what degree bot accounts engage users, and how bot accounts combined with human-run accounts affect engagement, would offer insight into the complete user experience on Twitter.
Ad Fraud is a popular topic of discussion in the digital advertising world today. Criminals set up networks of fake sites, create bots to drive a large number of phony impressions, and exploit the programmatic ecosystem to make money from ads that no one sees. To combat such nefarious activity, advertisers and agencies require some form of invalid traffic verification to filter out bot traffic.
But you run a reputable site. You don’t use bots to inflate your impression numbers. So, why are verification companies reporting that a percentage of your traffic is generated by bots, costing you money and hurting your reputation with advertisers in the process?
The truth is that intentional ad fraud is only a portion of the problem. Most bots were created for a purpose unrelated to advertising. But they have huge unintended consequences for the industry.
Verification Companies Get It Wrong
A verification company’s primary role is to make sure an ad ran where it was supposed to. They check for viewability, ensure multiple ads aren’t loading on top of each other, protect brand safety, and a host of other things. Identifying bots is not at the core of their business. In fact, they’ve only recently started to address it. As a result, they use rudimentary detection methods, like IP and domain blacklisting. But those methods just don’t work anymore.
Criminals are smart, and as verification methods have improved, they have become more sophisticated. In 2017, 74% of bots were Advanced Persistent Bots that cycle through IP addresses and switch user agents. They accomplish this by using malware to hijack legitimate devices. When a user installs a disreputable browser plugin, he or she could initiate bot activity in the background that results in their device being added to a blacklist. Worse yet, the blacklisting will remain even after the user removes the malware from their device. That human user may never see another one of your ads.
You Have Bots on Your Site and Don’t Want Them There
Ad fraud is only one of many reasons the bad guys create bots. In fact, last year more than 42% of all internet traffic came from bots, and most bots are not involved in ad fraud. If you create content, chances are someone has written a web scraping bot to steal that content from you and monetize it for themselves. Sites that have a paywall are an even bigger target.
If your visitors log into an account, hackers want to steal those login credentials so they can sell them on the black market. They also want to use your site to test the validity of credentials they may have stolen or purchased elsewhere. On average, sites face login attacks two to three times per week. But after a data breach, like ones we’ve seen recently from Best Buy, Lord and Taylor, and Panera, that number triples.
Ad fraud bots will visit your site to pick up one of your cookies, making them appear more real and less suspicious. A rich history of cookies with intent data also makes them more targetable, and thus more valuable. None of these bots care about generating an impression on your site, but if you don’t take measures to prevent it, you will unwittingly show them an impression.
Bots aren’t just wasting your advertisers’ money, they’re hurting your bottom line as well. While bots can watch videos and click on ads, sometimes on purpose and other times as an unintended consequence, they don’t ever convert or make real purchases. Filtering bot impressions produces higher click-through rates, increasing campaign conversions. The same applies to your audience segments as well. When your inventory performs better, advertisers buy it more often.
You Can Do Something About It
You have a bot problem. Every website does. Bots are stealing your content, hurting your relationship with advertisers, making your campaigns perform worse, and siphoning ad dollars away from you. A post-campaign report does nothing to help you with any of these problems. It’s frustrating, but there are a few things you can do right now to stop bots before they hit your site:
Block outdated user agents and browsers
Many bot tools ship with default configurations that contain outdated user-agent strings. This step won't stop the more advanced attackers, but it might catch and discourage some, and the risk here is very low: most modern browsers force auto-updates on users, making it unlikely that real users are browsing with a badly outdated version.
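As a rough illustration of this step, the sketch below checks the major version claimed in a User-Agent header against a minimum-version floor. The version floors are arbitrary examples; a real deployment would track current browser releases and cover more browser families.

```python
import re

# Hypothetical minimum acceptable major versions (illustrative only).
MIN_VERSIONS = {"Chrome": 60, "Firefox": 55}

def is_outdated(user_agent):
    """Return True when the UA string claims a browser version below the
    floor. Unknown browsers return False to avoid false positives."""
    for browser, floor in MIN_VERSIONS.items():
        match = re.search(browser + r"/(\d+)", user_agent)
        if match and int(match.group(1)) < floor:
            return True
    return False

ua = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 Chrome/41.0.2228.0 Safari/537.36"
print(is_outdated(ua))  # an ancient Chrome build: likely a default bot UA
```

Defaulting to "allow" for unrecognized user agents is deliberate: the goal of this layer is to cheaply discard lazy bots, not to make fine-grained trust decisions.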
Monitor your traffic sources carefully
Do any have high bounce rates? Do you see lower conversion rates from certain traffic sources? These can be signs that a source is sending you bot traffic. Buying traffic or using audience extension platforms only exacerbates the problem. If you are incentivizing someone to send you large volumes of traffic, chances are they are going to game the system, often using bots. Avoid them if you can.
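The bounce-rate check described above can be automated from session logs. Below is a minimal sketch, assuming sessions are available as (source, pages-viewed) pairs; the 90% flagging threshold and the source names are illustrative, and real monitoring would also account for small sample sizes and seasonal baselines.

```python
from collections import defaultdict

def bounce_rates(sessions):
    """sessions: iterable of (source, pages_viewed). A session viewing a
    single page counts as a bounce. Returns bounce rate per source."""
    totals = defaultdict(int)
    bounces = defaultdict(int)
    for source, pages in sessions:
        totals[source] += 1
        if pages <= 1:
            bounces[source] += 1
    return {s: bounces[s] / totals[s] for s in totals}

def suspicious_sources(sessions, threshold=0.9):
    """Flag traffic sources whose bounce rate meets or exceeds the threshold."""
    return [s for s, rate in bounce_rates(sessions).items() if rate >= threshold]

log = [("ads-network-x", 1), ("ads-network-x", 1), ("organic", 5), ("organic", 1)]
print(suspicious_sources(log))  # → ['ads-network-x']
```

A flagged source is a signal to investigate, not proof of bot traffic; legitimate campaigns with poor landing pages can also bounce heavily.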
Bot mitigation is an arms race. As long as the incentives and potential rewards are great, fraudsters and hackers will keep innovating to create new bots that get past whatever measures you put in place. You aren't in the business of catching bots, and you just can't keep up on your own. A third-party expert in bot mitigation can help you stay ahead of the bad guys, and also help you push back on false positives from verification companies.
Reid Tatoris is VP Product Outreach and Marketing at Distil Networks. Reid was previously the co-founder of Are You A Human, a Detroit-based company that analyzes how real humans interact with the Internet. Prior to starting Are You a Human, Reid was a technology consultant working in strategic roles and leading development teams. Reid holds both an engineering degree and an MBA from the University of Michigan and is a mentor for Techstars Mobility.
(Note: I work for Mozilla. None of this is secret. None of this is Mozilla policy. Not speaking for Mozilla here.)
A big objection to tracking protection is the idea that the tracker will always get through. Some people suggest that as browsers give users more ability to control how their personal information gets leaked across sites, things won’t get better for users, because third-party tracking will just keep up. On this view, today’s easy-to-block third-party cookies will be replaced by techniques such as passive fingerprinting where it’s hard to tell if the browser is succeeding at protecting the user or not, and users will be stuck in the same place they are now, or worse.
I doubt this is the case because we’re playing a more complex game than just trackers vs. users. The game has at least five sides, and some of the fastest-moving players with the best understanding of the game are the ad fraud hackers. Right now ad fraud is losing in some areas where they had been winning, and the resulting shift in ad fraud is likely to shift the risks and rewards of tracking techniques.
Data center ad fraud
Fraudbots, running in data centers, visit legit sites (with third-party ads and trackers) to pick up a realistic set of third-party cookies to make them look like high-value users. Then the bots visit dedicated fraudulent “cash out” sites (whose operators have the same third-party ads and trackers) to generate valuable ad impressions for those sites.
If you wonder why so many sites made a big deal out of “pivot to video” but can’t remember watching a video ad, this is why. Fraudbots are patient enough to get profiled as, say, a car buyer, and watch those big-money ads. And the money is good enough to motivate fraud hackers to make good bots, usually based on real browser code. When a fraudbot network gets caught and blocked from high-value ads, it gets recycled for lower and lower value forms of advertising. By the time you see traffic for sale on fraud boards, those bots are probably only getting past just enough third-party anti-fraud services to be worth running.
This version of ad fraud has minimal impact on real users. Real users don’t go to fraud sites, and fraudbots do their thing in data centers and don’t touch users’ systems. The companies that pay for it are legit publishers, who not only have to serve pages to fraudbots—remember, a bot needs to visit enough legit sites to look like a real user—but also end up competing with ad fraud for ad revenue. Ad fraud has only really been a problem for legit publishers. The adtech business is fine with it, since they make more money from fraud than the fraud hackers do, and the advertisers are fine with it because fraud is priced in, so they pay the fraud-adjusted price even for real impressions.
What’s new for ad fraud
So what's changing? More fraudbots in data centers are getting caught, simply because the adtech firms have mostly been shamed into filtering out the embarrassingly obvious traffic from IP addresses that everyone can tell probably don't have a human user behind them. So where is fraud going now? More fraud is likely to move to a place where a bot can look more realistic but probably not stay up as long: your computer or mobile device. Expect ad fraud concealed within web pages, as a payload for malware, and of course in lots and lots of cheesy native mobile apps. Ad fraud makes way more money than cryptocurrency mining, using less CPU and battery.
So the bad news is that you're going to have to reformat your uncle's computer a lot this year, because more client-side fraud is coming. Data center IPs don't get by the ad networks as well as they once did, so ad fraud is getting personal. The good news is: hey, you know all that big, scary passive fingerprinting that's supposed to become the harder-to-beat replacement for the third-party cookie? Client-side fraud has to beat it in order to get paid, so it will beat it. As a bonus, client-side bots are way better at attribution fraud (where a fraudulent ad gets credit for a real sale) than data center bots.
Advertisers have two possible responses to ad fraud: either try to out-hack it, or join the “flight to quality” and cut back on trying to follow big-money users to low-reputation sites in the first place. Hard-to-detect client-side bots, by making creepy fingerprinting techniques less trustworthy, tend to increase the uncertainty of the hacking option and make flight to quality relatively more attractive.
Malware — or malicious code used during a cyberattack or intrusion — is meant to achieve something. In many cases, the most common form of it is designed to spy or report sensitive information back to its source. Maybe it’s collecting passwords and account info and reporting that data back to someone. Maybe it’s just logging keystrokes and keeping an eye on what you’re doing when you browse.
One thing you don’t hear often, however, is that these malicious tools are used to surreptitiously click on ads. Yep — all those banner and brand ads you see plastered over web pages and search engines. It sounds crazy, right? Why would a hacker go to all kinds of trouble to create a software tool that simply clicks on ads?
The answer is money.
What Is Click Fraud?
This is a relatively recent form of attack, fueled by the rise of PPC (pay-per-click) and other performance-based ads. It’s called “click fraud” and, surprisingly, it’s an incredibly lucrative industry — not just in the shady world, but in the real world, too. According to paid advertising experts, one in five paid clicks was fraudulent during the month of January 2017.
A competitor, for example, can intentionally click on ads or promotions to drive the marketing costs up for a rival. Before you know it, the competitor has ballooned the activity cost of an ad while generating little to no income for the associated business.
You see, pages and websites displaying performance-based ads make more money from higher click rates, whether the clicks come from a human or something else. Costs also rise with an ad’s exposure, meaning the more people it reaches, the more expensive it is. Generally, high click counts lead to higher exposure, depending on the advertiser. As unethical as it may be, someone who has it out for your brand could do significant damage simply by running an automated tool, referred to as a “bot.”
Why Are Bots a Problem?
Click fraud generally has one of two possible purposes:
Sabotaging the competition by driving the costs of performance ads up and/or reaching budget caps early in the business day/week.
Generating excess revenue by clicking on performance ads continuously.
Both scenarios require constant interactions, engagements or “clicks” on various ads, promotions and media. Short of someone sitting at a computer all day clicking the same ads over and over, it’s a process better done in bulk — because doing it by hand is tedious.
Developers have therefore created automated tools or systems to do the work for them. In some circles this is called a macro, where a unique extension or tool is designed to operate autonomously without interruptions. You could do something like send the same email hundreds of times to an endless stream of contacts. Or, in the case of click fraud, interact with the same ads and media over and over to boost the cost or revenue earned.
This is where bots come into play. Not all bots are used for nefarious ends. In fact, some are designed to make our lives easier and better — especially in marketing. They can automate or speed up tedious and dull tasks that would otherwise take up most of our workday. For example, you can run a script with your ad campaign to pause spending when you’ve reached a certain threshold, or increase your bids when specific keywords perform well. You can also use bots to detect when content has been plagiarized.
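To make the "helpful bot" idea concrete, here is a minimal sketch of the kind of campaign automation described above: pause when spend crosses a daily cap, and raise bids on keywords whose click-through rate clears a floor. The data structures, thresholds and function name are hypothetical illustrations, not any real ad platform's API.

```python
def adjust_campaign(spend, budget_cap, keyword_stats, bid_boost=1.10, ctr_floor=0.05):
    """Return (paused, new_bids) for a simple rule-based campaign script.

    spend         -- amount spent so far today
    budget_cap    -- daily cap; at or above this, the campaign is paused
    keyword_stats -- {keyword: {"bid": float, "ctr": float}} (hypothetical shape)
    """
    paused = spend >= budget_cap
    new_bids = {}
    for kw, stats in keyword_stats.items():
        # Boost bids on keywords whose CTR clears the floor; leave others alone.
        if stats["ctr"] >= ctr_floor:
            new_bids[kw] = round(stats["bid"] * bid_boost, 2)
        else:
            new_bids[kw] = stats["bid"]
    return paused, new_bids

paused, bids = adjust_campaign(
    spend=480.0,
    budget_cap=500.0,
    keyword_stats={"running shoes": {"bid": 1.00, "ctr": 0.08},
                   "cheap sneakers": {"bid": 0.50, "ctr": 0.01}},
)
print(paused, bids)  # False {'running shoes': 1.1, 'cheap sneakers': 0.5}
```

On a real platform, the same rules would run on a schedule against live reporting data; the point is only that the logic itself is a few lines.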
Unfortunately, they can also be manipulated and used for more devious motives. It is estimated that click fraud or bots cost advertisers over $11 billion per year. That’s a lot of money lost that benefits almost no one except the creators or source of these tools. We — the marketing and advertising industry — have a problem and need to stop spending money on bots.
Click fraud is the act of intentionally interacting with or clicking on PPC and performance-based ads. Advertisers often warn you that clicking your own ads for the purpose of driving up performance is a bannable offense. What they don’t explain is that there are many parties who are able to circumvent this issue thanks to modern automation tools.
There are two very different forms of click fraud, however: automatic and manual.
Automatic vs. Manual Click Fraud
As the name implies, automatic is based on an automated system or tool like a bot. Manual click fraud is carried out by human hands working to actively click on an element or performance ad. A great example of this is when an affiliate or brand actively requests that users click ad links to “support their business or channel” and raise figures. This is bad for advertisers for obvious reasons — plus it’s deceptive.
Then you have click farms, which bridge the gap between the two types. As you might expect, a click farm is nothing more than a huge labor force hired specifically to (you guessed it) click on links and boost figures. It counts as both types because the workers are human, yet “automated” insofar as they operate as a sort of assembly-line system.
When the Competition Plays Dirty
As previously discussed, rival brands or firms can launch click fraud campaigns to harm your business, marketing techniques or bottom line.
This is done in one of two ways. First, clicking on your PPC or performance ads drives the cost of your campaign up, effectively creating budget problems. The hope is that it will eventually ruin or damage your brand enough to lower your potential standing in the market.
Second, the goal is to drive up the CPC, making it more difficult for you to afford the campaign and ruining your chances at progress and improvement. More importantly, it eliminates a marketing solution for you that would otherwise be beneficial — even in a small way.
How Can I Prevent Click Fraud?
Preventing click fraud is not as difficult as it seems. In fact, there are some things you can do, including metrics you can pay attention to, in order to quickly identify an attack on your business or campaign.
Keep calm and follow these actionable steps in order to decrease or completely negate fraudulent click activities on your campaign.
1. Identify Bots
Step one is and always will be to identify the bots or offending parties. As with a data breach, the sooner you find the problem, plug the hole and protect yourself, the sooner you can reduce or eliminate further damage. Some things to watch out for include:
Abnormally high click-through rates (CTR) for your campaigns
Underwhelming engagement metrics compared to traffic, such as low time on site, short average session times and extremely high bounce rates
Sudden spikes in traffic, especially during certain hours or periods of the day you wouldn’t normally see them
High traffic rates on pages with PPC media compared to the rest of your site
Rising advertising costs with no corresponding return from the incoming traffic — you’re paying ridiculous fees for little to no gain
Of course, these patterns are not always obvious or easy to detect — especially when you’re dealing with hundreds, or maybe even thousands, of data points. That’s where modern detection tools come into play. There are a variety of malware, bot, botnet and sniffer detection tools that can accurately identify or flag suspicious activity.
2. Use Filters, Scripts and Honeypots to Disable Targeting Bots
CAPTCHAs, as annoying as they may be, were designed solely to thwart bots and automated systems. The problem is, they can cause significant frustration for your users and hinder an otherwise pleasant experience. That’s why it’s a great idea to use a similar, yet hidden, technique called a honeypot field.
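The honeypot idea is simple enough to sketch in a few lines: render a form field that humans never see (hidden with CSS), then reject any submission that fills it in, since naive bots tend to auto-fill every field. The field name and form shape below are made up for illustration.

```python
HONEYPOT_FIELD = "website_url"  # hidden from humans, e.g. via style="display:none"

def is_bot_submission(form_data):
    """Real users leave the hidden field empty; naive bots fill it in."""
    return bool(form_data.get(HONEYPOT_FIELD, "").strip())

print(is_bot_submission({"email": "a@b.com", "website_url": ""}))          # False
print(is_bot_submission({"email": "a@b.com", "website_url": "spam.com"}))  # True
```

Unlike a CAPTCHA, this costs legitimate users nothing; its weakness is that a bot written specifically against your form can learn to skip the trap field.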
You can also use filters and scripts to block sites, domains or users who are interfering with your traffic, referral data and revenue. Find identifying or associated information, then use a combination of web searches, WHOIS data and more to find domains or portals you should exclude from your analytics.
3. Perform Metric Audits of Your Own
As is true of most management strategies, you’ll want to spend some time checking up on analytics and metrics using your own intuition. Are there any abnormal spikes or patterns sticking out to you? Can you see one or two trends that just don’t make sense? Are costs suddenly ballooning beyond what they’ve ever been while revenue remains steady or declines?
The beauty of your position and having a supportive team is that you can get involved and really learn the ins and outs of these systems. The more familiar you become with what’s happening regularly and why, the sooner you can identify a traffic or performance problem.
This is also one instance where choosing the appropriate DMP is important. Lotame, for example, offers a ton of additional support other platforms do not, including custom-appointed representatives to review your traffic and referral data and go over the particulars with you.
4. Target Niche Sites and Demographics
As enticing as it may seem, avoid targeting broad, sweeping demographics that are likely to walk away. Instead, stick with the audience and customers you know best — the ones who will remain loyal and interested in what you have to offer. Of course, knowing this information is the key to running a successful marketing campaign, which, again, is where having an appropriate DMP in place will benefit you.
You can utilize major data providers, like Lotame, who are focused on generating quality metrics — which means eliminating or blocking bots as quickly and frequently as possible.
5. Be More Careful About Ad Placements
It seems redundant, because you likely already spent a lot of time choosing and perfecting an appropriate ad placement on your site. The goal was likely to maximize performance and revenue, as it should be. But just as important is that placement’s exposure to potential bots or automated tools that crawl across your site.
According to White Ops, an online fraud detection company, the “reputation” of a partner, brand or company is no longer a sufficient “benchmark to predict bot traffic.” Instead, the use of “technology to validate all assumptions” is required. This means that no matter how long you’ve been in this business, you probably don’t know the ideal placement for your ads. Technology, modern metrics and customer data are the only way to figure that out.
Reducing Ad Spend on Bots in the Real World
All of this, in theory, is easier said than done. The ultimate goal is to reduce the expenditure and costs associated with bots and automated systems, while improving or preserving the performance of real clicks and engagements. The question then becomes: is something like this possible in the practical world?
Actually, yes. Procter & Gamble recently announced a move to reduce its ad spending budget, trimming over $100 million in marketing expenses. The company originally intended to steer clear of potential “bot” traffic and questionable content, but discovered that the change had little to no impact on its business. This proved that the digital ad campaigns in question were largely ineffective anyway. It also reduced ad spend on bots, indirectly.
If you aren’t familiar with P&G, the company comprises major brands like Crest, Tide, Bounty, Pampers and more. Again, the company decided to do this on its own, taking on the full risk itself. Luckily, it paid off.
That’s where having a DMP provider or partner like Lotame is most beneficial. Lotame works closely with Are You a Human to remove bots entirely from the Lotame Data Exchange (LDX), and happens to be the only provider to do so. Since most of the interactions come from that exchange, it nearly eliminates the risk of dealing with a bot or automated system.
To End Fraud, We Must Evolve
It’s no secret that widespread practices, techniques or policies can harm a particular industry or group even while remaining in common use. Bots and automated tools are one such example, having been used to wreak havoc and chaos in the advertising world. More importantly, a swarm of effective and aggressive bots can destroy months and months of metrics or customer data in one fell swoop.
If not for the protection of your revenue streams and advertising budgets, you should be concerned with how bots are being used to corrupt your valuable data. With egregious problems such as this, the only solution that makes any sense is to spread awareness, knowledge and experience. That means coming to grips with what these malicious tools are, how they are used and how you can identify them sooner rather than later.
More importantly, we need to come up with better ways — as a community — to circumvent and prevent them. Until that happens, it’s more about the technologies and tools you do use.
As Chief Technology Officer at Purch, John Potter brings a wealth of experience, having spent more than a decade with CBS Interactive/CNET, where he held many roles. Most recently he served as Vice President, Software Engineering, managing a staff of 100+ developers in support of brands like CNET, CBS News, ZDNet and Download.com. John holds multiple patents for his system designs that improve Internet connectivity and document classification. At Purch, his role entails managing all aspects of technology, engineering and operations, and he has successfully participated in and integrated 10 acquisitions.
Could you describe / define ad fraud?
John: Ad fraud is a persistent problem that, according to the IAB, costs the industry $8.2 billion a year in the U.S. While ad fraud is found in various forms, from a publisher’s vantage there are two main problems: One is fraudulent copies of sites that are created, and whose advertising inventory is then presented on programmatic platforms as coming from the original publisher sites. To add insult to injury, most of the traffic on these fraudulent sites is from bots. The other is non-human traffic on legitimate publisher sites from bots scraping the sites, attempting to insert comment links, or coming through content recommendation systems in an attempt to defraud them.
Each of these problems causes different issues and needs to be responded to differently.
How have issues such as bot traffic and audience verification impacted the digital advertising marketplace?
John: The prevalence of non-human traffic and fraudulent or non-viewable advertising inventory has undermined marketer trust in internet advertising. This directly harms all publishers. Just as importantly, it has forced marketers to add software to their creatives to confirm viewability and detect non-human traffic. This increases the size of ad creatives and degrades the user experience on publisher sites. Then there are multiple, competing measurement systems in use. All of this complicates publishers’ ability to deliver on marketing campaigns.
Are these issues particularly problematic given the rise of programmatic?
John: Yes, all of these issues have been compounded by the rise of programmatic. Marketers’ campaigns are running across a larger number of sites, most of which they have no direct contact with. This makes fraud a lot harder to detect than when you are signing a direct deal.
Why is it important to understand/have an accurate picture of the audience being reached?
John: In the end, all marketing is targeted at particular audiences. As publishers, it’s our ability to provide those audiences that makes us valuable to marketers. At a minimum, marketers should be able to expect that any ads they purchase will be viewed by real humans on a legitimate site that is brand-safe. Publishers and programmatic platforms need to do everything they can to make sure we meet that minimum expectation, and initiatives like TrustX can help with that.
How should the industry be addressing ad fraud?
John: First, publishers and the programmatic platforms need to cooperate to wipe out fraudulent advertising inventory. Ads.txt is a great start towards this, but it’s just a first step. I’m really enthused about the potential of blockchain solutions that will track advertising at every step of the process, and leave an auditable trail. Ideally, we get to the point where every advertising impression sold and served is auditable by all parties to the transaction.
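For readers unfamiliar with it, ads.txt is simply a plain-text file a publisher serves from its own domain listing which companies are authorized to sell its inventory, so buyers can reject bid requests claiming to come from that publisher via anyone else. The entries below are illustrative only; the publisher IDs are placeholders, not real accounts.

```text
# Example ads.txt, served at https://example-publisher.com/ads.txt
# Format: <ad system domain>, <seller account ID>, <DIRECT|RESELLER>[, <certification authority ID>]
google.com, pub-0000000000000000, DIRECT, f08c47fec0942fa0
appnexus.com, 1234, RESELLER
```

A fraudster spoofing example-publisher.com through an unlisted exchange can then be filtered out by any buyer that checks the file.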
Publishers also need to work hard to block fraudulent traffic on their sites. At Purch, we already do a lot to block bots from our sites, and to prevent advertising being served to any that get around our blocking attempts. We’ve now moved on to integrating real-time bot detection and ad blocking into our server-to-server header bidding platform. I know other publishers and programmatic platforms are taking this issue seriously as well, but it will need to be a continuing concern for a long time to come.
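The request-level screening John describes can be pictured as a cheap gate in front of each bid request: a few fast checks, plus a deferral to a real-time detection verdict. The user-agent signatures, IP prefix and function below are illustrative stand-ins, not Purch's actual implementation or any vendor's API.

```python
KNOWN_BOT_UAS = ("headlesschrome", "phantomjs", "python-requests", "curl")
DATACENTER_PREFIXES = ("66.249.",)  # e.g. a known crawler range (illustrative)

def allow_ad_request(user_agent, ip, flagged_by_vendor=False):
    """Decide whether to send a bid request for this page view."""
    ua = user_agent.lower()
    if any(sig in ua for sig in KNOWN_BOT_UAS):
        return False                   # self-identified automation
    if any(ip.startswith(p) for p in DATACENTER_PREFIXES):
        return False                   # known non-human IP range
    return not flagged_by_vendor       # defer to real-time detection verdict

print(allow_ad_request("Mozilla/5.0 (Windows NT 10.0)", "203.0.113.7"))  # True
print(allow_ad_request("python-requests/2.28", "203.0.113.7"))           # False
```

In a server-to-server header bidding setup, a check like this runs before fanning the request out to demand partners, so no ad is ever auctioned against traffic already known to be non-human.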
The illustration used in this article, Picco Robots, is reproduced, with modifications, under a creative common license.
With trust in mass media at a record low, publishers look to 2017 as an opportunity to distinguish quality journalism from the fake news landscape and bot traffic of big tech platforms.
CEOs, editors and digital leaders today recognize both the opportunities and the challenges of 2017. Reuters Institute surveyed 143 senior publishing executives in 24 countries to gauge current business sentiment, uncover trends and identify new developments in the digital marketplace. Interestingly, Reuters found that more than two-thirds (70%) of executives believe fake news offers them a chance to strengthen their brands. More than half (56%) say that Facebook Messenger will be an “important/very important” part of their offsite initiatives this year (53% for WhatsApp and 49% for Snapchat). At the same time, just under half (46%) of these same respondents are more worried today than a year ago about the role of offsite platforms.
Platform and algorithm changes allowing for easier reporting of false news (and feeding these signals back into the core algorithms so these sources get devalued).
Regulation threats to remove fake news from sites.
Algorithms are expected to challenge our bias.
Fighting for quality news brands.
2. Redefining publishers’ relationship with platforms
Publishers fight back by creating platforms of their own (e.g. Schibsted started its own platforms for content and advertising to create the scale and data competence to compete with Facebook).
Platforms pay hard cash for content.
Mergers and acquisitions happen more often as scale will mean operating across multiple platforms.
3. Digital advertising and sustainable business models
Subscription payments will focus more on membership and less on paywalls. News publishers, especially, will need to attract new customers, offer new pay services and earn more money from their current subscribers.
Data, loyalty and personalization will help with converting unidentified web users into loyal customers by creating more relevant and personal experiences.
Mobile alerts and the battle for locked screens signifies the shift to mobile notifications to attract consumers back to apps and websites.
Acceptable ads and ad-blocking apply pressure on the marketplace to ensure a more positive user experience with non-intrusive advertising.
Sponsored content replaces the display ad model.
Pop-up newspapers and magazines will offer in-depth coverage on certain topics but for only a limited period of time.
4. Messaging applications and news bots
Voice news bots with voice-activated platforms gain strong penetration (e.g. Alexa and Google Home).
Fact-checking bots are activated (e.g. Full Fact, a UK-based company, is already looking to develop a service that fact-checks live press conferences).
Conversational commerce emerges (e.g. Crosby, a travel bot service that reads your email or group messaging conversations and sends you recommendations for where to eat, what to do and when to leave for the airport).
5. Voice as an operating system and the rebirth of audio
Podcasts and audio books get a big boost in the car.
Improvements to data and advertising around podcasts lead to significant investments by publishers. A new measurement system from Nielsen and the Swedish start-up platform Acast offer metrics for podcasts to support advertising models.
Businesses start to deploy Amazon Echo and Google Home speakers (e.g. hotels use voice-controlled devices to enable guests to order room service, check the weather and find TV channels).
6. Online video and the future of TV
Disillusion with Facebook Live shifts a live focus towards sport and exclusive music, both strong vehicles to attract audiences and advertisers.
Oversupply of short form video leads to falling advertising premiums.
New opportunities emerge with feature-based videos that integrate brand messages into the video itself.
Video-selfies to experiment with new fantastical filters (e.g. Splash is a new app that allows you to create and annotate 360-degree experiences).
7. The blurring of television and online video
Top content will increasingly be watched on big screens.
Competition for talent and rights heats up to drive new subscription and retain existing users.
News bulletins losing audiences triggers new ways to appeal to the young (e.g. CNN acquired video-sharing app start-up Beme, co-founded by YouTube creator Casey Neistat, and is building a new brand around distinctive reporting and commentary for millennials).
New reality tech offers huge potential to shape experiences for entertainment, education and commerce. Forecasters suggest around 30m devices will be sold by 2020 generating revenue of around $21 billion.
9. AI and algorithms under fire (e.g. an example gone sour was Microsoft’s friendly AI-driven chatbot Tay, which within its first 24 hours on Twitter with millennials was spouting offensive and racist messages such as “Hitler was right”).
10. Automation and a jobless future means robo journalism on the way
More automated stories.
Intelligent content production systems.
Computer- and network-assisted investigations.
Filters and alerts are being developed to help manage the information overload (e.g. SamDesk and Dataminr are deployed in newsrooms to pinpoint and manage breaking news in social networks).
11. Cyber-Wars, and Personal Security
Encryption and surveillance are on the rise.
12. New technology
Sharper screens, fold out phones, better batteries.
Faster and more reliable networks (e.g. 5G).
Clothes as a platform with wearable technology.
Biometrics, the end of passwords and checkout-free shopping. A new shopping experience, Amazon Go, that automatically detects when products are taken from or returned to the shelves using computer vision and sensor fusion.
13. Start-ups to watch
Cheddar is a new business news video network for millennials. Founder Jon Steinberg, formerly of BuzzFeed, is charging $6.99 for premium services on its own website and is looking to drive carriage fees from services like Facebook Live, Twitter and Netflix.
Zipline is a small robot airplane designed to carry vaccines, medicine and blood in developing countries.
Houseparty, from the creators of Meerkat, is a new group video chat app in which up to eight people can chat at a time.
Accompany aims to provide an automated briefing of all the information you need before you walk into any meeting including relevant files, email conversations with attendees, details about their lives sucked from the web plus up-to-date information on company performance.
2017 provides many opportunities for publishers to rebuild consumer trust in digital media. The Reuters Institute predictions suggest that publishers must help scrub the digital environment of bot traffic, annoying ads and fake or misleading news. They must hold themselves and others responsible and keep a watchful eye on the algorithmic accountability tied to the digital experience. Importantly, publishers must continue to build their digital brand and diversify their revenue streams beyond digital display advertising.
Content marketing continues to mature and is now used by over 85% of all marketers (B2B and B2C). But with that maturity comes the hard realization that reaching meaningful results—for example, a significant lift in site visitors, increase in conversions or in brand perception—requires continuous learning and improvement.
It’s not a shiny magic tool that solves all business challenges. It’s a daily struggle that requires tremendous investment. CMI’s Joe Pulizzi has said the industry is in the middle of a “downhill slide (as part of the Gartner Hype Cycle) into a ‘trough of disillusionment.’”
By now, most brands have tried to create content. A couple of videos here, maybe a few blog posts there. For many of them, doing so didn’t really move the needle and inevitably, they have given up on their efforts.
But for the successful ones, their ability to stay on top has stemmed from an understanding that they need to address creative, cadence, and measurement in a very different way than previously.
To help prepare for what’s next, here are some of the main themes we’ll hear in 2017:
Chatbots are the new CRM
Six months after Facebook launched chatbots for Messenger, there are already over 33,000 chatbots on the platform. Why are chatbots so exciting? Because they offer brands the opportunity to interact with consumers in a direct and personalized way. Chatbots enable brands to create a one-on-one relationship. It’s like the future of CRM — a conversation rather than blast emails.
Bots can either be scripted or based on AI. Some serve a utility function (like ordering flowers or opening a support ticket) and some can provide answers using content.
In 2017, we’ll start to see many brands who will start to create sophisticated bots that engage with audiences using a combo of their social and support teams as well as their content to enable engagements at scale.
Video Storytelling
Most online videos promoted by brands are still repurposed TV commercials. In 2017, we’ll start to see the shift towards “made for web” videos — not commercials, but stories, both long and short. When creative teams are not constrained to 15-30 second spots, or to making a pre-roll that isn’t skipped after five seconds, they can really focus on storytelling and providing real entertainment or educational value.
Podcasts
Just as season one of “Serial” changed the world of offline listening, transforming millions into huge fans of podcasts, “The Message” by GE is a turning point in how marketers should think about leveraging the medium for content marketing. You can sponsor podcasts, but that’s just another form of interruptive advertising, like commercials on radio. Conversely, you could create your own podcast, and make it great. eBay’s “Open For Business” collaboration with Gimlet Media (the company behind podcast hits like “Startup” and “Reply All”) is another great example of how brands can evolve from interruptive push advertising to become real content marketers.
Bring Content Expertise In-House
While it is important for all marketing endeavours, high-quality content is vital to connecting with millennials. The challenge is how to keep the right balance between quality and quantity. Many brands are now realizing that the best way to create a scaled operation is to bring the expertise in house. Research by Curata shows that in 2017, 51% of companies will have an executive in their organization who is directly responsible for an overall content marketing strategy (e.g., Chief Content Officer, VP or Director of Content). These executives are hiring content specialists — often ex-journalists and editors — and having them produce great content that takes advantage of the in-house collective experience in the product and category while conforming to all corporate policies and regulations.
Content Analytics and Content Attribution
With the evolution of the creative skillset comes the realization that measuring content marketing success requires not only a different approach, but also different tools.
Google Analytics often falls short in providing a clear map of how each channel of promotion is driving engagement with the content and how an engagement with one piece of content at the top of the funnel influences a CRM registration at the bottom of it. Moreover, it is hard to use it to see how a blog view leads to a Facebook app download or a Google search and conversion.
That’s why more and more companies are turning to solutions like Trendemon or Chartbeat to better understand and optimize their true content attribution and engagement.
Mobile
It is simply not good enough anymore to ensure that your content is responsive and looks okay on mobile. The whole experience with your brand needs to be evaluated from a mobile-first perspective. If you aren’t making sure of this already, then you are way behind. In 2017, the majority of engagement with your website will happen on a mobile device. How fast does your site load? How easy is it to navigate? How is the design optimized for a mobile vertical view-and-scroll mode? Do you have experiences made especially for mobile, like a one-click button to call your office, or a link to open Waze to navigate easily? Most brands’ websites are repurposed desktop experiences, and those will just not suffice.
VR/AR
VR is still niche and not mass market. But don’t ignore the hype. It’s important to start thinking about how you can create content for that world, given the expectation that this tech will become mass market in 2018.
If 2016 was all about realizing how hard it is to break through the noise of content, 2017 will be about experimenting with new mediums, skillsets, and measurement techniques. With so-called disillusionment also comes breakthroughs and innovations. We hope to see more of that in the year ahead.
Gilad de Vries (@giladdevries) is the Senior Vice President of Strategy at Outbrain. Gilad brings more than 19 years of experience in the digital media and technology fields. Before joining Outbrain, Gilad was VP of Digital Media and Principal at Carmel Ventures, one of Israel’s top-tier venture capital firms, where he was highly engaged in Carmel’s investments in digital media, Internet and mobile startups. Before Carmel, Gilad was a Senior Director of Marketing and Product Management at Comverse Technology, a leading provider of value-added services for telco providers. Gilad holds a B.A. in economics and business management from Bar-Ilan University and a Global MBA cum laude from IDC Herzliya. Gilad is also a first lieutenant (reserve) in the Israeli Army’s technology unit (Mamram).
It’s been a topsy-turvy year for publishers in 2016, with big pushes into video, native advertising and even VR. But the end of the year saw the rise of Donald Trump, and questions about the power of social media and filter bubbles, along with the upside of a “Trump bump” in paid subscriptions and donations at the New York Times, ProPublica and other places.
With 2016 soon coming to a close, let’s look ahead to how the biggest trends of the past year will influence the digital media business in the year ahead.
1. Addressing fake news and the filter bubble
Fake news and the filter bubble, particularly after BuzzFeed’s explosive story on Macedonian teens reaping profits from pro-Trump news sites, have emerged as the topics du jour for media and technology companies following the 2016 election. With users now increasingly aware that red feeds and blue feeds exist as competing truths on their favorite platforms, all parties involved – from technology behemoths like Facebook, Google and Twitter, to media executives and publishers, to individuals themselves – must bear the burden of addressing this issue as 2017 unfolds.
Facebook and Twitter have both announced measures to help stop the spread of fake news, particularly by limiting advertising from fake news websites on their platforms. Facebook has reportedly filed a patent for a technology that would help users spot and report “objectionable content,” and is working with top news publishers to curate content directly into news feeds, both of which would presumably help curb an infestation of bogus content. And news consumers themselves will need to brush up on digital news literacy if they want to understand where content comes from and who’s behind it.
2. Love/hate relationship with platforms continues

Speaking of ups and downs, publishers have had a tough time coping with the growing power of social platforms, especially Facebook, as they command attention but don’t always play fair. Facebook has repeatedly erred in reporting accurate ad and video measurements, while Twitter confuses analysts and consumers alike by banning, and then reinstating, the account of white nationalist leader Richard Spencer. Their standards will have a wider influence on the industry – particularly when it comes to free speech, hate speech, and the limits and freedoms of platforms to censor content.
What publishers can do in working with these platforms is multifold. First, they must consider whether they want to continue to trust and sustain social media ecosystems (that were built in large part on the appeal of their content), while competing with them for advertising and distribution. And if it is the case that Facebook is not investing in publishers the way it should and Twitter is declining in importance, then perhaps publishers need to invest more in other platforms like Snapchat and Google AMP rather than Facebook’s Instant Articles.
More likely, publishers will need to work together to push platforms like Facebook to work more closely on initiatives that are a win-win for both sides. Making sure analytics are right, ferreting out fake news and developing long-term revenues for Facebook Live would go a long way toward collaborative goals.
3. Premium content demands premium subscriptions and donations

Despite Donald Trump’s war with mainstream media, his election has helped the bottom line for news publishers and media outlets. From ProPublica to the New York Times, The Atlantic to Columbia Journalism Review and the Los Angeles Times – all have reported increased readership and interest from audiences, whether in the form of web traffic, donations or direct subscriptions.
Will this paid content boom last? The Financial Times had a similar “Brexit bump” and has largely kept most of its uptick in paid subscribers, so we will see. But it’s interesting that there’s less emphasis on eyeballs-for-eyeballs’-sake and more interest in paid premium content. Two new digital startups, The Outline (from Joshua Topolsky) and Axios (from Politico founder Jim VandeHei), have pushed quality over quantity, with manifestos against the chase for clicks. VandeHei, in particular, is pushing $10,000 subscriptions at Axios. We’ll see if people continue paying up.
4. Too much video, too many native ads

This year saw so many publishers push harder into video, and push harder toward native ads and branded content. On the video side, publishers like Mashable and BuzzFeed went all-in on video, while startups like NowThis and OZY are basically built for video. And traditional publishers like the New York Times, Time Inc. and the Wall Street Journal have invested deeply in their in-house studios to deliver branded content. Vice Media and BuzzFeed have also said that branded and sponsored content accounted for a substantial chunk of their 2016 revenues.
And that parallels the push by Facebook, Instagram and Twitter to make big bets on video, including live-streaming, where Facebook paid publishers to produce content. But how much is enough, and when do you get to overkill? We know that advertisers love branded content and video ads, and publishers know that their video is eminently sharable on social media, but is this what consumers really want? When you have a glut, only the best quality can survive. People will gravitate toward quality content, toward timely content, toward content with a personality. That means data and surveys will play an outsize role in which types of branded content or video – or heck, branded video – will make it in the long run.
5. Beyond the cutting edge: Bots, VR and edge tech needs to add to bottom line

Virtual reality was a media darling in 2016, but it’s still a long way off from mass consumption. More than anything, it’s going to take actual demand from the consumer, and widespread adoption, for it to become truly profitable. 2017 may not be the year we’ll see that, but the VR and AR markets are expected to grow in the long run as the hardware and techniques for investing in the tech become cheaper.
Bots, meanwhile, will become a hotter topic in the coming year, especially given Microsoft’s forthcoming Azure cloud service that will help developers build bots more easily. Yet again, consumer attention remains an important consideration. While bots may seem like a convenience, and have caught on in Asian countries, developers are going to have to work hard to convince people of their necessity in order for the tech to truly take off and become profitable.
6. The regulatory climate under Trump for M&A and privacy

An incoming Trump administration has Net neutrality advocates worried that a reversal of federal policy is likely despite Trump’s silence on the issue. His main public comment on Net neutrality, from 2014, is a tweet asserting the FCC’s move to reclassify the internet as a utility was “another top down power grab of the Obama administration.” His appointees to the FCC transition team have all been critical of current Net neutrality rules, so this could lead to yet another battle over the rules.
Although Trump promised that he would fight against the $85.4 billion AT&T/Time Warner deal, which he lambasted as “too much concentration of power in the hands of too few,” it seems he may be walking back that stance. AT&T executives, at least, seem confident the merger will pass regulatory scrutiny after meeting with Trump’s transition team. Whether this suggests more mega-mergers in media and tech likely depends on just how fickle Trump proves to be.
And when it comes to privacy, all indicators suggest 2017 and the following three years will be turbulent times. With more reports of hacking and press freedom under threat, the Signal app – which security researchers say offers the most privacy protection out of all messaging apps – has experienced a 400 percent jump in daily downloads since November 8.
Programmatic advertising and its aggregation of inventory are often viewed as the key forces behind the commoditization of digital ad impressions. The shift from earlier practices emphasizing context to audience-centric media buying has left many questioning whether the environment surrounding advertising really matters. It is important that we examine to what extent quality drives advertising effectiveness.
Today’s release of comScore’s independent research, “The Halo Effect: How Advertising on Premium Publishers Drives Higher Ad Effectiveness” presents empirical findings that Digital Content Next member sites delivered significantly higher branding effectiveness results than other sites.* Importantly, the research finds that the primary driving force for the brand lift is the positive impact of the “halo effect” of the contextual environment in which an ad is seen. In other words: A good environment drives better ad campaign effectiveness. In fact, while some of the positive effect can be attributed to higher ad viewability and less invalid traffic on premium sites, comScore found that the most significant driver of increased effectiveness is the halo effect of appearing on premium sites. This “premium” designation is one that our own research has borne out as a distinguishing factor in other areas, such as the quality of ad inventory and the significantly lower bot traffic on DCN member sites.
What value does the halo factor have for marketers? Used properly, it can help a brand cut through the clutter and save money on marketing by using this momentum to operate effectively and efficiently throughout the marketing funnel, particularly in the brand consideration stage where the “halo” lift was 3x.
This is demonstrated by the research, which first compared the overall brand lift effectiveness of ads delivered on DCN members’ sites versus non-DCN premium publishers’ sites. Ads on DCN premium publisher sites significantly outperformed those on non-DCN sites, by 67% (0.89 brand lift vs. 0.53). Measuring brand lift answers some of marketers’ biggest questions: Are my ads influencing consumer behavior? Are they influencing sales, and to what degree? Knowing a campaign is 67% more effective in influencing consumer behavior and intent to purchase gives marketers a real lead in the marketplace.
comScore also identified ad effectiveness metrics in other parts of the marketing funnel. DCN publisher sites performed 32% better on top-funnel metrics, which include awareness, recall and message association (0.56 brand lift vs. 0.42). The mid-funnel, where consumers have the potential to develop a stronger interest in a brand, proved more than three times as effective for DCN publisher sites, with a 1.87 brand lift vs. 0.51. Premium publisher sites drove 255% greater mid-funnel effectiveness, a potential accelerator of brand sales. The lower and final part of the funnel includes purchase intent and share of consumer choice metrics. DCN premium publishers performed 9% better on the bottom-funnel metrics (0.38 brand lift vs. 0.35).
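The relative lifts above follow directly from the reported brand-lift scores. As a quick sanity check, the arithmetic can be sketched as follows; note that comScore’s published percentages were presumably computed from unrounded underlying data, so values recomputed from the rounded scores differ slightly (e.g. the rounded mid-funnel scores imply roughly 267% rather than the reported 255%):

```python
# Recompute the relative brand lift of DCN premium publisher sites vs.
# non-DCN sites from the rounded brand-lift scores reported in the study.
funnel_scores = {
    "overall": (0.89, 0.53),  # (DCN, non-DCN) brand lift
    "top":     (0.56, 0.42),
    "mid":     (1.87, 0.51),
    "bottom":  (0.38, 0.35),
}

for stage, (dcn, non_dcn) in funnel_scores.items():
    relative_lift = (dcn / non_dcn - 1) * 100  # percent improvement
    print(f"{stage}-funnel: {relative_lift:.0f}% higher brand lift")
```

Running this reproduces the reported top- and bottom-funnel figures almost exactly (33% vs. 32%, 9% vs. 9%), with the overall and mid-funnel figures within a few points of the published 67% and 255%.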
comScore’s independent research provides a clear message that brands benefit from advertising on premium publisher sites. While the research found that premium publishers perform better across all phases of the marketing funnel, the value in driving mid-funnel metrics is especially important to convert awareness into positive brand consideration. So, while digital continues to create opportunities for targeting and efficiency, it is clear that placement within the context of quality environments provides a “halo effect” that drives ad effectiveness.
*This study was not commissioned by Digital Content Next (DCN) or any of its member companies. While the results were shared with DCN prior to publication, DCN did not have any influence over the design of the research or its findings.