Beyond the latest remix of Lil Nas X’s Old Town Road, the app has piqued the interest of social media managers at news publishers. Launched by Chinese tech company ByteDance in 2017, the service merged with Shanghai-based lip-syncing app Musical.ly before its popularity really took off in 2018. It became the most downloaded app in Apple’s App Store and hit one billion downloads across all mobile platforms in early 2019.
To catch everyone up: TikTok allows users to record 15- to 60-second looping vertical videos. It has a built-in editor for adding background audio tracks and augmented reality-inspired overlay graphics. Its huge adoption has been buoyed by the use of pop music samples as background tracks. TikTok has partnered with music studios to navigate the tricky copyright status of those tracks, and music stars have leveraged the platform’s viral memes to spread awareness of their work.
Why is this news?
Why should news managers care though? “To me, the why is obvious. It’s a whole new generation. [TikTok is] basically Gen Z’s biggest platform. Not all Gen Z likes TikTok, but a lot of them love it passionately,” says Washington Post video editor Dave Jorgenson.
Jorgenson has been behind The Washington Post’s push into the format. “There’s not a lot of news on TikTok. And, for someone who works for a newspaper or a broadcast network, that might seem kind of scary. But for me, I was like, ‘Oh, that’s amazing,’” he says. “I mean, why wouldn’t we use this app that — I think as of Friday — was number one in the entertainment section of iTunes?”
Other news brands on the app include NBC News, E! News, and The Dallas Morning News, though Jorgenson seems to have truly tapped into the TikTok format, attracting a devoted audience to The Washington Post’s TikToks.
A culture of its own
Users spend an average of 52 minutes a day on the app. And, for Jorgenson, it’s been a way to introduce a newer generation to the venerable newspaper brand. But that’s taken a fairly specific strategy. “I’ve been very heavy handed, you’ll notice a Washington Post literal, physical newspaper cameo and we really want that to stick in there. It is funny how many people think that my name is Washington or something,” said Jorgenson.
The other main benefit Jorgenson sees is a pretty wide-open space in the app for breaking news. There just aren’t many competitors. But he adds that anyone interested in using the platform should take some time to familiarize themselves with its culture.
“Embracing the culture is really important,” he says. Sure, a brand or newspaper can try to jump in and profit off the growing platform without figuring out what that culture is. However, audiences “know when they don’t understand the app, and they don’t know what’s going on.”
The scoop on TikTok
But if a brand is considering the space, there are a couple of things to understand about going viral on the platform. Similar to Instagram, there’s an algorithmically organized “For You” section where the app surfaces relevant content for the user. Videos appear based on the Likes given by the user’s social circle.
There’s another way TikTok puts its own spin on sharing, called a “duet.” Users can record their reactions to be played concurrently beside the original video. And, if a large enough account does that, it ends up being a huge boost to viewing numbers.
“Another tip for a newsroom would be to use popular songs that everyone is using,” said Jorgenson. In fact, TikTok displays videos using the same songs together allowing discovery of content along similar themes.
At the end of the day, Jorgenson sees others in the news space following the Post’s lead. “I’ve started to be pretty open about how I do it because I think competition is good,” he says.
“A friendly rivalry is a good thing on any social app.”
Media companies have long declared that it matters where
advertising campaigns run. Unfortunately, when the ability to micro-target
users at scale became available, particularly on massive social media
platforms, it was simply too attractive for advertisers to ignore.
Despite countless media reports on the risk for brands running alongside toxic, user-generated content on YouTube or Facebook, behavior has never changed for the long term. Instead, many advertisers used unsophisticated keyword blocking strategies or stopped advertising in the news category and considered themselves “safe.” Regrettably, that approach amounted to little more than throwing the baby out with the bathwater. Two new research studies released last week, however, clearly demonstrate why this short-sighted strategy needs a serious rethink.
Reports from World Media Group (WMG) and Integral Ad Science (IAS) found that digital ads viewed on trusted editorial sites generate a more engaged audience for advertisers. These reports confirm comScore’s 2016 research that showed ads viewed within a premium publisher environment drive significantly better advertising effectiveness, particularly for mid-funnel consideration.
comScore’s independent research came at a time when the industry
was in the midst of proving the basics to the market: that ads were being
delivered to real computer screens and being seen by humans, not bots. comScore’s
empirical research showed that running ads on “premium” publisher sites had
significant impact on brand effectiveness and the lift to advertiser brands was
attributed to the “halo effect” from the premium environments.
Quality context delivers real results
Fast forward to 2019: WMG, a global alliance of a dozen leading news and media companies (including The Atlantic, Bloomberg Media Group, The Wall Street Journal, and The Washington Post), commissioned Moat, a digital marketing measurement service, to conduct research on the quality of, and engagement with, premium inventory across desktop display, mobile display, and desktop video.
The findings showed that premium digital inventory running across quality content brands in Q3 2018 outperformed Moat’s benchmarks by between 13% and 144%. WMG’s research also concludes that the primary driver of increased engagement is the “halo effect” that comes from the value of the contextual environment in which the ads are seen.
Key performance metrics
Desktop display ads achieved an active page dwell time of 66 seconds, 39% higher than the industry average. The in-view rate (where 50% of the ad is viewable for 1 second) and in-view time both exceeded Moat’s benchmarks, by 27% and 25% respectively.
Mobile display ads generated 35% more interactions than average. Viewability rates were 32% higher than the benchmark and engaged consumers for 13% longer.
Desktop videos achieved 22% higher-than-average viewability rates. Consumer attention to videos was also high, with audible and visible completion rates 144% higher than Moat’s Q3 2018 benchmarks.
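To make those lift figures concrete, here is a quick arithmetic sketch that back-calculates the implied benchmark from a reported metric and its percentage lift (the assumption, not stated in the report, is that each lift is expressed relative to Moat’s benchmark):

```python
def implied_benchmark(observed: float, lift_pct: float) -> float:
    """Benchmark implied by: observed = benchmark * (1 + lift_pct / 100)."""
    return observed / (1 + lift_pct / 100)

# Desktop display dwell time of 66 seconds, reported as 39% above the
# industry average, implies a benchmark of roughly 47.5 seconds.
print(round(implied_benchmark(66, 39), 1))  # 47.5
```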
People react to context
Additional biometric research by IAS, a technology company that analyzes digital advertising placements, identified the impact of high quality and low quality mobile environments on people’s reactions to digital ads. This neurological research – looking at the brain centers responsible for positive and negative affinity – further substantiates that the environment in which an ad appears has significant impact on consumers’ reactions to that ad.
IAS’ research found that ads viewed in high-quality mobile web environments are perceived 74% more favorably than the same advertisements seen in low-quality environments. Advertising in high-quality content environments resulted in 4% greater favorability, 20% greater engagement, 7% greater emotional intensity, 29% higher detail memory encoding, and 30% higher global memory encoding.
For several years now, advertisers have struggled to use
digital advertising to grow demand and desire for their products, particularly
as direct-to-consumer brands and other low-cost, “brandless” options (see
Amazon Basics) are starting to flourish. Like the earlier comScore Halo Effect
research, these new studies prove that advertising on high quality content
sites drives higher brand attention and engagement for both display and video
advertising. Context matters and research continues to prove this: Ad
performance is better in a premium content environment.
Anyone who watched the doctored video of congresswoman Nancy Pelosi in which she appears drunk—a clip that quickly went viral a few months ago and which Facebook refused to delete or flag as bogus—can appreciate how quickly and suddenly the world is changing. That unsettling event marked a turning point for many, one that signaled the dawn of the deepfake era.
While the Pelosi video was only mildly altered, it turned attention to the rising issue of deepfake videos. For the uninitiated, deepfakes use artificial intelligence and software to manipulate video and audio so cleverly that it becomes increasingly difficult to vet content and determine its authenticity. While deepfake videos and the means to create them have been around for a while—particularly in the form of counterfeit celebrity pornography and revenge porn—they’ve increasingly captured the public’s attention.
And that’s for good reason: People are
worried about the damage deepfakes can do and the inability of the news media
and the government to curb this disturbing tech trend. With a national election
looming and tensions between nations simmering, combined with the ability for
bad actors to sabotage these delicate circumstances, the stakes couldn’t be
higher.
Seeing is not always believing
“We have seen deepfake technology improve dramatically over the past year, and it’s getting harder to identify AI-synthesized media,” said Till Daldrup, training and outreach programming coordinator for The Wall Street Journal. “We’re observing two unsettling trends. First, deepfake technology is being democratized, meaning that it’s getting easier for forgers to create convincing fakes. Second, several startups and internet users are commercializing deepfakes and offering to produce AI-synthesized fakes upon request for money.”
While he can’t name a major national news organization being hoodwinked by a deepfake to date, “we have already seen journalists from other news organizations become targets of disinformation campaigns, including deepfakes that are trying to humiliate, intimidate, and discredit them,” said Daldrup. He cited as an example the notorious deepfake of Indian journalist Rana Ayyub.
Mike Grandinetti, global professor of Innovation and Entrepreneurship at Hult International Business School, said the number of deepfakes has risen exponentially over the past three years as access to new tools has grown.
“Today, powerful new free and low-cost apps like FaceApp and Adobe’s VoCo enable anyone to manipulate facial images and voices, respectively,” said Grandinetti. “Think of this as fake news on steroids. As a result, deepfakes are becoming increasingly problematic for the news media.”
Deborah Goldring, associate professor of marketing at Stetson University, suggested that it’s easy for the public to be fooled by this fraudulent content. “According to a Buzzfeed survey, 75% of Americans who were familiar with a fake news headline perceived the story as accurate,” she said. “And because social media makes it so easy, consumers are more likely than ever to share news with their network, which further spreads inaccurate information.”
The fourth estate fights back
What can the news media do to deter the decepticons,
call out the counterfeits, and prevent further Pelosi-like phonies from hogging
the headlines? Plenty, say the pros.
“Increasingly, it felt like the news cycle would become dominated by a clip of video that was false in some way. Our reporters were spending more time telling stories about manipulated video and the ways it was used to attempt to sway political opinion,” said Nadine Ajaka, senior producer of video platforms for The Washington Post. “Politicians, advocacy groups, and everyday users are sharing manipulated video, and there is a sense that the public can’t trust what they see. It feels a bit like the wild west.”
Consequently, Ajaka and the Post’s fact checker Glenn Kessler and video verification editor Elyse Samuels helped create a classification system for fact checking online video and a guide designed to assist consumers and journalists alike in navigating the treacherous trail paved by deepfake videos. The guide defines key terms and organizes the problem into the categories of missing context, deceptive editing, and malicious transformation.
“Fact checking video is different than
fact checking a statement. You’re not just parsing words but many factors—images
and sound—across the passage of time. With this initiative, we can give
journalists a shared vocabulary they can apply to the different ways video can
be manipulated, so their readers can be more informed about what they’re
viewing,” Ajaka added.
The Wall Street Journal, meanwhile, has
formed a committee of 21 newsroom members across different departments that
helps reporters identify fake content online. “They know the tell-tale signs of
AI-synthesized media and are able to spot red flags. Each of them is on call to
answer reporters’ queries about whether a piece of content has been manipulated,”
said Daldrup.
“We want to be proactive and are raising
awareness for this issue inside and outside the newsroom. We are also
collaborating with universities and researchers and are constantly
experimenting with new detection tools and techniques.”
Other weapons in the war on deepfakes
Daldrup said there are several promising
detection methods emerging, most of which have been developed by researchers at
universities like UC Berkeley and SUNY; the Defense Advanced Research Projects
Agency (DARPA) has also been trying to perfect machine-learning algorithms that
can spot deepfakes.
“A potentially significant component of any meaningfully effective solution would likely include blockchain technology,” said Grandinetti. “Truepic, a photo and video verification system, uses blockchain to create and store digital signatures for authentically shot videos as they are being recorded, making them easier to verify later.”
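To illustrate the general idea of capture-time fingerprinting, here is a minimal sketch; it is not Truepic’s actual system, and the hashing scheme and in-memory record stand in for a tamper-evident ledger:

```python
import hashlib
import time

def fingerprint_video(path: str) -> dict:
    """Hash a video file at capture time and record a timestamped fingerprint.
    A production system would write this record to a tamper-evident ledger."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MB chunks
            sha.update(chunk)
    return {"sha256": sha.hexdigest(), "captured_at": int(time.time())}

def verify_video(path: str, record: dict) -> bool:
    """Recompute the hash later; any edit to the file changes the digest."""
    return fingerprint_video(path)["sha256"] == record["sha256"]

# Hypothetical usage (file name is a placeholder):
# record = fingerprint_video("clip.mp4")
# print(verify_video("clip.mp4", record))  # False if the clip was altered
```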
Additionally, partnering with media
forensics experts at universities and other research institutions that have
access to these tools can be a smart strategy for newsrooms.
However, “this is an ongoing cat-and-mouse
game. Every time researchers come up with a new detection method, forgers will
alter their techniques to evade being caught,” Daldrup said.
That’s why editors and reporters will need
to work harder and rely on proven vetting methods. “Journalists must
collaborate to better verify sources through third parties,” recommended
Grandinetti. “And they need to spread the message to news consumers that they
need to be much more on guard with healthy skepticism when a video strains
credibility, too.”
The right approach for a news organization
depends on its size and mission, Daldrup suggested. If you’re a journalist, “in
general, it is good practice to think twice before you share video of unknown
provenance on social. It is also helpful to monitor research on deepfakes in
order to stay on top of the latest developments in the field.”
Lastly, remember to “share best practices
with the rest of the news industry,” added Daldrup. It is essential that the
entire media industry keep pace with issues that threaten the quality of news
and the public’s trust.
Echo Chamber: a room with sound-reflecting walls used for producing hollow or echoing sound effects —often used figuratively.
Most of the time, echo chambers are amazing, secret shared
spaces, acoustic marvels. See the Whispering Gallery of St. Paul’s Cathedral in
London or the famous subterranean chambers at Capitol Studios in Hollywood.
Unfortunately, “figurative” echo chambers just don’t produce the
same vibe. A prime example is my renewed optimism every March that this is
finally the year for the Washington Nationals. I spend all winter reading and
talking with other fans about which free agents the Nats have signed and why
this will finally get them over the hump. Players talk about how they
incorporated hot yoga into their offseason conditioning program, which I’ll
translate to mean there will be a surge in home runs in the hot, humid DC
summer. Advanced stats project breakout seasons for practically the entire
roster.
Of course, there are plenty of critics out there to
counterbalance my irrational optimism. Some are less articulate than others (hello,
Phillies fans!). Even my loving wife cautions that the Nats will break my heart
again. But, in the cold, dark winter, I willfully shut out all the haters in
favor of my sweet summer dreams. And, yet, here we are still waiting for the
Nationals to win their first playoff series.
Public policy is another arena in which echo chambers don’t support balanced, rational thinking. There are two recent prime examples surrounding major tech companies that are suffering from the echo chamber effect:
A more balanced approach
Facebook recently announced that it would develop its own digital currency, Libra. It’s a big swing with grand slam potential. Quite frankly, it makes a ton of sense, as Facebook has massive scale and a well-suited combination of services. However, Facebook dramatically underestimated the reputational harm caused by the Cambridge Analytica scandal and its previous settlement with the FTC for privacy violations, as well as the concerns of banking and other regulators about the ability of dark money to flow through Libra.
The company also failed to anticipate basic questions from lawmakers such as how much anonymity would be granted to users of the service. Their rollout strategy basically amounted to “we got this, bro!” At a Senate Banking Committee hearing this week, Facebook’s David Marcus was forced to concede that they will not launch Libra until regulators’ concerns are addressed.
Manipulation and the market
The Senate Judiciary Committee also convened a hearing to look at whether (and how) Google tweaks its algorithms to impact speech and profits. Admittedly, there is a lot we don’t know here. Even Subcommittee Chairman Ted Cruz (R-TX) acknowledged that he has only seen anecdotal evidence that Google deprioritizes conservative voices.
However, Google clearly prioritizes its own sites in search, as well as the ads that net the biggest profits for Google. As the dominant player in the ad-serving landscape, it’s clear that the company’s algorithms are tuned to make the most money for Google, to the detriment of consumers and competitors.
Free speech
There are major concerns about real and perceived behaviors that have major impacts on a wide swath of democratic discourse and commercial activity. So, against that backdrop, Google’s new global head of government relations wrote and testified that Google’s algorithms do not factor in political views. As evidence, he cited Google’s diverse workforce and one study. But, as Senator Josh Hawley (R-MO) reminded his colleagues, Google has a history of ideological censorship, such as when they accommodated the Chinese government to operate in that country.
Despite the duopoly’s posturing, lobbying, and academic influence, there’s a
rising sentiment that they can’t be trusted. In fact, Senator Hawley pointedly
asked Google’s head lobbyist “why would we believe you about anything?” Despite
direct pleas from Senators Hawley and Richard Blumenthal (D-CT), Google
wouldn’t even agree to have a 3rd party conduct an independent audit
to ensure compliance with Google’s stated claims of non-bias. The whole
interaction underscores the fact that Google has essentially evaded Congress
and the industry about how its algorithms work – often to the detriment of
competitors and consumers alike.
Both Facebook and Google’s approaches to Congress felt like they had been developed in an expensive, lobbyist-enabled echo chamber. We can shake our heads at the ludicrousness of these situations. But the reality is that these duopoly-built echo chambers yield far greater consequences than the dashed dreams of a middle-aged baseball fan. These two companies shape public discourse throughout the world. They also dominate the digital advertising landscape, raking in massive profits and deciding the winners and losers in the marketplace. Echo chambers are not the place in which balanced, constructive strategies are conceived. It is time we demand more from Google and Facebook than obfuscation and the raw exercise of market power.
As children’s digital content consumption has come under the microscope and parents are realizing the extent to which popular online video services fall short of their expectations, federal regulators have voted to take away some of the guarantees protecting traditional sources of educational media. At the same time, others are calling on the Senate to leverage COPPA to rein in platforms’ approach to digital video and to enact new legislation that better keeps pace with the rapidly evolving digital video marketplace. Clearly, the industry is at a crossroads, and children are increasingly dodging traffic in the form of data collection, ad targeting, and unconscionable content—all of which would be unthinkable in the carefully moderated world of kids’ TV.
Unsurprisingly, YouTube is often found in the crosshairs when talking about objectionable digital experiences for kids. With issues ranging from lackluster privacy protections enabling extensive collection of children’s data to recommendation algorithms pitching inappropriate videos, the video social network faces pressure from the Federal Trade Commission to turn its platform around. One idea from the company is to double down on its YouTube Kids platform and effectively quarantine kids’ content from the rest of the site. However, to date, that approach has had some significant problems of its own.
Of course, it’s easy to see why parents like the service: It’s got tons of available content, it’s easy to find, and probably most appealing of all, the large majority of it is free.
Other services, from new market entrants to larger video services like Netflix, Hulu, and Amazon Prime Video (as well as upcoming competitors like Disney+), to traditional media networks, have implemented features to better safeguard children. Companies like SuperAwesome are developing video platforms that take the risk out of kids’ content online for parents and advertisers. The level of interest in this space is clear: SuperAwesome raised $13 million in funding earlier this year, led by Harbert European Growth Capital, and was ranked one of the U.K.’s fastest growing companies.
What’s changed
While YouTube holds great appeal and attracts massive audiences, it has yet to unseat one medium that has been guided by rules protecting children’s content for almost 30 years. That would be broadcast television. “It’s certainly the most powerful way to reach the biggest audience. You know, the demise of linear TV has been greatly exaggerated,” says Harold Chizick, CEO of Chizcomm and Beacon, the largest media buyer in the children’s programming sector.
While the digital offerings from content companies are more robust than ever, and kids end up watching on more screens, Chizick says that there is a key difference: cowatching. Parents and kids are more able to catch content together when it’s on a larger screen where broadcast TV is usually found.
Additionally, the advertising industry has taken advantage of the flood of younger eyes online. And, in practice, kids end up unable to avoid targeting technology that leverages collected personal information. That’s troubling because there are additional protections for kids and their data under the Children’s Online Privacy Protection Act (COPPA), which advocates say the federal government is not doing enough to enforce.
Kid Vid
Although the online kids’ programming business doesn’t necessarily feel ready for prime time, the Federal Communications Commission voted this week to change the rules and regulations surrounding “Kid Vid,” the broadcast licensing requirements for children’s programming. The changes were originally larger in scope when first proposed by Republican commissioner Mike O’Rielly last winter. In a three-to-two vote, the commissioners decided to enact an earlier permissible start time for the programming (6 a.m. instead of 7 a.m.), removal of the mandatory 30-minute length for individual programs, and a reduction in reporting requirements for broadcasters. Broadcasters will still need to air at least three hours a week of educational and informative content.
These changes make sense, O’Rielly argued in an op-ed in The Hill, because broadcast stations need more flexibility to air other important content like local news and coverage of community events. He also noted that the shift from linear TV to digital options means the burden of regulation falls on only part of the industry. “Notwithstanding this extensive competition in the video marketplace, local broadcasters are the only ones forced to operate under our Kid Vid rules,” he wrote.
In her dissenting opinion, Commissioner Jessica Rosenworcel said that she was concerned about the reliance on algorithmic recommendations for kids who watch content online. “This [proposal] follows on the heels of reports that automatic recommendation systems can present disturbing videos on the screen, one after the other. As a mother, I am not at ease when my kids [sit] before the computer and rely on algorithms to deliver their next video,” said Rosenworcel during arguments before the vote.
Modern regulation
Early on, the changes to Kid Vid had children’s programming advocates—and more recently a group of Senate Democrats—concerned. “The FCC’s assumption that children’s television guidelines are no longer necessary because programming is available on other platforms is simply wrong,” as three children’s television groups said in a fall 2018 filing to the FCC. “To obtain access to non-broadcast programming, households must have access to cable or broadband service, and be able to afford subscription fees and equipment. Many families, especially low-income families and families in rural areas, cannot access or afford alternative program options.”
Christopher Terry, assistant professor at the University of Minnesota’s Hubbard School of Journalism and Mass Communication, agrees. In a phone call, Terry levied criticism at the shifting motivation behind the changes. The proposal was originally framed as a First Amendment issue in 2018, but he said that was unlikely to fly because the FCC has a full mandate to regulate the content found on broadcast channels as a condition of licensing. The reframing of the changes as a competition problem in an era of changing viewing habits only came later. Even then, Terry questions who actually benefits from more competition.
“There is absolutely no upside for the people who use this programming in this proposal,” says Terry. “The only people who benefit from this is the FCC—they’ve got less paperwork to deal with—and the companies that are going to have more opportunities to figure out ways to get around this.”
With the FCC’s new rules moving forward nonetheless, the need for mature, responsible online platforms for kids’ content delivery is more critical than ever. Regulators want people to embrace the new online platforms, so there should be a destination that protects the interests of kids and their parents as carefully as any bottom line.
Local television news enjoys a somewhat unique vantage point among audiences: It’s America’s most trusted source of news according to the 2018 Poynter Media Trust Survey. Just over three quarters of respondents said that they either trust local TV news “a great deal” or “a fair amount,” beating out network news at 55% and online news at 47%. (Though local newspapers weren’t far behind broadcast at 73%).
But local news is not without its challenges, as younger generations change their viewing habits and cord cutting becomes more common. In fact, bridging those gaps are the stated goals of CBS’ new local news streaming services. Earlier this month, CBS announced the launch of CBSN Los Angeles. It’s the second local news over-the-top service offered by the Eye Network after a version in New York started streaming last December. Available 24/7, the service offers viewers access to not only the regularly scheduled one-hour news broadcasts, but also exclusive news and weather content produced by a dedicated team.
A local stream
The effort is built on the experience and infrastructure of the CBSN streaming service launched in 2014, according to Executive Vice President and General Manager of CBS Local Digital Media, Adam Wiener. “It’s an investment in innovation. We want to be wherever our consumers are and we’re creating product to meet that need.”
While bringing local news to online streaming might feel like the natural next step in news delivery, there are reasons it could end up more complex than that. “I love the idea of experimenting with local news,” said Christopher Ali, associate professor of media studies at the University of Virginia. “I’m a huge fan of it, we need to find different ways to keep local news going.”
Ali also points out that the corollary of younger people not watching the news is that the main audience is older adults. Unfortunately, at least at present, this demographic is the least inclined to adopt new viewing habits. He thinks the audience for local news could remain older because of the unique properties of the local news viewership.
Youthful news
For example, to combat declining audiences, a February 2019 report from the Shorenstein Center and Northeastern University suggested that local news should look to online outlets like Vox for inspiration. But that might not be enough to change local news’ fortunes among the young, says Ali. “We tend not to settle down until we’re in our 30s, which means that local politics, local zoning issues don’t tend to matter, really, until we have kids or we buy a house,” he says.
Another factor that marks the CBSN services is their location in major hubs. With LA and New York online, as well as planned expansions to Boston and the Bay Area, there’s not yet talk of movement to smaller markets. “Places that are hurting for local news are the mid-sized cities, your Kansas City or St. Louis. These are the ones where we’re seeing the collapse of the newspaper, and also kind of a less robust television ecosystem,” says Ali. “So how do we scale what CBS is trying to do to make sure that these other markets, which are quickly becoming local news deserts, are going to be served as well?”
Since not all small towns and remote areas have fast internet, that could also stratify the potential audiences for digital local news broadcasts, Ali says.
Consider the source
Valerie Belair-Gagnon, assistant professor of journalism studies at the University of Minnesota, agrees that any move to digital will favor those with better internet access. However, she also says that local news’ move to OTT is “part of a larger trend of news organizations to rely on third party platforms to produce news, reduce costs and a reaction to the diversification and segmentation of audiences.”
While CBS certainly has control over its distribution, offering stream access through a website, it also has apps on the App Store, Google Play Store, Amazon’s Fire TV, and Roku’s streaming platforms. For (relatively) smaller companies, these relationships have proven to be a challenge to navigate. Apple has been accused of anti-competitive behavior by Spotify, and the music streaming service has filed a legal complaint in the E.U. alleging that the 30% cut Apple takes from digital purchases gives an unfair preference to Apple’s own products.
Belair-Gagnon also worries that the ultimate responsibility for creating journalism could be rendered ambiguous. “In a world where news organizations are increasingly relying on third parties outside of regulated channels and where there are increasing opportunities for different forms of storytelling, whose responsibility is it to produce news? Who is liable when journalists relinquish their control over third parties?”
The bottom line is that the need for local news is great and experimentation with ways to serve this market is critical. However, there also remain important unanswered questions in the path to modernize the evening news broadcast.
There’s no doubt that podcasts are a growing business, as measured by audience and ad dollars flowing into the space. U.S. advertisers spent $479.1 million on podcast advertising in 2018, up 53% from $313.9 million a year earlier, according to a report released earlier this month by the Interactive Advertising Bureau and PricewaterhouseCoopers.
Those who have advertised on podcasts are
often pleased. “Podcasts are
a phenomenal opportunity for brands,” said Michael Duda, managing partner at
Bullish, a creative agency and consumer investment firm. “We’ve seen tremendous
results for our early-stage brands who conduct marketing programs that are
synergetic to their target profiles.”
But many advertisers—particularly large, established ones—still feel they need more solid metrics to justify their spending. Groups like the IAB work to address these concerns through initiatives like the compliance program it launched in December, which certifies that companies are adhering to its technical guidelines.
Measurement challenges
According
to the IAB announcement, podcast advertising lacks uniformity in measurement
systems and metrics. Their statement also notes that “meaningful measurement
has been thwarted by an inability to connect, track, and analyze user requests;
measurement products that use dissimilar, proprietary algorithms; and a lack of
an agreed-upon set of metrics and their meanings.”
Though standards exist, marketers say podcast advertising generally performs better for burgeoning direct-to-consumer brands and other early- and mid-stage advertisers than for multinational, established stalwarts. “The early and mid-stage brands do better with podcast advertising because they can measure it more effectively,” said Steve Shanks, partner at Ad Results Media, an ad agency specializing in audio and podcast advertising.
The
top five business categories of audio and podcast advertisers were direct-to-consumer
brands, followed by financial services, business-to-business, entertainment,
and telecommunications brands, according to the IAB report.
Common currency
But larger, more established consumer brands need additional reassurance from a metrics perspective if they are going to invest more heavily in podcast advertising. “The reason some larger brands don’t get into it is because with their media mix models they don’t know how to effectively buy this medium, and therefore the direct-to-consumer brands are smarter in the way they approach it and take advantage of the channel,” said Shanks.
“What
the industry needs is one common currency of how we count downloads and making
sure that all networks and shows are abiding by it,” he said. He added that the
best standard now is IAB’s, “but not every network or show is abiding by those
standards. If a download means something different depending on who you’re
speaking with, it’s going to be hard for some of the larger brands to trust
this space.”
Although
some podcast networks may not adhere to these standards, Shanks said he’s noted
that several of them “are making the effort to switch over.”
Hurdles to providing accurate, universal metrics standards are numerous. The IAB’s standards note that “the ability to track podcast content and ad playback largely depends on the player requesting the file” and that only a sliver of “the market share enables client-side tracking as it exists in other forms of digital advertising.” The IAB also said that the Apple Podcasts app, for instance, commanded about 50% market share among podcast players and yet prevented any client-side tracking or even the ability to count a “play.”
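As a rough illustration of what server-side download counting involves, here is a minimal sketch; the bot markers, byte threshold, and deduplication key are illustrative assumptions rather than the IAB’s exact specification:

```python
from collections import defaultdict
from datetime import date

BOT_MARKERS = ("bot", "spider", "crawler")  # assumed bot filter
MIN_BYTES = 1_000_000                       # assumed "meaningful download" threshold

def count_downloads(log_entries):
    """Count unique downloads per episode per day from server logs,
    deduplicating by IP + user agent and discarding bots and tiny requests."""
    seen = set()
    counts = defaultdict(int)
    for day, episode, ip, ua, bytes_sent in log_entries:
        if any(m in ua.lower() for m in BOT_MARKERS) or bytes_sent < MIN_BYTES:
            continue
        key = (day, episode, ip, ua)
        if key not in seen:
            seen.add(key)
            counts[episode] += 1
    return dict(counts)

logs = [
    (date(2019, 6, 1), "ep42", "10.0.0.1", "Apple Podcasts/1.0", 25_000_000),
    (date(2019, 6, 1), "ep42", "10.0.0.1", "Apple Podcasts/1.0", 25_000_000),  # duplicate
    (date(2019, 6, 1), "ep42", "10.0.0.9", "Googlebot/2.1", 25_000_000),       # bot
]
print(count_downloads(logs))  # {'ep42': 1}
```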
Targeting improvements
Marketers also need to better target their desired audiences on audio platforms. Spotify wants to prioritize advertising around podcasts, and we’re beginning to see what that looks like. This week the company announced that advertisers can now target ads based on the podcasts people are listening to, according to The Verge.
Though
these ads won’t be inserted in the shows themselves (they will run in between
songs on its free service), advertisers can now “target based on the category
of podcast they consume, which is likely going to be much more specific and
fruitful for the advertisers.”
Test
marketers for the new targeting service were Samsung and 3M. With this new
ability, Samsung can, for example, target people who listen to tech and
business shows, versus the previous targeting which was by age, gender, and
music genre.
Audiences abound
Still, podcasts represent an opportunity for brands of all types to reach a growing audience. The IAB/PwC report projects that podcast ad revenue in the U.S. will reach $1 billion by 2021.
The preferred ad type for podcasts remains
host-read ads, representing about 63% of U.S. podcast ads in 2018. Just over
half of U.S. podcast ads are still direct-response ads, though that has
decreased since 2016 from 73% as brand awareness ads grow in popularity.
Dynamically
inserted ads continue to grow as well. In 2018, they accounted for about 49% of
ads, up from 41.7% in 2017. Host-read ads, whether baked-in or programmatically
inserted, tend to perform more efficiently than produced ads, said Shanks. He
said this can be looked at like ads on influencer channels. When executed well,
these provide much more value for listeners and advertisers, assuming pricing
is the same.
Services emerge
In addition to Spotify’s efforts, we’ll
continue to see more companies develop solutions to bring hefty ad spend to
podcasting. This week in Cannes, audio platforms, media companies and agency
networks are touting new audio and podcasting services to take advantage of the
rise of streaming audio and bring more advertisers into the fold.
WPP and iHeartMedia announced a new venture
called Project Listen, which will offer “creative consulting and media planning
covering multiple platforms, including broadcast radio, digital streaming,
podcasts, smart speakers and live events,” according to Ad Age.
“Project Listen is about tapping into the
scale of audio and moving from the traditionally transactional radio business
to the future of audio advertising where insights and ideas lead the way to
growth for brands,” iHeartMedia CEO Bob Pittman said in a statement.
Pandora is planning to promote Studio
Resonate, “a consulting arm that aims to help brands navigate tasks like
creating a ‘sonic logo,’ which is a tune or sound that listeners associate with
any given brand,” according to Ad Age.
So, while established
brands may only be cautiously wading into the shallow end of the audio
advertising pool, a slew of players are testing the format. And new solutions
are being developed from all sides of the ecosystem so that the marketing and
advertising opportunities sound as good as the podcasts themselves.
About the author
Maureen Morrison is a
consultant and writer, working with agencies, startups, publishers and brands
on marketing, editorial and communications strategies. She previously was a
reporter and editor at Ad Age for 12 years, covering agencies, digital media
and marketers.
For the foreseeable future, publishers are pinning their hopes on digital subscriptions, on reigniting the direct relationship they lost in the initial pile into digital publishing. A recent study from the Reuters Institute for the Study of Journalism found that 69% of US and European publishers employ some form of paywall around their content, with the vast majority following a metered or freemium model.
Regardless of which model of subscription or membership each outlet has deployed, they all face similar challenges when it comes to the acquisition and retention of users. In that sense they are very similar to other subscription-based products in the entertainment space, from OTT video services to the innumerable video game subscription services launched in the last year.
The challenges are especially acute for news publishers, however, since news has become a commodity. The news market is flooded with free alternatives and news is not the subscription product most consumers opt for.
We find only a small increase in the overall numbers paying for any online news, and even in countries with higher levels of payment, the vast majority only have ONE online subscription – suggesting that ‘winner takes most’ dynamics are likely to be important. 2/7 pic.twitter.com/5AC0Jj2JrM
— Rasmus Kleis Nielsen (is offline, taking a break) (@rasmus_kleis) June 12, 2019
Boxed goods
However, just
as the challenges are similar, there are success stories around other
subscription products that news publications should consider emulating in their
own approach to consumers.
One of those increasingly lucrative consumer subscription
products is that of the subscription box.
These generally take the form of a batch of products curated and
delivered directly to you monthly,
sometimes in partnership with a publisher. The range of products offered spans from apparel to hot sauces to
sustainably sourced fruit and vegetables.
And consumers are responding: Royal Mail predicts the market will be worth £1bn by 2022, and that over a quarter of
the UK population has already signed up for a subscription box.
Katie Vanneck-Smith is the founder of Tortoise, a “slow journalism” publisher with a focus on membership. She told me that publishers can take valuable lessons away from the rise of products like subscription boxes, and that publishers have “only just started to catch up with the consumer behaviours in the industry”. So what can news publishers learn from the growth of those products, particularly around engaging and retaining subscribers?
Curation as a service
The value of a subscription box lies in the fact that its contents have been specially selected for the consumer base. Subscribers trust that the brand behind the box has the expertise required to choose only the best goods to serve up. And this is doubly true when the box contains luxury products rather than staple goods. Boxes like Loot Crate and Stitch Fix trade on the fact that they have the connections and knowledge to deliver products that are relevant to the recipient. Crucially, they both play up the fact that human editors are the ones ultimately doing the curation, rather than just an algorithm.
In that sense, those subscription offerings are very similar to products from high-end publishers. This includes The Times & Sunday Times, which make the curation of stories relevant to their audiences a core tenet of products like The Brief. Both leverage the fact that, in a sea of products, there is value in having an expert pick out only the best ones on your behalf.
Churn is a fact of life, so cater for it
It typically costs around five times more to acquire a new
subscriber than to retain an existing one. That’s why so many publishers are avidly
focused on the development of
their own internal engagement scores, to determine when people are
likely to jump ship and hopefully to intercede. The Times in particular has
invested a huge amount of money in reducing churn along every part of the
process, but it is effectively a universal concern among subscription-based
products.
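As a rough illustration of what an internal engagement score can look like, here is a minimal sketch; the features, weights, and thresholds are invented for illustration and are far simpler than anything The Times or other publishers actually run:

```python
from dataclasses import dataclass

@dataclass
class Subscriber:
    days_since_last_visit: int
    visits_last_30_days: int
    articles_read_last_30_days: int

def engagement_score(s: Subscriber) -> float:
    """Crude recency/frequency/volume score in [0, 1]; lower = higher churn risk."""
    recency = max(0.0, 1 - s.days_since_last_visit / 30)
    frequency = min(1.0, s.visits_last_30_days / 20)
    volume = min(1.0, s.articles_read_last_30_days / 40)
    return round(0.4 * recency + 0.35 * frequency + 0.25 * volume, 2)

at_risk = Subscriber(days_since_last_visit=21, visits_last_30_days=2, articles_read_last_30_days=3)
engaged = Subscriber(days_since_last_visit=1, visits_last_30_days=25, articles_read_last_30_days=60)
print(engagement_score(at_risk), engagement_score(engaged))  # 0.17 0.99
```

Subscribers whose score falls below a chosen threshold would then be flagged for a retention offer before they cancel.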
However, while the quality of the provided service is ultimately the best guarantor of user retention, sometimes factors outside of a publisher’s control will inevitably cause people to consider dropping off. In that case, as with subscription boxes, publishers need to offer solutions that cater to their audience’s changing situation. Ecommerce platform Cratejoy found that customers typically gave financial reasons as the cause for cancelling a subscription box, and advises that subscription boxes offer a “downgrade” option.
Increasingly, publishers are doing the same. They offer flexible options or
discounts to the subscribers who contact them to cancel. Some publishers are also considering a wider rollout of a pause
option for subscribers. This has
the dual benefit of keeping them within the logged-in ecosystem for marketing
purposes while also negating the high cost of reacquiring a lapsed subscriber.
Have a mission
Subscription packages like ODDBOX make a social mission part of their sales strategy: much of its messaging is based around the notion that food wastage is a significant issue, and that subscribing is the right thing to do to combat a problem. Similarly, the Guardian found that the rhetoric it employed around its membership scheme had a significant impact, and that choosing to support open access to journalism for everyone was frequently cited as one of the most important reasons people chose to donate.
Notably, when the Guardian reached its milestone of a million paying users, it chose to change the messaging from one of survival to one of sustainability. Consequently, it saw its best week ever in terms of donations. This ran counter to internal misgivings that fewer people would choose to support it once it no longer appeared in peril.
The fourth subscription
At the World News Media Congress in Glasgow, co-editor of
the Innovation in News Media World Report Juan Senor suggested that a consumer
is likely to pay for four subscription services. The first two would likely
be entertainment services, the third a general news subscription, and the fourth to a niche site they
have a personal interest in.
While people typically feel affinity for news brands, the trend towards personalization means that publishers can serve up content tailored specifically to each reader. Effectively, they are increasingly hybrids of the third and fourth subscriptions. For example, The Telegraph recognized that its rugby content was of particular interest to its audience. So, it recently made all its content around the sport a key part of its subscription proposition, one of only three types of article to sit exclusively behind the paywall.
When it comes to marketing that specific content, publishers
could do much worse than to emulate the techniques employed by subscription
boxes, which are by their nature niche. The products themselves – news and
goods – are very different in nature, but the lessons around messaging and
retention are universal.
Important new research was recently presented at a major economics conference and reported on by the Wall Street Journal. The comprehensive study was conducted over nearly five years by Veronica Marotta, Vibhanshu Abhishek and led by Alessandro Acquisti, who is globally recognized for his work studying behavioral economics and the impact of privacy on digital society. The bottom-line: Acquisti’s team found that behavioral advertising, as measured and delivered based on third party cookies, increased publisher revenues by a mere 4%.
If you’re nodding your head, unsurprised by this statistic, then you’re likely among the 67% of publishers recently surveyed by Digiday who answered that behavioral advertising doesn’t help their business. But make no mistake, the findings are profound in how they inform the future of digital advertising. They will also have a strong influence on the next steps in US privacy legislation. Put simply, nearly all of the growth touted by the industry benefits intermediaries rather than the publishers who provide the news and entertainment. And for the first time ever, there is empirical research to dismiss long-touted industry arguments that privacy rules will kill the golden goose that pays for free content.
Implementing new rules
This empirical study makes it clear that the absence of rules has overwhelmingly benefited intermediaries. (This would also explain the market caps of a few of the biggest intermediaries.) And now, there remains a singular challenge: No individual publisher can change tactics unless all significant publishers move in lock step. That’s because individual companies would lose significant revenues moving on their own, even on behalf of consumers. Therefore, the bar must be raised equally, through a combination of tech and regulation.
Google is the most critical company with the most to lose and will have a seat at the head table no matter where the market or regulators take us. It will be critical for individual publishers to be able to move in the best interests of consumers and their fellow publishers without being held back by Google’s stronghold. Reuters reports that “Google has repeatedly said that it acts in the best interest of its users and offers sufficient warning to industry partners potentially affected by its moves.” We’ll see.
Recent experience suggests that Google will not cede ground. A friendly reminder that when GDPR rolled out last May in order to better protect the privacy interests of EU citizens, Google waited until the final days to push through its own interests. Global publishers had to send a formal letter of concern to Google’s CEO. They also filed a copy with every major competition authority in the western world.
Built on a shaky foundation
Much of the digital advertising marketplace and Google’s business have been built on direct-response advertising, in which clicks and audience targeting are valued more than the media that surrounds them. This presents a challenge for media companies that have invested heavily in high-quality, premium news and entertainment environments. Unfortunately, the largest part of the digital advertising market was whittled down to little more than an efficient delivery vehicle for cookies auctioned off to the highest bidders. And little has been done to dismantle this poorly built foundation. Rather, our entire industry – data brokers, ad tech platforms, agencies, and publishers – has fueled these direct-response metrics by doubling down on them through behaviorally targeted advertising.
The premise? That the new capabilities to collect and use browsing data across the web eliminate the waste in advertising, giving marketers their long-sought dream of one-to-one targeting with real-time measurement and publishers a share in the spoils to help fund their digital growth. And, through these same data reservoirs, advertisers could focus on cherry-picking consumers as efficiently as possible. This spawned a slew of sites optimized not for long-term relationships with loyal audiences but instead for their ability to create diverse cookie pools for these real-time markets.
The 800lb gorilla
Google was a company built for the post-2009 direct-response economy. After several years of belt tightening following the 2008 financial crisis, chief financial officers were under more pressure to ensure that marketing investments met the quarterly demands of shareholders. And boy did Google deliver. Google did everything it could to maximize the personal data in its coffers and to minimize friction for advertisers who wanted to micro-target people based on it.
Some history:
When web browsers were on the cusp of consumer privacy innovation by restricting tracking cookies, Google lobbyists hindered industry progress. At the same time their own browser, Chrome, took a dominant seat in the market.
When consumer ad blockers became a risk to Google’s data collection, Google began a series of secretive deals and payments in order to whitelist their own data collection tags and later commandeer a browser solution to protect its own ad formats. This saved the company billions in revenue while everyone in industry took a hit.
When a more privacy-focused mobile environment emerged from Apple, Google continued investing in its privacy-porous Android device. It earned a $5.1 billion fine for its efforts. Google also launched its own code layer in AMP. This provided the company deeper influence on what can be on publishers’ websites.
Google wrested control of and influence over the ad tech supply chain used by competing publishers through a series of targeted acquisitions.
Market dominance
The fruit of its labors is an advertising market optimized for its own interests.
Who conducts the most bids to buy advertising in these real-time markets? Google.
Who offers up the most bid requests to sell advertising in these real-time auctions? Google.
Who operates the most negotiations between buyer and seller? Google.
What is the most valued asset in digital advertising? Personal data.
Who owns the most personal data? Google.
Put another way: Google is the largest buyer, seller, and transaction vehicle for digital ads that leverage personal data. And Google has by far the most personal data. At this point, it should hardly be a question whether this is a rigged market.
The numbers are startling. In the past decade, Google’s SEC filings show that the advertising revenues Google delivers across its vast network of millions of publishers have barely doubled, having grown from $9 billion in 2010 to $20 billion last year. However, in the same ten years, Google has quadrupled its owned and operated advertising revenues from $28 billion to a whopping $116 billion last year.
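Expressed as growth multiples computed directly from those reported figures, the gap is stark:

```python
# Growth multiples over 2010-2018, using the revenue figures cited above ($bn).
network_2010, network_2018 = 9, 20     # ads Google serves across partner sites
owned_2010, owned_2018 = 28, 116       # ads on Google's own properties

print(f"Network revenue: {network_2018 / network_2010:.1f}x")        # ~2.2x
print(f"Owned & operated revenue: {owned_2018 / owned_2010:.1f}x")   # ~4.1x
```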
Search was (and is) the monopoly that privileges entry into any other business by Google. However, the rich interest profile that can be assembled by harvesting personal data across its operating system, browser, ad services, analytics, and its own properties comprises a behavioral advertising fortress.
The anticipated Department of Justice decision to investigate Google is the company’s worst nightmare in terms of timing on these issues. Already we’re seeing influencers and former competitors begin to open up about bad conduct in Mountain View. Inevitably, there will be more details and more reports. It’s critical to the process—and the future of digital media—that light be shone on their activities.
Without a doubt, the investigation means that Google will have a harder time finding friends in the publisher and ad tech communities. However, it’s unlikely you’ll hear publicly from many of them. And therein lies the ultimate symptom of antitrust: No company wants to cross Google. But the word “frenemy” will cut in a different direction now: You won’t find anyone watching your back when you’re under the regulatory microscope.
The rise of ad tech intermediaries
All of this said, Google wasn’t alone in its attempts to reap the spoils of the behavioral advertising marketplace. In the period between 2010 and 2016, hundreds (if not thousands) of new companies spawned in the ad tech wild west. By my count, there were 78 ad tech companies starting with the letter “A” in 2016 and today 64 of the 78 have either been acquired or disappeared from the industry. History will determine whether most (if any) of these companies had a lasting impact on the industry.
The core
proposition of nearly all of them was to serve a burgeoning programmatic
marketplace in which third parties could be inserted into the supply chain in
return for value. The typical webpage went from dozens of third parties to
hundreds. This triggered a real-time competition for eyeballs, most often represented
by third-party cookies.
Personal data was collected by these third parties without consumer knowledge or control. The data was then used to target consumers across the web, as cheaply as possible. All of this was done outside of consumer expectations, which led to the “adtechlash”: the rise of ad blockers, various levels of tracking protection launched in Chrome’s “competition” (the Safari, Brave, and Firefox browsers), and new privacy laws like 2018’s GDPR and the CCPA rolling out in 2020.
Now we find ourselves on the eve of new laws globally that better align with consumer expectations. Publishers need to be fierce defenders of these consumers and of their experiences. Ultimately, the fight to defend behavioral advertising likely isn’t worth it. At the very least, every publisher and their representative organizations should make certain whose interests they’re fighting for. This will determine who benefits over the next ten years.
C-3PO as a nightly news anchor? Alexa winning a Pulitzer Prize? These silly scenarios sound like the stuff of science-fiction. But the reality is that automation, which often takes the form of artificial intelligence and machine learning, is increasingly infiltrating the fourth estate and impacting how media companies gather, report, deliver, and even monetize the news.
From transcribing to fact-checking and polling to tweet parsing, artificial intelligence has been hard at work in newsrooms for years. However, the number of organizations large and small—including giants like The Washington Post, Forbes, AP and Reuters—using AI and machine learning to compose content is on the rise. And that’s got the industry and consumers sitting up and taking notice.
Naturally, as in many other fields, there are journalists worried about being replaced by automation. However, many embrace these technological advancements, seeing them as useful assistants that help process and distribute the news.
“AI can help journalists cover and deliver the news more efficiently by freeing them from routine tasks, identifying patterns in data, and helping surface misinformation,” said Lisa Gibbs, the Associated Press’ director of news partnerships.
Chris Collins, senior executive editor of breaking news and markets at Bloomberg, agreed. “Technology is good at repetitive tasks and newsrooms tend to be overloaded with those. If you leverage technology to help with them, journalists can spend more time doing journalism—interviewing sources, breaking news, writing analysis and so on,” said Collins.
Success stories
Bloomberg built Cyborg, a program that extracts key information from corporate earnings reports and press releases. Bloomberg also has AI-assisted monitoring tools that rely on machine learning to filter out spam, recognize key names, and classify topics, cutting through the noise to capture specific events relevant to Bloomberg’s financial audience.
“By doing that, we’re able to be more competitive when it comes to identifying news events,” said Collins.
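Bloomberg hasn’t published how Cyborg or its monitoring tools work under the hood, but the spam-filtering and topic-classification step Collins describes is, at its core, supervised text classification. The sketch below is a minimal, hypothetical illustration in Python using scikit-learn; the labels, training snippets, and incoming headline are invented, and this is not Bloomberg’s implementation.

```python
# Hypothetical sketch of a newsroom topic/spam classifier; not Bloomberg's Cyborg.
# Requires scikit-learn: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: label press-release snippets by topic (or as spam).
training_texts = [
    "Q3 revenue rose 12% year over year, beating analyst estimates",
    "The board declared a quarterly dividend of $0.42 per share",
    "Win a free cruise!!! Click here to claim your exclusive prize",
    "Limited time offer, act now to double your investment guaranteed",
    "The company named a new chief financial officer effective June 1",
    "Regulators approved the proposed merger subject to divestitures",
]
training_labels = ["earnings", "earnings", "spam", "spam", "management", "deals"]

# TF-IDF features plus logistic regression: a common, simple baseline.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(training_texts, training_labels)

incoming = "Quarterly profit beat estimates as revenue climbed 9%"
topic = classifier.predict([incoming])[0]
confidence = classifier.predict_proba([incoming]).max()
print(f"{topic} ({confidence:.0%})")  # an editor (or a routing rule) decides what happens next
```

In practice such a model would be trained on far more labeled examples and combined with entity recognition, but the shape of the problem is the same: turn raw text into a routing decision a journalist can accept or override.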
AP uses a similar AI resource to automate corporate earnings articles. It also employs video transcription services that create transcripts for its broadcast customers, saving AP’s video operations personnel precious time.
Additionally, the AP’s newsroom is beginning to focus more on how AI can help the news-gathering process itself. “We recently completed a test of event detection tools, such as from SAM, which uses algorithms to scan social media platforms and alert editors when it has identified likely news events,” said Gibbs. “What we found is that using SAM, in fact, does help our journalists around the world discover breaking news before we otherwise would have known.”
Reg Chua, COO of Reuters Editorial, said his organization has been using AI for several years. “A lot of it is your basic automation stuff like scraping websites and pulling stuff off feeds and then turning them into headlines published automatically or else presenting this information to humans for checking before we publish. We also employ quasi automation and technology that scans and extracts important information from documents,” said Chua.
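The “scraping websites and pulling stuff off feeds” layer Chua mentions can be approximated in a few lines. Below is a rough, hypothetical sketch using the feedparser library; the feed URL and keyword watch-list are placeholders, and a real pipeline would push alerts into an editorial queue for human checking rather than print them.

```python
# Rough sketch of feed-driven headline alerts; not Reuters' production pipeline.
# Requires: pip install feedparser
import feedparser

FEED_URL = "https://example.com/press-releases.rss"     # placeholder feed
WATCH_LIST = ("acquisition", "bankruptcy", "earnings")  # invented keyword filter

feed = feedparser.parse(FEED_URL)
candidates = []
for entry in feed.entries:
    title = entry.get("title", "")
    summary = entry.get("summary", "")
    # Flag items matching the watch-list for an editor to check before anything is published.
    if any(keyword in (title + " " + summary).lower() for keyword in WATCH_LIST):
        candidates.append({"headline": title.strip(), "link": entry.get("link", "")})

for item in candidates:
    print(f"DRAFT ALERT: {item['headline']} -> {item['link']}")
```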
One of Reuters’ newest AI tools is News Tracer, which filters noise from social media to help discern fact from fiction and newsworthy angles from countless tweets and posts. “News Tracer’s core function is to tell journalists about things they didn’t know they were looking for—to quickly find news that can be reported on,” said Chua, who added that the tool provides a newsworthiness score and a confidence score to help reporters determine what to focus on.
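Reuters hasn’t disclosed how News Tracer calculates those scores, but the way a desk might consume a newsworthiness score plus a confidence score is simple to illustrate: drop anything the model isn’t confident about, then rank what remains. The field names, threshold, and sample items below are invented for illustration.

```python
# Hypothetical triage of scored social-media clusters; field names and data are invented.
from typing import Dict, List

def triage(clusters: List[Dict], min_confidence: float = 0.7) -> List[Dict]:
    """Keep clusters the model is reasonably sure are real, most newsworthy first."""
    credible = [c for c in clusters if c["confidence"] >= min_confidence]
    return sorted(credible, key=lambda c: c["newsworthiness"], reverse=True)

sample = [
    {"summary": "Explosion reported near port area", "confidence": 0.91, "newsworthiness": 0.88},
    {"summary": "Celebrity spotted at airport",      "confidence": 0.95, "newsworthiness": 0.20},
    {"summary": "Unverified claim of plant closure", "confidence": 0.40, "newsworthiness": 0.75},
]

for cluster in triage(sample):
    print(f"{cluster['newsworthiness']:.2f}  {cluster['summary']}")
```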
Big and small papers benefit, too
RADAR (Reporters and Data and Robots), a London-based news service, has been a trailblazer in the realm of AI-reported local news.
“We operate as a news agency with a subscriber base of UK local news publishers,” said Gary Rogers, RADAR’s editor-in-chief. “We employ six data journalists. Our reporters work largely with UK open data, seeking out stories that will be relevant and informative for local audiences. They work as any data journalist might in finding the stories, but they use software as their writing tool in order to produce many localized versions. These are distributed to local news operations all over the UK.”
Rogers noted that AI allows RADAR to achieve a scale of story production that would not be possible by human effort alone. “We tackle about 40 data projects each month. Each project will yield an average of 200 to 250 localized versions of the story,” said Rogers. “Since last autumn, we have been producing between 8,000 and 10,000 stories per month.”
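RADAR hasn’t published its tooling, but the workflow Rogers describes, in which one reporter-written template plus one national dataset yields hundreds of localized stories, maps onto straightforward template filling. A minimal sketch, with placeholder district names and figures rather than real RADAR data:

```python
# Minimal sketch of template-driven story localisation, in the spirit of RADAR's workflow.
# The template wording, district names, and figures are placeholders.
TEMPLATE = (
    "{area} recorded {count} reported burglaries in {year}, "
    "a {direction} of {change:.0f}% on the previous year."
)

# One row per locality, as a data journalist might extract from a national open dataset.
rows = [
    {"area": "Anytown",   "count": 412, "prev": 380, "year": 2018},
    {"area": "Riverford", "count": 198, "prev": 240, "year": 2018},
    {"area": "Hillcrest", "count": 305, "prev": 305, "year": 2018},
]

for row in rows:
    change = (row["count"] - row["prev"]) / row["prev"] * 100
    direction = "rise" if change > 0 else "fall" if change < 0 else "change"
    print(TEMPLATE.format(area=row["area"], count=row["count"], year=row["year"],
                          direction=direction, change=abs(change)))
```

The reporting work, finding the dataset, choosing the angle, and writing a template that reads naturally, stays with the journalist; the software only handles the repetition.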
Smaller community newspapers are investing in big machine learning capabilities, as well. Case in point: Richland Source, a Mansfield, Ohio daily, uses a program called Lede AI to automate local sports reporting.
“Lede AI writes and publishes game recaps for every high school sporting event in Ohio immediately after it finishes,” said Larry Phillips, managing editor of Richland Source. “If it’s a big game, we will send a reporter and Lede AI writes and publishes the first draft; our journalist adds color, flavor, and flair that can only be done by being at the game. With Lede AI, we’ve never received a complaint about inaccurate reporting, and we’ve published over 20,000 articles.”
Education and transparency
News media professionals worry about human obsolescence in the face of such quickly accelerating automation. Yet many believe those concerns are premature or misguided.
“While this has been true in most industries and may happen in media, there is a broader picture of AI’s enabling rather than employment-destroying qualities,” Rogers said. “AI can take over repetitive and boring tasks, which frees journalists to do more important work. It can help journalists find stories by sifting large amounts of information. In our case, it allows our reporters to amplify their work, write a story in the form of a template, and produce hundreds of versions of the story for local newspapers across the UK who lack the resources to do it themselves.”
Consider, too, said Phillips, that “AI still can’t ask follow-up questions, can’t knock on the doors of multiple sources, work a beat, make a follow-up call, do the shoe-leather grunt work, garner an off-the-record comment which leads to a story angle, and certainly can’t replicate the human element, the nuance, that encompasses the very best work in the profession.”
Even if their human resources are relatively safe for now, news organizations have to navigate carefully through uncharted waters when it comes to the ethics and disclosure of their AI practices.
“As these technologies evolve, having standards around transparency and best practices – such as how do we prevent bias in data from impacting our news coverage – will be critical for the entire industry,” added Gibbs.
Bloomberg’s Collins echoed that sentiment. “It’s essential to understand what technology can and can’t handle. Clearly, as with all journalism, you need judgement, best practices and processes in place to ensure what you are writing is accurate, fast and worthwhile,” said Collins. “You need to be transparent about how a story was produced, if it was assisted or published using AI. In our experience, the combination of years of human journalistic experience with technology such as AI is powerful. Obviously, the technology isn’t left to run the newsroom. It is trained and overseen by journalists, who are learning new skills in the process.”
Reading the tea leaves
Looking ahead, artificial intelligence will create exciting new capabilities as well as troubling obstacles, say the pros.
“As newsrooms increasingly embrace AI, it will help with everything from spotting breaking-news events, to finding scoops in data, to audience personalization,” said Collins.
But prepare for even more fake news fiascos.
“Distribution of so-called deepfakes, assisted by AI, is a troubling trend,” Collins cautioned. “How technology evolves to both spread and combat misinformation will be a major challenge for the industry.”
Yet Richland Source publisher Jay Allred and others remain optimistic. “In the near-term at the local level, I think AI will largely be used for two things. First, it will fill the gaps on informational journalism tasks that simply are not done anymore due to shrinking payrolls,” said Allred. “Second, it will surface insights from public databases—finding out, for instance, how a particular city floods and where, how many speeding tickets were issued and where throughout a state, where the most citations for drunk and disorderly conduct occur within a city. This will spur and support investigative journalism that wouldn’t otherwise happen.”
The legal and policy community continues to debate the impact of the General Data Protection Regulation (GDPR), the ins and outs of the California Consumer Privacy Act (CCPA), and how (or whether) Washington should regulate consumer privacy. While the debate rages on, we are seeing a stream of consumer-focused, privacy-oriented product rollouts. It is interesting to look at what these controls actually do and how they might inform the policy debate.
In a concession to consumer privacy, Google recently announced that it would allow consumers to block companies from tracking them across the web when they are using Chrome. Specifically, it will differentiate between 1st party and 3rd party cookies. As such, it will allow consumers to delete 3rd party cookies while preserving the 1st party cookie, which, for example, allows a website to remember a consumer’s log-in information. In addition, Google will soon roll out features to prevent companies from identifying consumers via device fingerprinting, another method used to track consumers across the web.
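For readers who want the mechanics: the first-party/third-party distinction is ultimately expressed in attributes on the Set-Cookie header that browsers already understand. The sketch below, using Python’s standard library, shows what the two kinds of cookies might look like; the names and values are placeholders, and this is illustrative rather than Chrome’s or Safari’s implementation.

```python
# Illustrative Set-Cookie headers; placeholder names and values, not any vendor's implementation.
# SameSite support in the standard library requires Python 3.8+.
from http.cookies import SimpleCookie

# A first-party cookie: lets the site the consumer is actually visiting remember a login.
first_party = SimpleCookie()
first_party["session_id"] = "abc123"
first_party["session_id"]["samesite"] = "Lax"   # not sent on cross-site requests
first_party["session_id"]["httponly"] = True
first_party["session_id"]["secure"] = True

# A third-party cookie: set by a domain embedded across many sites, usable for cross-site tracking.
third_party = SimpleCookie()
third_party["tracker_id"] = "u-98765"
third_party["tracker_id"]["samesite"] = "None"  # explicitly permitted in third-party contexts
third_party["tracker_id"]["secure"] = True

# Print the headers a server would send for each cookie.
print(first_party.output())
print(third_party.output())
```

Blocking “third-party cookies” in a browser largely means refusing to send or store cookies like the second one when the setting domain isn’t the site in the address bar, while leaving the first untouched.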
Let’s be clear: Google is late to the privacy game, given the long history of privacy protections offered by Apple and Mozilla. Apple has famously blocked 3rd party cookies by default in Safari and recently introduced Intelligent Tracking Prevention (ITP), restricting the ability of companies to access cookies when a consumer is not interacting with that company.
Services that care for consumers
When a consumer visits YouTube on Safari, Google can access the cookies it has set in that browser. However, as soon as the consumer navigates to another website, Google can no longer access those cookies. ITP essentially breaks the ability to track consumers around the web. Apple has also worked to block loopholes where cookies, set by companies like Google and Facebook, pretend to be 1st party but are then used for tracking and secondary use. Unfortunately, it is hard to muster much confidence that Google will be anywhere near as diligent as Apple in closing loopholes, given its business model is built on its ability to expertly track consumers.
Beyond the most commonly used browsers, a whole suite of new services has popped up. DuckDuckGo, a search engine, and Brave, a browser, offer strong privacy controls as their primary value proposition. As we’ve noted for several years now, more and more consumers are turning to ad blockers to protect themselves and/or improve their web experience. Forcing these massive platforms to once again compete on privacy will be good for everyone, particularly consumers. The question, of course, is whether some of these companies are simply too big to change at this point.
Policy implications
Back to my original question: What can policymakers learn from these new consumer privacy controls?
For starters, consumers increasingly demand more and stronger protections of their data and digital lives, or the market wouldn’t be headed in that direction. You can reasonably debate whether Google’s recent announcement went far enough. But the fact that a multi-billion-dollar company like Google did anything at all speaks volumes.
It’s also worth pointing out that all of these new controls, which curtail the ability of companies to collect consumer data at scale, are not actually breaking the internet despite the frequent claims of ad tech lobbyists. In fact, given this push to meet consumer demand for greater privacy controls, it’s funny and a little sad to look back at the hysteria from some of these lobbyists.
Meeting consumer demand
The big takeaway is how these companies and their engineers are designing services and features to meet consumer expectations for privacy. This new wave of privacy controls gives consumers the ability to stop companies from tracking them across the web. Yet, these services preserve the ability for websites and apps to collect and use consumer data when the consumer is interacting directly with that company.
This approach makes sense. Consumers share their data so that companies can use it to provide a service or content. In this fair value exchange, the consumer can choose to engage or not. Consumers do not expect companies to track them outside of that fair value exchange and doing so is simply bad business.
It isn’t a question of whether consumers want their privacy protected; these market moves demonstrate the demand and inevitability. And, as policymakers consider how best to craft consumer privacy protections, it’s worth noting how today’s best engineers have attempted to meet consumer demand and expectations for privacy.
Certain terms like “new wave,” “new school,” and “online video” start to lose their meaning over time. The same seems to be true of the “NewFronts,” now in their eighth year in New York. The showcase for traditional and digital native publishers selling their video offerings to marketers has been split up into twice-per-year affairs (with a fall showcase in L.A.). Just 16 publishers presented in New York this year, down from 36 in 2017.
The idea of the TV Upfronts and NewFronts is to dazzle advertisers and get them to commit a chunk of their advertising to the publisher, though there is less scarcity online. Have the NewFronts made progress over the years? Most definitely. Has that progress meant there is no need for them anymore? Not quite.
What’s most interesting at this year’s shindig is that traditional players are pushing new acquisitions and initiatives, while the digital natives are trying to sound more traditional with ongoing series. This points to a convergence of purposes and the fact that online video, OTT, streaming video, and the like have commingled to the point of absurdity. This leaves marketers grasping at just what they’re buying and how they can track and optimize it all. And yet there’s still a place for publishers at the NewFronts as a showcase for offerings and to generate much-needed buzz.
Platform domination
The biggest challenge for publishers, as always, is trying to stand out from dominant players like Hulu, YouTube, and even Twitter. And the dominant players just get more dominant. YouTube casts an immense shadow as the largest ad-supported video platform online. But, as Digiday’s Sahil Patel points out, YouTube users are spending 200 million hours per day watching YouTube on a TV, up from 100 million hours last October.
And Hulu hit $1.5 billion in ad revenues last year by offering a mix of legacy TV programming and original shows. During its NewFronts presentation, Hulu execs pointed out that they have 26 million paid subscribers, and a much younger audience than cable, at 31 years old vs. 53 years old. Plus, 80% of Hulu viewing takes place on a TV set, up from 75% last year. (And Hulu even sponsored Digiday’s coverage of the NewFronts.)
This puts many publishers in a bind because they have to sell their uniqueness to advertisers while also cutting deals with the platforms to expand reach.
“Mass reach is still a thing,” Mediahub’s Michael Piner told Digiday. “And there are certain partners that are being prioritized because they can achieve the mass reach of TV.”
What do publishers get?
So what do publishers get for their money and trouble at the NewFronts now? Well, the decrease in presenters means they do get more attention from attendees. And at least one publisher, Studio71, was touting its upfront ad sales. The company has presented at the NewFronts from 2016 through this year’s edition. Studio71 CEO Reza Izad told Digiday that 85% of its revenues each year came from upfront commitments. The company boasts 100 million unique viewers per month on YouTube and vets each piece of content on the network.
Still, many publishers such as Group Nine and Refinery29 decided to forgo the NewFronts for a private tour to increase intimacy – and likely save costs. The increased competition for digital video ads is partly to blame, and people are also paying more for services such as Netflix and HBO that don’t serve ads at all.
But it’s still hard to ignore the growth of digital video, especially if you sell video advertising in entertainment. The IAB’s Video Ad Spend Report surveyed marketers and found they would be spending $18 million on average this year on digital video ads, up 25% from last year, with the Media/Entertainment vertical up a whopping 75%. (And yes, that means studios are buying more ads on other media.)
Switching places
Meanwhile, notable presentations from Meredith and Conde Nast discussed new shows for their OTT services, and Meredith is also distributing them through its local TV stations. Moving in the other direction, cable network Viacom showcased content on its newly acquired PlutoTV OTT service.
“Viacom is embracing digital inventory, and at the same time we see Condé and Meredith pushing themselves into the OTT universe,” Wavemaker’s Noah Mallin told AdAge. “They are starting to resemble each other more and more.”
As AdAge’s Jeanine Poggi so astutely points out, this is the year when the NewFronts actually looked a lot like the regular TV Upfronts, with the themes of brand safety, original programming, and scale. “This year more publishers spoke about renewing existing shows, creating long-form content akin to TV and positioning themselves as the new ‘primetime,'” Poggi wrote.
So where does that leave publishers and the IAB? Perhaps the time has come to ditch the NewFronts and merge them into the regular TV Upfronts. More importantly, publishers need to calculate carefully the benefits of a flashy program on stage at the NewFronts, and whether that still beats a private tour or other marketing outreach. Ultimately, it will take a new round of upstart video-centric publishers who want to make a splash to inject new energy into the NewFronts.