Consumers are very aware that social platforms and websites collect information about them. However, most Americans say political campaigns should not be able to target them with digital ads based on their online activities. In fact, according to the Knight Foundation and Gallup’s new research, nearly all say they believe that Facebook and Google (97%), Amazon (96%), and news sites or apps (88%) collect data on their browsing history and purchasing habits. However, this research found that 72% say that internet companies should make no information about users available to political campaigns in order to microtarget certain voters with online advertisements.
Limited targeting
A much smaller share of consumers (20%) says that general details about internet users, such as their gender, age or zip code, should be permissible to use for targeting. This is in line with the advertising policy
at Google, which limits the information political campaigns can use for
targeting. Interestingly, 7% of U.S. consumers are okay with microtargeting, saying that any information should be made available for a campaign’s use. This is consistent with Facebook’s practice, which does not place any such limits on ad targeting.
Acceptable approach
Almost 60% of consumers favor websites showing ads with the
disclosure of who paid for the ad, how much it cost, and whom the ad is aimed
at. However, a smaller group of consumers (20%) wants no political advertising at all. This is the policy adopted by Twitter.
An even smaller group of consumers, 4%, wants no political ads shown in the final month of an election campaign. Juxtaposed to this is the 16% of
consumers who believe ads should not be regulated because they are a form of
free speech.
Further, almost all consumers believe platforms should
refuse misleading content in political ads.
Specifically, consumers said social media should not allow political ads that would target supporters of an opposing candidate and provide the wrong election date (81%). In addition, consumers said that platforms should not allow display ads that claim a politician voted for a policy when, in fact, they did not (62%). Close to half of all consumers (45%) think that social media companies should not accept ads that misrepresent or omit facts about a candidate.
Political parties react differently
Democrats are more likely than Republicans to
favor stronger policies regarding content in political ads. Nine in 10
Democrats (91%) say an ad targeting an opponent’s supporters with an incorrect
election date should be refused, compared with 73% of Republicans.
Seven in 10 Democrats (71%), compared to 55% of Republicans, say an ad that contains an inaccuracy about a politician’s voting record should be refused. Fifty percent of Democrats say an ad omitting some details about a candidate’s position should be refused, compared to 41% of Republicans.
Currently, there’s much debate in D.C. and Silicon Valley about microtargeting political ads. Social platforms seem content to make their own policy decisions. Unfortunately, this leaves consumers vulnerable to, and concerned about, the often divisive and even false content these ads can carry.
The world of mobile is built on scroll and swipe actions. Addictive and habit forming, infinite scroll became the defining characteristic of social media networks for a time. Clicking is making a decision. Scroll is an everlasting stream of possibilities.
Readers do love to scroll. It’s also a better way to read and digest long content. Usability studies by the Software Usability Research Laboratory show that users can read long, scrolling pages faster than paginated ones, and they absorb the same level of information from the content.
‘Participants using the paging condition took significantly longer to read the passages than either the full or scrolling conditions. They stated that they found the Paging condition to be “too broken up,” and that they had to “go back and forth” quite a bit to search for information.’
Publishers, hoping to establish the kind of enthralled readership seen on social media, built infinite scroll and swipe navigation into their UIs, expecting readers to drip through page after page of their content. Scroll and swipe seemed to be purpose-built for news and entertainment sites.
Unfortunately, the results of scroll and swipe for publishers didn’t hit the mark.
A bottomless pit
Infinite scroll gave readers a bottomless pit of publishers’ content. But dig into the data and you can see that most readers are still only skimming the surface of the content. According to the readership data studied here at Marfeel, we find that 75% of readers never get beyond 10% scroll depth in an article.
Here you can see how ‘content chunks’ are added to the scroll to give readers escape points to new content. These cues are easy to digest, understandable, and have a simple navigation.
Content may have infinite depth. But even engaged readers are unlikely to ever stray from the shallow end.
When it comes to publisher content, if a reader is not engaged, they are unlikely to continue scrolling. So, if your recirculation strategy relies on readers scrolling past the bottom of an article, you’re losing out on 80% of your audience.
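Scroll-depth figures like the ones above come from aggregating each reader’s deepest scroll position per article. A minimal sketch of that calculation (the sample values are made up for illustration, not Marfeel’s data):

```python
def share_below(max_depths, threshold=0.10):
    """Fraction of readers whose deepest scroll position in an article
    never reached `threshold` (depths expressed as fractions, 0.0-1.0)."""
    if not max_depths:
        return 0.0
    return sum(1 for d in max_depths if d < threshold) / len(max_depths)

# Toy sample: three of four readers bail before 10% depth.
sample = [0.02, 0.05, 0.08, 0.95]
print(f"{share_below(sample):.0%} of readers never reach 10% depth")  # → 75%
```

In production this would run over per-session scroll events collected client-side, but the aggregation step is the same.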
With swipe, the results aren’t as damning but still deeply flawed. Swipe has been shown to potentially increase page views per session by over 400% for some publishers. However, only about 20% of readers ever discover it.
Infinite scroll needs structure, variation, and novelty
Infinite scroll works so well on social media because the content and the formats change on every swipe. You’re rolling the dice for another chance at entertainment. On a publisher’s site, a swipe may just lead to another few paragraphs of content.
Traversing this dead-zone is cognitive work. There’s a lower dopamine response from making a judged evaluation on a longer article or piece of content. Faced with a paragraph of dense analysis on Euribor inflation before the next crumb of content is offered up, a reader’s first instinct will be to leave the page, not continue.
A Chartbeat report found only “slight evidence” infinite scroll contributes to deeper scroll depth, but no definitive evidence that infinite scroll improves depth of visit.
For infinite scroll to work for publishers, varied and exciting content is key. It needs to be digestible as single-scan content chunks. Users must be able to take in a chunk at a glance and decide whether they want to read it or not.
Grouped titles, images, and instantly understandable content give readers more bursts of information they can use to make quick decisions. These atoms make up a proposition that a user can decide to delve into or keep scrolling past. Publishers need to spot scroll-stopping points and introduce alternative exits at these points to keep readers engaged. Writing in Psychology Today, Susan Weinschenk explains, ‘If there is a small, specific cue that signifies that something is going to happen, that sets off our dopamine system.’ UI/UX cues that more content is available become a dinner bell that connects to a Pavlovian response inside us.
Without these cues, infinite scroll becomes an endless litany that unengaged readers will choose to leave rather than follow to the end. With these tools, publishers can reconsider scroll depth as their most crucial metric and start focusing on recirculating readers; techniques such as ‘inline articles’, as seen above, can increase recirculation by an average of 60%.
Take a lesson from Tinder on swiping
Swipe removed pagination from the mobile experience. However, not every reader discovers it in mobile publishers’ content. Imagine a world where only 20% of visitors to Tinder realized they could swipe for more content.
Apps such as Tinder and TikTok are built around swipe because they make it explicit. Publishers’ audiences are not expecting to swipe, but rather to follow the sections and pagination of traditional news sites. Publishers that have managed to make swipe work for them use clear and simple UI cues that make swipe a logical and intuitive action.
Another major difference in what swipe offers us – compared with clicking new content, changing pages, or switching apps – is that we’re unsure of the risk/reward ratio. And this uncertainty can deliver higher rewards than deliberate evaluation.
Provided we have some information about a user’s tastes, such as broad interests, swiping to a new article in a relevant category or similar topic can even be more rewarding than choosing a new link based on rational decision-making. To do this, swipe needs to be integrated with cues to show that swipe is the intended action. As it is not yet a natural motion on content sites, this behavior needs to be taught and established.
By re-working these born-mobile techniques, publishers can re-define swipe to make it a more natural and established action. This will help publishers turn swipe and scroll from well-intentioned but ineffective tools into powerful and essential components of any content-based UI.
Comments for news organizations’ digital publications took a blow when NPR closed its online comments section in 2016. Even before NPR’s monumental decision, other organizations including The Verge, Reuters, and Recode had also bid adieu to online comments. The feature was increasingly viewed as an incubator for name-calling, insults, and sometimes harassment. And the trend continues. In just the past year, USA Today and OregonLive added their names to the growing list of news outlets that have had enough of supervising online trolls and bots.
Comments definitely don’t always bring out the best in people. Some commenters use the shield of anonymity to share derogatory and abusive messages, communicate hate toward certain racial groups, or turn a constructive discussion into a name-calling match. And negative comments can even influence readers to have a more unfavorable view of the story itself.
Some national and regional newsrooms have employed comment moderators, but monitoring thousands of comments requires considerable labor. The task of deleting uncivil comments often leads to emotional exhaustion for the moderators. However, it turns out that deleting the litany of racist, sexist, and homophobic comments also decreases trust in the news organization, according to a 2019 Center for Media Engagement study. While comments sections once held the promise of democratizing news discussions, that optimism seems to have backfired.
The death of comments may be exaggerated
Is this the beginning of a slow dive into obsolescence for comment sections?
Coral helps moderators identify disruptive comments and surface the best submissions.
Not necessarily. Despite the woeful forecast from some commentators about the death of comments sections, experts who study reader engagement warn critics against hasty obituaries. The number of news organizations that have done away with comments sections do not represent the industry as a whole, says Andrew Losowsky, head of Coral by Vox Media, a system that aims to bring publishers and journalists closer to their communities.
In fact, comments sections can bring great value that often goes overlooked. Done right, comments sections can increase reader loyalty and engagement, and even boost subscriptions. Simply deleting comments sections, Losowsky says, makes sense when those sections turn into vitriolic spaces. But he says this move can also be “incredibly short-sighted.”
The key for news organizations is to develop a strategy. News outlets should think of their comments sections more deliberately. They need to ask questions like: What does it mean to involve people in the conversation? And what sort of conversations do news organizations want to foster? From there, news organizations can ask pointed questions for readers to answer in the comments sections.
Managing the unmanageable
One way to make handling the volume of comments more manageable is by implementing artificial intelligence to help with the task of comment moderation. The New York Times is one outlet that actually expanded its comments section in 2017, with a machine learning technology called Moderator, which helps to moderate up to 12,000 comments a day. The technology scores each comment based on whether a human staffer might reject it based on inflammatory or inappropriate behavior. It then decides whether to keep or delete the comment based on that score.
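The Times hasn’t published Moderator’s internals, but the score-then-threshold pattern it describes can be sketched roughly like this. The scoring function and the threshold values below are toy stand-ins, not the actual system:

```python
def score_comment(text: str) -> float:
    """Stand-in for an ML model that estimates the probability a human
    moderator would reject this comment (0.0 = clearly fine, 1.0 = reject)."""
    flagged_terms = {"idiot", "moron"}  # toy word list for the sketch
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in flagged_terms)
    return min(1.0, hits / len(words) * 10)

def moderate(text: str, reject_above: float = 0.8,
             review_above: float = 0.4) -> str:
    """Route a comment based on its rejection score: auto-approve,
    auto-reject, or escalate the ambiguous middle to a human."""
    score = score_comment(text)
    if score >= reject_above:
        return "reject"
    if score >= review_above:
        return "human_review"
    return "approve"
```

The real system would use a trained model rather than a word list, but the routing logic — keep the clear cases automatic and send only borderline scores to staff — is what lets a small team handle thousands of comments a day.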
Local news organizations, however, often lack the resources of larger sites. With revenues falling for smaller outlets, securing funding to implement a comments moderation system — much less hiring an employee specifically to moderate comments — might land quite far down the priority list.
Innovating with engagement
That doesn’t mean some local news sites aren’t experimenting with their own initiatives, though. The State newspaper in Columbia, South Carolina, which has a daily readership of around 270,000, implemented Coral about nine months ago. The platform uses AI to detect and remove toxic comments. Commenters at The State also need to sign in to Coral and register their email addresses separately from their subscriptions. This means that Coral users tend to be more engaged.
“We still see some of the bluster and posturing that can be negative in comments. But by and large, the comments that we do get are a lot more focused on the actual story,” said Cal Lundmark, southeast audience growth editor for McClatchy.
In addition to more constructive comments, general engagement on The State’s website flourished for Coral users. Active Coral commenters read twice as many stories on The State’s website per visit compared to other subscribers. These visitors spend almost 16 minutes longer on the site per visit as well.
Interaction breeds civility
When journalists interact with commenters, they can actually encourage civility in the comments sections, according to a report from the Democracy Fund and the Engaging News Project, now known as the Center for Media Engagement. The study found that journalists who answer reader questions and participate in discussions make the tone of comments sections more constructive.
It is effective even when local reporters only have enough time to dip into the comments sections rather than engage regularly. It “creates a sort of bond between the reader and the news organization, where people feel less okay to say nasty things,” says Gina M. Masullo, associate director of the Center for Media Engagement at the University of Texas at Austin.
Reporters at The State have been doing exactly this as part of the recent initiative to re-strategize the comments section. “The time in their day when they can really just stop and say ‘I can respond to some comments’ is pretty limited,” Lundmark said. “So it does take teamwork… between the audience team and the reporters themselves.”
In addition, news organizations can benefit from highly “person-centered” messages, according to Masullo’s research at the Center. These are comments from journalists that acknowledge a commenter’s anger, but also ask the commenter to keep their language civil.
“It worked across three different experiments that we did,” Masullo said. “I think it’s because even people who are angry like when somebody validates that they have a right to be angry.”
Social media, mediated
For the news organizations that have deleted comments sections over the past few years, engaging with their audiences is far from dead. In its one year update after doing away with its comments sections, NPR’s public editor Elizabeth Jensen declared the move a success. She cited other ways of communicating with readers such as newsletters as garnering more attention. Journalists respond to readers and listeners through social media channels like Twitter and Facebook, Jensen wrote, and discussions also remain civil in closed Facebook groups.
Plus, it makes sense to take the conversation to social media if the community already regularly engages there, says Lynn Walsh, assistant director of Trusting News.
“We have to be realistic: Are people going to log into the website to comment when they may not already be engaging there actively?” Walsh said. So, maybe “we should reach people by going to where they already are instead of asking them to come to us.”
But Losowsky warns against relying too much on social media to foster discussion.
“What you’re saying is that your engagement with your most loyal and engaged users belongs to a third party that you cannot control. And that data and that relationship belongs to them,” Losowsky said. “So if you say instead we will let Facebook manage this, then Facebook owns that relationship with your readers. You do not own it, they do. And they control it.”
Positive comments
So what does the future hold for comments and will news organizations abandon comments sections while reaching out to readers on other channels? Or will they attempt to take on the challenge of molding comments sections into places for constructive feedback?
Research continues to explore how to make comments sections into constructive spaces. Maite Taboada at Simon Fraser University, for instance, is working on a system to identify constructive comments that news organizations can highlight to create civil dialogue. And innovative organizations like Hearken develop different ways to tune in to readers and commenters, in addition to making listening to communities a fundamental directive in the reporting process.
“I’m cautiously optimistic that we will make the internet a better space,” Masullo said. “It’s not going to happen overnight, and it’s gonna be a combination of human and technological solutions. But I don’t think we should just say, ‘oh you know, this is a dismal mess and there’s no hope.’ Because I really think there is.”
Marketers know that thought leadership is an important part of a strategy to build awareness and trust, and to help their companies get a seat at the table. But until now, they have had a hard time proving it. The reality is that marketers are under pressure from all sides to prove their contributions to the bottom line.
After years of talking to clients about their challenges, Rob Mitchell, CEO of Longitude and Chairman of Thought Leadership Network, says he saw a gap in the market for a network devoted to promoting thought leadership as a career and to promoting best practices. Thus, TLN was born, to bring “together thought leadership practitioners to share knowledge, experiences, and ideas.”
Mitchell thinks of TLN’s mission as “Thought leadership about thought leadership.”
Tia McPhee, Global Brand Director of The Financial Times and a Thought Leadership Network advisory board member says that she hopes the group’s annual slate of events allows thought leaders to learn from and support one another. But as the group started coming together, it was clear there was one challenge members were struggling with more than any other: measuring the effectiveness of thought leadership.
What is thought leadership?
The term “thought leadership” is familiar. However, defining it is difficult. Some might say, “You know it when you see it.” But in the age of content marketing—when every company is pumping out content—it can be even harder to separate real thought leadership from more run-of-the-mill content marketing efforts.
“Content marketing can be part of a strategy to develop yourself as a thought leader,” says McPhee. She adds that content marketing is really the “delivery mechanism.”
“Good thought leadership starts with the strength of the ideas,” says Mitchell. To be a true thought leader, you need an original idea. Content marketing is just the avenue through which you deliver that idea.
What’s the goal?
More importantly, content marketing often focuses on short-term engagement metrics—think downloads and clicks. However, TLN’s research shows that the key to measuring the effectiveness of thought leadership is all about taking a long-term view.
When thinking about thought leadership efforts, it’s important to understand what this branch of marketing is really designed to do. According to the report, campaigns can do three things:
Reimagine—Pivot the business to a new reality.
Renovate—Build on past successes and move into new areas that will attract attention from both new and existing customers.
Reinforce—Strengthen customer loyalty and retention for existing, mature products and services.
But the question remains: How do you know if you’re achieving those goals?
A measurement framework
In order to help TLN members better prove the value of their work, the group has compiled a report that details the metrics that matter most and six principles of effective measurement, and provides a checklist to make sure efforts align with business goals. Mitchell says the report is the result of “trying to distill” his years of experience, combined with primary research—speaking to clients and advisory board members—and even bringing in outside experts to get a new perspective.
“Alignment” is the foundation of the three-tier effectiveness model presented in the report. “It is all about ensuring that there is buy-in from every relevant function in the business, as well as agreement about the overall goals,” says the report.
Next, you have to take a look at engagement, though Mitchell warns against putting too much weight on short-term metrics. He wants to “shift the focus a little bit.” Mitchell says, “Right now we measure what’s easy to measure.”
He sees a need for a “shift toward alignment, influence, and impact.” Mitchell suggests spending 40% of your time tracking alignment, 40% on impact, and 20% on engagement. Ask yourself, are you changing behavior? That’s how you really know you’re having an effect on customers.
“These longer-term metrics are harder to measure,” says Mitchell. And therein lies the challenge. In order to determine if your thought leadership is moving the needle on influence and impact you need to track whether your perception of expertise is increasing, customer lifetime value (CLV), meetings generated, and profitability. You’ll also want to understand how your content is contributing to the marketing pipeline and generating qualified leads.
Reading that list may have you scratching your head. Really, how do you know how your expertise is perceived? Can you really know how many meetings your thought leadership campaigns have generated?
Admittedly, the “influence and impact” metrics are the hardest to track. They will require that you invest in customer surveys and other in-depth outreach efforts that go beyond simple engagement efforts. However, Mitchell and McPhee promise that putting in the effort to track these numbers will go a long way to understanding your effectiveness.
Ultimately, says McPhee, TLN and the report give you “the tools to make your case.”
“Tribal subscription marketing”
But for publishers like The Financial Times, thought leadership is a slightly different animal than it is for other brands. After all, McPhee asks, what are you doing as a publisher if you aren’t regularly putting out interesting new ideas? Good journalism isn’t necessarily the same as thought leadership. But publishers are embracing the practice as a way to build loyalty and enhance CLV.
For instance, The FT launched its New Agenda, which McPhee describes as “a rallying cry to business leaders to think about responsible capitalism.” In practice, it’s a collection of the publisher’s best pieces around the idea of protecting “the future of free enterprise and wealth creation by pursuing profit with purpose.”
She says she’s seen the model replicated across other publishers—think Washington Post’s “Democracy Dies in Darkness.” It has led to what some in the industry are calling “tribal subscription marketing,” which creates a sense of belonging among readers.
“Thought leadership can get people behind you,” says McPhee. That’s as important for publishers as it is for any other brand—especially in the age of digital disintermediation and a near constant fight for every subscriber.
“Providing expertise in thought leadership—both in terms of the content itself and the measurement of its impact—is an obvious addition to what the FT can offer clients,” adds McPhee. Ultimately, thought leadership may be just what you need to find new ways to add value for your subscribers and potential sponsors.
In an effort to provide more transparency in journalism,
McClatchy, a local news publisher with a footprint in 30 markets, designed an
information card to help readers better understand the reporting process. To
evaluate its effectiveness, three McClatchy newsrooms undertook a study to
explore whether the placement of these informational cards on a news
organization’s website affected readers’ recall of the card and trust in news.
The information cards, called Behind the Story (BtS), highlight
why a topic is important to write about, how the story is written, and who the
journalist interviewed. The Center for Media Engagement, in partnership with three McClatchy newsrooms – The Wichita Eagle, El Nuevo Herald and The Sacramento Bee –
conducted research with over 300 participants to assess recall and efficacy of
the BtS cards.
Overall, the findings show that, while the BtS card appealed to most readers, a majority of readers did not notice the presence of the card when placed within the context of an article. Importantly, however, when shown outside the context of an article, a majority of readers said the card would improve their trust in a news organization.
Recall
In all, 34% of respondents said they remembered seeing a
section titled “Behind our Reporting” when the card was placed within the context
of the article compared to 21% who recalled seeing the BtS card when placed at
the bottom of the article. Although a higher percentage of people noticed the
BtS card when it was placed in-line, the difference in overall recall is not
statistically significant compared to the card appearing at the bottom of the
article.
Interestingly, non-subscribers were significantly more
likely to recall the card when it appeared in-line (40%) than subscribers
(30%).
Trust
Placement of the BtS card, in-line or at the bottom of the article, did not significantly impact participants’ ratings of the trustworthiness of the article, the news organization, or the reporter. Overall, the study
suggests that using a card designed to improve transparency may positively
affect trust. However, newsrooms need to ensure that it’s noticeable.
Previous research from the Center for Media Engagement found that less than 10% of readers in three communities thought that their local newsrooms adequately explained how and why they decide what stories to cover.
With efforts like these, McClatchy is taking important and positive steps to help strengthen readers’ trust and counter concerns about misinformation and fake news. Fine-tuning the placement of the BtS card and its messaging is encouraged. Future research should assess additional BtS designs and verbiage, and continue to test placement and overall impact on recall and trust.
The death of third-party cookies — courtesy of Google, Apple, GDPR and CCPA — also means the accelerated fade of third-party data. And from ad tech companies to marketers and media companies, the shift has left many wondering how they’ll proceed. Many fear the effect the changes could have on their business.
At Piano, though, we see opportunity where others see threat. And we think the shift away from third-party data is an opportunity to thrive. It puts new emphasis on first-party data. It also enables those companies to draw on something even more powerful: zero-party data.
Every day, users provide media companies with the type of explicit, accurate data other businesses covet. Consumers willingly volunteer this data as they register or subscribe in order to unlock more content or build a deeper relationship with their site of choice. And it’s exactly this type of information that, if used right, can push those companies ahead in this new data landscape.
The problem with third-party data
To understand their audience and find new users, companies in and outside of media have long relied on third-party data. This type of data, sold by data aggregators, is available in huge (if shrinking) volumes and can potentially shed light on many different consumer types. However, there are three primary problems with it:
The rising consumer concern around online privacy. Consumers are becoming increasingly aware of tracking and privacy. And that’s driven government privacy and consent regulations like GDPR, as well as the changes in browser cookie policies. This has reduced the reliability of third-party data.
It can be out of date and unreliable. This is why consumers are often targeted for a product they’ve already purchased or decided against, only to have ads “follow” them around the internet for weeks. If you’re building a strategy around data that’s two months (or even two weeks) old, it’s likely already outdated. And the trend towards limiting third-party cookies only means third-party data will become even less reliable in the future.
Sourcing is often hazy. How third-party data is collected, aggregated and then sold is generally a black box. User consent isn’t always known or verifiable. As the industry moves to tighten rules around data transparency, the value and utility of third-party data will continue to decline.
Unlike third-party cookies, third-party data won’t necessarily vanish completely – though it will certainly diminish in value. Even so, there are better alternatives available, especially to media companies.
If third-party data is finished, what now?
Some important questions still remain unanswered in this post-third-party-cookie world. What is clear, however, is that the more reliable the data, the higher its value for publishers’ marketing partners and the more they’re willing to pay for targeted ads. And for that, third-party data falls short next to the other options:
Zero-party data is intentionally shared with a business by its customers. This could range from customer relationship management (CRM) data, such as the email address or gender filled out in a registration form, to preferences selected during website customization. As data explicitly shared as part of a value exchange or to improve the customer experience, it’s the most valuable data a company can collect. Critically, it belongs to the customer, not the business. It can’t be used without consent and must be carefully protected. However, it is highly accurate, reliable, and transparent in sourcing.
First-party data is often thought to encompass zero-party data. However, it also includes behavioral data collected by observing customers as they browse your website or app, or visit your store. This type of data is implicit, rather than explicit, but is still considered premium for many reasons. It’s collected by you so is more likely to be accurate. And any insights drawn from your own audience are likely to truly reflect their preferences and behaviors. This means that campaigns and strategies are more likely to succeed. Crucially, there are fewer concerns over data misuse and GDPR non-compliance as you control the conditions in which data is collected and stored.
Second-party data is when first-party data passes through a second set of hands. In other words, second-party data is someone else’s first-party data. It occurs in like-minded publisher alliances, when suppliers exchange data with their retailers for mutual benefit. It also happens when agencies strike exclusive deals with major publishers to empower programmatic buying. Gaining explicit consent to use and share second-party data isn’t always an easy feat, but can be handled reliably with the right technology setup.
Media companies are in a unique position to take advantage of these data sources. This is particularly true for first-party data and the gold standard of all user data: zero-party data.
Maximizing your zero-party data
In the media landscape, companies can use zero-party data to both benefit their advertising partners and personalize the experience they offer users themselves. All while finally aligning the interests of their subscription and advertising teams — something many media companies have been striving to do for years.
Zero-party data is already available through registration and subscription sign-ups. But the savviest companies will maximize their store using many of the same techniques subscription companies have already implemented to optimize their business. That starts with quality content that puts users at the center of the audience experience. It continues in the form of exclusive content, personalized recommendations and customized offers that engage the user and personalize the experience. Only when a true value exchange is in place will users be incentivized to provide their data to improve their experience even further.
A data management platform, or DMP, can then help companies use that data to understand and segment their users. Lookalikes will also allow them to extrapolate the data to cover their entire audience. In doing so, they’ll be able to target ads to relevant users and engage more effectively with their chosen demographic.
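As a rough illustration of what a DMP does conceptually, here is a minimal sketch of segment-building and naive lookalike expansion. The user schema and similarity threshold are hypothetical; real DMPs use far richer signals and models:

```python
from collections import defaultdict

def build_segments(users):
    """Group known users into segments keyed by a declared interest.
    Hypothetical schema: each user is a dict with 'id' and 'interests'."""
    segments = defaultdict(set)
    for user in users:
        for interest in user.get("interests", []):
            segments[interest].add(user["id"])
    return segments

def lookalike(seed_ids, all_users, threshold=0.5):
    """Naive lookalike expansion: add any user who shares at least
    `threshold` of the attributes seen in the seed group."""
    seed_attrs = set()
    for u in all_users:
        if u["id"] in seed_ids:
            seed_attrs.update(u.get("interests", []))
    expanded = set(seed_ids)
    for u in all_users:
        if u["id"] in expanded or not seed_attrs:
            continue
        overlap = len(seed_attrs & set(u.get("interests", []))) / len(seed_attrs)
        if overlap >= threshold:
            expanded.add(u["id"])
    return expanded
```

In practice, the segments drive targeted ad delivery and the lookalike step extrapolates a known audience across the unknown one, as described above.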
Take Piano customer Mediahuis, for example. One of the largest publishers in Europe, they built a substantial base of 4.4 million registered users. They then used the zero- and first-party data collected to create a significant, audience-based advertising business across Belgium and the Netherlands. Audience campaigns grew to 27% of total ad revenue, with clickthrough rates 26% higher than non-targeted campaigns.
That’s the power of zero- and first-party data. Combining your advertising and subscription businesses through a focus on known users leads to a stronger and more profitable monetization model, drawing from multiple revenue streams.
And that, as far as we’re concerned, is hardly something to be afraid of.
About the author
Michael Silberman, SVP Strategy, leads Piano’s Strategic Services team. He helps clients develop reader revenue strategies and drive success and revenue on the Piano platform. He joined Piano in 2018 after 10 years building the digital media business at New York magazine, and earlier, as one of the top editors launching and growing MSNBC.com in the early days of the consumer Internet. Visit piano.io to find out more about Piano Zero, an end-to-end solution that empowers zero- and first-party data.
Cooking and watching TV: it’s a culinary combination that’s been a
staple of the small screen since Philip Harben whipped up a batch of lobster
vol-au-vents for BBC viewers way back in 1946 (look it up, kids). But try asking Harben to remind you how
many tablespoons of olive oil he said to add to the saucepan and see how far it
gets you.
Of course, Harben died in 1970, a situation which certainly doesn’t
help the promptness of his reply. But to be fair, even when he was still
with us, his program (Cookery) offered little in the way of
interactivity.
Needless to say, digital offers a buffet of opportunities to
enhance how-to experiences like cooking. And, with its multiplatform,
entertaining, and interactive approach, Discovery Inc.’s Food Network Cooking
app aims to take cooking shows to a whole new level.
“I remember watching Julia Child on WGBH in Boston when I was
growing up. Back then, unless your living room TV was near the kitchen or you
actually had a small TV in your kitchen, you couldn’t actually make it
along with her,” said Tim McElreath, Discovery Inc.’s Senior Product Manager of
Emerging Platforms. “Now you can just bring your iPad or Fire tablet or your
Echo Show over to your kitchen counter.”
Launched in October 2019, Food Network Kitchen provides on-demand
cooking classes, which incorporate an option for online grocery shopping to secure
all the right ingredients. However, the strategy doesn’t stop there. Food
Network Kitchen also offers interactive cooking instruction with some of the
biggest names in culinary television across a wide range of platforms.
Live and lively
“Right now, we’re doing up to 10 live broadcasts a day out of New
York and Los Angeles. We’re also doing more and more on-location broadcasts,”
said McElreath. “So, we’ll have, say, Bobby Flay up in Chelsea Market on a
Sunday demonstrating how to make a chorizo omelet or something like that. He’ll
walk you through and try to do it at a pace so that you can cook along. And if
you’re watching on a mobile, you can type in a question. It’ll get relayed to
the chef by a monitor and then they’ll answer you. Or if they can’t answer it, someone
will answer it for you. So your question will get answered.”
“What we’ve seen is that our talent really starts to shine in a
live context. They’re great at producing polished scripted programming. But if you
get somebody like Michael Symon in front of the camera, talking
extemporaneously… He’s just a great raconteur. He can just talk your ear off
while he’s cooking. He’ll talk about his experiences, he’ll talk about his
family, he likes to talk about how his dad taught him to cook.”
Sometimes the questions submitted during a live broadcast aren’t about
the recipe being made, they’re about the chef and their experiences and how
they learned to cook. These kinds of questions are also welcome. “It prompts
them to start telling a little bit more about themselves.”
Demanding on-demand
Beyond the live programs, Food Network Kitchen also boasts a
library of more than 800 on-demand classes. Many of these are bundled into
courses, such as Rick Bayless’s traditional Mexican cooking course.
“We also have courses by cooking types, so we have a baking
course, a grilling course, and things like that,” said McElreath. “Each class
within a course has a recipe associated with it. But if you watch the course,
get the gist of the recipe, and then want to make it again later without
necessarily watching the whole course over again, we have regular step-by-step
text recipes that you can walk through as well.”
Listening to audio opportunities
For McElreath, the big focus with Food Network Kitchen has been to
build the smoothest possible smart-screen experience for at-home chefs,
while also providing customers with a clear value proposition.
The approach McElreath and his team are using for the Food Network
Kitchen app – interactivity, value, and entertainment – is an extension of the
work they’d been doing on Alexa and Google Assistant.
“We’d been working on voice
platforms since early 2016, and it’d been largely kind of a research and
development type group for experimenting with the capabilities of the platform,”
said McElreath. “The voice and multimodal platforms were so new that, across the
board, people were just kind of making things up as they went to try and figure
out how it fit in with their digital strategy. But we’d done a lot of
experimentation for things like searching for recipes and recommendations on
both the voice and the smart-screen platforms.”
Ease and extensions
For Food Network Kitchen, part of this strategy is making the app
available on Amazon Alexa and Echo Show, Fire Tablets, Fire TV streaming media devices,
and Fire TV Edition smart TVs. This gives McElreath and his team an opportunity
to integrate some of the work they’d already done while also extending the
platform out to a very specific situational interaction.
“You know, if somebody’s looking to take a cooking class, and we
have a set of ingredients, it should be very easy to get the exact ingredients
delivered very quickly,” he explained. “And since we’re on TV, Mobile, and Smart
Screen, you can watch something on TV and say, ‘Okay, I want to save the recipe
I’m watching right now,’ which shows up in your saves on Mobile while you’re
out shopping so that you’re able to use that as your shopping list, after which
you’ll go home and be able to walk right over to your kitchen counter and be
able to pull up that recipe on Smart Screen without having to search for it
again.”
Mind you, with audiences expecting such interactivity between
their devices, content companies are going to have to step up their game.
Fortunately, McElreath has already considered the many possibilities that exist
beyond the kitchen.
“We have a lot of short-form
how-to videos like how to chop an onion, how to poach an egg, and things like
that. And if we have a content template, then I think there’s a pretty big
opportunity to start thinking about applying it to some of our other Discovery
brands.”
Today DCN submitted the final version of its comments on the proposed revised regulations implementing the California Consumer Privacy Act (CCPA) to the California Attorney General (AG). The comments were developed with input from the DCN membership. They respond to the revised regulations that the California Department of Justice, Office of the Attorney General, released on Friday, February 7, 2020 to address the feedback received on the first draft of regulations, originally published and noticed for public comment on October 11, 2019.
DCN is actively involved in tracking the regulations, which were created to provide guidance for businesses about what is required under the CCPA, to ensure they take into account the needs of our membership. We’ll continue to review any updates and keep our members up-to-date.
For reference, here are the comments filed by DCN in December 2019 regarding the initial set of proposed regulations.
The future of cookies is unknown, and much of the industry is scrambling to maintain consumer confidence in the digital advertising ecosystem. At the heart of the fear, uncertainty, and doubt is consumer privacy. The challenge for our industry is not the cookie. The challenge is that we have progressively lost customer confidence, and it’s time to win it back.
Enter Project Rearc. At IAB’s Annual Leadership Conference, there was much discussion of the post-cookie world. “Project Rearc,” as in re-architecture, was also introduced; it calls on IAB members to build a replacement for third-party cookies. In an effort to “rebuild” the digital advertising ecosystem, Project Rearc is asking the industry to develop a new identifier that works across browsers, adheres to privacy standards and doesn’t rely on third-party cookies.
The spirit of Project Rearc is good in principle. It’s the first attempt to create a neutral environment for all players to collaborate and discuss potential options. It is still in its early stages; however, the main principle is to “put consumers at the center of everything we do.” That is exactly the kind of thinking we’ll need as an industry.
What are the existing Unique ID solutions?
Unique ID solutions were initially created to optimize the cookie-matching process. The idea is pretty simple: run a single synchronization instead of one per ad-tech partner. So, by definition and despite the hype around them, these solutions still fully rely on third-party cookies.
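To make that optimization concrete, here is a toy sketch (all names are illustrative, not any vendor’s API) of how a shared identifier collapses cookie matching from one sync per ad-tech partner to a single write against a common table:

```python
def per_partner_syncs(n_partners: int) -> int:
    """Classic cookie matching: the page fires one sync call per
    ad-tech partner for each new user."""
    return n_partners

def unique_id_syncs(n_partners: int) -> int:
    """Unique ID approach: a single synchronization writes the shared
    ID, which every participating partner can then resolve."""
    return 1

class SharedIdRegistry:
    """Toy shared-ID table: each partner maps its own cookie to one
    common identifier instead of pairwise-matching with every peer."""
    def __init__(self):
        self._table = {}

    def sync(self, partner: str, cookie: str, shared_id: str) -> None:
        self._table[(partner, cookie)] = shared_id

    def resolve(self, partner: str, cookie: str):
        return self._table.get((partner, cookie))
```

With, say, 20 partners, that is 20 sync calls per user reduced to one; but note the registry keys are still cookies, which is why these solutions remain dependent on them.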
An exception is LiveRamp, which uses IDL (IdentityLink). IDL can work without cookies by associating an ID with an online identifier such as an email address. This is neither a novel nor an optimal approach.
How do we win back consumer confidence?
Cookies are not the real problem today. Privacy is now a genuine public concern. The problem is not how users are identified and tracked. The problem is figuring out how to give them control over, and an understanding of, what happens when they are tracked.
One common misconception in the industry is assuming we can replace cookies with logged-in traffic. The problem here, as opposed to cookies, is that an email address is both persistent and easily linkable to a physical person. Furthermore, the data privacy associated with an email address is managed through barely understandable Terms and Conditions that few users actually read. In other words, it’s hardly a solution to privacy when your data can be used to derive a great deal of personally identifying information to which the user has unknowingly given consent.
We must work on finding a solution that allows users to take control of what they share, with whom, and for what purpose. We also have to do this in conditions that users can truly understand. This means, for example, not asking consumers to provide their consent for ad-tech players they’ve never heard of (a la GDPR). Instead, we need to develop a solution that provides clear benefits to users who are willing to share more information.
What could this value exchange look like?
First and foremost, communicating and delivering clear benefits should be our primary goal. For example, we need to develop a way for users to declare preferences (e.g. via a whitelist or blacklist) regarding the types of ads they would be willing to receive. They could do this manually, by choosing product categories, advertisers, or specific industries of interest. Or we could do this automatically, by letting algorithms “inspire” the consumer to decide based on a set of basic opt-in information such as age, gender, or interests and behavior inferred from website visits.
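As a sketch of what such a manually declared preference list might look like in practice (the field names “allow” and “block” are hypothetical, not part of any standard), an ad-filtering rule could be as simple as:

```python
def eligible_ads(ads, prefs):
    """Filter candidate ads against a user's declared whitelist/blacklist
    of categories. An empty whitelist means no restriction; the
    blacklist always wins."""
    allowed = set(prefs.get("allow", []))
    blocked = set(prefs.get("block", []))
    result = []
    for ad in ads:
        category = ad["category"]
        if category in blocked:
            continue  # user explicitly refused this category
        if allowed and category not in allowed:
            continue  # user opted in to specific categories only
        result.append(ad)
    return result
```

The technical part is trivial; the hard part, as argued above, is making the declaration step understandable and worthwhile for the user.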
The more granular the data a user decides to declare, the more benefits we need to provide. On the advertiser side, the user might, for example, be rewarded with coupons and discounts from brands (on top of seeing only relevant ads). On the publisher side, a user who declares granular data and accepts a wide range of ads could get access to more premium and specific content.
We could end up in a win-win situation that provides control and useful recommendations to the end user, as well as better ROAS for brands (users engage more with ads they expected to see).
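For reference, ROAS (return on ad spend) is simply the revenue attributed to a campaign divided by what the campaign cost, so the hoped-for effect of expected, user-approved ads is directly measurable:

```python
def roas(attributed_revenue: float, ad_spend: float) -> float:
    """Return on ad spend: revenue attributed to a campaign
    divided by what the campaign cost."""
    if ad_spend <= 0:
        raise ValueError("ad_spend must be positive")
    return attributed_revenue / ad_spend
```

A campaign returning $5,000 on $1,000 of spend has a ROAS of 5.0; the bet here is that consented, expected ads lift that ratio.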
It will be a long and complex journey to turn these principles into a concrete, actionable solution. And at this stage, creating such a solution is not just an option. It’s a necessity.
Understanding consumers’ wants and needs is critical for advertising success. These days, having technology that enables these insights is not just nice to have – it is a must-have. However, with the California Consumer Privacy Act (CCPA), which took effect in January and favors consumer privacy rights while increasing limitations on advertising targeting parameters, the ability to predict and target those “wants and needs” based on personal information is becoming increasingly difficult.
As a result, advertisers will undoubtedly need to adjust their strategy to accommodate new and emerging regulations. In January, for example, Google announced its plan to phase out support for third-party cookies in Chrome within the next two years. Rather than attempting to reach their target audience based on data collected from cookies, advertisers will need to delve deeper to define consumer preferences and behaviors. Pivoting media buying strategies toward contextual alignment and partnering closer with content creators will be paramount to reach consumers in the new data privacy regulated world.
At Integral Ad Science (IAS), we conduct an annual survey with agencies, advertisers, ad tech vendors, and publishers to get a sense of the common themes and challenges facing industry leaders in the upcoming year. In our 2020 Industry Pulse Report, data privacy legislation was understandably a point of concern for all parties. In fact, 82% of respondents listed contextual targeting as a key industry trend in the upcoming year.
Publishers have the advantage
This is where publishers are able to regain control. With new standards like the General Data Protection Regulation (GDPR) and CCPA already in effect (and Chrome’s looming 2022 deadline), a decreased ability to rely on cookies will undoubtedly take center stage globally. When it comes to reaching desired audiences, the industry shift toward an increased reliance on contextual and semantic capabilities will put publishers back on top. Not only do we foresee great power in first-party audience data and stronger-than-ever buyer-seller relationships, but there is also a greater opportunity for publishers to market the true value of their content.
The first step is for publishers to ask themselves: Am I monetizing all of my inventory, and how can I stand out from the crowd? Taking the time to understand their unique inventory landscape will help publishers smartly create packages with delivery guarantees that their advertisers actually want to buy.
The media buy
According to one 2020 Industry Pulse Report publisher participant, “Data will be the single most important factor. People want to reach specific audiences and, as a publisher, we need to rely on our first party data that has been cleared from a GDPR and CCPA perspective to offer contextual and behavioral targeting for media specific buys.”
However, curating and selling these custom inventory packages using first-party data is not as simple as it may sound. Publishers need to partner with an unbiased third party to validate and optimize these custom packages, especially when trying to win back advertiser trust. Now more than ever, advertisers are relying on verification partners to ensure that their ads are placed in brand-safe and suitable environments. When asked about the implications of data privacy regulations, a manager at a global media agency noted, “…Ensuring our media is presented in the way the brand intends is now more important than ever.”
And consumers agree: according to the IAS Ripple Effect study, consumers are 30% more likely to remember an ad found in a suitable environment. Publishers leveraging a third-party verification partner that both sits at the epicenter of the ecosystem and provides patented technology to analyze all page content will be one step ahead of the competition.
The increasing impact of data privacy legislation means predicting and targeting consumer “wants and needs” will become increasingly difficult. Advertisers looking to reach the right consumers in the right context will need to lean more heavily on publishers offering contextual and semantic solutions. Content is still king. So, publishers should continue to focus their efforts on creating quality content and then partner with an established third party to help intelligently package and promote it.