At this point, publishers know the cookie deprecation drill. You’d be forgiven if you’ve forgotten the very first deadline Google set for deprecating third-party cookies in Chrome. (It was Q2 2022. Believe it or not, they originally announced that deadline in early 2020.) We’ve seen Google extend the deadline twice since then. And they’ve done it yet again. We’re looking at a 2025 deadline, and once more, the digital industry needs to plan and act as though the third time will indeed be the charm.
A shock? Probably not, all things considered. A disappointment? It certainly is for publishers. In reality, publishers have been building their data strategies over the last few years as though each cookie deprecation deadline will be the one. The stakes are simply too high to call Google’s bluff, considering its dominance of the ad market.
But while it’s easy, even understandable, for publishers to feel frustrated by this stop-start process, it’s important to remember every delay gives independent tech providers more time to advance their own identity solutions, and gives publishers more time to experiment and test identity solutions. Digital professionals may consider Google the “800-pound gorilla.” However, by extending the cookie deprecation deadline, it might very well end up losing a few pounds.
To some publishers, Google’s latest move on cookies feels different. According to the company’s official statements, it pushed the deadline to get ahead of compliance and regulation concerns. But as they have in the past, publishers suspect Google is punting because its own solution still isn’t positioned to dominate the identity market. If the old 2024 deadline had stuck, Google would risk ad spend shifting away from its walled garden.
The truth is, Google’s Privacy Sandbox hasn’t caught on in the way some in the industry might have expected. DSPs aren’t spending enough there, and publishers aren’t seeing enough transactions for the Sandbox to be considered “dominant.” Lotame, for one, hasn’t tested the Privacy Sandbox, simply because we’ve seen zero interest and demand from our marketer clients. So why is that? One likely answer is that publishers and advertisers are seeing other cookieless IDs performing as well as or better than the Sandbox itself.
Google hasn’t cornered the market on identity, and the industry is better off for it. Publishers’ revenue needs and business goals are not uniform, so having multiple identity options can improve their chances of finding the solution that helps them thrive and compete in the media ecosystem. When we look at the third-party cookie deprecation timeline, it doesn’t matter “who started it.” What matters is this delay opens the door for more innovation, more options in the marketplace, and more time for publishers to explore those options.
We can’t deny that casting a wide ID net – testing multiple solutions – requires time and resources. But some leading publishers are finding that non-Google IDs are providing them a revenue lift already, and are delivering today while the Sandbox remains a question mark.
It’s just not in publishers’ best interest to sit around and wait for Google to solve their – and its own – identity problems. Publishers need to take matters into their own hands and use this time for testing and experimentation. For those who have found taking action to be daunting, this delay is a gift, and a chance to locate tech partners who can serve their specific needs. Other IDs are demonstrating it’s possible to solve for identity, preserve user privacy, and drive ad revenue. The time to double down on identity testing is now. Publishers may have to wait for Google, but they can’t wait for performance.
The public has a knowledge gap around generative artificial intelligence (GenAI), especially when it comes to its use in news media, according to a recent study of residents in six countries. Younger people across countries are more likely to have used GenAI tools and to be more comfortable and optimistic about the future of GenAI than older people. And a higher level of experience using GenAI tools appears to correlate with a more positive assessment of their utility and reliability.
Over two thousand residents in each of six countries were surveyed for the May 2024 report What Does the Public in Six Countries Think of Generative AI in News? (Reuters Institute for the Study of Journalism). The countries surveyed were Argentina, Denmark, France, Japan, the UK and the USA.
Younger people more optimistic about GenAI
Overall, younger people had higher familiarity and comfort with GenAI tools. They were also more optimistic about future use and more comfortable with the use of GenAI tools in news media and journalism.
People aged 18-24 in all six countries were much more likely to have used GenAI tools such as ChatGPT, and to use them regularly, than older respondents. Averaging across countries, only 16% of respondents 55+ report using ChatGPT at least once, compared to 56% aged 18 to 24.
Respondents 18-24 are much more likely to expect GenAI to have a large impact on ordinary people in the next five years. Sixty percent of people 18-24 expect this, while only 40% of people 55+ do.
In five out of six countries surveyed, people aged 18-34 are more likely to expect GenAI tools to have a positive impact in their own lives and on society. However, Argentina residents aged 45+ broke ranks, expressing more optimism about GenAI improving both their own lives and society at large than younger generations.
Many respondents believe GenAI will improve scientific research, healthcare, and transportation. However, they express much more pessimism about its impact on news and journalism and job security.
Younger people, while still skeptical, have more trust in responsible use of GenAI by many sectors. This tendency is especially pronounced in sectors viewed with greatest skepticism by the overall public – such as government, politicians, social media, search engines, and news media.
Across all six countries, people 18-24 are significantly more likely than average to say they are comfortable using news produced entirely or partly by AI.
People don’t regularly use GenAI tools
Even the youngest generation surveyed reports infrequent use of GenAI tools. However, if the correlation between young people using GenAI more and feeling more optimistic and trusting about it holds true on a broader scale, it’s likely that as more people become comfortable using GenAI tools regularly, there will be less trepidation surrounding it.
Between 20% and 30% of the online public across countries have not heard of any of the most popular AI tools.
While ChatGPT proved by far the most recognized GenAI tool, only 1% of respondents in Japan, 2% in France and the UK, and 7% in the U.S. say they use ChatGPT daily. Eighteen percent of the youngest age group report using ChatGPT weekly, compared to only 3% of those aged 55+.
Only 5% of people surveyed across the six countries report using GenAI to get the latest news.
It’s worth noting that the populations surveyed were in affluent countries with higher-than-average education and internet connectivity levels. Countries that are less affluent, less free, and less connected likely have even fewer people experienced with GenAI tools.
The jury is out on public opinion of GenAI in news
A great deal of uncertainty prevails around GenAI use among all people, especially those with lower levels of formal education and less experience using GenAI tools. Across all six countries, over half (53%) of respondents answered “neither” or “don’t know” when asked whether GenAI will make their lives better or worse. Most, however, think it will make news and journalism worse.
When it comes to news, people are more comfortable with GenAI tools being used for backroom work such as editing and translation than they are with its use to create information (writing articles or creating images).
There is skepticism about whether humans are adequately vetting content produced using GenAI. Many believe that news produced using GenAI tools is less valuable.
Users have more comfort around GenAI use to produce news on “soft” topics such as fashion and sports, and much less comfort with its use to produce “hard” news such as international affairs and political topics.
Thirty percent of U.S. and Argentina respondents trust news media to use GenAI responsibly. Only 12% in the UK and 18% in France agree. For comparison, over half of respondents in most of the countries trust healthcare professionals to use GenAI responsibly.
Most of the public believes it is very important to have humans “in the loop” overseeing GenAI use in newsrooms. Almost half surveyed do not believe that is happening. Across the six-country average, only a third believe human editors “always” or “often” check GenAI output for accuracy and quality.
A cross-country average of 41% say that news created mostly by AI will be “less worth paying for,” while 32% say it will be worth “about the same” and 19% “don’t know.”
Opportunities to lead
These findings present a rocky road for news leaders to traverse. However, they also offer an opportunity to fill the knowledge gap with information that is educational and reassuring.
Research indicates that the international public overall values transparency in news media as a general practice, and blames news owners and leadership (rather than individual journalists) when it is lacking. However, some research shows users claim to want transparency around GenAI tools in news, but trust news less once they are made aware of its use.
The fact that the public at large is still wavering presents an opportunity for media leaders to get out in front on this issue. Creating policy and providing transparency around the use of GenAI tools in news and journalism is critical. News leaders especially need to educate the public about their standards for human oversight around content produced using GenAI tools.
The digital media landscape is on the brink of a transformative shift with the impending phase-out of third-party cookies. Despite Google’s latest delay, adoption of alternatives remains slow even as the urgency for adaptation stays high. Teads recently undertook a survey to shed light on the preparedness of global publishers. It reveals a striking disparity in readiness and confidence, and an alarmingly small share of digital media companies prepared for the state of advertising in a cookieless world.
The current digital ad landscape
Teads surveyed 555 publishers across 58 countries, uncovering critical insights. Alarmingly, only 32% of publishers are actively preparing for the cookieless advertising future. This indicates a significant portion of the digital media industry is delaying action, potentially waiting until the deprecation’s full impact becomes unavoidable. However, some organizations are taking proactive measures to be ready for the change.
Confusion and confidence
Our survey reveals that 53% of publishers feel overwhelmed by the numerous cookieless advertising solutions available, and only 28% feel confident in their understanding of the new landscape. This confusion is a significant barrier to effective adaptation. Heather Carver, Chief Revenue Officer at Freestar, notes the importance of developing durable technologies and strategies. “We’re using this time as an opportunity to strengthen our cookieless solutions. We’re focusing on developing durable technologies and strategies that will remain relevant regardless of cookie policies.”
Anticipating the financial impact
Publishers expect the financial implications of the cookieless transition to be substantial. Approximately 45% of publishers expect a significant decrease in ad revenue, with a 120% increase in concern year-over-year. This is underscored by findings that cookieless traffic not only fills less but also yields less, putting revenue streams at risk.
Reasons for optimism
Despite the many challenges media companies face as they prepare for a cookieless advertising future, there is a positive outlook among some publishers. About 44% see the transition as an opportunity to leverage their first-party data and enhance content quality. Furthermore, 37% of respondents appreciate the privacy benefits of cookieless solutions, expressing confidence in finding new ways to monetize their content.
Preparation and proactive identity solutions
As publishers adapt to this brave new world, many are experimenting with cookieless solutions and leveraging first-party data to set themselves up for success in the years to come.
Kedar Prabhu, VP of Product Management at Dow Jones, highlights the importance of leveraging first-party data in a world with or without cookies. “By focusing on the wealth of data generated by our direct and long-standing relationships with subscribers, we’ve not only prepared ourselves for the post-cookie world but have also unlocked new opportunities for growth and engagement,” Prabhu said. “We can offer our advertisers targeted, effective ad placements based on direct audiences composed of known users and enriched with real, meaningful insights into preferences and behaviors, all while maintaining the privacy standards that our customers and regulators expect.”
Similarly, the IAB is undertaking efforts to develop standards that support the industry’s transition. Angelina Eng, VP of IAB’s Measurement, Addressability & Data Center, emphasizes the need for robust guidelines: “We see this as a critical time to advance our guidelines and frameworks to support the industry’s transition,” Eng said. “We’re accelerating our efforts to develop standards that address the needs of a cookieless web, ensuring that all parties can navigate this shift smoothly.”
Other publishers, like Freestar, are using this time as an opportunity to strengthen their cookieless solutions. “The extension doesn’t change our momentum. But it allows us more stability for testing and implementing these solutions,” Carver said.
Tier-specific insights and strategies
The survey also highlights differences in readiness and strategy among various tiers of publishers:
Tier 1 Publishers: Leading the Charge
52% believe the changes offer an opportunity to differentiate through first-party data and content quality.
62% have a signed-in strategy, and 38% employ dedicated resources for cookieless solutions.
74% engage directly with advertisers using first-party data.
Mid-Tier Publishers: Strong Adaptation
74% engage directly with advertisers through first-party data.
32% have a strong grasp of the evolving digital landscape.
36% have specific resources for exploring cookieless technologies.
Lower Tier Publishers: Facing Challenges
Greater dependence on industry solutions like Seller Defined Audiences (SDA).
Only 22% have a strong understanding of the shift towards a cookieless advertising framework.
17% are testing cookieless alternatives, reflecting a slower response to industry changes.
“Despite Google’s recent announcement, the phase-out has only been delayed until early 2025, and the reality of a cookieless world is here,” said Simon Klein, Global SVP of Supply at Teads. “This data underscores the urgent need for industry-wide adaptation and the critical role of innovative solutions in this transition.”
Digital media executives must adopt innovative technologies and forward-thinking strategies to successfully navigate the realities of a cookieless world. While many are in a wait-and-see holding pattern, there’s no time to waste. This isn’t a question of if; the change is inevitable. Proactive publishers are not only poised to be ready for the cookieless future of digital advertising, they are employing solutions that are helping them do better business today.
About the author
James is a data leader with over 20 years of experience in digital advertising. As the Global VP of Data at Teads, he leads a team focused on data-driven solutions and the cookieless transition. He previously held senior roles at Microsoft, Verizon, AOL, and Yahoo, excelling in data-driven marketing strategies. A fellow of the Chartered Institute of Marketing, James enjoys racing cars, cooking, and traveling.
Not too long ago, the consensus was that a significant digital reader revenue strategy could only work at two or three outlier news organizations. The New York Times had the breadth, depth, and quality of content for which the average highly engaged news consumer might pay. The Wall Street Journal had a large potential base of readers who needed its specialized content for their jobs and who had expense accounts that would cover it.
Beleaguered regional newspapers such as the Minneapolis Star-Tribune and the Boston Globe eventually proved this wrong. Voice of San Diego and dozens of other local and national nonprofit newsrooms found they could have public radio-like success with small donations from readers who understood the altruistic mission of accountability journalism.
Beyond the business side
Local news organizations are right to pursue the formula. We’re past the debate over whether a significant number of readers will pay to support strong journalism. It’s been proven they will.
Industry leaders and journalism funders continue to put crucial focus on testing and improving revenue models. Many cohorts of local publishers have been trained in the business-side factors involved in a reader revenue strategy. Help on achieving the level of journalism that will capture an audience and move them to give or subscribe has been much harder to come by.
And that’s the elephant in the room: The media support system – the dot orgs, foundations and funding organizations – need to figure out how to help make the journalism at under-resourced newsrooms strong and impactful enough to generate the kind of support that will make them sustainable. (And ultimately lead to more such journalism.)
This isn’t a question of building a business model or fundraising. This is about staffing and data acumen and the knowledge and tools it takes to create powerful journalism and a user experience that audiences value and support.
As they long have, amazing training opportunities exist through organizations such as IRE, ONA, SPJ, the Ida B. Wells Society and more. But the barriers for small and under-resourced news organizations to actually take advantage and put that training to use are high.
Small is the new normal
Zooming out, we see a local journalism landscape dominated by hundreds of very small newsrooms: local independent online startups that are one- to three-person operations; legacy Black and brown news organizations with limited resources; and chain-owned daily newspapers whose staffs have been reduced to one or two reporters.
First and foremost, these newsrooms need more direct operational funding to employ more journalists. This is something that the massive Press Forward initiative created by a coalition of journalism-supporting foundations is seeking to address.
But the industry also needs to have teams equipped for the future. Newsrooms like these will benefit from a system of training, resources and mentorship to support “capital J” accountability journalism in news ecosystems that are now decentralized.
Readers are well-served and grateful for coverage of the day-to-day news of the community. However, every newsroom yearns for the space and resources to also do work that goes deeper, that holds the powerful accountable, that has impact and drives change. It’s the kind of work that elevates the stature of your brand, that exposes your organization to more people, that is the catalyst to subscribe or give for many.
We need great journalism
I’d argue that the same dynamic applies to advertising at many news organizations, even if they don’t realize it. They can’t compete with the price, reach and targeting of the digital ad tech that drives the biggest online platforms. But small newsrooms can make a hell of a case to local advertisers that they want to be adjacent to and associated with the kind of journalism that has the community appreciative and engaged.
Collaborative journalism is another powerful lever. It has produced incredible (even Pulitzer Prize-winning) work, and it is exciting how quickly it has been embraced. But it’s never completely organic. This movement has happened in large part through the facilitation, research, training, convening and cheerleading of the Center for Cooperative Media at Montclair State University in New Jersey. Solutions Journalism Network has built training programs into its facilitation of regional and topical journalism collaboratives. And ProPublica, the Center for Investigative Reporting/Reveal, ICIJ and my alma mater, the Center for Public Integrity, helped show a new generation of investigative and single topic-focused nonprofits how having collaboration in your DNA allows you to punch far above your weight.
Meanwhile, Report for America is building training, mentorship and additional editing support into its process, to make sure that its ambitious goal of putting hundreds of additional reporters in under-resourced local newsrooms across the country has the intended impact. And the Investigative Editing Corps is pairing small newsrooms with experienced editors to provide support for enterprise and investigative reporting that goes beyond their typical daily news coverage.
Emphasis on essentials
Technology is also playing a part in making more advanced reporting possible in smaller newsrooms, from data journalism resources such as The Accountability Project and Big Local News to the document and records-access tools of MuckRock.
When the Center for Public Integrity focused its mission four years ago on investigative reporting that confronts inequality in the U.S., we thought about how to scale that work beyond what our 25-person newsroom could do. When we obtained secret White House documents showing the true extent of the COVID-19 outbreak in 2020, we shared them directly with journalists across the country, and it saved lives. After spending thousands of hours obtaining and cleaning more than a decade’s worth of data about polling place locations and closures, we made it available to power not just our own reporting, but others’ work ranging from small local news organizations to NPR, the Wall Street Journal and the New York Times.
A lightbulb went off when we were publishing “Unhoused and Undercounted,” an investigation that proved public school districts across the country were failing to identify and serve homeless students as required by federal law. We realized that this story, using our data analysis, could be written in almost any local community in the country and have a high potential for very direct impact in helping kids.
We were offering the data and the formula of questions to ask. How could we get it — and similar investigations — into the hands of any/every willing local newsroom able to tackle it, in a way that allowed them to have impact with few resources but also an entry point to go far deeper into the topic in their community if they could?
Decentralized journalism calls for decentralized solutions for seeding and supporting the kind of work that will spark a virtuous cycle of revenue that rewards the most impactful journalism. Our media ecosystem is supported by a robust network of organizations that are focused on keeping newsrooms afloat. But like all things digital, even this support must continue to evolve. Revenue models are only as effective as audiences’ willingness to support journalism. It’s time to focus on empowering under-resourced newsrooms to deliver the highest caliber journalism, to support society – and to inspire audiences to support them.
About the author
Matt DeRienzo is a veteran newsroom leader whose work over the past four years as editor in chief of the Center for Public Integrity was recognized with a national Edward R. Murrow Award for general excellence. Previously, he served as vice president of news for Hearst’s Connecticut newspapers and as the first full-time executive director of LION, a national nonprofit supporting local independent online news organizations. He can be reached at [email protected].
Programmatic advertising may be the most ubiquitous, influential market in the world. Yet almost nobody, including most of its participants, understands how it actually works. It’s something I cover in a chapter of my new book The Death of Truth. The chapter – which attracted a lot of attention after it was excerpted in WIRED – is (appropriately) titled “Buying Blind.” It documents how so many publishers and even marketers are blindsided by the unintended consequences of programmatic’s dominant role in advertising.
Even those working with programmatic advertising every day were surprised at how it can actually undermine the journalism ecosystem. It has created a marketplace in which blue-chip advertisers unknowingly finance the worst peddlers of misinformation and disinformation at the expense of publishers who still care about informing readers.
A striking example of this phenomenon occurred in 2019 when my team discovered that Warren Buffett’s Berkshire Hathaway was the biggest advertiser on Sputnik News, a Kremlin-controlled disinformation website, through its subsidiary Geico. This was, of course, not because of a deliberate decision made by Buffett or executives at Geico. Instead, Geico’s funding for Sputnik came in the form of ads the company inadvertently placed on the Russian disinformation site because of the black hole that is programmatic advertising. Comscore and (my company) NewsGuard have estimated there is $2.6 billion a year in programmatic advertising unintentionally going to publishers of misinformation—revenues that quality news publishers badly need.
Companies like Geico use ad tech tools offered by “Demand Side Platforms” — DSPs — to buy these ads. The largest DSPs, as you might know, are Google and The Trade Desk. The Trade Desk (now a partner with NewsGuard in helping to combat this issue) has real-time bidding technology that uses advanced algorithms and extensive data to place ads across the web at the scale of 5.4 million ads per second. But for all the sophisticated data powering the operations of the DSPs, one crucial detail was overlooked as programmatic was invented and flourished: The identity of the websites where a brand’s ads are placed. This has left advertisers flying blind, sending hundreds, thousands, or even millions of dollars into the ether without knowing where their ads will appear. The result: Ads appearing in environments where studies show they are less likely to get cost-effective responses – and also likely to embarrass the brand.
Exclusion and blocking: Blunt, ineffective advertising tools
Seeing how brands have been burned by placing their ads alongside unsavory content, many advertisers and agencies have taken blunt approaches that range from bad to worse:
One approach involves using “exclusion lists” — a list of websites that an advertiser deems inappropriate — instructing the DSP to block the brand from running its ads on any sites on the list. This approach has some appeal, but it is reactive and never fully effective because new hoax websites crop up daily, wreaking havoc on our information ecosystem long before they make it onto an advertiser’s exclusion list. This is true now more than ever, as phony websites created by generative AI looking to get in on the programmatic gravy train are popping up every day.
Another approach involves using “keyword blocklists” — lists of sometimes thousands of keywords like “Ukraine,” “war,” “gay” or “Black” that the advertiser deems dangerous. These blocklists instruct the DSP to block ads from appearing on any webpage that contains even just one of these keywords. But studies have shown that news outlets — especially those serving minority or underserved communities — are disproportionately harmed by keyword blocklists.
Worse yet, some advertisers have decided to remove their ads from news altogether by blocking their ads from the entire category. This has the obvious effect of harming the news industry, slashing already dwindling revenues for news outlets. But it also has a negative, dollars-and-cents impact on advertisers themselves: Missed opportunity.
According to studies, such as those from the IAB and Stagwell, advertising on news can be highly effective. In a 2020 study, the IAB found that “nearly half of consumers find brands that advertise in the news to be more customer-focused and engaging, more innovative, and relevant to them.” Newsreaders are a large but overlooked demographic: 25% of Americans are “news junkies,” according to recent research by Stagwell. Therefore, any strategy that simply avoids or excludes news is failing to reach a large, engaged audience.
Inclusion creates a better ad ecosystem
There is a better way: Website inclusion lists. Inclusion lists allow advertisers to focus their ad spend on pre-vetted, high-quality websites that align with their brand values and target audience. This ensures better placement and engagement, and ultimately increases the return-on-investment (ROI) of ad campaigns. And, of course, this proactive approach reduces the risk of a brand having its ads appear on low-quality or inappropriate sites, including sites propagating misinformation.
My company, NewsGuard, offers one solution for building a high-quality publisher inclusion list. We deploy a team of expert journalists to rate and review the reliability of news sources across the open web based on a set of apolitical criteria of journalistic practice. Using these ratings for more than 10,000 top news and information sources, NewsGuard offers inclusion lists of highly credible news publishers, which can be activated via pre-bid segments through The Trade Desk, Peer 39, and Comscore, or via private marketplaces of trusted news domain lists in SSPs including Pubmatic, OpenX and Magnite.
Because the focus is on websites that pay attention to best journalistic practices, every member of DCN is probably on NewsGuard’s highest-quality inclusion list.
You should brag about it!
Quality publishers can play their part in advocating for the use of inclusion lists as the superior advertising strategy. Publishers can make advertisers aware of inclusion lists in their advertising marketing materials. Many publishers now signal their trustworthiness to advertisers and readers by including their high score from an independent third party in their marketing materials. They can encourage ad agencies to end the harmful practices of boycotting news or using overly broad keyword blocking. They can work with the supply-side platforms to ensure that only other quality news websites are included in ad buys.
This next, logical evolution in programmatic advertising requires that everyone in the process step up to do their part. Publishers and advertisers would be the most immediate beneficiaries, with news departments getting the revenues they need and brands getting the more efficient purchases they seek. Especially in an election year, we should also keep in mind that democracy will function best with more news resources to support an educated public. Democracy matters. The news matters. And it pays to be the smartest players in the programmatic marketplace.
About the author
Steven Brill is the co-founder of NewsGuard. His new book, “The Death of Truth,” comes out June 4 from Penguin Random House.
The fediverse buzz continues to grow, with articles highlighting the potential to revolutionize the digital landscape. Proponents say it’s similar to the Internet’s early days, before Big Tech platforms built their algorithmic fiefdoms. Instead, the fediverse is about interoperability and flexibility.
Media companies are always on the lookout for ways to attract new audiences and engage more meaningfully with their readers. And – given Google’s experimentation with AI answers and social sites “distancing themselves from news” – finding new routes to audience development has become an increasing imperative.
The decentralized nature of the fediverse offers a compelling alternative to traditional search and social. Importantly, this approach allows media companies to retain their direct relationship with audiences, which removes the dependency on social and big-tech platforms for reaching new people.
Unlike traditional social media platforms that operate within closed ecosystems, the fediverse represents a decentralized network of interconnected servers and platforms. It comprises a federation of independent servers, each hosting its own social media platform.
These platforms, which range from microblogging to image sharing to video hosting, communicate using standard protocols. Their interoperability allows people on different servers to interact seamlessly. The fediverse decentralizes media companies by enabling them to distribute their content across interconnected servers and platforms rather than relying on a single, centralized platform.
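The common protocol behind this interoperability is ActivityPub, the W3C standard most fediverse platforms speak. As a rough illustration, a federated post travels between servers as a JSON “Create” activity wrapping a “Note” object. The sketch below is hypothetical (the URLs and the helper function are placeholders, not any platform’s real API), but the activity shape follows the spec:

```python
import json

def build_create_activity(actor_url: str, note_id: str, content: str) -> dict:
    """Wrap a Note in a Create activity, the shape ActivityPub servers
    exchange when a post is federated. All URLs here are placeholders."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor_url,
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
        "object": {
            "id": note_id,
            "type": "Note",
            "attributedTo": actor_url,
            "content": content,
        },
    }

activity = build_create_activity(
    "https://example-publisher.com/users/newsroom",  # hypothetical actor
    "https://example-publisher.com/notes/1",         # hypothetical note ID
    "<p>Our latest investigation is live.</p>",
)
print(json.dumps(activity, indent=2))
```

Because every server understands this same envelope, a post published once can surface natively on Mastodon, Threads, or any other compliant platform, and replies flow back the same way.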
Emphasis on choice and control
Unlike centralized platforms, where a single server owned by the platform provider stores user data and content, the fediverse lets people choose their own server based on individual preferences regarding privacy, content moderation, and community guidelines. This decentralized approach empowers audiences by putting them in charge of their online experience. It also mitigates concerns about data ownership and platform censorship. For media companies, this translates into an environment where people are more likely to engage with content they trust and have control over.
Encouraging diversity and inclusivity
The fediverse enables people to connect across different platforms and communities within the federation. For example, a user on a microblogging platform can follow and interact with users on a video hosting platform. This functionality breaks down the barriers that typically separate content and conversations on traditional social media platforms. The resulting cross-platform interaction fosters a rich tapestry of ideas, perspectives, and content, creating a more vibrant and dynamic online ecosystem. Media companies can leverage this aspect of the fediverse to reach diverse audiences actively seeking varied content.
Organic and community-driven engagement
In contrast to the centralized model, where platform algorithms often dictate content visibility and user interactions, the fediverse promotes a more organic and community-driven approach. Users have greater control over their timelines and content visibility, allowing for a more personalized and authentic online experience.
This user-centric design aligns with evolving expectations of digital privacy and autonomy, resonating with individuals seeking alternatives to mainstream social media platforms. Media companies can benefit from this by creating content that naturally finds its way to interested audiences without algorithmic interference.
Media companies test the fediverse
At least two digital media companies are exploring the fediverse to gain more control over their referral traffic and onsite audience engagement. The Verge and 404 Media are building new functions that allow them to simultaneously distribute posts on their sites and federated platforms like Threads, Mastodon, and Bluesky. Replies to those posts on those platforms become comments on their sites.
This functionality means people from different platforms can interact with the content without creating individual accounts for each platform. For media companies, this interoperability can significantly enhance audience reach and engagement.
Advantages for media companies using the fediverse
Usability and interoperability are ideal for enhancing user experience. This approach enables seamless communication between platforms, ensures user autonomy, and provides robust content control.
Interoperability ensures that different platforms can communicate using common protocols like ActivityPub. This allows people to interact with content across various platforms seamlessly, thus creating a unified and interconnected ecosystem.
User autonomy empowers people to select their servers (instances) based on their preferences for privacy, moderation, and community guidelines, offering greater freedom and reducing the dominance of any single platform.
Content control enables media companies to host their own servers or collaborate with trusted ones, giving them direct control over content distribution and audience engagement. This mitigates risks associated with algorithm changes or policy shifts on major social media platforms.
Cross-platform interaction allows content like a media company’s article shared on one platform to receive comments, likes, and shares from users on other platforms, broadening reach and engagement without being confined to a single platform.
Community-driven moderation decentralizes content moderation, allowing it to occur at the community or server level. Media companies can set moderation policies to ensure their content meets their standards and audience expectations.
Enhanced privacy through decentralization gives media companies more control over their data and privacy settings, protecting user data from being exploited by large platforms.
Although federated platforms have smaller user bases than the larger walled gardens like Facebook and X, they offer significant audiences for media companies. Federating their sites allows media companies to tap into the growing demand for decentralized, user-centric platforms, attracting new audiences and fostering a more loyal and engaged user base.
Federated platforms offer the potential for a fundamental shift in how media companies interact with their audiences. Media companies that experiment with the fediverse can initiate engagement and have an opportunity to build stronger, more direct connections with their audiences.
Digital media has always been marked by the intersection of creativity and technology. Unfortunately, in many cases that intersection has been a bumpy one. Today, however, there are automation solutions that can allow these two sides of the media business to work together and do better work as a result.
The digital publishing business is at a pivotal crossroads. On one hand, an explosion of creative possibilities facilitated by technologies like artificial intelligence (AI) is enabling personalized, dynamic content at scale. On the other hand lies the intricate realm of ad operations, where complexity is constant and efficiency and precision are paramount. The gap between creative potential and operational capability must be closed if digital media firms are to compete and scale effectively in today’s market.
The path to a solution starts with an exploration of the relationship between tech-enabled creatives and ad ops and ends with a question: How can organizations strategically synchronize right-brain creativity with left-brain operational efficiency in publishing? The answer is to level the playing field with the power of automation. Bringing both sides of publishing into perfect, tech-powered harmony does more than just streamline workflows. It empowers media firms to navigate the rapid pace of market changes, paving the way for maximum efficiency and operational excellence.
AI’s impact on the publishing business
Reaching the right customer at the right time with the right message is becoming a critical component of advertising success. So critical, it’s driving publishing companies to embrace advanced technologies, particularly AI technologies, to create content at scale like never before. The surge in tech-enabled content combined with the end of third-party cookies is forcing ad ops teams to re-evaluate their operations to keep up. Here’s a closer look at the impact of AI on content creation.
AI enhanced content production and customization at scale
By harnessing AI’s power to analyze data and recognize patterns, publishers can turbocharge content creation, crafting high-quality texts and visuals that are deeply engaging. This tech-enabled approach enhances content production and enables precise audience segmentation. It delivers hyper-personalized content at an unparalleled scale across various formats. All of which substantially boosts efficiency and the impact of content strategies.
How AI affects brand engagement
The creative advancements powered by AI are also profoundly influencing consumer interactions with brands. Relevant, engaging content that aligns with audience preferences and values can substantially boost brand engagement, which can in turn become revenue for brands. The downside is that it creates even more operational challenges for ad ops teams trying to keep up.
Operational challenges with tech-enabled creatives
To reap the benefits of tech-enabled creatives, publishers must fully grasp the challenges they pose to ad ops teams. This understanding is crucial for implementing effective solutions and maximizing the value of creative content in digital media.
Pressure to scale
Traditional manual processes in ad ops (campaign setup, trafficking, performance tracking) become bottlenecks when faced with the scale and speed of AI-driven content production. The pressure to scale operations to match the pace of content generation increases, requiring equally robust systems in ad ops.
Complexity management
The complexity of content tailored for different audience segments requires meticulous management throughout its lifecycle. Automated systems capable of managing the detailed workflows and large datasets of ad ops are essential for keeping pace with production, and achieving maximum ROI.
Risk of siloed functions
Another significant challenge is the risk of siloed functions within organizations. When advanced creative tools operate independently from operational capabilities, it can create gaps that lead to inefficiencies and missed opportunities.
Creative outputs must seamlessly integrate with ad ops systems to prevent campaign delays, performance tracking issues, and scaling challenges. Such disconnects hinder adaptability, affecting competitiveness and growth. Robust automation reduces friction between sales and ops teams, translating to improved morale, higher productivity, fewer fire drills, and, ultimately, happier clients.
Clearly, there is a critical need for automation. Without integrating automated systems, ad ops teams face delays, increased make-goods, and ultimately a failure to capitalize on the content’s potential.
An integrated approach nets tangible benefits
As the digital media landscape evolves, integrating advanced automation tools with creative production capabilities becomes increasingly critical. It not only enhances operational efficiencies but also serves as a key driver of revenue growth and scalability. Let’s take a look at the tangible benefits automation can deliver.
Quickly adapting to market changes
With AI’s rise and content production dramatically increasing, ad ops teams struggle to quickly adapt to market changes. In fact, according to Theorem’s research, 79% of ad ops professionals think their tools need improvement, and 69% feel digital advertising processes need enhancement. Implementing advanced, automated systems can empower ad ops teams to meet the needs of a constantly evolving market more effectively.
Enhancing data flow
Automation also significantly improves the flow of information between creative teams and ad ops. Centralizing information, as noted by 59% of ad professionals, ensures that data from campaign performances is quickly available to refine ongoing and future campaigns.
Automation drives revenue through operational excellence
Automation not only streamlines operations in ad ops, it significantly enhances ROI. In our research, 55% of ad pros noted time-consuming processes as a major pain point, and another 36% reported that complex processes slowed their pace of work. By automating tasks such as campaign setup, monitoring, and optimization, businesses can minimize the costly mistakes and make-goods that directly impact bottom-line revenue.
There is also the tangible reward of time saved as a direct result of automation. 56% of ad pros believe implementing automation saves valuable time, which can be redirected towards creative and strategic initiatives.
Navigating innovation with automation: the strategic advantage
Media organizations need to remain competitive and responsive in an industry driven by rapid technological advancements. Embracing the symbiotic integration of tech-enabled creatives with automated ad ops can lead to marked improvements in campaign outcomes and overall business performance. Implementing automation isn’t merely about keeping pace. It’s about setting the pace in the journey towards more synchronized, efficient, and responsive operations in publishing. Now is the time to act, to innovate. This will empower your team and your organization to lead now – and well into the future.
These days, digital media companies are all trying to figure out how to best incorporate AI into their products, services and capabilities, via partnerships or by building their own. The goal is to gain a competitive edge as they tailor AI capabilities to their audiences, subscribers and clients’ specific needs.
By leveraging proprietary Large Language Models (LLMs), digital media companies have a new tool in their toolboxes. These models provide differentiation and added value, enhanced audience engagement, and a better user experience. They also set these companies apart from those opting for licensing partnerships with third-party LLMs, which offer more generalized knowledge bases and draw from a wide range of sources in terms of subject matter and quality.
A growing number of digital media companies are rolling out their own LLM-based generative AI features for search and data-based purposes to enhance user experience and create fine-tuned solutions. In addition to looking at several of the offerings media companies are bringing to market, we spoke to Dow Jones, Financial Times and Outside Inc. about the generative AI tools they’ve built and explore the strategies behind them.
Media companies fuel generative AI for better solutions
Digital media companies are harnessing the power of generative AI to unlock the full potential of their own, sometimes vast, stores of proprietary information. These new products allow them to offer valuable, personalized, and accessible content to their audiences, subscribers, customers and clients.
Take for example, Bloomberg, which released a research paper in March detailing the development of its new large-scale generative AI model called BloombergGPT. The LLM was trained on a wide range of financial data to assist Bloomberg in improving existing financial natural language processing (NLP) tasks, such as sentiment analysis, named entity recognition, news classification, and question answering, among others. In addition, the tool will help Bloomberg customers organize the vast quantities of data available on the Bloomberg Terminal in ways that suit their specific needs.
Fortune partnered with Accenture to create a generative AI product called Fortune Analytics, launched in beta on June 4. The tool delivers ChatGPT-style responses based on 20 years of financial data from the Fortune 500 and Global 500 lists, as well as related articles, and helps customers build graphic visualizations.
Generative AI helps customers speed up processes
A deeper discussion of how digital media companies are using AI provides insights to help others understand the potential to leverage the technology for their own needs. Dow Jones, for example, uses generative AI for a platform that helps customers meet compliance requirements.
Dow Jones Risk & Compliance is a global provider of risk and compliance solutions for banks and corporations, helping organizations perform checks on their counterparties. Those checks serve to comply with anti-money laundering and anti-corruption regulation, and to mitigate supply chain risk and reputational issues. Dow Jones Risk & Compliance provides tools that allow customers to search data sets and help manage regulatory and reputational risk.
In April, Dow Jones Risk & Compliance launched an AI-powered research platform for clients that enables organizations to build an investigative due diligence report covering multiple sources in as little as five minutes. Called Dow Jones Integrity Check, the research platform is a fully automated solution that goes beyond screening to identify risks and red flags from thousands of data sources.
The planning for Dow Jones Integrity Check goes back a few years, as the company sought to provide its customers with a quicker way to do due diligence on their counterparties, explained Joel Lange, Executive Vice President and General Manager of Risk and Research at Dow Jones.
Lange said that Dow Jones effectively built a platform that automatically creates a report for customers on a person or company, using technology from AI firm Xapien. It brings together Dow Jones’ data with other data sets, corporate registrar information, and wider web content, then leverages the platform’s generative AI capability to produce a piece of analysis or a report.
Dow Jones Risk & Compliance customers use their technology to make critical, often complex, business decisions. Often the data collection process can be incredibly time consuming, taking days if not weeks.
The new tool “provides investigations teams, banks and corporations with initial due diligence. Essentially it’s a starting point for them to conduct their due diligence, effectively automating a lot of that data collection process,” according to Lange.
Lange points out that the compliance field is always in need of increased efficiency. However, it carries with it great risk to reputation. Dow Jones Integrity Check was designed to reshape compliance workflows, creating an additional layer of investigation that can be deployed at scale. “What we’re doing here is enabling them to more rapidly and efficiently aggregate, consolidate, and bring information to the fore, which they can then analyze and then take that investigation further to finalize an outcome,” Lange said.
Regardless of the quality of the generated results, most experts believe that it is important to have a human in the loop in order to maintain content accuracy, mitigate bias, and enhance the credibility of the content. Lange also said that it’s critical to have “that human in the loop to evaluate the information and then to make a decision in relation to the action that the customer wants to take.”
In recent months, digital media companies have been launching their own generative AI tools that allow users to ask questions in natural language and receive accurate and relevant results.
The Associated Press created Merlin, an AI-powered search tool that makes searching the AP archive more accurate. “Merlin pinpoints key moments in our videos to the exact second and can be used for older archive material that lacks modern keywords or metadata,” explained AP Editor in Chief Julie Pace at The International Journalism Festival in Perugia in April.
Outside’s Scout: AI search with useful results
Chatbots have become a popular form of search. Originally pre-programmed and only able to answer select questions included in their programming, chatbots have evolved and increased engagement by providing a conversational interface. Used for everything from organizing schedules and news updates to customer service inquiries, generative AI-based chatbots assist users in finding information more efficiently across a wide range of industries.
Much like The Guardian, The Washington Post, The New York Times and other digital media organizations that blocked OpenAI from using their content to power artificial intelligence, Outside Inc. wasn’t going to let third parties scrape its platforms to train LLMs, CEO Robin Thurston explained.
Instead, they looked at leveraging their own content and data. “We had a lot of proprietary content that we felt was not easily accessible. It’s almost what I’d call the front page problem, which is you put something on the front page and then it kind of disappears into the ether,” Thurston said.
“We asked ourselves: How do we create something leveraging all this proprietary data? How do we leverage that in a way that really brings value to our user?” Thurston said. The answer was Scout, Outside Inc.’s custom-developed AI search assistant.
The company could see that generative AI offered a way to make that content accessible and even more useful to its readers. Outside’s brands inspire and inform audiences about outdoor adventures, new destinations and gear – much of it evergreen, proprietary content that stopped adding value once it left the front page, even though it still had value if audiences could easily surface it. The chat interface keeps that content accessible to readers after it is no longer front and center on the website.
Scout gives users a summary answer to their question, leveraging Outside Inc’s proprietary data, and surfaces articles that it references. “It’s just a much more advanced search mechanism than our old tool was. Not only does it summarize, but it then returns the things that are most relevant,” he explained.
Additionally, Outside Inc’s old search function worked brand by brand. Scout searches across the 20+ properties owned by the parent company, which include Backpacker, Climbing, SKI Magazine, and Yoga Journal, among others, and brings all of the results together in one place – from the best camping destinations and trails to family outdoor activities, gear, equipment and food.
One aspect that sets Outside Inc.’s model apart is their customer base, which differs from general news media customers. Outside’s customers engage in a different type of interaction, not just a quick transactional skim of a news story. “We have a bit of a different relationship in that they’re not only getting inspiration from us, which trip should I take? What gear should I buy? But then because of our portfolio, they’re kind of looking at what’s next,” Thurston said.
It was important to Thurston to use the LLM in a number of different ways, so Outside Inc launched a local newsletter initiative with the help of AI. “On Monday mornings we do a local running, cycling and outdoor newsletter that goes to people that sign up for it, and it uses that same LLM to pick what types of routes and content for that local newsletter that we’re now delivering in 64,000 ZIP codes in the U.S.”
Thurston said they had a team working on Scout and it took about six months. “Luckily, we had already built a lot of infrastructure in preparation for this in terms of how we were going to leverage our data. Even for something like traditional search, we were building a backend so that we could do that across the board. But this is obviously a much more complicated model that allows us to do it in a completely new way,” he said.
Connecting AI search to a real subscriber need
In late March, The Financial Times released its first generative AI feature for subscribers called Ask FT. Like Scout, the chat-based search tool allows users to ask any question and receive a response using FT content published over the last two decades. The feature is currently available to approximately 500 FT Professional subscribers. It is powered by the FT’s own internal search capabilities, combined with a third-party LLM.
The tool is designed to help users understand complicated issues or topics, like Ireland’s offshore energy policy, rather than just searching for specific information. Ask FT searches through Financial Times (FT) content, generates a summary and cites the sources.
“It works particularly well for people who are trying to understand quite complex issues that might have been going on over time or have lots of different elements,” explained Lindsey Jayne, the chief product officer of the Financial Times.
Jayne explained that they spend a lot of time understanding why people choose the FT and how they use it. People read the FT to understand the world around them, to have a deep background knowledge of emerging events and affairs. “With any kind of technology, it’s always important to look at how technology is evolving to see what it can do. But I think it’s really important to connect that back to a real need that your customers have, something they’re trying to get done. Otherwise it’s just tech for the sake of tech and people might play with it, but not stick with it,” she said.
Trusted sources and GenAI attribution
Solutions like those from Dow Jones, FT and Outside Inc. highlight the power of a trusted brand to create deep, authentic audience relationships built on reliability and credibility. Trusted media brands are considered authoritative because their content is based on credible sources and facts, which ensures accuracy.
Currently, generative AI has demonstrated low accuracy and poses challenges to sourcing and attribution. Attribution is therefore a central feature for digital media companies rolling out their own generative AI solutions. For Dow Jones compliance customers, attribution is critical: they need to know whether they’re making a decision based on information that is available in the media, according to Lange.
“They need to have that attributed to within the solution so that if it’s flowing into their audit trails or they have to present that in a court of law, or if they would need to present it to our internal audit, the attribution is really key. (Attribution) is going to be critical for a lot of the solutions that will come to market,” he said. “The attribution has to be there in order to rely on it for a compliance use case or really any other use case. You really need to know where that fact or that piece of information or data actually came from and be able to source it back to the underlying article.”
The Financial Times’ generative AI tool also offers attribution to FT articles in all of its answers. Ask FT pulls together lots of different source material, generates an answer, and attributes it to various FT articles. “What we ask the large language model to do is to read those segments of the articles and to turn them into a summary that explains the things you need to know and then to also cite them so that you have the opportunity to check it,” Jayne said.
They also ask the FT model to infer from people’s questions what time period it should search. “Maybe you’re really interested in what’s happened in the last year or so, and we also get the model to reread the answer, reread all of the segments and check that, as kind of a guard against hallucination. You can never get rid of hallucination totally, but you can do lots to mitigate it.”
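The retrieve-summarize-cite loop Jayne describes can be sketched in a few lines. This is a toy illustration, not the FT’s actual implementation: the keyword-overlap scorer stands in for a real search backend, and the string join stands in for the LLM summarization step, but the flow (retrieve segments, build an answer, return the sources alongside it) is the pattern these tools share.

```python
# Toy article index; a real system would hold two decades of archive content.
ARTICLES = [
    {"id": "ft-101", "title": "Ireland expands offshore wind auctions",
     "text": "Ireland expanded offshore energy auctions under its new policy."},
    {"id": "ft-102", "title": "Central bank holds rates",
     "text": "The central bank held interest rates steady this quarter."},
]

def retrieve(query: str, articles: list, k: int = 1) -> list:
    """Rank articles by naive keyword overlap with the query.
    A production system would use a search index or embeddings instead."""
    q_tokens = set(query.lower().split())
    scored = sorted(
        articles,
        key=lambda a: len(q_tokens & set(a["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_citations(query: str, articles: list) -> dict:
    """Build an answer from retrieved segments and cite each source,
    so the reader can check the summary against the originals."""
    hits = retrieve(query, articles)
    summary = " ".join(h["text"] for h in hits)  # stand-in for the LLM call
    return {"summary": summary, "citations": [h["id"] for h in hits]}

result = answer_with_citations("offshore energy policy in Ireland", ARTICLES)
print(result["citations"])
```

Keeping the citation list tied to the exact segments fed to the summarizer is what makes the answer auditable, which is the property both Lange and Jayne emphasize.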
The Financial Times is also asking for feedback from the subscribers using the tool. “We’re literally reading all of the feedback to help understand what kinds of questions work, where it falls down, where it doesn’t, and who’s using it, why and when.”
Leaning into media strengths and adding a superpower
Generative AI seems to have created unlimited opportunities, along with considerable challenges, questions and concerns. However, it is clear that many media companies possess a deep reservoir of quality content, and it is good for business to extract the most value from the investment in its creation. Leveraging their own content to train and program generative AI tools that serve readers seems like a very promising application.
In fact, generative AI can give trustworthy sources a bit of a superpower. Jayne from the FT offered the example of scientists using the technology to read through hundreds of thousands of research papers, a process that would otherwise take years, and find patterns that make important connections.
While scraped-content LLMs pose risks to authenticity, accuracy and attribution, proprietary learning models offer a promising alternative.
As Jayne put it, the media has “an opportunity to harness what AI could mean for the user experience, what it could mean for journalism, in a way that’s very thoughtful, very clear and in line with our values and principles.” At the same time, she cautions that we shouldn’t be “getting overly excited because it’s not the answer to everything – even though we can’t escape the buzz at the moment.”
We are seeing many efforts bump up against the limits of what generative AI is able to do right now. However, media companies can avoid some of generative AI’s current pitfalls by employing the technology’s powerful language prediction, data processing and summarization capabilities while leaning into their own strengths of authenticity and accuracy.
Over the past few years, publishers have seen formerly reliable sources of traffic like Google and Facebook dry up, with no new platforms on the horizon to make up for that loss. Shifts in user behavior and changes in algorithms have left publishers scrambling to respond across both search and referral channels. And with the rise of generative AI-enabled search, the ability to find new sources of referral traffic and build a loyal audience has become even more critical for publishers.
To gain a better understanding of how traffic declines are affecting them and the steps they’re taking to address challenges, Arc XP partnered with Digiday to survey 115 publishers for the report The state of publisher traffic: Framing the evolution and impact of search and referral in 2024. We asked about the referral traffic trends they’re seeing, how those trends have impacted their revenue, and the steps they’re taking to either rebuild their traffic or find other ways to reach and grow their audience.
In this article, we’ll focus specifically on what we heard from publishers about their referral traffic from social media platforms (like Facebook and TikTok), news aggregators (like Apple News), and other third-party platforms (like Reddit).
Publisher referral traffic trends in 2023
Referral traffic is an important revenue driver for publishers, with 98% of survey respondents saying that referral traffic has a moderate or very significant impact on their annual revenue. But 2023 proved to be a challenging year for publisher referral traffic, with most survey respondents saying they experienced a 1% to 20% decline.
The publishers in our survey experienced traffic declines across the major social media channels. Respondents named Facebook as a channel where they expect continued declines (82%), followed by YouTube (67%) and TikTok (57%). Given Meta’s announcement that it will de-prioritize news content on its platforms, the decline in referral traffic from Facebook is not surprising. And across all social platforms, opaque changes to algorithms have made it difficult for publisher content to stand out among the vast array of options presented to users.
According to survey respondents, the primary ways referral traffic decline impacts their revenue are decreased advertising ROI (63%) and changes in collaborations with brands, influencers, or other publications (54%). They also cited a change in competitive positioning, change in quality of audience, and a decline in social media engagement (each named by 43% of respondents).
How publishers are responding to referral declines
When asked what challenges they face around responding to the trend of declines in referral traffic, 54% of respondents named building/maintaining strong relationships with external platforms as a challenge. This was followed by adapting to social media trends (52% of respondents). Accurately pinpointing referral sources (49%) and constantly changing algorithms and updates (46%) were also top challenges.
Despite these challenges, the publishers we surveyed are forging ahead with tactics to combat referral traffic decline, including experimenting with new forms of video content and increasing their presence across social channels. 81% of respondents said they are experimenting with live streams and long-form video content, and 70% said they are focusing on short-form original vertical video for TikTok, YouTube Shorts, and Instagram.
Long-form video content will ultimately offer publishers more control over monetization options than short-form videos created specifically for social channels. With long-form videos, publishers can incorporate in-depth reporting that sets them apart from other content sources and encourages deeper reader engagement and return visits.
The “pivot to video” isn’t new for publishers. Unfortunately, they’ve been burned before by making big bets on video content that didn’t pan out. This time around, publishers need to think carefully about what they want to accomplish with their video strategy: is it about getting advertising revenue from the videos? Driving readers from other channels to their website? Or creating longer-term audience relationships?
Surprisingly, only 56% of respondents said they are increasing direct-traffic efforts (newsletters, owned podcasts, etc.). Given the inherent unpredictability of social platforms, all publishers would benefit from thinking about how they can build more direct connections with their readers.
Publisher referral traffic expectations for 2024
When publishers look ahead to 2024, they are optimistic that referral traffic will rebound. Most of the publishers we surveyed expect referral traffic to increase by 1% to 20% this year, a trend that will likely be driven by newsworthy events like the summer Olympics and the U.S. presidential election.
Publishers’ cautious optimism about 2024 might also reflect confidence in the tactics they’re implementing to combat referral traffic declines. But with platform changes and user behavior shifts, it’s likely that referral traffic will never fully rebound to previous levels. Publishers will need to continue exploring ways to boost traffic across all channels, including owned channels that enable direct connections with readers.
Publishers have faced intense headwinds in recent years when it comes to protecting and growing revenue streams. However, there are some equally powerful tailwinds that the industry needs to acknowledge and embrace to put publishers on a viable path forward. Perhaps the most significant one is predictive audiences.
Predictive audiences, supercharged by growing AI capabilities, offer publishers multiple paths to increased revenue. Even more importantly: sustainable revenue. Let’s explore why that is, and the ways in which publishers can incorporate these capabilities into their monetization plans.
A sustainable path within a landscape of crumbling identifiers
When Google announced its latest stay of execution for third-party cookies, some publishers breathed a(nother) sigh of relief. Third-party cookies have long been treated as a dependable path to revenue thanks to their role in enabling cross-site ad targeting. But that capability has been eroding for years, as the reach and accuracy of third-party cookies grow increasingly limited.
Publishers don’t need a replacement for third-party cookies. They need something altogether better. And that’s where predictive audiences come in. By fueling growth based on the strength of a publisher’s first-party data, predictive audiences offer a path to revenue that’s both in a publisher’s control and can be strengthened over time.
The premise behind predictive audiences for publishers is fairly simple: By taking a publisher’s first-party data (i.e., everything the publisher knows about its audience), the publisher can build models capable of predicting likely behavior in current and potential new users. These predictions can be used to create better user experiences while simultaneously opening more and deeper monetization opportunities.
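To make the premise concrete, here is a minimal sketch of a propensity model scoring users on first-party engagement signals. The feature names, weights, and user records are hypothetical stand-ins, not a real publisher schema or a trained model; in practice the coefficients would be learned from the publisher’s own historical data.

```python
import math

# Hypothetical first-party engagement features for two users.
# Feature names and values are illustrative assumptions only.
users = {
    "user_a": {"visits_30d": 12, "newsletter_opens": 5, "avg_read_sec": 140},
    "user_b": {"visits_30d": 1,  "newsletter_opens": 0, "avg_read_sec": 20},
}

# Hand-set weights standing in for a trained model's coefficients.
weights = {"visits_30d": 0.25, "newsletter_opens": 0.5, "avg_read_sec": 0.01}
bias = -3.0

def propensity(features: dict) -> float:
    """Logistic score in [0, 1]: predicted likelihood of a desired
    behavior (e.g., subscribing or returning)."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

# Score each user; highly engaged users land near 1, disengaged near 0.
scores = {uid: propensity(f) for uid, f in users.items()}
```

Scores like these can then feed downstream decisions: which users to show a subscription prompt, which segments to package for advertisers, and so on.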
Here are a few areas where predictive audiences’ power to help publishers drive revenue has become most evident.
Growing ad dollars
For many publishers, the fastest path to revenue growth is to look beyond their sites to find additional high-value inventory for their advertisers. By using their audience data as seed data, publishers can leverage predictive audiences to identify users beyond their own walls who are likely to behave like their known audiences. Working with external partners, a publisher can make these models and their resulting segments available for advertisers on demand as an extension of their audience.
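The seed-based extension idea above can be sketched as a simple similarity model: represent the publisher’s known audience as feature vectors, then score unknown users by how closely they resemble the seed audience’s centroid. The vectors below (e.g., topic-affinity scores) are made up for illustration.

```python
import math

# Hypothetical feature vectors for a publisher's known "seed" audience
# and for unknown users encountered beyond the publisher's own walls.
seed_audience = [
    [0.90, 0.10, 0.70],
    [0.80, 0.20, 0.60],
    [0.85, 0.15, 0.65],
]
unknown_users = {"u1": [0.88, 0.12, 0.70], "u2": [0.10, 0.90, 0.20]}

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

seed_center = centroid(seed_audience)
# Rank unknown users by how closely they resemble the seed audience.
lookalike_scores = {uid: cosine(v, seed_center) for uid, v in unknown_users.items()}
```

Real lookalike systems are far more elaborate, but the core move is the same: use known-audience data as the seed, and extend reach to users who pattern-match it.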
Growing yield
Predictive audiences can also help publishers earn more from the inventory within their own walls. By combining first-party data with contextual and engagement signals, publishers can fuel robust models that predict which ads will perform best when served to a given audience. This approach tends to deliver far more relevant results than can be achieved with third-party data, improving the yield on publisher inventory. The same models can also power ad personalization that drives better results for advertisers and higher premiums for publishers.
Growing audience
Beyond direct revenue, publishers can also tap into predictive audiences to grow their user base. That growth expands their first-party data assets and inventory, driving greater revenue downstream. The mechanics mirror those for winning more ad dollars: publishers model their data to predict the behavior of unknown users, then activate that data to attract interested audiences in hopes of converting them into loyal visitors.
A bright future paved with predictions
The ability of these predictive audience strategies to drive publisher revenue has a lot to do with the depth of first-party data the individual publisher brings. Of course, not all publishers are on equal footing when it comes to first-party data assets. Some have been capturing and building their first-party data practices for years, enabling them to fuel strong predictive models and broader identity graphs that can reach across their properties. Others—publishers that have not invested nearly as heavily in their first-party disciplines—are looking for off-the-shelf solutions that can help them take advantage of predictive audiences’ power all the same.
The AI-driven future will favor publishers that prioritize robust first-party data practices, but the race is far from over. Regardless of where an individual publisher stands with its first-party data assets, there’s still time to build out the needed strategies that can fuel growth through predictive audiences. By doing so to capture the right data and signals to fuel the strongest models, publishers can chart a more sustainable (and monetizable) path forward.