Gen Z gets a bad rap from the news industry. Whether it’s news avoidance, the refusal to pay, or the rise in following news influencers rather than media organizations, myriad issues make it challenging for publishers to build relationships with younger audiences. Yet young audiences will pay for products that add value to their lives.
The belief that younger audiences will engage – and even pay – for media products drove the foundation of Youthquake. Danuta Breguła, MD for Paid Products at Ringier Axel Springer Polska, and Liesbeth Nizet, Head of Future Audiences Monetization at Mediahuis nv, are the people behind the Substack publication, which focuses on how publishers can connect with young people.
Crucially, it’s no longer the case that young people will simply “grow into” paying for news as they get older and have more disposable income. Nizet explained that this is a change that she’s seen over the 15 years she’s worked in journalism. “News is not a destination any more,” she observed. “[Young people] consume news between all the other cool things. That’s why platforms are really interesting for them, because they give you news, but also all the other stuff.”
Although the push to go directly to a news app or site may be lower, Nizet believes that younger audiences can be persuaded to pay for news. That belief drives her work every day at Mediahuis.
“You see that young people want to pay for a new skin in Fortnite, or something on Roblox, or a nice feature on Airbnb for example, because it inspires them, or triggers them,” she explained. “Why aren’t we able to find what triggers them [to pay] for something as important as independent journalism?”
Thinking beyond the article
One issue Nizet highlighted is that many news organizations still think in text and image. Even video on news sites is usually landscape with a clumsy play experience. “It’s not the experience that they have on other platforms, and there is really some space for us,” she emphasized.
Short-form video – in portrait for mobile viewing – is the preferred consumption format for 61% of Gen Z and young millennial consumers surveyed by the Reuters Institute. Short-form text was the next most popular (40%), with long-form text ranking third in young audiences’ preferences (32%).
One example is explainer videos, which perform well for creators and influencers. News brands are ideally placed to do well from these, but Nizet said this requires journalists showing their faces. To engage young news audiences, “we need to show our vulnerability,” she outlined. “We need to show how much effort it is to create a really good article, that it’s not just some piece of content like an influencer unboxing something.”
Nizet pointed to Danish news publisher Zetland as an example of offering alternative formats. Zetland identified that many of its readers wanted to get an update on their commute, and didn’t necessarily want to be looking at their screens. It invested in building an audio app with journalists reading out their stories. Now, 80% of its audience consumes the news that way, and 45% of its subscribers are in their 20s and 30s.
Building trust off-platform
As well as innovating around publishers’ own platform experiences, there is value in investing in a presence wherever younger people are, in order to build those relationships. French daily newsbrand Le Monde told Press Gazette that investing in content primarily for Snapchat, TikTok and YouTube had helped initiate relationships with new audiences, who they then saw become paying subscribers after two or three years.
Nizet noted that although the end goal of being visible on social media should be to tease audiences back to publishers’ own work, there is a bigger role at play. “We can show them [on social] what our journalism looks like, how trustworthy it is, how we show different perspectives, and how we make content that is relatable to their world,” she said. “That is what will make them pay for it.”
“They don’t want to pay for some instance that is preaching to them how they need to live their lives. That is often what we still have in traditional media: we are going to tell you how the world is, and how you should think. It worked for other generations, but it doesn’t work for [young people].”
Although younger audiences are more likely to turn to social media for news, they are also highly distrustful of the information they find there. A Gen Z report from Oliver Wyman Forum & TNM found that Gen Z are almost twice as likely to fact-check news, but also that they trust people like themselves twice as much as “mainstream” news outlets.
Another opportunity social platforms offer publishers is the ability to engage and interact with young news audiences. This isn’t a new phenomenon, of course. Nizet noted that older generations also comment and read what others are saying with as much interest as the original content.
“We are not just senders, but we act like senders,” Nizet explained. “We see platforms as traffic drivers. But a platform can do so much more than just traffic building. It’s about building trust and engagement, and letting people get to know your journalism.”
Crucially, this requires a re-adjustment of who publishers assess as their competitors. “We’re not competing against [traditional] media any more,” Nizet pointed out. “We are competing against cat movies, and influencer drama… that is the real competition.”
There is a balance to be struck between investing in building audiences on platforms publishers have little control over, and showcasing work to build trust. Nizet draws a clear distinction in her work at Mediahuis. Off-platform is the hook, where the question should be how journalism can be showcased and trust can be built. On-platform is about the reward, the value, the exclusivity and the community.
Looking outside publishing for inspiration
However successful individual publishers might be at attracting younger audiences, Nizet believes that real change will come from looking outside the industry at what works in other areas. This is the focus of her and Breguła’s Youthquake newsletter, and a report on How publishers can grow with today’s youth.
“We really want to go beyond the obvious things. So for example how Taylor Swift or Red Bull can help us understand and monetize younger people,” Nizet said. “There’s also a link between content creators, influencers and news brands…which could offer you a totally different perspective as a journalist than what you are used to, and it can be so enriching.”
It’s a sentiment that Zetland CEO Tav Klitgaard echoed to The Publisher Podcast this week. “The product has to be much better,” he said, referring to news sites and apps. “You have to compete with Spotify and Instagram. You shouldn’t compete with a legacy print paper, and it seems like a lot of people in the media industry are still believing that’s [who] you need to compete with, which is just totally wrong. You need to compete with YouTube.”
A shift in thinking to engage young news audiences
Nizet is optimistic that publishers can build a relationship with younger audiences, even a paying one. She pointed out that there will always be a need for news, and that there is a lot of opportunity for those who can think outside the box.
Crucially, the answer to these challenges won’t come from the way publishers are used to doing things right now. “We need to shift how we think,” Nizet emphasized. “We don’t control the internet… but we can see how we can adapt to it in formats that [young people] like, and stories that they like and feel relatable.
“At some point, they will pay for it. I don’t mean when they are 30 or 35, I mean at the moment that they are feeling the value that we can offer them.”
Building a relationship where that value becomes evident to Gen Z is not a quick task. Strategies put in place now will take years to pay off, as with the example of Le Monde on social media. But it is a vital job that news publishers need to be actively planning for if they want young audiences to pay for news in the future.
2025 has already proven to be a defining year for social media, and we’re only a few months in! From Meta’s controversial decision to remove independent fact-checkers to TikTok’s on/off ban in the U.S. and Australia’s move to restrict social media use for under-16s, the social landscape is undergoing an intense period of change. These shifts are forcing media companies to rethink their distribution strategies and explore reliable, transparent alternatives for news delivery that prioritize accuracy and audience trust.
Live blogs are emerging as one such alternative for transparent journalism, offering fact-checked, real-time updates in an engaging format. Used by leading publishers like Der Spiegel in Germany and The Guardian in the UK, live blogs combine bite-size news, micro-videos, and interactive elements, giving journalists a powerful tool to engage their audiences while maintaining credibility.
A sustainable path for journalism
Half of U.S. adults get news from social media platforms like Facebook, X and TikTok. Unfortunately, that has only contributed to their declining trust in traditional media outlets. And the proliferation of fake news on social media demands new channels that provide information just as quickly and personally—while ensuring factual accuracy.
Live blogs strike a crucial balance between speed and accuracy, ensuring that audiences receive up-to-the-minute information without the risks associated with unchecked social media posts. They empower journalists to report transparently, involve their audience in the storytelling process, and foster genuine connections. As leading German news provider Sueddeutsche Zeitung commented:
“Live blogs enable news to be shared with audiences while other in-depth reports, reportage, or commentaries on the event are prepared. In a sense, they show storytelling in the making.”
Direct interaction through comment blocks and Q&As, for example, allows journalists to go beyond basic updates to establish and meet user needs in an audience-first approach that fosters deeper connections. Videos showcasing their work make news organizations more relatable and trustworthy, while the ability to build transparency into reporting processes by clarifying sources, linking directly to primary documents, and issuing real-time corrections sets live blogs apart from the fast-moving, unregulated world of social media.
Personality goes a long way
We know that the social and personal element of influencers is a major draw for audiences. With their less formal structure, live blogs allow journalists to bring a bit of their own personality into their reporting, whether through their tone, choice of details, or even small touches like author photos and location tags: “On the ground in D.C.” or “Live from the Oscars red carpet”.
A more casual style isn’t suitable for every story, but Der Spiegel’s recent Grammys live blog suggests there is room for opinion in live coverage. The witty back-and-forth between the reporters added depth and personality, making it feel more like a conversation than an information feed, and turning a passive read into a shared event experience. As live blogs continue to evolve, they offer a powerful way for journalists to engage their audience on a more personal level while maintaining journalistic integrity.
By prioritizing transparency, authenticity, and connection, news organizations can rebuild audience trust and position themselves as authoritative sources in a crowded media landscape. This commitment to openness builds credibility, combats misinformation, and strengthens the relationship between journalists and the public—fostering deeper engagement and loyalty. And in 2025, that will be more crucial than ever.
Engaging younger audiences with interactive micro-content
Short-form video content has become a dominant format in social media, with platforms like TikTok popularizing the trend. However, while social media may excel at capturing attention, it often lacks editorial oversight. Live blogs, on the other hand, allow media organizations to blend micro-content—such as short videos, Q&As, and polls—with verified news updates that audiences can trust. This interactive approach keeps younger audiences engaged while ensuring that the information they consume is accurate and relevant.
Stuff often uses live blogs to great effect, covering events such as The Met Fashion Gala to keep readers up to date with the outfits on show and the drama as it unfolds. By incorporating expert-written commentary and video snippets, readers were brought closer to the event. Audience surveys enabled reporters to gain real-time feedback from readers, who in turn could actively participate in the coverage they consumed, creating a compelling, participatory experience.
Mobile-first and second-screen experiences
The way people consume news has shifted dramatically. Television audiences continue to decline, while mobile devices have become the primary gateway to news and entertainment. Live blogs cater to this shift by offering mobile-first, responsive designs that provide seamless access across devices.
Additionally, live blogs enhance the “second-screen experience” by integrating real-time stats, analysis, and background reports. Whether covering a sports match, a political debate, or a major breaking news event, live blogs give audiences a richer, more immersive experience than passive social media scrolling.
One of the world’s longest-running and most-watched non-sporting events, The Eurovision Song Contest, is a great example of this in action. From the performances on stage to the backstage dramas, national titles such as The Irish Independent used their live blog to provide followers with a second screen to participate in the antics of this diverse and elaborate spectacle.
Strengthening community ties
In the race for digital engagement, national and global media outlets often overlook the power of local reporting. However, hyperlocal content is experiencing a resurgence as audiences seek news that directly impacts their communities. And people often turn to social media groups for interaction at a neighborhood level.
Live blogs offer an ideal alternative for this type of coverage, enabling media companies to provide real-time updates on local events, elections, and sports teams. In the summer of 2024, heavy rain caused severe flooding across Germany and other parts of Europe. Reporters from a range of national and regional titles covered the situation via live blogs for days on end, often late into the night, to keep local readers informed about the situation in their area. Readers spent an average of 5 minutes on the live blogs to keep abreast of updates and prepare for the floods.
By focusing on hyperlocal reporting, publishers can build stronger community ties, enhance audience loyalty, and support long-term engagement through subscription models.
Engagement with responsibility
The social media era has prioritized virality over veracity, often at the expense of journalistic integrity. As regulatory pressures increase and audience expectations shift, media organizations must embrace formats that prioritize both engagement and responsibility in order to win back audience share with viable alternatives.
Live blogs present a sustainable solution. By offering real-time, fact-checked news in an interactive format, they provide a compelling alternative to social media’s often chaotic and unreliable ecosystem. Publishers that invest in live-blogging technology will not only enhance audience trust but also future-proof their reporting strategies in an increasingly uncertain digital landscape.
Podcasts are transforming how Americans consume news, offering on-demand access to trusted voices and in-depth analysis. As traditional news formats evolve, podcasts have become a critical medium for audiences seeking timely, engaging, and diverse perspectives.
Second only to comedy, news podcasts are a dominant podcasting genre. A new report from Sounds Profitable, in partnership with Signal Hill Insights, finds that 31% of podcast listeners consumed news content in the past month. The findings underscore a significant shift in how Americans engage with news, moving away from traditional TV broadcasts and toward more personalized, on-demand listening experiences.
News podcast consumer demographics
The average age of a news podcast consumer is 47, closely mirroring the overall U.S. adult population. This starkly contrasts with television news audiences, where the average age skews significantly older—70 for MSNBC, 69 for Fox News, and 67 for CNN. This demographic shift highlights how younger audiences gravitate toward podcasts as a preferred medium for staying informed. Balancing short-form daily news updates with longer-form analytical discussions allows podcast listeners to integrate news consumption seamlessly into their routines.
The social influence factor
One of the study’s more interesting findings is the role of social influence in driving news podcast discovery and engagement. News podcast listeners exhibit significantly higher levels of social sharing and recommendations compared to their non-news counterparts:
73% receive podcast recommendations from friends and family, compared to 51% of non-news listeners.
73% actively recommend podcasts to others, versus 49% of non-news listeners.
83% say they are likely to listen to a podcast recommended by someone they know.
This word-of-mouth dynamic plays a crucial role in podcast adoption, highlighting the importance of personal connections in shaping media consumption habits. While platforms and algorithms contribute to discovery, personal recommendations remain the most powerful driver of engagement.
Additionally, news podcast listeners are more likely to consume content with others. Unlike other podcast genres that often cater to solo listening, news podcasts frequently become a shared experience. Group listening fosters discussions and deeper engagement with the content, whether in the car during a commute or as part of a morning routine. The study reveals that 88% of news podcast consumers who listen with others cite “listening while traveling” as a major benefit, compared to 66% of podcast listeners overall.
Advertising challenge and opportunity for news podcasts
Despite their high engagement levels, news podcast listeners are not immune to advertising fatigue. The study reveals that:
21% have stopped listening to a podcast due to excessive ads.
14% cite repetitive content as a reason for abandoning shows.
This finding challenges the assumption that strong host-listener relationships can completely counteract fatigue. Even among engaged audiences, there is a threshold for how much advertising they are willing to tolerate.
However, the research also uncovers a compelling opportunity for brands. News podcast listeners are more receptive to brand-sponsored content than the general podcast audience:
61% say they are likely to listen to a brand-sponsored podcast.
46% indicate that a company’s involvement makes them more likely to try a new podcast, compared to 34% of non-news listeners.
Brands can forge meaningful connections with news podcast audiences by positioning themselves as content partners rather than just advertisers. By integrating seamlessly into the content, brands can enhance rather than disrupt the listener experience.
Podcasts and the future of news consumption
The traditional model of news consumption—gathering around the television at a fixed time—has largely faded. Instead, audiences curate their news experiences through digital and on-demand platforms. While social media and news websites play an important role in this transition, podcasts offer a unique advantage: deeper engagement and trust.
Unlike passive scrolling through headlines, listening to a news podcast requires intentional engagement. The hosts of these podcasts often become trusted voices, forming strong bonds with their audience. This level of trust is a significant draw, positioning news podcasts as a vital part of modern news consumption. However, the challenge lies in maintaining audience engagement without alienating listeners through excessive advertising.
The findings from this report offer a compelling look at the evolving media landscape. News podcasts attract a younger and more engaged audience and reshape how people discover, consume, and share news. The influence of social recommendations and the potential for shared listening experiences emphasize the unique role of news podcasts in today’s information ecosystem. Additionally, the nuanced relationship between advertising and engagement further solidifies their distinct position.
Content licensing has long been an important revenue stream for digital media companies. For decades, it allowed publishers to monetize their content by granting rights for others to republish or repurpose their material, evolving from licensing to aggregators, databases, social platforms, to streaming video services. Now, content licensing faces another evolution: artificial intelligence (AI).
Digital media publishers are finding themselves in a unique position in that they possess decades’ worth of quality content AI companies crave. “Over the next few years, content creators and AI companies will deepen their relationships,” predicts Yulia Petrossian Boyle, founder and principal of YPB Global LLC and FIPP chair. “However, as AI players try to secure more original content, those relationships will need to transition from one-off deals to well-structured, ethical partnerships with strict IP protection and meaningful ongoing revenue for publishers.”
TIME’s COO Mark Howard believes that publishers currently have three ways they can approach the AI dilemma: “You can do nothing. That’s just not something we would consider, to sit on the sidelines and just let everybody else figure it out. The other two options are to litigate and negotiate. Litigation is a very, very large commitment… So, that leaves negotiation.”
For some media companies, AI licensing agreements offer an alluring mix of copyright protection and monetization opportunities as DCN contributor Damian Radcliffe points out. And, as they negotiate these deals, publishers are discovering they must balance the potential for monetization with the need to protect intellectual property rights, navigate complex legal challenges, and ensure responsible AI usage.
Fair value in AI content licensing
According to a recent INMA report, executives considering licensing deals need to understand the value of their content in an AI-driven market. Then they have to negotiate attribution and compensation models that align with business goals. The report recommends collaborating with industry peers to create standardized agreements. It emphasizes the importance of advocating for responsible AI practices, including transparency in data usage.
The report also highlights emerging licensing models, which include direct licensing, value-in-kind partnerships, training fees, bundled partnerships, and per-use compensation. Boyle notes promising approaches, like “data-as-currency” deals, where AI companies offer analytics in exchange for access to their platforms and services (in some cases in addition to some smaller flat fees).
“Revenue-sharing is on the rise, where publishers earn a portion of subscription revenue or performance-based compensation (based on lead-gen, or engagement analytics),” she says. “For example, Perplexity AI’s Publishing Program launched in July 2024 offers revenue share based on the number of a publisher’s web pages cited in AI-generated responses to user queries. Those in the program earn a variable percentage of ad revenue generated per cited page.”
Boyle says that, while compensation models are improving, she worries that AI companies do not adequately compensate for content that has higher production costs, such as investigative journalism. She points to pushback from publishers like Forbes, who rejected the Perplexity proposal.
Negotiating with AI companies through her consultancy, Boyle has observed that offers by some AI companies for training datasets are insufficient. “Since agreements are not indefinite, it is unclear to me how publishers will be compensated in future when AI companies may no longer need training data for their data sets.”
In her opinion, current compensation models between major AI companies and publishers do not adequately reflect the significant investments that publishers make in creating original content. She believes that, compared to the substantial amounts AI companies invest in technology such as chips, their expenditure on content is disproportionately low. This disparity highlights a need for more balanced financial recognition of the value that original content creators bring to these partnerships, she says.
However, striking these deals isn’t simple. Howard notes that each one is different, with its own monetization model and philosophy on revenue sharing.
“Some of them are flat fee for training, some of them are variable based on user adoption of their own products, and some of them are based on future ad models that haven’t even launched yet,” Howard says. “Many of them have some form of value-in-kind around technology or technology resources, which makes me very excited. I think that that may end up being where most of the value is derived in the long term.”
A few of TIME’s AI partnerships are infrastructure-based, like Fox Verify, which uses its blockchain-based technology to verify all of the content TIME publishes in its CMS. This provides TIME with a ledger of all of its intellectual property going forward. After that, according to Howard, the company worked with Tollbit and Scalepost to track and monitor all of the AI bots on TIME’s site on any given day and see what they’re doing.
Access to technology is a key benefit of TIME’s AI partnerships for Howard. “We’re partners of theirs. I have direct access to their CTO and their senior leadership team. We get to hear what… they’re thinking about the market, that’s a really valuable conversation for us to have.”
“We brought money in as a result of these deals,” he says. “I’m happy about what we brought in. Some of it is fixed, a lot of it is variable and a lot of it is access to product resources and technology.”
Factiva puts trust first in its AI licensing
Dow Jones launched Factiva Smart Summary in November, a groundbreaking feature in its business intelligence platform engineered with Google’s Gemini models on Google Cloud. Smart Summary leverages generative AI technology to create concise summaries for Factiva users that are fully transparent and traceable, utilizing licensed content from each of their publishing partners.
To do so, Factiva approached every one of its nearly 4,000 sources in 160 countries with licensing agreements. “We did this because we are a publisher first and arbiter for publishers… We won’t ask any of our publishing partners to do anything that we’re not prepared to do ourselves,” explains Traci Mabrey, general manager of Factiva. “As such, we have elected and will continue to elect, to reach out to publishing entities and request additional licensing permissions and actual rights for generative AI use.” Today, its marketplace includes nearly 5,000 partners.
Dow Jones emphasizes the importance of respecting and compensating intellectual property and content creation. Mabrey outlines four key criteria guiding their AI partnerships: trust, transparency, segmentation, and compliance.
“We believe that trust is imperative. We believe there needs to be transparency in terms of content being created, used, surfaced and attributed,” Mabrey says. “There also needs to be relative segmentation in terms of use cases across different solutions. And there needs to be compliance and governance to ensure adherence to the first three, of trust, transparency and segmentation.”
Deal points when licensing content for AI training
There’s no one-size-fits-all model for licensing deals, and the best approach depends on a publisher’s specific goals, content, and resources. Some publishers weigh how easily an LLM can integrate with their existing systems and CMS. Others choose LLMs based on the companies they already deal with.
But, data privacy and security are central concerns in these agreements. Vadim Supitskiy, chief digital and information officer at Forbes, told Digiday that ensuring interactions with AI products remain safe and protected is a key priority.
Mabrey echoes this sentiment, emphasizing that privacy and security are integral components to negotiations with AI partners. “As we’re looking at responsible delivery of AI, responsible usage of content and privacy and security in terms of technical infrastructure, that is our leading indicator.”
Publishers must have review rights over AI-generated outputs, the ability to see proof-of-usage logs, and the ability to enforce brand guidelines, according to Boyle. “All those things have to be clearly defined in the licensing agreements. Tracking metrics of engagement, attribution, and demographic insights is also important for publishers to receive, to be able to see how valuable their licensed content is,” she says.
Essential safeguards in the agreements themselves ought to include strong, sophisticated clauses to protect publishers’ IP, says Boyle, “including mechanisms to prevent unauthorized reproduction, clear ownership definitions, restrictions on data usage, well defined termination provisions, attribution and fair compensation.”
Howard emphasizes that no two content licensing deals with AI companies are the same, and each comes with significant legal and technical hurdles. “First, there’s the legal aspect and every company needs to come up with their own legal terms and what is acceptable to them and what is not. What do they have the rights to? What do they not have the rights to?” he says.
“Once you’ve determined all of that, you need a technology solution to be able to deliver the content to them… All of the delivery mechanisms are quite different and require some form of customization.”
These complexities point to why AI companies have slowed the pace of new licensing agreements after an initial rush. Negotiating unique terms and building tailored tech solutions for each partner has proven difficult to scale, Howard notes.
Where AI licensing is headed
AI is reshaping how content is distributed, discovered, and monetized. For media companies, the choice is clear: engage in legal battles or proactively negotiate terms that ensure fair compensation. The market is rapidly evolving with new players, technologies and partnership models.
For companies currently negotiating content licensing deals with AI, Howard says to move forward. He points out that, while there are benchmarks based on what other companies have secured, the initial rush of deals has likely passed. He doesn’t expect future deals to improve; in fact, he thinks they’ll probably get worse.
Mabrey believes that the industry has reached a unique inflection point, where generative AI gives it the chance to assert that content is intellectual property and requires compensation. “We, as a media community around the world, should be coming together to assure that all of us are asserting our rights in the same manner.”
In light of these shifts, there’s a clear message for media executives: the future of content licensing is in their hands. Instead of letting the industry define them, publishers can shape its future by hammering out fair compensation through litigation and the courts, negotiating partnerships, and advocating for fair treatment.
Artificial intelligence is rapidly transforming the way media companies operate. From automating article summaries to improving editorial efficiency, the use of AI has helped media companies save time and streamline operations. While AI offers substantial benefits, recent studies have revealed a trust gap between media companies and their audiences around AI use.
Since trust is the cornerstone of media, AI implementation introduces new challenges. Missteps can result in loss of reader trust, damage to brand reputation and potential legal and regulatory challenges.
As AAM developed its new Ethical AI Certification program, we researched how media companies are implementing AI and studied industry-recommended best practices for increasing transparency and disclosing AI use. This research resulted in the development of several guidelines for media companies to increase transparency and maintain reader trust when integrating AI solutions into their operations.
1. Clear and consistent AI labeling
AI-generated or assisted content should be visibly labeled and disclosed. Labels should be placed prominently within an article or video rather than buried in fine print.
Here are two examples of how media companies are disclosing AI use:
The Associated Press created standards around generative AI. While the tools may change, the core values remain – journalists are accountable for the accuracy and fairness of the information they share.
USA Today adds disclosures to indicate when AI is used to write its “Key Points” at the top of selected articles. It also discloses that a journalist reviewed the AI-generated content before publication and includes a link to its ethical conduct policy.
2. Create and publicize AI policies to build trust
Media companies should publish a clear AI policy outlining:
How and when AI is used
The company’s privacy policy when involving AI use
Editorial guidelines for AI-generated content
How the company will handle ethical issues including bias mitigation and misinformation prevention
Policies should be easily accessible on company websites and updated regularly. Media companies also should ensure that they have licensing agreements in place to use the information and data provided by their AI solutions in published content.
3. Human oversight and accountability
Human oversight of AI-generated content is essential when implementing AI, especially in editorial. Assign clear roles and responsibilities for AI oversight within newsrooms and establish an internal AI ethics committee to assess AI applications, guide policy development and ensure ongoing compliance with ethical standards.
4. Ongoing education
Since AI best practices and regulations are constantly evolving, it’s important for media companies to provide ongoing training for staff on AI technology, ethics and best practices. Hosting regular training workshops and updating employees on policy changes helps companies stay ahead of evolving AI trends while ensuring responsible and ethical AI usage.
5. Regular audits and risk assessments
Media companies should conduct regular assessments to manage AI risks, evaluating the accuracy of AI-generated content, the effectiveness of their transparency measures, and potential challenges such as bias and inaccuracy in AI-generated content.
As AI continues to evolve, transparency remains essential to preserving trust between media companies and audiences. By implementing these industry best practices and guidelines, media companies can take the lead in setting a higher industry standard, maintaining audience trust and ensuring ethical AI implementation within their operations.
Journalism support organizations face scrutiny regarding efficiency, strategic direction, and overall impact. Critics argue that these organizations function as bureaucratic intermediaries, consuming philanthropic resources without adequately addressing the needs of local news outlets. At the same time, many journalists and news organizations find these support structures essential to their survival, particularly as they navigate an increasingly complex media landscape. This tension raises an important question: How can support organizations evolve to better serve the local news ecosystem?
Field-level agenda for journalism support organizations
One of the report’s findings is the lack of a shared framework to measure success in the local news sector. Many support organizations operate with distinct, sometimes overlapping missions, making it difficult to assess their collective impact. The report proposes a “field-level agenda” as a solution—an overarching strategy that brings together diverse players to set priorities, establish success metrics, and enhance collaboration.
Support for this concept is widespread. Many industry leaders echoed the idea that a structured, collaborative framework is needed to ensure that support organizations advance the field. David Grant of Blue Engine Collaborative notes that the industry struggles to effectively define its audience and measure impact. Similarly, Mary Walter-Brown of News Revenue Hub emphasized that support organizations should be held accountable for how well they help news organizations grow.
Roles and responsibilities of the news support ecosystem
There is broad agreement on the need for better organization within the field. However, stakeholders are still grappling with who should lead this transformation. Many argue that philanthropic organizations should establish more explicit expectations for accountability and collaboration.
Damon Kiesow, Knight Chair in Journalism Innovation at the University of Missouri, suggests that funders could accelerate progress by requiring grantees to adhere to standardized impact metrics. Tristan Loper of the Lenfest Institute points to recent grantmaking initiatives emphasizing partnerships rather than competition as a potential model for fostering greater alignment within the sector.
Others question the long-term sustainability of specific support organizations. Instead, they propose that funders adopt a more strategic approach to determining which initiatives should last decades and which should be time-limited interventions. This perspective underscores the importance of developing a clear vision for the role of support organizations within the broader journalism landscape.
A collaborative approach to news media support
The report highlights that while no single experience defines all support organizations, there is a shared desire for greater collaboration. The report was given to participants for review, and some leaders noted that its initial tone seemed overly defensive. They felt it reinforced criticisms rather than highlighting these organizations’ indispensable work. However, this feedback reinforces the issue’s complexity. Support organizations must navigate a fine line between responding to critiques and advocating for their essential role in the ecosystem.
Stefanie Murray of the Center for Cooperative Media challenged the notion that support organizations lack accountability. She pointed out that many already adhere to rigorous funding requirements. This debate underscores the need for a nuanced discussion about defining and measuring success in ways that reflect the realities of different organizations.
Next steps for journalism support organizations
While this report provides valuable insights, it raises questions that merit further exploration. For instance, how can the support field evolve, and what lessons can we learn from other industries? What balance should exist between funding direct journalism (news organizations) and intermediary organizations providing infrastructure and support?
Several leaders who participated in the report have proposed expanding the taxonomy of support organizations to include groups that act as bridges between journalists and community organizations. Others call for a similar taxonomy for newsmakers and funders, which could help clarify how different entities fit within the broader ecosystem.
The challenges facing journalism support organizations are complex. However, the Democracy Fund’s research makes clear that the need for reform is real. Establishing a field-level agenda could bring greater coherence, accountability, and impact to the sector. Yet achieving this vision will require sustained collaboration among all stakeholders: support organizations, funders, and local news leaders.
As the industry evolves, support organizations must adapt to ensure they remain valuable partners to local newsrooms. By embracing a more strategic, data-driven approach to measuring success and fostering collaboration, the field can move toward a more sustainable and effective future for local journalism.
The publishing industry has been of two minds on AI’s rapid advancements – optimistic and cautious – sometimes within the same company walls. Business development teams explore much-needed new revenue opportunities while legal teams work to protect their art and existing rights. However, two major legal developments, the Thomson Reuters v. Ross Intelligence ruling and shocking new revelations in Kadrey v. Meta, expose the fault lines in AI’s unchecked expansion and set the stage for publishers to negotiate fair value for their investments.
One case confirms that publishers have a right to license their content for AI training and that tech advocates’ tortured analysis of fair use doesn’t throw out rights enshrined in the U.S. Constitution or require publishers to opt in to retain them. The other case suggests that Meta may have knowingly pirated books in its high-stakes race to keep up with OpenAI and that Meta’s notorious growth-at-all-costs playbook is more exposed than ever.
AI companies can no longer operate in a legal gray zone, scraping content as if laws don’t apply to them. Courts, lawmakers, researchers and the public are taking notice. For publishers, the priority is clear: AI must respect copyright from the beginning including for training purposes, and the media industry must ensure it plays an active role in shaping AI’s future rather than being exploited by it.
Thomson Reuters v. Ross: A win for AI licensing, a loss for those who intentionally avoid it
In a landmark decision, a federal judge ruled this month in favor of Thomson Reuters against Ross Intelligence, a startup that trained its AI model without rights or permission using Thomson Reuters’ Westlaw legal database.
Judge Stephanos Bibas’ ruling in the Delaware district court is notable because he explicitly recognized the emerging market for licensing AI training data. This undercuts the argument that AI developers can freely use copyrighted works under “fair use” factors. And, consistent with DCN’s policy analysis, it also highlights the importance of the fourth factor of fair use, which publishers have been demonstrating with the signing of each new licensing deal.
For publishers, this is a crucial precedent for two reasons:
AI training is not automatically fair use. Content owners have the right to be paid when their work is being used to train AI.
A market for AI licensing is forming – this is the fourth factor. Publishers should define and monetize it before platforms dictate the terms.
This decision marks a turning point, ensuring that AI development doesn’t come at the expense of the people and companies producing high-quality content. Sam Altman of OpenAI, and other leadership across the powerful AI industry, have attempted to invent a “right to learn” for their machines. That’s an absurd argument on its face but regularly repeated in high-profile interviews, as if the technocrats might will it into reality.
Kadrey v. Meta: Pirated books, torrenting, and a familiar playbook
While the Reuters ruling validates AI licensing, Kadrey v. Meta reveals how some AI developers have worked to avoid it.
Recently unsealed court documents suggest that Meta employees knowingly pirated books to train LLaMA AI models used as their first commercial version (LLaMA2). Significantly, their fair use analysis shifted from “research” to making bank – a lot of it.
The evidence revealed demonstrates this knowing strategic shift:
Meta employees downloaded pirated books from LibGen, a massive pirated dataset, even using torrenting technology to pull it down.
They may have “seeded” and distributed this pirated content to others. That’s a potential violation of criminal code, a concern their own employees shared: “What is the probability of getting arrested for using torrents in the USA?”
Meta worried that licensing even one book would weaken its fair use argument, so it didn’t license any at all.
Some employees explicitly avoided normal approval processes to keep leadership from having to formally sign off.
Some documents suggest Mark Zuckerberg himself may have been aware of these tactics, with references to escalations to “MZ.”
Meta appears to have stopped using this material ahead of LLaMA3, possibly signaling awareness that their actions were legally indefensible.
Making matters worse, Meta’s case is being overseen by Judge Vincent Chhabria in the Northern District of California. This is the same judge who sanctioned Facebook’s lawyers in the privacy litigation that led to record-breaking settlements approaching $6 billion with the FTC, SEC and private plaintiffs. In that case, Facebook was accused of stalling, misleading regulators, and withholding evidence related to its user data practices. In other words, Judge Chhabria knows Meta’s playbook: delay, deny, deflect.
Now, Meta faces a crime-fraud doctrine claim. This means that some currently sealed legal advice could be unsealed if it was in furtherance of a crime. If proven, this would not be a simple copyright dispute; it could potentially lead to criminal liability and further regulatory scrutiny. The Court is ordering Meta to unseal more documents this week.
Move fast, break things… again: Meta’s AI strategy mirrors its past scandals
The Kadrey case’s revelations closely resemble Meta’s past data controversies, particularly those that were all put into the basket of Cambridge Analytica. The many details of the cover-up of that scandal are still emerging today. Unfortunately, they have mostly been overlooked by a tech press corps that has not been tuned in to these issues for far too long.
For years, Facebook pursued a strategy of aggressive data harvesting to accelerate its growth in mobile, where it risked being supplanted by new platforms. The company:
Scraped vast amounts of publisher and user data without clear consent.
Shared this data widely with developers in exchange for reciprocal access to their user data – fueling Facebook’s mobile market share grab.
Ultimately settled with regulators for billions after repeated privacy violations.
Now, in Kadrey v. Meta, history appears to be repeating itself. Internal documents show that Meta feared OpenAI and needed to accelerate its AI development. Thus, Meta felt pressured to take outsized risks. Meta’s approach to AI training follows a similar pattern:
Acquire the best data – legally or not.
Use it to gain an edge over AI competitors.
Deal with legal and regulatory fallout later, if necessary.
Recently unsealed documents even expose a documented mitigation strategy:
Remove data clearly marked as pirated (but only if the marking appears in the filename, even as coders stripped copyright info out of the actual content)
Don’t let anyone know what data sets they’re using (including illegal datasets)
Do whatever possible to suppress prompts that spit out IP violations
Key takeaways for publishers and media companies
The Thomson Reuters and Kadrey cases demonstrate both the risks and the opportunities for publishers in the AI era. Courts are starting to push back on AI’s unlicensed use of copyrighted content. But it’s up to the publishing industry to define what comes next.
Here are the big issues we must address:
AI models need high-quality data. And publishers must ensure they’re compensated for it. The Reuters ruling proves that a growing licensing market for AI exists.
Litigation is working. The unsealed evidence in the Kadrey case suggests that even AI giants like Meta know they’ve crossed legal lines. Facebook isn’t dumb; evidence from other peer companies may be even more damaging. The plural press needs to shine a light on these wrongs, as national security isn’t an excuse for AI companies to break copyright law.
Publishers must be proactive in shaping AI policy. Big Tech will push its own narrative. Meta and Google pay front groups like Chamber of Progress to stretch the meaning of fair use both in the U.S. and across the pond. Media companies must work together to establish AI licensing frameworks and legal protections and reinforce existing copyright law.
Regulatory scrutiny on AI will intensify. If Meta is found to have used pirated data, it will accelerate AI regulations. This will not likely be confined to copyright but could extend across tech policy as it did in 2018, when one scandal exposed larger problems leading to Facebook being dragged before parliaments around the globe.
The future of AI depends on trust, ethics and media leadership
The past year has shown that AI is both a disruptor and an opportunity. The Reuters ruling confirmed publishers can and should demand licensing deals. The Meta revelations prove why that’s so necessary.
AI is reshaping media, but it must be built ethically. The publishing industry has both the legal and ethical high ground. And media companies must use it to define the next phase of AI’s evolution. The future of AI isn’t just about innovation. It’s about who controls the data and the IP – and whether the people who create it are respected or exploited.
Understanding the difference between having an audience and building a community isn’t just semantics—it’s a strategic necessity. With social referral traffic declining, third-party cookies being (semi) deprecated, and generative AI reshaping search, media organizations must reclaim their communities from third-party platforms. By fostering deeper engagement and stronger loyalty, they can create sustainable revenue streams and drive long-term growth.
Arc XP recently hosted a webinar featuring Mark Zohar, President and CEO of Viafoura, to explore how publishers can cultivate thriving communities. Below, we break down the key insights, strategies, and real-world examples that highlight why community-building is essential for long-term success.
Why community matters more than ever
Traditional approaches to audience acquisition, like relying on social media platforms for referral traffic, are no longer reliable. Social networks like Facebook and X are sending less traffic to publishers, and the rise of alternative content platforms like Substack and podcasts has further fragmented media consumption. Meanwhile, changes in Google’s search algorithms, which prioritize user-generated content and community engagement, are shifting how audiences discover information.
Anthony DeRosa, former Head of Content and Product at ON_Discourse, expressed the urgency of this shift when he said, “Media companies should own their audiences. They’ve allowed tech companies to steal their content and monetize it by providing a platform for readers to discuss it. How absurd is that?”
The solution? Own your audience. Create spaces where audiences don’t just consume content—they engage with it, discuss it, and contribute to the discourse. As Mark Zohar put it, “An audience listens, while a community interacts, shares, and grows together.”
The benefits of community-building
An effective community strategy provides tangible benefits, including:
Higher Engagement & Retention – Community members spend 5.3x more time on-site and visit more frequently than anonymous users.
Increased Conversions – A strong community drives higher registration and subscription rates, with members being 31% more likely to pay for a subscription.
Reduced Churn – Engaged community members are 2.5x less likely to unsubscribe compared to passive readers.
Better First-Party Data – Communities provide valuable user insights, helping media organizations develop targeted advertising and personalized campaigns.
Stronger SEO – Google now prioritizes user-generated content, meaning active community engagement can significantly boost search rankings.
The Financial Times’ Next Gen News: Understanding the audiences of 2030 study found that younger, digitally native audiences are particularly drawn to participatory experiences. Many skip over full articles and head straight to the comments section to gauge the conversation. For them, community interaction isn’t just a feature; it’s the primary draw.
Building a community: the TRIBE framework
To successfully transition from an audience to a community, Zohar introduced the TRIBE framework, originally developed by Greg Isenberg, CEO of LateCheckout and former TikTok and Reddit Advisor. This framework serves as a guide for media organizations to evaluate how they are fostering community within their brand. TRIBE stands for:
Togetherness – Are we creating spaces where users can engage directly with our content and each other?
Rituals – What habits or recurring experiences keep our users coming back, such as weekly Q&As or interactive polls?
Identity – How are we fostering a sense of belonging through shared interests and values?
Belonging – Are we giving users a reason to feel invested in our community’s success?
Engagement – What opportunities are we providing for active participation, from commenting to user-generated content?
Leveraging the creator economy
A thriving community attracts creators, influencers, and contributors who can help expand reach and enrich discussions. To tap into this potential, media brands should actively collaborate with content creators, bringing fresh perspectives and loyal audiences into their ecosystems. This can be achieved through partnerships on platforms like TikTok and Instagram, as well as influencer collaborations within their own channels. By offering monetization opportunities and fostering engagement-driven spaces, media brands can encourage influencers to participate directly on their platforms rather than relying solely on external networks.
Examples of media brands successfully leveraging the creator economy include:
Yahoo for Creators – A platform that offers writers a community to share expertise and connect with engaged readers.
Forbes Contributor Network – A model where industry experts contribute content while benefiting from Forbes’ audience reach.
Community-building is a strategic priority
Community-building isn’t just about engagement. It is a direct driver of business growth. Organizations that invest in fostering vibrant communities see measurable benefits across key revenue and operational metrics:
Higher Revenue Per User – Community members generate 5x more revenue than general audiences.
Registration Growth – Implementing community features can double registration rates by offering a compelling value exchange.
Sustained Engagement – For some early adopters, community interactions now drive over 30% of total site registrations.
A well-designed community strategy transforms media brands from content distributors into engagement hubs, where audiences aren’t just passive consumers but active participants contributing to the brand’s success.
Foundations for a successful community strategy
For media brands looking to build a thriving community, success depends on three core pillars:
Intention – Community-building must be treated as a business strategy, not an afterthought. Define clear goals and KPIs, and secure executive buy-in to ensure long-term commitment.
Cultivation – A strong community is built on trust and inclusivity. Active moderation, clear user guidelines, and engagement incentives create a safe space where discussions flourish.
Operationalization – A community can’t sustain itself without consistent efforts. Media organizations must develop editorial playbooks, monetization models, and regular engagement cadences to ensure continued growth and participation.
The path forward: own your audience
Media companies can no longer afford to rely on third-party platforms to engage their audiences. Instead, they must take control by fostering direct relationships through community-driven experiences.
The future of media isn’t just about publishing content. It is about facilitating conversations, connections, and shared experiences. By embracing community-building as a core strategy, publishers can create deeper loyalty, drive sustainable revenue, and future-proof their businesses in an era of increasing digital fragmentation.
Artificial intelligence is rapidly transforming the way people access and consume news. With AI assistants increasingly serving as intermediaries between audiences and trusted news sources, it is essential to understand how accurately and reliably they present information. Unfortunately, according to recent research from the BBC, AI does not accurately deliver news.
In new research, the BBC evaluated how well leading AI assistants—ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity—deliver news-related answers. By granting these AI models access to its website, the BBC sought to assess their ability to accurately reference and represent its journalism.
The study examined the quality of AI-generated responses to 100 news-related questions, with BBC journalists evaluating them against seven key criteria, including accuracy, attribution, and impartiality. The reviewers then determined whether the responses contained minor, significant, or no issues across these areas.
Significant errors in AI news
The results show that over half (51%) of AI-generated responses contained significant issues, while 91% exhibited some inaccuracy, bias, or misrepresentation. Specific issues included factual errors, misattribution of sources, and missing or misleading context. When evaluating how these AI assistants represented BBC content, the study found that Gemini (34%), Copilot (27%), Perplexity (17%), and ChatGPT (15%) produced responses with errors in their use of BBC sources.
Accuracy and misinformation
AI-generated responses frequently contain factual inaccuracies, even when citing BBC sources:
Gemini incorrectly states that the NHS discourages vaping as a smoking cessation method, despite BBC coverage explicitly confirming that the NHS supports vaping for smokers who want to quit.
Copilot misrepresents the case of rape survivor Gisèle Pelicot, falsely claiming that blackouts and memory loss led her to uncover the crimes against her.
Multiple assistants incorrectly report figures, such as significantly underestimating the number of UK prisoners released and misattributing Chrome’s market share statistics.
ChatGPT erroneously reports that Ismail Haniyeh, assassinated in July, is still an active Hamas leader.
Attribution and sourcing errors
AI assistants frequently misattribute or incorrectly source information. Some rely on older articles, leading to misleading conclusions. In several instances, assistants claim to summarize BBC reporting but include details that did not exist in the BBC articles.
Impartiality and editorialization
In addition to prevalent factual errors, AI assistants struggle with maintaining any semblance of journalistic impartiality. The study flags multiple instances where opinions are presented as facts, sometimes falsely attributed to the BBC as the source. For example, Perplexity characterized Iran’s actions in the Middle East conflict as “restrained” and described Israel’s response as “aggressive,” despite no such characterization appearing in the BBC article.
AI errors in news are a risk to public trust
These findings highlight serious risks in AI-generated news summaries. Misinformation can erode public trust in news media, whether due to factual errors, misleading context, or editorialized conclusions. Distortion of the BBC’s content can have significant consequences. If these risks continue, audiences may question the credibility of the BBC’s reporting.
AI assistants are set to play an increasing role in how people access news, and because they do not generate meaningful traffic to media websites, it appears that the majority of people using them are not exploring further to verify the accuracy of the purported news AI delivers. Thus, it is critical that AI agents and chatbots endeavor to uphold the information ecosystem’s rigorous and trusted editorial standards.
Ultimately, AI developers are responsible for ensuring their products align with fundamental journalistic principles, including accuracy, impartiality, and reliable sourcing. The BBC warns that if these challenges go unaddressed, AI risks undermining the news organizations it depends on for credible information. As AI continues to evolve, the BBC emphasizes the need for the media industry to champion responsible AI integration to safeguard audiences and preserve journalism’s integrity.