Sports and sports media outside the major leagues are often labeled “niche.” But that term is quickly becoming obsolete. Easy, inexpensive AI tools are changing the game, creating new sports media and marketing opportunities for free streaming services and social-first athlete-creators, regardless of the size of their traditional audience bases and reach.
How is AI accelerating this transformation? It’s helping underrepresented sports and athlete-creators identify and capture new fans, super fans, and monetization opportunities. With smarter data analysis and faster content distribution, sports once considered “niche” have a chance to grow their media audiences and revenues in many ways.
How AI is expanding the reach of sports
AI is widely used in sports media, but it’s not just for the majors. AI presents opportunities for targeted streamers and independent creators. Here’s how leaders are using it to grow:
1. Identifying and engaging new fanbases
AI is helping sports organizations analyze viewership patterns, social media engagement, and fan demographics to uncover potential new audiences. By leveraging machine learning (a brief data sketch follows this list), teams and leagues can:
Identify super fans: Find those most engaged and willing to spend on tickets, merchandise, and streaming subscriptions.
Uncover new fan segments: AI can pinpoint audiences with similar behaviors and interests, even if they haven’t engaged with the sport yet.
Optimize monetization strategies: AI-driven insights help organizations determine the best ways to engage and convert fans through advertising, merchandise, and licensing opportunities.
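For those curious about the mechanics, here is a minimal sketch of this kind of fan segmentation. It assumes a simple table of per-fan engagement metrics; the column names, cluster count, and CSV file are hypothetical placeholders, not any league’s actual pipeline.

```python
# Minimal sketch: segmenting fans by engagement to surface likely "super fans."
# Assumes a CSV of per-fan metrics; the column names are hypothetical.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

fans = pd.read_csv("fan_engagement.csv")
metric_cols = ["watch_minutes", "merch_spend", "social_interactions"]

# Standardize so no single metric dominates the distance calculation.
X = StandardScaler().fit_transform(fans[metric_cols])

# Cluster fans into a handful of behavioral segments.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)
fans["segment"] = kmeans.fit_predict(X)

# The segment with the highest average spend and engagement is a
# reasonable starting definition of "super fans."
summary = fans.groupby("segment")[metric_cols].mean()
print(summary)
print(f"Candidate super-fan segment: {summary['merch_spend'].idxmax()}")
```

In practice, organizations layer in richer signals – ticketing history, streaming behavior, social graph data – but the underlying clustering idea is the same.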
2. Speeding up content distribution
The way fans – particularly Gen Z fans – consume sports content has changed. Short-form videos, highlights, and real-time updates dominate engagement, and AI is making it easier to deliver this content faster than ever. AI-powered tools now handle:
Video ingestion and indexing: AI quickly processes and categorizes game footage for highlights.
Automated captioning and headlines: AI helps create more engaging, searchable content.
Smart clip generation: AI identifies the best in-game moments and instantly produces highlight reels.
This reduces production time and costs, allowing sports organizations to share media with fans faster and at scale.
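To make smart clip generation concrete, here is a toy sketch that flags candidate highlights wherever crowd noise spikes well above its baseline. Production systems fuse audio, vision, and play-by-play data; the per-second loudness values below are simulated for illustration.

```python
# Toy sketch: flag candidate highlight moments from crowd-noise energy spikes.
import numpy as np

def candidate_highlights(loudness: np.ndarray, threshold_sigma: float = 4.0) -> np.ndarray:
    """Return second offsets where loudness rises far above its mean."""
    mean, std = loudness.mean(), loudness.std()
    return np.where(loudness > mean + threshold_sigma * std)[0]

# Simulate 90 minutes of per-second loudness with two planted spikes.
rng = np.random.default_rng(0)
loudness = rng.normal(60, 5, size=90 * 60)
loudness[1200] = 95  # e.g., a goal at the 20-minute mark
loudness[3300] = 92  # e.g., a late equalizer

for sec in candidate_highlights(loudness):
    print(f"Clip candidate around {sec // 60}m{sec % 60:02d}s")
```

A real pipeline would then cut clips around those timestamps, add captions, and push them to social channels automatically.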
3. Breaking language barriers and expanding globally
AI-powered translation tools are making sports media more accessible and global. Now, leagues and teams can automatically translate commentary, subtitles, and captions into multiple languages, opening doors to international markets and audiences.
More inclusive media: AI-driven translations provide accessibility for fans who speak different languages or have hearing impairments.
Stronger international engagement: With real-time translations, sports can reach new audiences without the need for costly localization efforts – and, as the sketch below suggests, the tooling is now within reach of even small operations.
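As a hint of how low the barrier has become, here is a minimal sketch using the open-source transformers library and one publicly available English-to-Spanish model. The model choice is illustrative; no particular league or team is confirmed to use this stack.

```python
# Minimal sketch: auto-translating an English caption into Spanish with an
# open-source model. Helsinki-NLP/opus-mt-en-es is one publicly available choice.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

caption = "She scores in the final minute to win the championship!"
print(translator(caption)[0]["translation_text"])
```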
Athlete-creators: the new hybrid skill set
From NIL-driven revenue opportunities to the dominance of the Paul brothers, athlete-creators are increasingly leveraging AI. Are we looking at a future of sports in which the highest-performing athletes are not the best-known athlete-creators, and vice versa? Quite possibly. It may be challenging for some athletes without the resources or a team of assistants to fully realize their earning potential. However, AI may help level the playing field for athlete-creators. Here are some ways athlete-creators are using AI:
Content creation and editing: AI tools can simplify design and enable quick creation of engaging content without professional design expertise.
Social media engagement: AI analysis of social media trends and audience preferences can be used for targeted creation strategies.
Streamlining distribution: Automation of delivery can make it easier for athletes to focus on training and performance while staying engaged with fans and optimizing revenue opportunities.
Ascendant sports: the next stage of AI-driven growth potential
For an underrepresented sport looking for media expansion potential, new AI tools can help assess and answer some key questions:
Does it have a strong but underserved fanbase? If finding free, high-quality broadcasts is a challenge, it’s now much easier to explore serving that fanbase via free streaming options, from FAST channels to YouTube to other live, social, short-form distribution outlets.
Does it need better production and distribution? AI-enhanced production – from graphics to real-time statistical analysis – can help sports that have previously not been considered TV-friendly, making them more exciting to watch and easier to follow on digital platforms.
Are there marketers looking to align with its fanbase? Using AI-enabled analysis of data, it’s easier to identify cost-effective and targeted opportunities to connect with sports fans.
With the right application of AI and streaming strategies, ascendant sports can dramatically expand their audience and become stronger players in the sports content ecosystem.
The future: AI will define the next era of sports growth
The sports industry is at an inflection point. The traditional “big vs. small” sports hierarchy is being disrupted by technology, streaming, and AI-driven content strategies. Many sports, regardless of their historical followings, now have the opportunity to thrive and expand their reach.
Artificial intelligence is transforming the way people search for and consume news. With nearly one in four Americans now using generative AI chatbots instead of traditional search engines, the way journalism reaches audiences is shifting dramatically. While AI-powered search promises speed and convenience, it presents serious challenges for news publishers, including loss of attribution (citations), declining referral traffic, and misinformation.
A new Tow Center for Digital Journalism study highlights these risks, revealing how AI-driven search tools often fail to credit sources accurately – or at all – and bypass publisher controls such as paywalls. As AI search rapidly evolves, media executives must understand its implications and develop strategies to protect journalism’s integrity and financial viability.
AI search engines struggle with accuracy and attribution
The Tow Center analysis examines eight AI-powered search tools: Perplexity, Google’s AI Overviews, Bing Chat, ChatGPT, Claude, Gemini, Meta AI, and Grok. The research concludes that over 60% of AI-generated responses contain incorrect or misleading information. Unlike traditional search engines, which list multiple sources for verification, generative AI tools present single, authoritative-sounding answers – often delivered with unwarranted confidence.
When AI search engines provide sources, they often cite syndicated or republished versions rather than the original publisher. This practice diminishes the visibility of primary news organizations and deprives them of direct traffic. Even more troubling, some platforms, such as Grok and Gemini, regularly generate broken or fabricated URLs, misleading users and reducing referral traffic to legitimate news sites.
AI platforms ignore publisher controls
Beyond attribution issues, many AI search tools fail to respect industry norms designed to protect publishers’ content. The study finds that AI platforms routinely retrieve content from sites even when publishers explicitly block them using robots.txt, a standard tool for controlling web crawling. This disregard for publisher restrictions raises ethical concerns and undermines publishers’ ability to manage the use of their content.
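For context on the mechanism, the sketch below shows how robots.txt is supposed to work, using Python’s standard-library parser. GPTBot is OpenAI’s documented crawler user-agent; the publisher domain is a placeholder. A compliant client runs exactly this kind of check before fetching a page – the Tow Center’s finding is that some AI tools skip it.

```python
# Sketch: a publisher blocks one crawler by user-agent in robots.txt,
# and a compliant client checks the rules before fetching any page.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

article = "https://example-publisher.com/article"
print(parser.can_fetch("GPTBot", article))        # False: this crawler is blocked
print(parser.can_fetch("SomeOtherBot", article))  # True: everyone else is allowed
```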
The Tow Center analysis also identifies inconsistencies in how publishers’ content is used even under licensing agreements. Some news organizations with formal partnerships with AI companies still experience misattribution or see their content surface in ways that do not drive traffic back to their platforms. These findings suggest that agreements alone are insufficient to ensure proper credit or compensation for news publishers.
Revenue and engagement threat
For media companies, the rise of AI search directly threatens digital traffic, audience engagement, and revenue generation. Traditional search engines drive referral traffic to news sites, supporting subscription models, advertising revenue, and brand visibility. If AI search tools continue to bypass proper citation and link attribution, they could significantly weaken publishers’ brand recognition, authority, and ability to monetize their content.
Additionally, AI-driven search changes user behavior by reducing the need to visit source websites. Instead of clicking through to read full articles, users increasingly rely on AI-generated summaries, limiting publishers’ opportunities to engage audiences directly. This shift challenges established business models and forces publishers to rethink how they distribute and monetize content in an AI-driven search environment.
Responding to AI search disruption
Despite these challenges, there are opportunities to influence how AI search evolves. Since AI search models rely on high-quality journalism to function effectively, publishers have the leverage to demand better attribution, transparency, and enforceable policies. This study suggests four key strategies for publishers:
Advocate for stronger AI regulations: Industry groups and publishers can push for more transparent policies that require AI search engines to cite and link to sources properly.
Negotiate licensing agreements: While existing agreements show inconsistencies, publishers can strengthen negotiations to ensure AI companies provide meaningful attribution and compensation.
Develop AI-optimized content strategies: Just as publishers adapted to search engine optimization (SEO), they must now consider how AI models surface content and explore ways to maximize visibility.
Educate readers on AI search limitations: Publishers can inform audiences about the risks of AI-generated misinformation and encourage direct engagement with trusted news sources.
The Tow Center study concludes that media companies must remain proactive and ensure that AI search platforms operate with fairness, transparency, and respect for original reporting. The long-term health of the news industry depends on ensuring that AI-driven discovery tools do not replace or undermine the very sources that power them. By pushing for industry-wide standards and holding AI companies accountable, publishers can work toward a digital ecosystem where AI search enhances, rather than diminishes, quality journalism.
In terms of public policy debates, artificial intelligence continues to be the belle of the ball, with nearly every major government courting the industry to locate its investments and jobs within their jurisdictions. Europe, China, Korea, and the U.S. (among others) have laid out competing tax and government spending plans to entice and encourage AI companies. Against this backdrop of AI frenzy, President Donald Trump, via the Office of Science and Technology Policy, has solicited input on the formation of an “AI Action Plan” in order to “define the priority policy actions needed to sustain and enhance America’s AI dominance.”
Unsurprisingly and unabashedly, tech companies advocate that the U.S. government allow their content-generating AI models to train on copyrighted material without consent or compensation. However, as DCN noted in our comments regarding the action plan, a key component to achieving the stated goal of enhancing America’s AI dominance – and the broader success of American businesses – is the robust protection and enforcement of U.S. intellectual property law including the Copyright Act.
Copyright protection makes legal, and financial, sense
The longstanding legal rights of copyright holders derive from the U.S. Constitution (Article I, Section 8, Clause 8). These rights afford creators the opportunity to monetize the results of their hard work and investment in a variety of ways, and incentivize them to reinvest in the creation of additional content and new, innovative delivery mechanisms. As a result, American content creators, including news organizations and other publishers, contribute significantly to U.S. economic growth through employment, exports, an important trade surplus, and digital services and goods.
According to a recent study, copyright-based industries accounted for 12.31% of the U.S. economy and 63.13% of the U.S. digital economy. From 2020 to 2023, these industries outpaced overall U.S. economic growth almost threefold. Copyright-based industries employ 56.6% of all workers in the digital sector, and the annual compensation paid to core copyright workers is approximately 50% higher than the average U.S. annual wage. As for global impact, sales of select U.S. copyrighted products in overseas markets amounted to $272.6 billion, exceeding the sales of other IP industries including pharmaceuticals, agriculture, and aerospace.
Copyright, competition and a fair market
Unfortunately, the manner in which many AI developers have exploited original content without consent or compensation – to build and operationalize their commercial products – has unjustifiably violated the rights of copyright holders. It has upended the existing balance which has historically sustained and promoted innovation.
AI developers use copyright-protected content not only to “teach” their models to predict and mimic language, but also to create compelling outputs that carry the compounding harm of substituting for the original works on which the models were trained. This activity unfairly competes with those who invested in creating the original material and undermines their ability to seek a fair economic return. In fact, U.S. Senior District Judge Beryl Howell noted earlier this week, in a copyright case in which fair use was argued, that the publisher’s content is “so valuable they put a copyright on it.” Exactly.
By “reaping that which they do not sow,” AI companies cause harm to creators, publishers, and the ecosystem as a whole. It is important that this form of destructive misappropriation be deterred, whether by copyright law or other appropriate means. In the U.S., there are 39 related lawsuits and counting. The outcome of these suits will provide much-needed clarity on how existing copyright law, including the fact-specific defense of fair use, applies to the use of copyrighted works in developing generative AI technology.
However, one U.S. District Court recently confirmed that licensing is required to use copyrighted content to train an AI system. In Thomson Reuters Enter. Ctr. GmbH v. Ross Intel. Inc., the court, applying clear and recent precedent from the U.S. Supreme Court, held that the defendant’s unauthorized use of the plaintiff’s works to train its AI system was direct infringement and did not constitute fair use. The court reaffirmed that the impact of the use on existing and potential markets is the single most important element of a fair use analysis, and that there was clearly a potential market for using the materials at issue to train AI.
Innovation flourishes within the copyright framework
Lest the VC crowd be dismayed, a licensing framework is emerging: many deals have been struck by publishers, record labels, motion picture studios, and others. OpenAI, Google, and Perplexity have all made efforts to pay for the right to use protected content to power their models and tools. This is a clear acknowledgment that the licensing model is not only necessary, but eminently feasible.
While publishers’ rights are coming into clearer focus in the U.S., AI companies are beginning to feel a shared pain as evidenced recently by DeepSeek’s R1 model. OpenAI accused the company of IP theft, claiming that DeepSeek may have used OpenAI’s IP and violated its terms of service to develop its AI model.
“We know PRC (China) based companies – and others – are constantly trying to distill the models of leading US AI companies,” OpenAI said in a statement to Bloomberg. “As the leading builder of AI, we engage in countermeasures to protect our IP, including a careful process for which frontier capabilities to include in released models, and believe as we go forward that it is critically important that we are working closely with the US government to best protect the most capable models from efforts by adversaries and competitors to take US technology.”
A rising tide can lift all boats. Only by maintaining existing copyright protections will we get a robust, free market in which creators are incentivized to make high-quality works and AI companies are incentivized to license them. Importantly, in this robust market, AI companies would continue to have access to the quality content that is critical for training and outputs. The American values of IP protection have been a cornerstone of our country’s innovative spirit and competitive edge over foreign adversaries. Protecting IP is a matter of preserving the core principles that distinguish American businesses in the global market. Throughout U.S. history, copyright and innovation have gone hand in hand, and there is no reason to deviate from that successful combination as we build the next chapter.
The rapid adoption of Generative AI (Gen AI) in newsrooms sparks important discussions among journalists and media professionals, especially about transparency and trust. Across the industry, publishers vary in how they communicate their AI strategies to their workforce. Reports suggest that some journalists seek more transparency around management’s AI implementation efforts and agreements with AI companies. This lack of clarity also applies to content, as some publishers explore AI-generated articles without consistently informing staff or readers.
A lack of transparency around AI fuels distrust
Mike Ananny and Jake Karr examine how news media unions are trying to manage and stabilize the use of Generative AI. Their analysis, How Media Unions Stabilize Technological Hype, draws from a review of industry reports, expert interviews, and case studies of newsrooms integrating generative AI. The methodology emphasizes qualitative insights to understand AI’s impact on editorial processes, ethics, and audience trust.
According to the authors’ analysis, many employees only learn about AI licensing deals through sudden announcements, often without prior consultation. Some must rely on external reporting to understand their company’s AI initiatives. Union representatives consistently face resistance when requesting information, reinforcing a broader mistrust of employer intentions.
However, solutions are emerging. Some unions are pushing for contractual guarantees to ensure greater transparency. The Associated Press and certain Gannett-owned publications have proposed contract language requiring 90 days’ notice before new AI-related newsroom technology is implemented. Similarly, The Onion and Wirecutter unions have successfully bargained for advance notice and transparency requirements regarding AI procurement. These efforts signal a path to restoring trust through openness and accountability.
Journalists defend creativity and quality
News professionals ensure accuracy, provide context, and uphold ethical standards that AI alone cannot fulfill. Ananny and Karr conclude that AI’s so-called “creativity” is a remix of existing human work, lacking the depth, insight, and contextual awareness that define quality journalism. No matter how advanced AI becomes, skilled journalists must verify facts, interpret events, and shape narratives with integrity.
Recognizing this, some media organizations are implementing safeguards. The Associated Press has committed to using Gen AI only with direct human oversight to maintain compliance with journalistic standards. At The Onion, The A.V. Club, Deadspin, and The Takeout, editorial employees review AI content before publication. And MinnPost treats AI-generated material as a source requiring human editing and fact-checking.
But beyond oversight, journalists are pushing for the right to decide whether or not to use AI. Many unions argue that workers, as experts in their field, should determine the appropriate role of AI in journalism. The CNET Media Workers Union demands the right to opt out of using AI if it fails to meet publishing standards. The Atlantic Union similarly insists that journalists may use AI within ethical guidelines, but no one should force them to do so.
These demands reflect a broader principle: journalism is, at its core, a human-driven endeavor. AI may assist but cannot replace the judgment, creativity, and accountability that define quality reporting. The analysis concludes that to integrate AI responsibly, newsrooms must prioritize transparency, trust, and human journalists’ role in safeguarding the profession’s integrity.
Ethical and legal ramifications of AI in journalism
Beyond transparency and journalistic integrity, the rise of Gen AI raises significant ethical and legal questions. One of the most pressing concerns is intellectual property: Who owns the content produced by AI models trained on vast amounts of copyrighted material? Many publishers argue that AI-generated work lacks originality and merely regurgitates existing human-created content. This also raises potential plagiarism and copyright infringement issues. In response, some media companies are taking legal action.
Additionally, there is concern about AI’s ability to spread misinformation. Unlike human journalists, AI lacks the critical thinking skills to discern fact from fiction. Without rigorous oversight, AI-generated content can amplify biases, fabricate sources, and misinterpret data, posing a direct threat to public trust in news media.
Regulatory bodies are beginning to take notice. Governments worldwide are considering policies to ensure AI transparency and ethical implementation in journalism. For example, the European Union’s AI Act includes provisions requiring companies to disclose AI-generated content and implement safeguards against misinformation. The Federal Trade Commission warns companies against deceptive AI practices, signaling potential regulatory intervention in the media industry.
Balancing innovation and integrity for journalism and AI
Despite the challenges, AI’s presence in newsrooms is likely to grow. Some publishers are taking a proactive approach by developing AI policies that prioritize ethical considerations. Reuters, for example, offers internal guidelines to ensure journalists use AI tools responsibly and transparently. The BBC has similarly committed to maintaining human oversight of AI-generated content and clearly labeling AI-assisted reporting.
Ultimately, the future of AI in journalism will depend on striking the right balance between technological innovation and journalistic integrity. The authors concur that if publishers prioritize transparency, enforce accountability, and uphold journalists’ fundamental role, AI can be a valuable tool rather than a disruptive force.
Content licensing has long been an important revenue stream for digital media companies. For decades, it allowed publishers to monetize their content by granting rights for others to republish or repurpose their material, evolving from licensing to aggregators and databases to social platforms and streaming video services. Now, content licensing faces another evolution: artificial intelligence (AI).
Digital media publishers are finding themselves in a unique position in that they possess decades’ worth of quality content AI companies crave. “Over the next few years, content creators and AI companies will deepen their relationships,” predicts Yulia Petrossian Boyle, founder and principal of YPB Global LLC and FIPP chair. “However, as AI players try to secure more original content, those relationships will need to transition from one-off deals to well-structured, ethical partnerships with strict IP protection and meaningful ongoing revenue for publishers.”
TIME’s COO Mark Howard believes that publishers currently have three ways they can approach the AI dilemma: “You can do nothing. That’s just not something we would consider, to sit on the sidelines and just let everybody else figure it out. The other two options are to litigate and negotiate. Litigation is a very, very large commitment… So, that leaves negotiation.”
For some media companies, AI licensing agreements offer an alluring mix of copyright protection and monetization opportunities as DCN contributor Damian Radcliffe points out. And, as they negotiate these deals, publishers are discovering they must balance the potential for monetization with the need to protect intellectual property rights, navigate complex legal challenges, and ensure responsible AI usage.
Fair value in AI content licensing
According to a recent INMA report, executives considering licensing deals need to understand the value of their content in an AI-driven market. Then they have to negotiate attribution and compensation models that align with business goals. The report recommends collaborating with industry peers to create standardized agreements. It emphasizes the importance of advocating for responsible AI practices, including transparency in data usage.
The report also highlights emerging licensing models, which include direct licensing, value-in-kind partnerships, training fees, bundled partnerships, and per-use compensation. Boyle notes promising approaches, like “data-as-currency” deals, where AI companies offer analytics in exchange for access to their platforms and services (in some cases in addition to some smaller flat fees).
“Revenue-sharing is on the rise, where publishers earn a portion of subscription revenue or performance-based compensation (based on lead-gen, or engagement analytics),” she says. “For example, Perplexity AI’s Publishing Program launched in July 2024 offers revenue share based on the number of a publisher’s web pages cited in AI-generated responses to user queries. Those in the program earn a variable percentage of ad revenue generated per cited page.”
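The mechanics of such a per-citation share are easy to illustrate. Every figure in the sketch below is invented for illustration; these are not Perplexity’s actual rates or terms.

```python
# Illustrative arithmetic only: a per-cited-page ad revenue share.
# All figures are hypothetical, not any AI company's actual terms.
ad_revenue_per_cited_answer = 0.02   # assumed ad revenue per AI answer ($)
publisher_share = 0.25               # assumed revenue-share percentage

citations_this_month = {
    "/news/team-trade-analysis": 14_000,  # times each page was cited
    "/features/season-preview": 9_500,
}

payout = sum(
    count * ad_revenue_per_cited_answer * publisher_share
    for count in citations_this_month.values()
)
print(f"Hypothetical monthly payout: ${payout:,.2f}")  # $117.50
```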
Boyle says that, while compensation models are improving, she worries that AI companies do not adequately compensate for content that has higher production costs, such as investigative journalism. She points to pushback from publishers like Forbes, who rejected the Perplexity proposal.
Negotiating with AI companies on behalf of her consultancy, Boyle has observed that offers by some AI companies for training datasets are insufficient. “Since agreements are not indefinite, it is unclear to me how publishers will be compensated in future when AI companies may no longer need training data for their data sets.”
In her opinion, current compensation models between major AI companies and publishers do not adequately reflect the significant investments that publishers make in creating original content. She believes that, compared to the substantial amounts AI companies invest in technology such as chips, their expenditure on content is disproportionately low. This disparity highlights the need for more balanced financial recognition of the value that original content creators bring to these partnerships, she says.
However, striking these deals isn’t simple. Howard notes that each one is different, with its own monetization models and philosophies on revenue sharing.
“Some of them are flat fee for training, some of them are variable based on user adoption of their own products, and some of them are based on future ad models that haven’t even launched yet,” Howard says. “Many of them have some form of value-in-kind around technology or technology resources, which makes me very excited. I think that that may end up being where most of the value is derived in the long term.”
A few of TIME’s AI partnerships are infrastructure-based, like Fox Verify, which uses its blockchain-based technology to verify all of the content TIME publishes in the CMS, giving TIME a ledger of all of its intellectual property going forward. After that, according to Howard, TIME worked with Tollbit and Scalepost to track and monitor all of the AI bots on its site on any given day and see what they’re doing.
Access to technology is a key benefit of TIME’s AI partnerships for Howard. “We’re partners of theirs. I have direct access to their CTO and their senior leadership team. We get to hear what… they’re thinking about the market, that’s a really valuable conversation for us to have.”
“We brought money in as a result of these deals,” he says. “I’m happy about what we brought in. Some of it is fixed, a lot of it is variable and a lot of it is access to product resources and technology.”
Factiva puts trust first in its AI licensing
Dow Jones launched Factiva Smart Summary in November, a groundbreaking feature in its business intelligence platform engineered with Google’s Gemini models on Google Cloud. Smart Summary leverages generative AI technology to create concise summaries for Factiva users that are fully transparent and traceable, utilizing licensed content from each of their publishing partners.
To do so, Factiva approached every one of its nearly 4,000 sources in 160 countries with licensing agreements. “We did this because we are a publisher first and arbiter for publishers… We won’t ask any of our publishing partners to do anything that we’re not prepared to do ourselves,” explains Traci Mabrey, general manager of Factiva. “As such, we have elected and will continue to elect, to reach out to publishing entities and request additional licensing permissions and actual rights for generative AI use.” Today, its marketplace includes nearly 5,000 partners.
Dow Jones emphasizes the importance of respecting and compensating intellectual property and content creation. Mabrey outlines four key criteria guiding their AI partnerships: trust, transparency, segmentation, and compliance.
“We believe that trust is imperative. We believe there needs to be transparency in terms of content being created, used, surfaced and attributed,” Mabrey says. “There also needs to be relative segmentation in terms of use cases across different solutions. And there needs to be compliance and governance to adherence to the first three, of trust, transparency and segmentation.”
Deal points when licensing content for AI training
There’s no one-size-fits-all model for licensing deals, and the best approach depends on a publisher’s specific goals, content, and resources. Some publishers weigh how easily an LLM can integrate into their existing systems and CMS. Others choose LLMs from companies they already deal with.
But, data privacy and security are central concerns in these agreements. Vadim Supitskiy, chief digital and information officer at Forbes, told Digiday that ensuring interactions with AI products remain safe and protected is a key priority.
Mabrey echoes this sentiment, emphasizing that privacy and security are integral components to negotiations with AI partners. “As we’re looking at responsible delivery of AI, responsible usage of content and privacy and security in terms of technical infrastructure, that is our leading indicator.”
Publishers must have review rights over AI-generated outputs, the ability to see proof-of-usage logs, and the power to enforce brand guidelines, according to Boyle. “All those things have to be clearly defined in the licensing agreements. Tracking metrics of engagement, attribution, and demographic insights is also important for publishers to receive, to be able to see how valuable their licensed content is,” she says.
Essential safeguards in the agreements themselves ought to include strong, sophisticated clauses to protect publishers’ IP, says Boyle, “including mechanisms to prevent unauthorized reproduction, clear ownership definitions, restrictions on data usage, well defined termination provisions, attribution and fair compensation.”
Howard emphasizes that no two content licensing deals with AI companies are the same, and each comes with significant legal and technical hurdles. “First, there’s the legal aspect and every company needs to come up with their own legal terms and what is acceptable to them and what is not. What do they have the rights to? What do they not have the rights to?” he says.
“Once you’ve determined all of that, you need a technology solution to be able to deliver the content to them… All of the delivery mechanisms are quite different and require some form of customization.”
These complexities point to why AI companies have slowed the pace of new licensing agreements after an initial rush. Negotiating unique terms and building tailored tech solutions for each partner has proven difficult to scale, Howard notes.
Where AI licensing is headed
AI is reshaping how content is distributed, discovered, and monetized. For media companies, the choice is clear: engage in legal battles or proactively negotiate terms that ensure fair compensation. The market is rapidly evolving with new players, technologies and partnership models.
For companies currently negotiating content licensing deals with AI, Howard says to move forward. He points out that, while there are benchmarks based on what other companies have secured, the initial rush of deals has likely passed. He doesn’t expect future deals to improve; in fact, he thinks they’ll probably get worse.
Mabrey believes that the industry has reached a unique inflection point, where generative AI gives it the chance to assert that content is intellectual property and requires compensation. “We, as a media community around the world, should be coming together to assure that all of us are asserting our rights in the same manner.”
In light of these shifts, there’s a clear message for media executives: the future of content licensing is in their hands. Instead of letting the industry define them, publishers can shape its future by pursuing fair value through litigation and the courts, negotiating partnerships, and advocating for fair treatment.
Artificial intelligence is rapidly transforming the way media companies operate. From automating article summaries to improving editorial efficiency, AI has helped media companies save time and streamline operations. But while AI offers substantial benefits, recent studies have revealed a trust gap between media companies and their audiences around AI use.
Since trust is the cornerstone of media, AI implementation introduces new challenges. Missteps can result in loss of reader trust, damage to brand reputation and potential legal and regulatory challenges.
As AAM developed its new Ethical AI Certification program, we researched how media companies are implementing AI and studied industry-recommended best practices for increasing transparency and disclosing AI use. This research resulted in the development of several guidelines for media companies to increase transparency and maintain reader trust when integrating AI solutions into their operations.
1. Clear and consistent AI labeling
AI-generated or AI-assisted content should be visibly labeled and disclosed. Labels should be placed prominently within an article or video rather than buried in fine print.
Here are two examples of how media companies are disclosing AI use:
The Associated Press created standards around generative AI. While the tools may change, the core values remain – journalists are accountable for the accuracy and fairness of the information they share.
USA Today adds disclosures to indicate when AI is used to write the “Key Points” at the top of selected articles. It also discloses that a journalist reviewed the AI-generated content before publication and includes a link to its ethical conduct policy.
2. Create and publicize AI policies to build trust
Media companies should publish a clear AI policy outlining:
How and when AI is used
How the company’s privacy policy applies to AI use
Editorial guidelines for AI-generated content
How the company will handle ethical issues including bias mitigation and misinformation prevention
Policies should be easily accessible on company websites and updated regularly. Media companies also should ensure that they have licensing agreements in place to use the information and data provided by their AI solutions in published content.
3. Human oversight and accountability
Human oversight of AI-generated content is essential, especially in editorial operations. Assign clear roles and responsibilities for AI oversight within newsrooms, and establish an internal AI ethics committee to assess AI applications, guide policy development, and ensure ongoing compliance with ethical standards.
4. Ongoing education
Since AI best practices and regulations are constantly evolving, it’s important for media companies to provide ongoing training for staff on AI technology, ethics and best practices. Hosting regular training workshops and updating employees on policy changes helps companies stay ahead of evolving AI trends while ensuring responsible and ethical AI usage.
5. Regular audits and risk assessments
Media companies should conduct regular assessments to manage AI risks, evaluating the accuracy of AI-generated content, the effectiveness of company transparency measures, and potential challenges such as bias and inaccuracy in AI outputs.
As AI continues to evolve, transparency remains essential to preserving trust between media companies and audiences. By implementing these industry best practices and guidelines, media companies can take the lead in setting a higher industry standard, maintaining audience trust and ensuring ethical AI implementation within their operations.
The publishing industry has been of two minds on AI’s rapid advancements – optimistic and cautious – sometimes within the same company walls. Business development teams explore much-needed new revenue opportunities while legal teams work to protect their art and existing rights. However, two major legal developments, the Thomson Reuters v. Ross Intelligence ruling and shocking new revelations in Kadrey v. Meta, expose the fault lines in AI’s unchecked expansion and set the stage for publishers to negotiate fair value for their investments.
One case confirms that publishers have a right to license their content for AI training, and that tech advocates’ tortured analysis of fair use neither throws out rights enshrined in the U.S. Constitution nor requires publishers to opt in to attain them. The other case suggests that Meta may have knowingly pirated books in its high-stakes race to keep up with OpenAI, and that Meta’s notorious growth-at-all-costs playbook is more exposed than ever.
AI companies can no longer operate in a legal gray zone, scraping content as if laws don’t apply to them. Courts, lawmakers, researchers and the public are taking notice. For publishers, the priority is clear: AI must respect copyright from the beginning including for training purposes, and the media industry must ensure it plays an active role in shaping AI’s future rather than being exploited by it.
Thomson Reuters v. Ross: A win for AI licensing, a loss for those who intentionally avoid it
In a landmark decision, a federal judge ruled this month in favor of Thomson Reuters against Ross Intelligence, a startup that trained its AI model, without rights or permission, on Thomson Reuters’ Westlaw legal database.
Judge Stephanos Bibas’ ruling in the Delaware district court is notable because he explicitly recognized the emerging market for licensing AI training data. This undercuts the argument that AI developers can freely use copyrighted works under the “fair use” factors. And, consistent with DCN’s policy team’s position, it highlights the significant importance of the fourth factor of fair use, which publishers have been demonstrating with the signing of each new licensing deal.
For publishers, this is a crucial precedent for two reasons:
AI training is not automatically fair use. Content owners have the right to be paid when their work is being used to train AI.
A market for AI licensing is forming – this is the fourth factor in action. Publishers should define and monetize it before platforms dictate the terms.
This decision marks a turning point, ensuring that AI development doesn’t come at the expense of the people and companies producing high-quality content. Sam Altman of OpenAI, and other leadership across the powerful AI industry, have attempted to invent a “right to learn” for their machines. That’s an absurd argument on its face but regularly repeated in high-profile interviews, as if the technocrats might will it into reality.
Kadrey v. Meta: Pirated books, torrenting, and a familiar playbook
While the Reuters ruling validates AI licensing, Kadrey v. Meta reveals how some AI developers have worked to avoid it.
Recently unsealed court documents suggest that Meta employees knowingly pirated books to train the LLaMA models behind the company’s first commercial release (LLaMA 2). Significantly, Meta’s fair use rationale shifted from “research” to making bank – a lot of it.
The unsealed evidence demonstrating this knowing strategic shift includes:
Meta employees downloaded pirated book datasets from LibGen, a massive repository of pirated works, even using torrenting technology to pull them down.
They may have “seeded” and distributed this pirated content to others. That’s a potential criminal violation – one their own employees worried about, with one asking, “What is the probability of getting arrested for using torrents in the USA?”
Meta worried that licensing even one book would weaken its fair use argument, so it didn’t license any at all.
Some employees explicitly avoided normal approval processes to keep leadership from having to formally sign off.
Some documents suggest Mark Zuckerberg himself may have been aware of these tactics with documents referencing escalations to “MZ.”
Meta appears to have stopped using this material ahead of LLaMA3, possibly signaling awareness that their actions were legally indefensible.
Making matters worse for Meta, the case is being overseen by Judge Vincent Chhabria in the Northern District of California. This is the same judge who sanctioned Facebook’s lawyers in the privacy litigation that led to record-breaking settlements approaching $6 billion with the FTC, SEC, and private plaintiffs. In that case, Facebook was accused of stalling, misleading regulators, and withholding evidence related to its user data practices. In other words, Judge Chhabria knows Meta’s playbook: delay, deny, deflect.
Now, Meta faces a crime-fraud doctrine claim. This means that some currently sealed legal advice could be unsealed if it was in furtherance of a crime. If proven, this would not be a simple copyright dispute; it could potentially lead to criminal liability and further regulatory scrutiny. The Court is ordering Meta to unseal more documents this week.
Move fast, break things… again: Meta’s AI strategy mirrors its past scandals
The Kadrey case’s revelations closely resemble Meta’s past data controversies, particularly those lumped into the basket of Cambridge Analytica. Details of the cover-up of that scandal are still emerging today – details mostly overlooked by a tech press corps that has not been tuned in to these issues for far too long.
For years, Facebook pursued a strategy of aggressive data harvesting to accelerate its growth in mobile, where it risked being supplanted by new platforms. The company:
Scraped vast amounts of publisher and user data without clear consent.
Shared this data widely with developers in exchange for reciprocal access to their user data – fueling Facebook’s mobile market share grab.
Ultimately settled with regulators for billions after repeated privacy violations.
Now, in Kadrey v. Meta, history appears to be repeating itself. Internal documents show that Meta feared OpenAI and needed to accelerate its AI development. Thus, Meta felt pressured to take outsized risks. Meta’s approach to AI training follows a similar pattern:
Acquire the best data – legally or not.
Use it to gain an edge over AI competitors.
Deal with legal and regulatory fallout later, if necessary.
Recently unsealed documents even expose a documented mitigation strategy:
Remove data clearly marked as pirated – but only if the marking appeared in the filename, even as engineers stripped copyright information from the content itself.
Don’t let anyone know which datasets they’re using (including illegal ones).
Do whatever is possible to suppress prompts that spit out IP violations.
Key takeaways for publishers and media companies
The Thomson Reuters and Kadrey cases demonstrate both the risks and the opportunities for publishers in the AI era. Courts are starting to push back on AI’s unlicensed use of copyrighted content. But it’s up to the publishing industry to define what comes next.
Here are the big issues we must address:
AI models need high-quality data. And publishers must ensure they’re compensated for it. The Reuters ruling proves that a growing licensing market for AI exists.
Litigation is working. The unsealed evidence in the Kadrey case suggests that even AI giants like Meta know they’ve crossed legal lines. Facebook isn’t dumb; evidence from peer companies may be even more damaging. A pluralistic press needs to shine a light on these wrongs, as national security isn’t an excuse for AI companies to break copyright law.
Publishers must be proactive in shaping AI policy. Big Tech will push its own narrative. Meta and Google pay front groups like Chamber of Progress to stretch the meaning of fair use both in the U.S. and across the pond. Media companies must work together to establish AI licensing frameworks and legal protections and reinforce existing copyright law.
Regulatory scrutiny on AI will intensify. If Meta is found to have used pirated data, it will accelerate AI regulations. This will not likely be confined to copyright but could extend across tech policy as it did in 2018, when one scandal exposed larger problems leading to Facebook being dragged before parliaments around the globe.
The future of AI depends on trust, ethics and media leadership
The past year has shown that AI is both a disruptor and an opportunity. The Reuters ruling confirmed publishers can and should demand licensing deals. The Meta revelations prove why that’s so necessary.
AI is reshaping media, but it must be built ethically. The publishing industry has both the legal and ethical high ground. And media companies must use it to define the next phase of AI’s evolution. The future of AI isn’t just about innovation. It’s about who controls the data and the IP – and whether the people who create it are respected or exploited.
Artificial intelligence is rapidly transforming the way people access and consume news. With AI assistants increasingly serving as intermediaries between audiences and trusted news sources, it is essential to understand how accurately and reliably they present information. Unfortunately, according to recent research from the BBC, AI does not accurately deliver news.
In new research, the BBC evaluates how well four leading AI assistants – ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity – deliver news-related answers. By granting these AI models access to its website, the BBC sought to assess their ability to accurately reference and represent its journalism.
This study examined the quality of AI-generated responses using 100 news-related questions, with BBC journalists evaluating them based on seven key criteria, including accuracy, attribution, and impartiality. The reviewers then determined whether the responses contain minor, significant, or no issues across these areas.
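To make the rollup concrete, here is a sketch of how per-criterion ratings translate into headline percentages. The criteria shown are three of the seven, and the ratings are invented for illustration; this is not the BBC’s actual data or tooling.

```python
# Sketch: rolling per-criterion ratings up into headline percentages.
# Ratings are invented for illustration; not the BBC's actual data.

# response_id -> {criterion: "none" | "minor" | "significant"}
reviews = {
    1: {"accuracy": "significant", "attribution": "minor", "impartiality": "none"},
    2: {"accuracy": "none", "attribution": "none", "impartiality": "none"},
    3: {"accuracy": "minor", "attribution": "none", "impartiality": "minor"},
}

total = len(reviews)
significant = sum(
    any(rating == "significant" for rating in resp.values())
    for resp in reviews.values()
)
any_issue = sum(
    any(rating != "none" for rating in resp.values())
    for resp in reviews.values()
)

print(f"Responses with significant issues: {significant / total:.0%}")  # 33%
print(f"Responses with any issue: {any_issue / total:.0%}")             # 67%
```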
Significant errors in AI news
The results show that over half (51%) of AI-generated responses contain significant issues, while 91% exhibit some inaccuracy, bias, or misrepresentation. Specific issues include factual errors, misattribution of sources, and missing or misleading context. When evaluating how these AI assistants represent BBC content, the study finds that Gemini (34%), Copilot (27%), Perplexity (17%), and ChatGPT (15%) produce responses with errors in their use of BBC sources.
Accuracy and misinformation
AI-generated responses frequently contain factual inaccuracies, even when citing BBC sources:
Gemini incorrectly states that the NHS discourages vaping as a smoking cessation method, despite BBC coverage explicitly confirming that the NHS supports vaping for smokers who want to quit.
Copilot misrepresents the case of rape survivor Gisèle Pelicot, falsely claiming that blackouts and memory loss led her to uncover the crimes against her.
Multiple assistants incorrectly report figures, such as significantly underestimating the number of UK prisoners released and misattributing Chrome’s market share statistics.
ChatGPT erroneously reports that Ismail Haniyeh, assassinated in July, is still an active Hamas leader.
Attribution and sourcing errors
AI assistants frequently misattribute or incorrectly source information. Some rely on older articles, leading to misleading conclusions. In several instances, assistants claim to summarize BBC reporting but include details that do not appear in the cited BBC articles.
Impartiality and editorialization
In addition to prevalent factual errors, AI assistants struggle to maintain any semblance of journalistic impartiality. The study flags multiple instances where opinions are presented as facts, sometimes falsely attributed to the BBC. For example, Perplexity characterized Iran’s actions in the Middle East conflict as “restrained” and described Israel’s response as “aggressive,” despite no such characterization appearing in the BBC article.
AI errors in news are a risk to public trust
These findings highlight serious risks in AI-generated news summaries. Misinformation can erode public trust in news media, whether due to factual errors, misleading context, or editorialized conclusions. Distortion of BBC’s content can have significant consequences. If these risks continue, audiences may question the credibility of BBC’s reporting.
AI assistants are set to play an increasing role in how people access news. And because they do not generate meaningful traffic to media websites, it appears that most people who use them do not explore further to verify the accuracy of the news these tools deliver. Thus, it is critical that AI agents and chatbots endeavor to uphold the information ecosystem’s rigorous and trusted editorial standards.
Ultimately, AI developers are responsible for ensuring their products align with fundamental journalistic principles, including accuracy, impartiality, and reliable sourcing. The BBC warns that if these challenges go unaddressed, AI risks undermining the news organizations it depends on for credible information. As AI continues to evolve, the BBC emphasizes the need for the media industry to champion responsible AI integration to safeguard audiences and preserve journalism’s integrity.
Today’s headlines about artificial intelligence (AI) describe it as a transformative force reshaping entertainment and technology. Companies across the spectrum are adopting AI to streamline operations, enhance content production, marketing, and distribution efficiencies, and revolutionize the viewer experience. But are consumers keeping pace with this rapid evolution and what do they think about it?
Popular AI tools like OpenAI’s GPT-4 and DALL-E generate images, music, scripts, games, and even fully realized ads and video content. These advancements blur the lines between “real” and “fake” content. Post-production AI tools can now enhance not just backgrounds and environments but also character identities and storylines. AI algorithms also help to better match viewers with content they love, refining recommendation systems and personalizing viewer experiences.
Consumer awareness and usage of AI
A recent report by HUB, part of the Entertainment & Technology Tracker series, delves into consumer awareness and understanding of AI’s current capabilities. The study reveals that 71% of respondents are familiar with “generative AI,” yet only 18% feel “very confident” in explaining what it is and does. Interestingly, 57% of respondents use one or more generative AI models, highlighting a significant gap between awareness and deep understanding.
Perceptions of AI: good vs. bad
The report finds that nearly half of the respondents view AI as a positive development, compared to about a quarter who see it as potentially negative. Among those who view AI favorably or unfavorably, a common belief is that AI fundamentally changes how we live and work. However, there are notable concerns, particularly around privacy, employment risks, and the potential misuse of AI for creating deepfake content.
Consumer interaction with AI
Consumers are increasingly interested in AI-driven features that enhance content discovery and selection. Despite this interest, there is a clear preference for human creativity in certain domains. For example, more consumers believe humans outperform AI in tasks like writing music or dialogue. Conversely, they view AI as equally capable of generating game dialogue or trailers and superior in tasks like CGI, writing descriptions, or creating subtitles.
The desire for transparency in AI-generated content is also strong. According to the report, 67% of respondents want to know if something is created using AI, underscoring the need for clear labeling and communication from content creators.
AI opportunities and challenges for the entertainment industry
For media executives, the rapid integration of AI presents both significant opportunities and challenges. AI streamlines production processes, reduces costs, and enables the creation of more personalized and engaging content. However, it raises critical questions about the balance between automation and human creativity, about the ethical implications of AI use, and about the need to address consumer concerns around transparency and data privacy.
AI is reshaping how content is produced and consumed and redefining the relationship between creators and audiences. As the industry adapts, embracing AI’s potential while addressing its challenges is key to sustaining growth and innovation in the entertainment sector.
The HUB report is useful for media executives looking to stay ahead in this rapidly changing landscape. By tracking consumer sentiment and behavior, companies can better anticipate trends, address concerns, and leverage AI to create a more dynamic and responsive entertainment ecosystem.
As AI continues to evolve, media companies must navigate these complexities strategically. Investing in AI technologies that enhance viewer engagement while maintaining transparency and ethical standards is crucial. Additionally, fostering consumer education around AI capabilities and limitations helps bridge the gap between consumer awareness and understanding.
As the dust settles on 2024’s festivities and we embark on a new year, media organizations will again consider how to navigate shrinking budgets, shifting audience expectations, and new technologies like AI. From redefining how journalists work to creating hyper-engaging and localized content, digital media must adapt to a competitive environment where relevance and trust are paramount.
Building trust through transparency and humanization
Trust in the media will remain a pressing issue this year, and taking steps to address it will be critical to ongoing media strategies. Audiences are increasingly skeptical of faceless institutions. They demand greater transparency in how stories are reported, making it vital for journalists to adopt a more human-centric approach by showcasing the people behind the bylines and the process behind news and information.
Media outlets must make their reporting processes visible by showing how information is sourced, verified, and fact-checked. Moreover, humanizing journalists by highlighting their expertise, motivations, and personal stories can bridge the current gap between the media and the public.
Behind-the-scenes insights and candid discussions about reporting challenges can make journalists more relatable, fostering trust and connection with audiences. Formats like live comment blocks, which let journalists connect personally with their audience, make it easier – and more authentic – to highlight their commitment to truth, fairness, and serving their communities. This openness builds accountability, combats misinformation, and fosters trust. In an era of media mistrust, these practices are not just beneficial but essential for journalism’s long-term survival and relevance.
Interactive micro-content: Winning the battle for attention
Shrinking attention spans and the popularity of short-form video platforms like TikTok and Instagram are reshaping how audiences consume content. Media organizations must evolve beyond traditional storytelling to deliver snackable, interactive micro-content that captures and retains attention.
In 2025, this will mean offering dynamic formats like live Q&A sessions with relevant authorities on a topic, live comment blocks that enable direct interaction, polls, and real-time updates that invite audience participation. Such features bridge the gap between passive consumption and active engagement, allowing media outlets to compete with social platforms for user attention. For example, the German title FAZ ran a live poll in its US election coverage that garnered over 8,000 responses. By blending concise video content with interactive elements, digital publishers can create a loyal and participatory audience base, especially among younger demographics.
Hyper-localization: A lifeline for local media
Local newsrooms have faced significant challenges in the digital age. Among the biggest are the loss of advertisers to tech giants and the struggle to maintain relevance in fragmented markets. However, hyper-local content offers a path to revitalization.
In particular, sports coverage presents an untapped opportunity to engage readers. Community and smaller league sports resonate deeply with local audiences, fostering a sense of connection and pride. Local outlets can rebuild trust and attract a loyal readership by focusing on these niche stories.
Beyond sports, local media that reflect the lives and interests of their communities can gain traction by delivering tailored content that resonates emotionally and provides a platform for underrepresented voices. This approach not only helps drive subscriptions but also counters misinformation by establishing trusted, credible platforms for civic engagement, combating the spread of ‘news deserts’ into which more extreme voices can creep.
Mobile-first and multi-screen engagement
Mobile-first strategies remain critical as audiences increasingly access content on the go. If this hasn’t been a focus before 2025, media companies will need to optimize their platforms for seamless mobile experiences, with responsive designs and easy-to-navigate interfaces.
At the same time, the rise of multi-screen usage will encourage outlets to complement televised or streamed events with mobile-friendly content, as with The Irish Independent’s Eurovision Song Contest live blog. Features like real-time stats, behind-the-scenes commentary, instant analysis, and viewer participation via comments provide added value for audiences wanting to deepen their connection to events.
AI and journalists: A partnership in efficiency
In 2025, journalists will increasingly turn to AI to enhance their workflows. Against the backdrop of budget cuts and leaner teams, particularly in local newsrooms, AI’s role in automating time-intensive tasks will allow journalists to focus on crafting compelling, human-centered stories. While fully AI-generated content remains controversial, applications like translation, data analysis, summarization, social media optimization, and tone adjustment will become essential tools.
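To make that concrete, here is a minimal sketch of one such task, summarization, using the OpenAI Python client. The model name is a placeholder, and any comparable chat-completion API would follow the same pattern; treat this as an illustration rather than a recommended newsroom setup.

```python
# Minimal sketch: LLM-assisted summarization for a newsroom workflow.
# Assumes the OpenAI Python client and the OPENAI_API_KEY environment
# variable; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

def summarize(article_text: str, max_words: int = 60) -> str:
    """Return a short, neutral summary suitable for a teaser or push alert."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model your newsroom licenses
        messages=[
            {"role": "system",
             "content": f"Summarize the article in at most {max_words} words, "
                        "in a neutral news register. Do not add facts."},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content.strip()
```

The same pattern, with a different system prompt, covers translation and tone adjustment; the editorial judgment about what runs remains with the journalist.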
AI-powered audience tracking and personalization algorithms will also help to identify trends and inform publishing strategies, ensuring that content reaches the right audience at the right time and fostering deeper engagement. This targeted approach has the power to strengthen reader loyalty and drive subscription growth by delivering highly relevant content tailored to individual preferences.
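At its simplest, this kind of personalization is a relevance calculation. Below is a minimal sketch of content-based recommendation, with invented article data and scikit-learn doing the vector math; real systems layer collaborative signals, recency, and editorial rules on top.

```python
# Minimal sketch: recommend the next article by content similarity
# (TF-IDF vectors + cosine similarity). Article data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = {
    "a1": "Local council approves new cycling infrastructure budget",
    "a2": "High school basketball team reaches the state final",
    "a3": "City budget debate: where the cycling money comes from",
}

ids = list(articles)
matrix = TfidfVectorizer().fit_transform(articles.values())
sims = cosine_similarity(matrix)

# For a reader who just finished "a1", surface the most similar other piece.
read = "a1"
row = sims[ids.index(read)]
candidates = sorted(
    ((row[i], ids[i]) for i in range(len(ids)) if ids[i] != read),
    reverse=True,
)
print("next up:", candidates[0][1])  # "a3", the related budget story
```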
Balancing broad appeal with niche interests
Indeed, understanding audience preferences will be a cornerstone of successful content strategies in 2025. Breaking news stories draw broad audiences, offering visibility and initial engagement opportunities. However, sustaining reader interest and dwell time requires niche content that aligns with individual passions and interests.
Media companies that deliver a mix of timely, high-profile stories and in-depth coverage of specialized topics—whether in politics, lifestyle, or technology—will position themselves as indispensable resources. This balance of breadth and depth enhances engagement and encourages subscriptions, as readers see consistent value in a publisher’s offerings.
New Year’s resolutions
In 2025, the emphasis for digital media companies will remain on innovation, relevance, and trust. Media organizations can thrive in an increasingly competitive landscape by leveraging AI to optimize workflows, creating engaging micro-content, focusing on localized and hyper-relevant reporting, and building transparent relationships with audiences. Personalization will be pivotal in driving loyalty and revenue, ensuring publishers remain relevant and, in fact, essential to their readers’ lives.
And, of course, it would be remiss of me not to point out that live blogs can help deliver on all these strategies. With their ability to provide real-time updates, foster audience interaction, and offer a platform for hyper-local and personalized content, live blogs are a powerful tool for building trust, engaging audiences, and maintaining relevance in a fast-paced digital world. By combining immediacy with transparency and interactivity, live blogs can anchor a content strategy that meets the demands of 2025 and beyond.
In 2024, the publishing and broader digital landscapes faced seismic shifts, many of which were beyond publishers’ control. New privacy regulations, continued identifier loss, and Google’s indecision on third-party cookies have left publishers grappling with how to translate their audience relationships into meaningful, monetizable ad experiences. That uncertainty will follow the industry into the new year and beyond.
Yet, amid this turbulence, publishers can still chart their own course toward sustainable revenue growth. The key lies in leveraging new tools and strategies to improve monetization while still staying true to editorial missions. Let’s explore four promising opportunities for publishers in 2025.
Tapping into AI for better ad experiences
The implications of generative AI extend well beyond content. New capabilities are also changing the way publishers approach ad experiences. With AI-driven tools, publishers can enable their advertisers to create tailored creative that aligns with individual audience preferences, increasing engagement and commanding higher premiums.
Dynamic ad creative has long been an underused tool. Now, with advancements in AI, publishers have a growing array of options for implementing real-time adjustments to ad content. With OpenAI revamping Sora and other text-to-video AI tools hitting the market, generative AI will increasingly extend into the realm of video ad customization based on the surrounding content or the visitor’s interests.
Native advertising is also poised to benefit significantly from AI. By generating and testing multiple headline variations, optimizing ad copy, and ensuring contextual relevance, AI empowers publishers to elevate native ad performance. These tools not only boost returns for advertisers but also streamline content creation, making premium advertising formats more accessible to advertisers and more lucrative for publishers.
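Under the hood, “testing multiple headline variations” typically reduces to an explore/exploit loop. Here is a minimal sketch of one common approach, an epsilon-greedy bandit; the names are hypothetical, and a production ad server would add segmentation, decay, and statistical guardrails.

```python
# Minimal sketch: epsilon-greedy headline testing. Serve the
# best-performing variant most of the time, but keep exploring.
import random

class HeadlineTester:
    def __init__(self, headlines, epsilon=0.1):
        self.headlines = list(headlines)
        self.epsilon = epsilon                  # share of traffic spent exploring
        self.shows = {h: 0 for h in self.headlines}
        self.clicks = {h: 0 for h in self.headlines}

    def pick(self):
        if random.random() < self.epsilon:
            return random.choice(self.headlines)   # explore
        # Exploit: highest observed click-through rate; unseen variants
        # get an optimistic 1.0 so each is tried at least once.
        return max(self.headlines,
                   key=lambda h: self.clicks[h] / self.shows[h]
                   if self.shows[h] else 1.0)

    def record(self, headline, clicked):
        self.shows[headline] += 1
        self.clicks[headline] += int(clicked)
```

Each impression calls pick() and each click calls record(); over time, traffic concentrates on the strongest variant while the small epsilon share keeps re-testing the rest.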
Using predictive modeling to unlock new revenue
For publishers, any unique signals they are able to draw from their first-party data and pass along to buyers are key to growing revenue. The importance of being able to package this first-party data and pass it into the bidstream to DSPs cannot be overstated.
When it comes to deriving value from first-party data, the increased sophistication of predictive modeling is a game-changer for publishers seeking more sustainable revenue growth. Using their first-party data, publishers can build models that anticipate user behavior and apply the resulting insights to improve both content and ad delivery.
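As a sketch of what such a model can look like in practice, here is a simple subscription-propensity score built from hypothetical first-party engagement columns; the file and field names are invented for illustration, and a gradient-boosted model over richer features would follow the same pattern.

```python
# Minimal sketch: a propensity model over first-party engagement data.
# File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

events = pd.read_csv("first_party_events.csv")
features = events[["visits_30d", "avg_dwell_sec", "sections_read",
                   "newsletter_opens", "device_mobile"]]
label = events["subscribed"]  # 1 if the user converted

X_train, X_test, y_train, y_test = train_test_split(
    features, label, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Score every known user. High-propensity segments can be packaged as
# audience signals for the bidstream or used as seeds for lookalike
# (audience-extension) modeling.
events["propensity"] = model.predict_proba(features)[:, 1]
```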
One of predictive modeling’s most compelling applications lies in audience extension. By analyzing their existing audience’s attributes and behaviors, publishers can identify similar audiences beyond their platforms. These audiences can be targeted off-site, with a portion of the resulting ad revenue feeding back to publishers.
Additionally, combining first-party data with contextual and engagement signals allows publishers to predict ad performance with greater precision. This approach enhances the value of inventory by ensuring ads resonate with the intended audience, thereby driving higher yields.
Finally, beyond immediate monetization, predictive modeling helps publishers expand their audience base. By leveraging data to attract and retain new users, publishers not only grow their first-party data assets but also position themselves for longer-term revenue gains.
Building profitable commerce media partnerships
The paths of digital publishing and commerce media are becoming more deeply intertwined as we move into 2025—and that’s a good thing for both sides. Strategic partnerships between publishers and commerce platforms can unlock new revenue streams across the board while enhancing the consumer experience. It’s one of those rare, genuine win-win-win scenarios.
Consider the partnership between Best Buy and CNET, through which curated CNET editorial content complements the Best Buy shopper journey. By integrating expert reviews and recommendations into its platform, Best Buy offers a richer user experience without having to develop content capabilities internally. Meanwhile, advertisers can share ad spaces across Best Buy and CNET, “allowing them to see the impact of their advertising campaigns through a full-funnel, closed-loop media solution.” For CNET, this means new valuable audience insights and ad revenue are being unlocked simultaneously. This isn’t terribly dissimilar from the benefits Yahoo sought to unlock with its partnership with Lowe’s back in 2022.
Publishers are uniquely positioned to provide the educational and inspirational content commerce platforms need to engage customers at the top and mid-funnel stages of their journeys. By joining forces, publishers and commerce platforms can deliver cohesive advertising solutions that cater to performance-driven and branding objectives alike, while also aligning with the needs of their respective audiences.
Unlocking the underleveraged mid-funnel
All three of the previously mentioned opportunities point toward a broader pivot that publishers should be making in 2025: deepening their focus on monetizing the mid-funnel, where consideration is born. The mid-funnel represents a crucial, often-overlooked stage of the customer journey that is a natural fit for publishers looking to derive the greatest possible value from their audiences and content.
By analyzing behavioral patterns and preferences, AI and predictive modeling can empower publishers to target audiences in the mid-funnel—those who are aware of brands but have yet to establish their consideration set—with greater precision. Likewise, commerce media partnerships further enhance mid-funnel monetization opportunities by spotlighting and placing value on publishers’ ability to create engaging, informative content that complements commerce platforms’ transactional focus.
Finally, as an important new piece of the puzzle, emerging measurement tools are now providing unprecedented visibility into mid-funnel performance. These solutions allow publishers to easily quantify the impact of content-driven campaigns on brand consideration, pre- and post-campaign, offering insights that can inform campaign enhancements around this important metric of success.
As publishers navigate the evolving digital landscape in 2025, they’re more in control of their own destinies than it might sometimes seem. By embracing generative AI, predictive modeling, commerce media partnerships, and a deepened focus on the mid-funnel, publishers can unlock sustainable new or improved revenue streams. These strategies not only enhance monetization but also preserve publishers’ ability to produce the vital content that informs and enriches society. In this regard, the above strategies aren’t just opportunities—they’re responsibilities.
OpenAI’s ChatGPT Search, an AI-driven alternative to traditional search engines, raises pressing concerns for news publishers, including attribution errors. OpenAI promotes its collaboration with select news organizations and uses mechanisms like robots.txt files to give publishers some control over their content. However, questions loom about its impact on journalism. These worries echo the backlash from two years ago, when publishers discovered their content had been used, without consent, to train OpenAI’s models.
OpenAI markets ChatGPT Search as a platform to enhance publisher reach. Yet new research from the Tow Center for Digital Journalism, reported in the Columbia Journalism Review, reveals significant issues with the tool’s ability to accurately attribute and represent content. This undermines trust between OpenAI and the publishers working with it, creates a real risk of reputational damage for publishers whose content is misattributed or misrepresented, and poses challenges for newsrooms adopting AI technologies.
Unreliable attribution and false confidence
The Tow Center analyzed ChatGPT Search using 200 traceable quotes from 20 publishers, including those with licensing agreements, those in litigation, and unaffiliated entities. Traditional search engines consistently surfaced the original articles in the top three results.
ChatGPT Search, however, failed to correctly attribute 153 of the 200 quotes, more than three-quarters of the sample. It fabricated citations, credited rewritten versions of articles, or misattributed sources. Notably, in only seven cases did it admit being unable to locate the source, prioritizing plausible but incorrect answers over transparency.
Unlike traditional search engines, which clearly indicate when no match is found, ChatGPT delivers inaccurate citations with confidence, risking misleading users and damaging the credibility of referenced publishers. These findings underscore the risks of integrating AI-driven search tools into journalism amid ongoing struggles with content protection.
Accurate attribution is critical for news organizations to maintain trust, brand value, and loyalty. However, ChatGPT Search frequently distances users from original sources by misidentifying premium publications or favoring syndicated or plagiarized versions.
For example, when asked to identify a New York Times quote, ChatGPT attributed it to a site that had copied the article without credit. Such misrepresentation undermines intellectual property rights and rewards unethical practices. Similarly, it often cited syndicated versions of articles, such as attributing a Government Technology piece to MIT Technology Review, diluting the originating publisher’s visibility and impact.
These errors exacerbate publishers’ challenges with audience fragmentation and declining revenues. ChatGPT’s attribution flaws risk further eroding the vital connection between publishers and their readers.
Crawler policies and content control
In its marketing, OpenAI emphasizes its respect for publisher preferences via robots.txt files. The Tow Center’s findings suggest otherwise: publishers that block OpenAI’s crawlers are not immune to misrepresentation, and those allowing crawler access saw little improvement in citation accuracy.
For instance, despite blocking OpenAI crawlers, the New York Times experienced content misattributions. Publications like The Atlantic and the New York Post, which have licensing agreements and permit crawler access, also faced frequent errors.
This inconsistency highlights publishers’ limited control over how ChatGPT Search represents their content. Blocking crawlers does not guarantee protection, and opting in does not ensure better outcomes.
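For context, the control mechanism at stake is a few lines of robots.txt. The sketch below shows blocking directives and a standard-library way to verify them; the crawler names reflect OpenAI’s published user agents at the time of writing and may change.

```python
# Minimal sketch: verify which crawlers a robots.txt policy blocks,
# using only the Python standard library.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: OAI-SearchBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

for agent in ("GPTBot", "OAI-SearchBot", "Googlebot"):
    allowed = parser.can_fetch(agent, "https://example-publisher.com/article")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

As the Tow Center’s findings show, however, these directives only govern crawling; they cannot stop ChatGPT Search from citing rewritten or syndicated copies of the same content hosted elsewhere.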
Transparency, trust and revenue impact of ChatGPT errors
A core problem with ChatGPT Search lies in its lack of transparency. When it cannot access or verify a source, the AI often constructs plausible but inaccurate responses, leaving users unable to judge their reliability.
Unlike traditional search engines, which clearly signal when no results match a query, ChatGPT usually fails to communicate uncertainty. This opacity risks misleading users and eroding trust in both the platform and the referenced publishers.
The rapid adoption of AI-driven tools like ChatGPT Search poses significant challenges for publishers. With an estimated 15 million U.S. users starting their searches on AI platforms, the potential disruption to search-driven traffic is profound. Publishers reliant on visibility for subscriptions, advertising, or membership revenue face growing threats. If readers cannot reliably trace content to its source, publishers lose critical opportunities for engagement and monetization.
What needs to change
To foster a sustainable relationship with newsrooms, the Tow Center’s research recommends that OpenAI address these challenges:
Commit to transparent attribution: ChatGPT must accurately cite original sources or explicitly indicate when an answer cannot be provided.
Increase accountability: To address systemic issues, meaningful partnerships with newsrooms—beyond select licensing deals—are essential.
Enable publisher control: Tools empowering publishers to dictate content access and representation will signal good faith.
ChatGPT Search’s flaws underscore the tensions between generative AI platforms and the news industry. While OpenAI claims to collaborate with publishers, its inconsistent handling of content undermines trust and fails to protect intellectual property.
As generative AI reshapes the future of search, publishers must advocate for stronger safeguards and fairer partnerships to ensure their work is accurately represented and valued. By addressing these issues, AI platforms and newsrooms can build a foundation for mutual benefit, one that will ultimately serve consumers as well.