In the fast-paced world of digital publishing, the latest wave of developments in Artificial Intelligence (AI) has emerged as a welcome solution to the organized chaos of ad operations. Yet, despite its transformative potential, many media companies struggle with the adoption of AI technology. Costly implementation, complex integrations, and a shortage of AI-savvy professionals are hurdles slowing adoption to a snail’s pace.
For media executives looking to move faster, the answer is simple: purpose-built AI solutions. Forget everything you know about generic AI technology. The real magic happens with AI solutions built for one specific purpose. By embracing this tailored approach, media executives can accelerate AI adoption with solutions that deliver immediate value and growth.
But to harness AI’s power, understanding the strategic advantage of purpose-built AI solutions is crucial. These specialized tools can help media companies reduce implementation issues and offer tangible benefits. Let’s explore the common challenges in media operations that custom AI tools can address.
Unpacking the challenges in digital media operations
Operationally, digital publishers have their work cut out for them in today’s digital media ecosystem. Fragmented data, inefficient manual workflows, and the sheer complexity of managing ad operations create significant challenges. These issues slow down processes and hinder the ability to quickly adapt to market changes.
This is where a purpose-built AI solution can deliver a strategic advantage over a generic AI tool. Think of a generic AI tool as a Swiss Army knife – versatile but not specialized for any one task. In contrast, a purpose-built AI solution is like a precision scalpel, expertly designed for a specific function, ensuring optimal performance and efficiency in that area. Now, let’s explore how these tailored solutions can specifically address implementation challenges.
Finding and implementing AI solutions typically involves extensive testing and high costs. Purpose-built AI sidesteps these hurdles with pre-designed functionality that can be deployed quickly and efficiently.
Lower implementation costs
The initial investment in AI technology, including hardware, software, and skilled personnel, can be prohibitively expensive. However, purpose-built AI solutions are pre-designed for specific tasks, reducing the need for extensive custom development. This lowers both initial investment and ongoing costs.
Simplified integrations
Integrating AI systems with existing workflows often proves difficult, requiring significant time and resources. But purpose-built solutions are designed to integrate seamlessly with existing workflows and technologies, minimizing the complexity and time required for setup. They offer specific capabilities that streamline the integration process.
Unified data management
Disparate data sources and poor data quality hinder AI performance. According to Theorem’s research, 33% of ad professionals cite a lack of centralized tools as a major pain point. Purpose-built AI solutions consolidate data sources, improving quality and consistency. This unified approach enables more accurate insights, better decision-making, and more effective ad targeting.
User-friendly
There is often a shortage of professionals with the expertise needed to develop, implement, and maintain AI systems. With user-friendly interfaces and automated features, purpose-built AI solutions reduce the dependency on specialized AI talent. This makes it easier for existing staff to utilize and manage the AI system.
Faster deployment
These solutions are designed for specific workflows and processes, which shortens the development cycle and accelerates both deployment and team training. Organizations can rapidly implement the solution and hit the ground running.
With implementation challenges addressed, let’s turn to the tangible benefits and rapid results purpose-built AI solutions have to offer.
Benefits and strategic growth opportunities
Purpose-built, custom AI solutions offer a number of benefits and opportunities for growth, including:
Immediate value
With an AI solution specifically designed to automate ad operations, implementation and adoption shift from labor-intensive to quick and easy. This allows media companies to quickly realize productivity gains by tapping into ready-to-launch solutions almost instantly.
Scalability
These solutions are built to scale seamlessly with the company’s growth. As your business expands and evolves, purpose-built AI solutions can adapt to new requirements and increased workloads. This flexibility ensures sustained performance and supports long-term success without the need for constant reinvestment in new technologies.
Cost-effectiveness
Purpose-built AI solutions offer significant cost benefits: processes are streamlined, makegoods and errors decrease, and, as a result, implementation and operational costs fall.
New revenue generation
Purpose-built AI solutions can identify new revenue streams and optimize existing ones. For example, an AI solution built specifically to increase engagement through more targeted, personalized advertising can generate more ad revenue. Or consider the impact of a solution designed to predict what type of content will be popular in the future. This solution would allow publishers to focus on creating content that is more likely to attract and retain users, driving more revenue.
Maintaining a competitive edge in digital media’s turbulent ecosystem requires the ability to act swiftly and strategically. Understanding the benefits is just the beginning; now it’s time to take action.
Practical steps to drive quick adoption of purpose-built AI solutions
Implementing purpose-built AI solutions can be streamlined with the right approach. By following these steps, your organization can swiftly integrate AI technology and start reaping the benefits.
Start by identifying key areas where AI can have the most impact with a thorough assessment of current processes.
Prioritize those that promise the quickest wins and greatest value. Next, research and select AI solutions with capabilities that align with your business goals and workflow challenges.
Measure the potential impact on data, infrastructure and governance to ensure smoother AI adoption.
Identify training needs and assess any ethical considerations.
Carefully evaluate vendors based on functionality, ease of integration, and proven success.
Begin with a pilot implementation, test the solution in a controlled environment, gather feedback, and make necessary adjustments before a full rollout.
Investing in a purpose-built AI solution is a long-term strategy that yields ongoing benefits as the technology evolves. Much like a tailored suit versus one off the rack, it offers the precise fit and functionality needed to drive strategic growth. Those who embrace it now stand to reap immediate productivity gains, scalability, and cost-effectiveness.
The introduction of AI-generated search results is just the next step in a long line of platforms moving more audience interactions behind their walled gardens. This is an accelerating trend that’s not going to reverse. Google began answering common questions itself in 2012, Meta deepened its deprioritization of news in 2023, and now some analysts are predicting that AI search will cut traffic to media sites by 40% in the next couple of years.
It’s a dire prediction. Panic is understandable. The uncertainty is doubled by the sheer pace of AI developments and the fracturing of the attention economy.
However, this is another situation in which it is critical to focus on the fundamentals. Media companies need to develop direct relationships with audiences, double down on quality content, and use new technology to remove inefficiencies in their publishing operations. Yes, the industry has been doing this for decades. But there are new approaches in 2024 that can help publishers improve experiences and attract direct audiences.
All-in on direct relationships
When there’s breaking news, is the first thought in your audience’s mind opening your app, or typing in your URL? Or are they going to take the first answer they can get – likely from someone else’s channel?
Some media companies view direct relationships as a “nice to have” or as a secondary objective. If that’s the case, it’s time to make them a priority.
Whether direct relationships are already the top priority or not, now’s a good time to take a step back to re-evaluate the website’s content experience and the business model that supports it. Does it emphasize—above all else—providing an audience experience that encourages readers to create a direct relationship with your business?
When the cost to produce content is zero, quality stands out
This brings us to the avenue that drives direct relationships: your website, and your app. Particularly as search declines as a traffic source, these become the primary interaction space with audiences. We’ll follow up next month with frameworks for your product team to use to make your website and apps more engaging to further build your direct audience traffic.
It’s no longer about competing for attention on a third-party platform—for example through a sheer quantity of content about every possible keyword. It’s about making the owned platform compelling. Quality over quantity has never been more important.
Incorporating AI into editorial workflow
As content creation is increasingly commoditized by large language models (LLMs), the internet will fill up with generic noise—even more so than it already is.
Genuinely high-quality content will rise in appreciation, both among readers and the search engines that deliver traffic to them. Google is already punishing low-quality content. So are audiences. Teams using LLMs to generate entire articles whole-cloth are being downgraded by Google (and this approach is not likely to drive readers to you directly either).
But AI does have its uses. One big challenge in generating quality content is time. Ideally, technology gives time back to journalists. They’ll have extra time to dig into their research. They may gain another hour to interview more sources and find that killer quote. Editors have more time to really make the copy pop. The editorial team has more time for collaborating on the perfect news package. The list goes on.
AI is perfect for automating all the non-critical busywork that consumes so much time: generating titles, tags, excerpts, backlinks, A/B tests, and more. This frees up researchers, writers, and creatives to do the work that audiences value most, and deliver the content that drives readers to return to websites and download apps.
This approach has been emerging for a while now. For example, ChatGPT is great at creating suggestions for titles, excerpts, tags, and so on. However, there’s a new approach that’s really accelerating results: Retrieval Augmented Generation (RAG).
RAG is the difference maker when it comes to quality
Base-model LLMs are trained on the whole internet, not on a specific business. RAG brings an organization’s own data into AI generation. Journalists prompting ChatGPT cold will get “OK” results that they then need to spend time fixing. With RAG, they can ground the results in your particular style. That’s important for branding, and it also frees up creatives’ time for other things.
The next level not only uses content data, but also performance data to optimize RAG setups. This way, AI is not just generating headline suggestions or excerpts that match a particular voice, it’s also basing them on what has historically generated the most results.
In other words, instead of giving a newsroom ChatGPT subscriptions and saying “have at it,” media companies can use middleware that intelligently prompts LLMs using their own historical content and performance data.
Do this right and journalists, editors, and content optimizers can effortlessly generate suggestions for titles, tags, links, and more. These generations will be rooted in brand and identity, instead of being generic noise. This means the team doesn’t need to spend time doing all that manually, and can focus on content quality.
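To make that middleware idea concrete, here is a minimal sketch of a RAG-style suggestion flow, assuming an archive of past headlines tagged with click-through rates. The retrieval stub, field names, and model choice are illustrative assumptions, not any vendor’s actual product.

```python
# Hypothetical RAG middleware: prompt an LLM with an outlet's own
# high-performing headlines instead of asking it cold. The retrieval
# step is stubbed; a production system would query a vector store.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def retrieve_examples(draft: str, archive: list[dict], k: int = 5) -> list[dict]:
    """Stand-in for vector search: rank archived articles by word overlap
    with the draft, breaking ties in favor of higher click-through rates."""
    def overlap(article: dict) -> int:
        return len(set(draft.lower().split()) & set(article["headline"].lower().split()))
    return sorted(archive, key=lambda a: (overlap(a), a["ctr"]), reverse=True)[:k]

def suggest_headlines(draft: str, archive: list[dict]) -> str:
    examples = retrieve_examples(draft, archive)
    context = "\n".join(f'- "{a["headline"]}" (CTR {a["ctr"]:.1%})' for a in examples)
    prompt = (
        "You write headlines in this outlet's house style.\n"
        f"High-performing past headlines:\n{context}\n\n"
        f"Draft article:\n{draft}\n\n"
        "Suggest three headlines that match the style of the examples."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

The important design choice is that the model never generates in a vacuum: every suggestion is conditioned on what the brand has already published and on what performed.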
Using RAG to leverage the back catalog
Media companies have thousands upon thousands of articles published going back years. Some of them are still relevant. But the reality is that leveraging the back catalog effectively has been a difficult undertaking.
Humans can’t possibly remember the entirety of everything an organization has ever published. But machines can.
A machine plugged into the CMS can use Natural Language Processing (NLP) to understand the content currently being worked on—what is it about? Then it can check the back catalog for every other article on the topic. It can also rank each of those historical articles by which generated the most attention and which floundered. Then it can help staff insert the highest-performing links into current pieces.
Similarly, imagine the same process, just in reverse. By automating the updating of historical evergreen content with fresh links, new articles can immediately jump-start with built-in traffic.
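As a rough sketch of that linking loop, the example below uses TF-IDF similarity as a simple stand-in for the NLP step and weights matches by historical pageviews; the field names and similarity threshold are assumptions for illustration.

```python
# Hypothetical back-catalog linker: find archive articles related to a
# draft, weight them by historical attention, and propose internal links.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def suggest_internal_links(draft: str, archive: list[dict], k: int = 3) -> list[dict]:
    texts = [draft] + [a["body"] for a in archive]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
    sims = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    scored = [
        {**a, "score": sim * a["pageviews"]}  # relevance weighted by attention
        for a, sim in zip(archive, sims)
        if sim > 0.1  # skip articles that are barely related
    ]
    return sorted(scored, key=lambda s: s["score"], reverse=True)[:k]

archive = [
    {"url": "/ai-newsrooms", "body": "How newsrooms are adopting AI tools...", "pageviews": 90_000},
    {"url": "/rag-explainer", "body": "Retrieval augmented generation explained...", "pageviews": 40_000},
]
print(suggest_internal_links("A draft about AI tools in newsrooms", archive))
```

Run in reverse, the same scoring can nominate evergreen archive pieces that should link out to the new article.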
Removing silos between creation and analysis
While Google traffic might be declining, it will nonetheless remain important in this new world. And in this period of uncertainty, media organizations need to convert as much as possible of the traffic from this channel while it is still operating.
We call this “leaving the platforms behind.” Media companies should focus on getting as much of the traffic from search and other channels into first-party data collection funnels as possible. This way, they build enough of a moat to stay afloat even if any or all of these traffic channels disappear entirely.
Most teams today have dedicated SEO analysts who are essentially gatekeepers between SEO insights and content production. The SEO analysts aren’t going anywhere any time soon. But the new table stakes are that every journalist needs to be able to self-serve keyword insights.
It is important to use analytics tools that bring Search Console data directly into the approachable article analytics pages the editorial team already knows how to use. Ideally, analytics tools should connect keywords and other platform traffic to conversions, so everyone on your team can understand their impact on leaving the platforms behind.
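As one minimal sketch of what self-serve keyword insights could look like, the snippet below pulls per-article query data from Google’s Search Console API so it can be surfaced inside existing article analytics. The date range is a placeholder and credential setup is assumed to happen elsewhere.

```python
# Sketch: fetch the search queries driving traffic to one article so the
# data can be shown on the analytics page journalists already use.
from googleapiclient.discovery import build

def keywords_for_article(credentials, site: str, page_url: str) -> list[dict]:
    service = build("searchconsole", "v1", credentials=credentials)
    body = {
        "startDate": "2024-05-01",  # placeholder reporting window
        "endDate": "2024-05-31",
        "dimensions": ["query"],
        "dimensionFilterGroups": [{
            "filters": [{"dimension": "page", "operator": "equals",
                         "expression": page_url}],
        }],
        "rowLimit": 25,
    }
    response = service.searchanalytics().query(siteUrl=site, body=body).execute()
    # Each returned row carries clicks, impressions, CTR, and average position.
    return [
        {"query": row["keys"][0], "clicks": row["clicks"], "position": row["position"]}
        for row in response.get("rows", [])
    ]
```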
Done well, you’ll create a feedback loop that evolves and improves your content in a way that resonates with readers and machines.
Quality is all that matters
This is not the first “all hands on deck” moment for the media industry. That said, what we’re seeing is that the barometer of success is a truly aligned strategy and execution that brings product, business development, and editorial teams together to pursue first-party relationships with audiences. The organizations that have little brand identity, and pursue traffic instead of subscriptions, are suffering—and will likely continue to do so.
Last month, I co-led a week-long journalism program during which we visited 16 newsrooms, media outlets and tech companies in New York. This study tour provided an in-depth snapshot of the biggest issues facing the media today and offered insights into some of the potential solutions publishers are exploring to address them.
We met with everyone from traditional media players – like The New York Times, Associated Press, CBS and Hearst – to digital providers such as Complex Media and ProPublica, and also held conversations with academics and policy experts. Based on these visits and conversations, here are four key takeaways about the state of media and content publishing today.
1. Hands-on AI experience matters
Not surprisingly, AI dominated many conversations. Although recent research shows the American public is both skeptical and surprisingly unaware of these tools, the emergence of Generative AI – and the discussions around it – are impossible to ignore.
One mantra oft repeated throughout the week was that everyone in the media will need to be conversant with AI. Despite this, research has shown that many newsrooms are hesitant about adopting these technologies. Others, however, are taking a more proactive approach. “I like playing offense, not defense,” Aimee Rinehart, Senior Product Manager, AI Strategy at the Associated Press, told us. “Figure out how the tools work and your limits.”
With many media companies having to do more with less, AI can help improve workflows, support labor-intensive work like investigative journalism, as well as streamline and diversify content creation and distribution. By harnessing these AI-powered functions, smaller outlets may benefit the most, given the efficiencies these resource-strapped players may be able to unlock.
Reporting on AI is also an emerging journalistic beat. This is an area more newsrooms are likely to invest in, given AI’s potential to radically reshape our lives. As Hilke Schellmann, an Emmy Award-winning investigative reporter and journalism professor at NYU, told us, “We used to hold powerful people to account; now we have to add holding AI accountable.”
Echoing Schellmann’s sentiments, “every journalist should be experimenting with AI,” one ProPublica journalist said. “We owe it to our audience to know what this is capable of.”
2. Demonstrating distinctiveness and value is imperative
One fear of an AI-driven world is that traffic to publishers will tank as generative search, and tools like ChatGPT, remove the need for users to visit the sites of creators and information providers. In that environment, distinctive, trustworthy, and fresh content becomes more valuable than ever. “You need to produce journalism that gives people a reason to show up,” says Ryan Knutson, co-host of The Wall Street Journal’s daily news podcast, The Journal.
In response, publishers will need to demonstrate their expertise and unique voice. That means leaning more into service journalism, exclusives, and formats like explainers, analysis, newsletters, and podcasts.
Bloomberg’s John Authers exemplifies this in his daily Points of Return newsletter. With more than three decades of experience covering markets and investments, he brings a longitudinal and distinctive human perspective to his reporting. Alongside this, scoops still matter, Authers suggests. After all, “journalism is about finding out something other people don’t know,” he says.
Media players also need to make a more effective case as to why original content needs to be supported and paid for. As Gaetane Michelle Lewis, SEO leader at the Associated Press, put it, “part of our job is communicating to the audience what we have and that you need it.”
For a non-profit like ProPublica that means demonstrating impact. They publish three impact reports a year, and their Annual Report highlights how their work has led to change at a time when “many newsrooms can no longer afford to take on this kind of deep-dive reporting.”
“Our North Star is the potential to make a positive change through impact,” Communications Director, Alexis Stephens, said. And she emphasized how “this form of journalism is critical to democracy.”
The New York Times’ business model is very different but its publisher, A.G. Sulzberger, has similarly advocated for the need for independent journalism. As he put it, “a fully informed society not only makes better decisions but operates with more trust, more empathy, and greater care.”
Given the competition from AI, streaming services, and other sources of attention, media outlets will increasingly need to advocate more forcefully for support through subscriptions, donations, sponsorships, and advertising. In doing this, they’ll need to address what sets them apart from the competition, and why this matters on a wider societal level.
“This is a perilous time for the free press,” Sulzberger told The New Yorker last year. “That reality should animate anyone who understands its central importance in a healthy democracy.”
3. Analytics and accessibility go hand in hand
Against this backdrop, finding and retaining audiences is more important than ever. However, keeping their attention is a major challenge. Data from Chartbeat revealed that half the audiences visiting outlets in their network stay on a site for fewer than 15 seconds.
This has multiple implications. From a revenue perspective, this may mean users aren’t on a page long enough for ad impressions to count. It also challenges outlets to look at how content is produced and presented.
In a world where media providers continue to emphasize growing reader revenues, getting audiences to dig deeper and stay for longer, is essential. “The longer someone reads, the more likely they are to return,” explained Chartbeat’s CMO Jill Nicolson.
There isn’t a magic wand to fix this. Tools for publishers to explore include compelling headlines, effective formats, layout, and linking strategies. Sometimes, Nicolson said, even small modifications can make all the difference.
These efforts don’t just apply to your website. They apply to every medium you use. Brendan Dunne of Complex Media referred to the need for “spicy titles” for episodes of their podcasts and YouTube videos. Julia D’Apolito, Associate Social Editor at Hearst Magazines, shared how their approach to content might be reversed. “We’ve been starting to do social-first projects… and then turning them into an article,” she said, rather than the other way round.
Staff at The New York Times also spoke about the potential for counter-programming. One way to combat news fatigue and avoidance is to shine a light on your non-news content. The success of NYT verticals such as Cooking, Wirecutter, and Games shows how diversifying content can create a more compelling and immersive proposition, making audiences return more often.
Lastly, language and tone matter. As one ProPublica journalist put it, “My editor always says pretend like you’re writing for Sesame Street. Make things accurate, but simple.” Reflecting on their podcasts, Dunne also stressed the need for accessibility. “People want to feel like they’re part of a group chat, not a lecture,” he said.
Fundamentally, this also means being more audience-centric in the way that stories are approached and told. “Is the angle that’s interesting to us as editors the same as our audiences?” Nicolson asked us. Too often, the data would suggest, it is not.
4. Continued concern about the state of local news
Finally, the challenges faced by local news media, particularly newspapers, emerged in several discussions. Steven Waldman, the Founder and CEO of Rebuild Local News, reminded us that advertising revenue at local newspapers had dropped 82% in two decades. The issue is not “that the readers left the papers,” he said, “it’s that the advertisers did.”
For Waldman, the current crisis is an opportunity not just to “revive local news,” but also to “make better local news.” This means creating a more equitable landscape with content serving a wider range of audiences and making newsrooms more diverse. “Local news is a service profession,” he noted. “You’re serving the community, not the newsroom.”
According to new analysis, the number of partisan-funded outlets designed to appear like impartial news sources (so-called “pink slime” sites) now surpasses the number of genuine local daily newspapers in the USA. This significantly impacts the news and information communities receive, shaping their worldviews and decision-making.
Into this mix, AI is also rearing its ugly head. It can be hugely beneficial for some media companies—“AI is the assistant I prayed for,” says Paris Brown, associate editor of The Baltimore Times. However, it can also be used to fuel misinformation, accelerating pink slime efforts.
“AI is supercharging lies,” one journalist at ProPublica told us, pointing to the emergence of “cheap fakes” alongside “deep fakes” as content that can confirm existing biases. The absence of boots on the ground makes it harder for these efforts to be countered. Yet, as Hilke Schellmann reminded us, “in a world where we are going to be swimming in generative text, fact-checking is more important [than ever].”
This emerging battleground makes increased funding for local news all the more important. Legislative efforts, increased support from philanthropy, and other mechanisms can all play a role in helping grow and diversify this sector. Waldman put it plainly: “We have to solve the business model and the trust model at the same time.”
All eyes on the future
The future of media is being written today, and our visit to New York provided a detailed insight into the principles and mindsets that will shape these next few chapters.
From the transformative potential of AI, to the urgent need to demonstrate distinctiveness and value, it is clear that sustainability has to be rooted in adaptability and innovation.
Using tools like AI and analytics to inform decisions, while balancing this with a commitment to quality and community engagement, is crucial. Media companies that fail to harness these technologies are likely to be left behind.
In an AI-driven world, more than ever, publishers need to stand out or risk fading away. Original content, unique voices, counter-programming, being “audience first,” and other strategies can all play a role in this. Simultaneously, media players must also actively advocate for why their original content needs to be funded and paid for.
Our week-long journey through the heart of New York’s media landscape challenged the narrative that news media and journalism are dying. It isn’t. It’s just evolving. And fast.
The public has a knowledge gap around generative artificial intelligence (GenAI), especially when it comes to its use in news media, according to a recent study of residents in six countries. Younger people across countries are more likely to have used GenAI tools and to be more comfortable and optimistic about the future of GenAI than older people. And a higher level of experience using GenAI tools appears to correlate with a more positive assessment of their utility and reliability.
Over two thousand residents in each of six countries were surveyed for the May 2024 report What Does the Public in Six Countries Think of Generative AI in News? (Reuters Institute for the Study of Journalism). The countries surveyed were Argentina, Denmark, France, Japan, the UK and the USA.
Younger people more optimistic about GenAI
Overall, younger people had higher familiarity and comfort with GenAI tools. They were also more optimistic about future use and more comfortable with the use of GenAI tools in news media and journalism.
People aged 18-24 in all six countries were much more likely to have used GenAI tools such as ChatGPT, and to use them regularly, than older respondents. Averaging across countries, only 16% of respondents 55+ report using ChatGPT at least once, compared to 56% aged 18 to 24.
Respondents 18-24 are much more likely to expect GenAI to have a large impact on ordinary people in the next five years. Sixty percent of people 18-24 expect this, while only 40% of people 55+ do.
In five out of six countries surveyed, people aged 18-34 are more likely to expect GenAI tools to have a positive impact on their own lives and on society. However, Argentina residents aged 45+ broke rank, expressing more optimism about GenAI improving both their own lives and society at large than younger generations.
Many respondents believe GenAI will improve scientific research, healthcare, and transportation. However, they express much more pessimism about its impact on news and journalism and job security.
Younger people, while still skeptical, have more trust in responsible use of GenAI by many sectors. This tendency is especially pronounced in sectors viewed with greatest skepticism by the overall public – such as government, politicians, social media, search engines, and news media.
Across all six countries, people 18-24 are significantly more likely than average to say they are comfortable using news produced entirely or partly by AI.
People don’t regularly use GenAI tools
Even the youngest generation surveyed reports infrequent use of GenAI tools. However, if the correlation between greater GenAI use and greater optimism and trust holds at a broader scale, trepidation will likely fade as more people become comfortable using GenAI tools regularly.
Between 20% and 30% of the online public across countries have not heard of any of the most popular AI tools.
While ChatGPT proved by far the most recognized GenAI tool, only 1% of respondents in Japan, 2% in France and the UK, and 7% in the U.S. say they use ChatGPT daily. Eighteen percent of the youngest age group report using ChatGPT weekly, compared to only 3% of those aged 55+.
Only 5% of people surveyed across the six countries report using GenAI to get the latest news.
It’s worth noting that the populations surveyed were in affluent countries with higher-than-average education and internet connectivity levels. Countries that are less affluent, less free, and less connected likely have even fewer people experienced with GenAI tools.
The jury is out on public opinion of GenAI in news
A great deal of uncertainty prevails around GenAI use among all people, especially those with lower levels of formal education and less experience using GenAI tools. Across all six countries, over half (53%) of respondents answered “neither” or “don’t know” when asked whether GenAI will make their lives better or worse. Most, however, think it will make news and journalism worse.
When it comes to news, people are more comfortable with GenAI tools being used for backroom work such as editing and translation than they are with its use to create information (writing articles or creating images).
There is skepticism about whether humans are adequately vetting content produced using GenAI. Many believe that news produced using GenAI tools is less valuable.
Users have more comfort around GenAI use to produce news on “soft” topics such as fashion and sports, much less to produce “hard” news such as international affairs and political topics.
Thirty percent of U.S. and Argentina respondents trust news media to use GenAI responsibly. Only 12% in the UK and 18% in France agree. For comparison, over half of respondents in most of the countries trust healthcare professionals to use GenAI responsibly.
Most of the public believes it is very important to have humans “in the loop” overseeing GenAI use in newsrooms. Almost half surveyed do not believe that is happening. Across the six-country average, only a third believe human editors “always” or “often” check GenAI output for accuracy and quality.
A cross-country average of 41% say that news created mostly by AI will be “less worth paying for,” while 32% answered “about the same” and 19% said they don’t know.
Opportunities to lead
These findings present a rocky road for news leaders to traverse. However, they also offer an opportunity to fill the knowledge gap with information that is educational and reassuring.
Research indicates that the international public overall values transparency in news media as a general practice, and blames news owners and leadership (rather than individual journalists) when it is lacking. However, some research shows users claim to want transparency around GenAI tools in news, but trust news less once they are made aware of its use.
The fact that the public at large is still wavering presents an opportunity for media leaders to get out in front on this issue. Creating policy and providing transparency around the use of GenAI tools in news and journalism is critical. News leaders especially need to educate the public about their standards for human oversight around content produced using GenAI tools.
These days, digital media companies are all trying to figure out how to best incorporate AI into their products, services and capabilities, via partnerships or by building their own. The goal is to gain a competitive edge as they tailor AI capabilities to their audiences, subscribers and clients’ specific needs.
By leveraging proprietary Large Language Models (LLMs), digital media companies have a new tool in their toolboxes. These offerings provide differentiation and added value, enhanced audience engagement, and a better user experience. They also set these companies apart from those opting for licensing partnerships with third-party LLMs, which offer more generalized knowledge bases and draw from a wide range of sources in terms of subject matter and quality.
A growing number of digital media companies are rolling out their own LLM-based generative AI features for search and data-based purposes to enhance user experience and create fine-tuned solutions. In addition to looking at several of the offerings media companies are bringing to market, we spoke to Dow Jones, Financial Times and Outside Inc. about the generative AI tools they’ve built and explore the strategies behind them.
Media companies fuel generative AI for better solutions
Digital media companies are harnessing the power of generative AI to unlock the full potential of their own – sometimes vast – stores of proprietary information. These new products allow them to offer valuable, personalized, and accessible content to their audiences, subscribers, customers and clients.
Take, for example, Bloomberg, which released a research paper in March detailing the development of its new large-scale generative AI model called BloombergGPT. The LLM was trained on a wide range of financial data to assist Bloomberg in improving existing financial natural language processing (NLP) tasks, such as sentiment analysis, named entity recognition, news classification, and question answering, among others. In addition, the tool will help Bloomberg customers organize the vast quantities of data available on the Bloomberg Terminal in ways that suit their specific needs.
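Bloomberg has not published BloombergGPT’s interface, but as a generic illustration of those financial NLP tasks, a prompt like the one below asks an off-the-shelf LLM for sentiment, entities, and a category in a single pass; the model name and headline are placeholders.

```python
# Generic illustration of financial NLP via an LLM (not BloombergGPT):
# sentiment analysis, entity recognition, and classification in one call.
from openai import OpenAI

client = OpenAI()

headline = "Acme Corp shares slide 8% after weak Q3 guidance"  # made-up example
prompt = (
    "For the following financial headline, return JSON with keys "
    "'sentiment' (positive/negative/neutral), 'entities' (companies "
    "mentioned), and 'category' (earnings, M&A, macro, or other):\n"
    + headline
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any instruction-tuned model works
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```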
Fortune, meanwhile, partnered with Accenture to create a generative AI product called Fortune Analytics, launched in beta June 4. The tool delivers ChatGPT-style responses based on 20 years of financial data from the Fortune 500 and Global 500 lists, as well as related articles, and helps customers build graphic visualizations.
Generative AI helps customers speed up processes
A deeper discussion of how digital media companies are using AI provides insights to help others understand the potential to leverage the technology for their own needs. Dow Jones, for example, uses generative AI for a platform that helps customers meet compliance requirements.
Dow Jones Risk & Compliance is a global provider of risk and compliance solutions for banks and corporations, helping organizations perform checks on their counterparties. Those checks support compliance with anti-money laundering and anti-corruption regulation, and also help mitigate supply chain risk and reputational issues. The unit provides tools that allow customers to search data sets and manage regulatory and reputational risk.
In April, Dow Jones Risk & Compliance launched an AI-powered research platform for clients that enables organizations to build an investigative due diligence report covering multiple sources in as little as five minutes. Called Dow Jones Integrity Check, the research platform is a fully automated solution that goes beyond screening to identify risks and red flags from thousands of data sources.
The planning for Dow Jones Integrity Check goes back a few years, as the company sought to provide its customers with a quicker way to do due diligence on their counterparties, explained Joel Lange, Executive Vice President and General Manager of Risk and Research at Dow Jones.
Lange said that Dow Jones effectively built a platform that automatically creates a report for customers on a person or company, using technology from AI firm Xapien. It brings together Dow Jones’ data with other data sets, corporate registry information, and wider web content, then leverages the platform’s generative AI capability to produce a piece of analysis or a report.
Dow Jones Risk & Compliance customers use their technology to make critical, often complex, business decisions. Often the data collection process can be incredibly time consuming, taking days if not weeks.
The new tool “provides investigations teams, banks and corporations with initial due diligence. Essentially it’s a starting point for them to conduct their due diligence, effectively automating a lot of that data collection process,” according to Lange.
Lange points out that the compliance field is always in need of increased efficiency. However, it carries with it great risk to reputation. Dow Jones Integrity Check was designed to reshape compliance workflows, creating an additional layer of investigation that can be deployed at scale. “What we’re doing here is enabling them to more rapidly and efficiently aggregate, consolidate, and bring information to the fore, which they can then analyze and then take that investigation further to finalize an outcome,” Lange said.
Regardless of the quality of the generated results, most experts believe that it is important to have a human in the loop in order to maintain content accuracy, mitigate bias, and enhance the credibility of the content. Lange also said that it’s critical to have “that human in the loop to evaluate the information and then to make a decision in relation to the action that the customer wants to take.”
In recent months, digital media companies have been launching their own generative AI tools that allow users to ask questions in natural language and receive accurate and relevant results.
The Associated Press created Merlin, an AI-powered search tool that makes searching the AP archive more accurate. “Merlin pinpoints key moments in our videos to the exact second and can be used for older archive material that lacks modern keywords or metadata,” explained AP Editor in Chief Julie Pace at The International Journalism Festival in Perugia in April.
Outside’s Scout: AI search with useful results
Chatbots have become a popular form of search. Originally able to answer only the select questions included in their programming, chatbots have evolved and increased engagement by providing a conversational interface. Used for everything from organizing schedules and news updates to customer service inquiries, generative AI-based chatbots help users find information more efficiently across a wide range of industries.
Like The Guardian, The Washington Post, The New York Times and other digital media organizations that blocked OpenAI from using their content to power artificial intelligence, Outside Inc. wasn’t going to let third parties scrape its platforms to train LLMs, CEO Robin Thurston explained.
Instead, they looked at leveraging their own content and data. “We had a lot of proprietary content that we felt was not easily accessible. It’s almost what I’d call the front page problem, which is you put something on the front page and then it kind of disappears into the ether,” Thurston said.
“We asked ourselves: How do we create something leveraging all this proprietary data? How do we leverage that in a way that really brings value to our user?” Thurston said. The answer was Scout, Outside Inc.’s custom-developed AI search chatbot.
The company could see that generative AI offered a way to make that content accessible and even more useful to its readers. Outside’s brands inspire and inform audiences about outdoor adventures, new destinations and gear – much of it evergreen, proprietary content that still had value if its audience could easily surface it. The chat interface keeps that content accessible to readers after it is no longer front and center on the website.
Scout gives users a summary answer to their question, leveraging Outside Inc’s proprietary data, and surfaces articles that it references. “It’s just a much more advanced search mechanism than our old tool was. Not only does it summarize, but it then returns the things that are most relevant,” he explained.
Additionally, Outside Inc.’s old search function worked brand by brand. Scout searches across the 20+ properties owned by the parent company – including Backpacker, Climbing, SKI Magazine, and Yoga Journal – and brings all of the results together in one place, from the best camping destinations and trails to family outdoor activities, gear, equipment and food.
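Outside hasn’t detailed Scout’s internals, but the cross-brand behavior described above implies a federated retrieval step along these lines; the per-brand search functions here are hypothetical stand-ins.

```python
# Illustrative sketch only: query each brand's index, merge the hits,
# and hand one ranked list to the answer generator.
from typing import Callable

SearchFn = Callable[[str], list[tuple[str, float]]]

def federated_search(query: str, brand_indexes: dict[str, SearchFn],
                     k: int = 10) -> list[dict]:
    hits = []
    for brand, search in brand_indexes.items():
        for doc, score in search(query):  # each brand's own index
            hits.append({"brand": brand, "doc": doc, "score": score})
    hits.sort(key=lambda h: h["score"], reverse=True)
    return hits[:k]  # one merged result list across all properties

# Usage with stubbed per-brand search functions:
indexes = {
    "Backpacker": lambda q: [("Best family-friendly trails", 0.92)],
    "SKI Magazine": lambda q: [("Beginner ski gear guide", 0.81)],
}
print(federated_search("family outdoor trips", indexes))
```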
One aspect that sets Outside Inc.’s model apart is their customer base, which differs from general news media customers. Outside’s customers engage in a different type of interaction, not just a quick transactional skim of a news story. “We have a bit of a different relationship in that they’re not only getting inspiration from us, which trip should I take? What gear should I buy? But then because of our portfolio, they’re kind of looking at what’s next,” Thurston said.
It was important to Thurston to use the LLM in a number of different ways, so Outside Inc launched a local newsletter initiative with the help of AI. “On Monday mornings we do a local running, cycling and outdoor newsletter that goes to people that sign up for it, and it uses that same LLM to pick what types of routes and content for that local newsletter that we’re now delivering in 64,000 ZIP codes in the U.S.”
Thurston said they had a team working on Scout and it took about six months. “Luckily, we had already built a lot of infrastructure in preparation for this in terms of how we were going to leverage our data. Even for something like traditional search, we were building a backend so that we could do that across the board. But this is obviously a much more complicated model that allows us to do it in a completely new way,” he said.
Connecting AI search to a real subscriber need
In late March, The Financial Times released its first generative AI feature for subscribers called Ask FT. Like Scout, the chat-based search tool allows users to ask any question and receive a response using FT content published over the last two decades. The feature is currently available to approximately 500 FT Professional subscribers. It is powered by the FT’s own internal search capabilities, combined with a third-party LLM.
The tool is designed to help users understand complicated issues or topics, like Ireland’s offshore energy policy, rather than just searching for specific information. Ask FT searches through Financial Times (FT) content, generates a summary and cites the sources.
“It works particularly well for people who are trying to understand quite complex issues that might have been going on over time or have lots of different elements,” explained Lindsey Jayne, the chief product officer of the Financial Times.
Jayne explained that they spend a lot of time understanding why people choose the FT and how they use it. People read the FT to understand the world around them, to have a deep background knowledge of emerging events and affairs. “With any kind of technology, it’s always important to look at how technology is evolving to see what it can do. But I think it’s really important to connect that back to a real need that your customers have, something they’re trying to get done. Otherwise it’s just tech for the sake of tech and people might play with it, but not stick with it,” she said.
Trusted sources and GenAI attribution
Solutions like those from Dow Jones, the FT and Outside Inc. highlight the power of a trusted brand: deep, authentic audience relationships built on reliability and credibility. Trusted media brands are considered authoritative because their content is based on credible sources and facts, which ensures accuracy.
Currently, generative AI has demonstrated low accuracy and poses challenges to sourcing and attribution. Attribution is a central feature for digital media companies that roll out their own generative AI solutions. For Dow Jones compliance customers, attribution is critical: they need to know the provenance of information in the media before basing decisions on it, according to Lange.
“They need to have that attributed to within the solution so that if it’s flowing into their audit trails or they have to present that in a court of law, or if they would need to present it to our internal audit, the attribution is really key. (Attribution) is going to be critical for a lot of the solutions that will come to market,” he said. “The attribution has to be there in order to rely on it for a compliance use case or really any other use case. You really need to know where that fact or that piece of information or data actually came from and be able to source it back to the underlying article.”
The Financial Times’ generative AI tool also offers attribution to FT articles in all of its answers. Ask FT pulls together lots of different source material, generates an answer, and attributes it to various FT articles. “What we ask the large language model to do is to read those segments of the articles and to turn them into a summary that explains the things you need to know and then to also cite them so that you have the opportunity to check it,” Jayne said.
They also ask the FT model to infer from people’s questions which time period it should search. “Maybe you’re really interested in what’s happened in the last year or so, and we also get the model to reread the answer, reread all of the segments and check that, as kind of a guard against hallucination. You can never get rid of hallucination totally, but you can do lots to mitigate it.”
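The FT hasn’t released Ask FT’s code, but the pipeline Jayne describes – summarize retrieved segments with citations, then reread them as a guard against hallucination – might be sketched like this; the model name and prompts are assumptions.

```python
# Sketch of a cite-then-verify answer pipeline (not the FT's actual code).
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def answer_with_citations(question: str, segments: list[str]) -> str:
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(segments))
    draft = ask(
        "Answer the question using ONLY these article segments, citing "
        f"them as [n].\nSegments:\n{numbered}\n\nQuestion: {question}"
    )
    # Second pass: reread the segments and the draft as a hallucination guard.
    return ask(
        f"Segments:\n{numbered}\n\nDraft answer:\n{draft}\n\n"
        "Remove or flag any claim not supported by the segments; keep the citations."
    )
```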
The Financial Times is also asking for feedback from the subscribers using the tool. “We’re literally reading all of the feedback to help understand what kinds of questions work, where it falls down, where it doesn’t, and who’s using it, why and when.”
Leaning into media strengths and adding a superpower
Generative AI seems to have created unlimited opportunities, along with considerable challenges, questions and concerns. However, it is clear that one asset many media companies possess is a deep reservoir of quality content, and it is good for business to extract the most value from the investment in its creation. Leveraging their own content to train and power generative AI tools that serve readers is a very promising application.
In fact, generative AI can give trustworthy sources a bit of a superpower. Jayne from the FT offered the example of scientists using the technology to read through hundreds of thousands of research papers and find patterns and important connections – a process that would otherwise take years.
While scraped-content LLMs pose risks to authenticity, accuracy and attribution, proprietary learning models offer a promising alternative.
As Jayne put it, the media has “an opportunity to harness what AI could mean for the user experience, what it could mean for journalism, in a way that’s very thoughtful, very clear and in line with our values and principles.” At the same time, she cautions that we shouldn’t be “getting overly excited because it’s not the answer to everything – even though we can’t escape the buzz at the moment.”
We are seeing many efforts bump up against the limits of what generative AI is able to do right now. However, media companies can avoid some of generative AI’s current pitfalls by employing the technology’s powerful language prediction, data processing and summarization capabilities while leaning into their own strengths of authenticity and accuracy.
New technologies will be critical to the media landscape in 2024, converging with trends towards immersive, personalized experiences and the increased impact of the creator economy, according to Arthur D. Little’s State of the Media Market 2024. The report is subtitled “Back to Balance: A Year of Prudent Economic Expectations,” reflecting the authors’ belief in the sector’s recovery and stabilization following a rocky 2023. Read on for a few takeaways from this extensive report.
The media embraces new technologies
A persistent theme in the ADL report is the need to employ new technologies to improve operations, engage new audiences, and customize experiences.
Artificial Intelligence (AI) and Machine Learning (ML) continue to transform the media landscape, helping to automate manual processes, personalize content and experiences, and enable data-driven decision-making to power industry growth. However, for all its utility and potential, AI is a powder keg of potentially explosive issues, as seen during the WGA strikes (which resulted in greater protections and compensation for writers). The ADL report maintains that early adopters will benefit from AI innovations, even as the regulatory and ethical landscape around AI continues to evolve.
VR and AR add dimension to immersive experiences for customers and will increasingly merge with other new technologies in the development of cutting-edge user experiences.
Cloud computing facilitates agility and reduces costs. Cloud gaming continues to expand globally, driven in part by immersive experience.
Big data and analytics should be wisely employed to discover customer preferences and behavior and inform industry decision-making.
Social media continues to be vital to the overall media industry, with huge capacity to engage audiences, build brand awareness, and boost content discovery. Platforms such as Twitch, Reddit, Discord, and TikTok are enticing content creators with AI tools that facilitate video and music editing, while also developing tools to label AI-generated content.
Audio is a big opportunity
Perhaps it’s a sign of multitasking culture, but the public’s appetite for music, podcasts, and audiobooks has remained robust and is forecast to remain strong.
Music streaming saw almost double-digit growth globally during the pandemic, and that growth is forecast to continue at a somewhat slower but still steady rate. The U.S. was the main driver, contributing about 40% of the growth in the global music streaming market in 2024. Spotify continues to dominate as a platform. Most streaming services increased consumer prices in 2023 but also expanded options such as audiobooks and podcasts.
Podcasts are still climbing in popularity and attracting advertisers. A significant portion of the public is tuning in to news podcasts, especially in the U.S., where 19% of residents surveyed have tuned into a news podcast in the last month, compared to an average of 12% globally. Sweden is just behind the U.S. in news podcast use at 17%, with the UK lagging at only 8%, according to the ADL study.
Audiobooks remain popular overall and will benefit from booms in education publishing (which is expected to achieve double-digit growth between 2020 and 2025) and in self-publishing. Spotify has moved into the audiobooks business, offering 15 free hours of audiobook listening to paid subscribers in the U.S., UK, and Australia.
Traditional news vs. the “creator” economy
Creator culture and the resulting creator economy have grown, and AI tools are making it even easier for individuals to create and edit content. Brands are recognizing the power of influencer marketing and giving creators more leeway to put forth fresh, albeit less polished, content.
A flipside of the enthusiasm for interactivity and user creation is declining interest in newsprint and linear television. Younger generations are driving this change. In the UK, only people aged 55 and older cited television as their primary source of news (42%). Those under age 45 showed a strong preference for online sites and apps as news sources, followed by social media. People under 25 relied on social media above all, with 41% of people in that age group citing it as their main source of news, according to the survey.
A concerning aspect of this trend is the lack of regulation, which makes misinformation much easier to launch and spread. Print news struggles to compete with free but often less reliable digital news platforms. Only a small minority of all age groups (ranging from 6% of people 55+ to 0% of those 45-54) in the ADL’s UK survey cited print as their primary source of news. Bundling and partnerships may be one path to combine more traditional linear media sources with more fluid and creator-friendly platforms.
Recommendations for media companies
In addition to the key theme of embracing and leveraging new technologies, the report’s authors offer a few more recommendations.
Forge strategic partnerships to reach new audiences, pool resources, and share expertise.
Balance user privacy with data-driven decision-making.
Invest in customer relationships, using new technologies to better understand and communicate with users and tailor content accordingly.
Deliver excellent content and experiences. There’s no substitute for outstanding content. Audiences seek high quality, engaging, unique experiences, so media leaders must invest in content that rises above that of competitors.
News has long relied on the power of visuals to tell stories: first through illustrations and more recently through photography and video. The recent rise in access to generative AI tools for making and editing images offers photojournalists, video producers and other journalists exciting new possibilities. However, it also poses unique challenges at each stage of the planning, production, editing, and publication process.
As an example, AI-generated assets can suffer from algorithmic bias. Therefore, organizations that use AI carelessly run the risk of reputational damage.
Despite the risks, however, a recent Associated Press report found that one in five journalists uses generative AI to make or edit multimedia. But how, specifically, are journalists using these tools, and what should other journalists and media managers look out for?
With Ryan J. Thomson and Phoebe Matich, I recently undertook a study of how newsroom workers perceive and use generative visual AI in their organizations. That study, “Generative Visual AI in News Organizations: Challenges, Opportunities, Perceptions, and Policies,” uses interviews with newsroom personnel at 16 leading news organizations in seven countries, including the U.S. It reveals how newsroom leaders can protect their organizations from the dangers of careless generative visual AI use while also harnessing its possibilities.
Challenges for deploying AI visuals in newsrooms
Mis/disinformation
Those interviewed were most worried about the way in which generative AI tools or outputs can be used to mislead or deceive. This can happen even without ill intent. In the words of one of the editors interviewed:
When it comes to AI-generated photos, regardless of if we go the extra mile and tell everyone, “Hey, this is an AI-generated image” in the caption and things like that, there will still be a shockingly large amount of people who won’t see that part and will only see the image and will assume that it’s real and I would hate for that to be the risk that we put in every time we decide to use that technology.
The World Economic Forum has named the threat of AI-fuelled mis/disinformation as the world’s greatest short-term risk. They rank it above other pressing issues, such as armed conflict and climate change.
Labor concerns
The second biggest challenge, interviewees said, was the threat that generative AI posed to lens-based workers and other visual practitioners within news organizations. AI-generated visual content is much cheaper to produce than bespoke commissioned content, but the interviewees noted that the quality is, of course, different.
An editor in Europe said he didn’t think AI tools themselves would take people’s jobs. Rather, he felt that those who apply these tools well would be hired over those who don’t, since they can make the newsroom more efficient.
Copyright
The third biggest challenge, according to the interviewees, was copyright concerns around AI-generated visual content. In the words of one of the editors interviewed:
“Programs like Midjourney and DALL-E are essentially stealing images and stealing ideas and stealing the creative labor of these illustrators and they’re not getting anything in return.”
Many text-to-image generators, including Stable Diffusion, Midjourney, and DALL-E, have been accused of training their models on vast swathes of copyrighted content online. The two biggest players in the market that say they are taking a different approach are Adobe (with its generative AI offering, Firefly) and Getty (with its offering, Generative AI by Getty Images).
Both claim they train their generators only on proprietary content or on content they have a license to use, which makes using them less legally risky. (Although Adobe was later discovered to have trained its model partially on Midjourney images.)
The downside of not indiscriminately scraping the web for training data is that it limits the outputs that are possible. Firefly, for example, wasn’t able to fully render the prompt “Donald Trump on the Steps of the Supreme Court.” It returned four images of the building itself, sans Trump, along with the error message: “One or more words may not meet User Guidelines and were removed.”
On its help center, Adobe notes, “Firefly only generates images of public figures available for commercial use on the Stock website, excluding editorial content. It shouldn’t generate public figures unavailable in the Stock data.”
Detection issues
The fourth biggest challenge was that journalists themselves didn’t always know when AI had been used to make or edit visual assets. Some of the traditional ways to fact-check images don’t always work for those made by or edited with AI.
Some participants mentioned the Content Authenticity Initiative and its Content Credentials, a kind of tamper-evident metadata used to show the history of an image. However, they also lamented significant barriers to implementation, including having to buy new cameras equipped with the Content Credentials technology and having to redevelop their digital asset management systems and websites to work with and display the credentials. Considering that at least half of all Americans get at least some news from social media platforms, Content Credentials will only be effective if they are adopted widely, across the industry and by the big tech platforms alike.
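For the technically curious, the mechanics are straightforward to inspect even if full verification is not. In JPEG files, C2PA Content Credentials travel as JUMBF boxes inside APP11 marker segments. The Python sketch below is a rough heuristic only: it scans a JPEG’s metadata segments and flags APP11 payloads that may carry a manifest. Actually validating a credential’s cryptographic signature requires a full C2PA implementation, such as the Content Authenticity Initiative’s open-source c2patool.

```python
import struct
import sys

def find_app11_segments(path):
    """Scan a JPEG's marker segments for APP11 (0xFFEB) segments,
    where C2PA/Content Credentials manifests are typically embedded.
    Heuristic only: an APP11 segment does not prove a valid credential."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG file")
    hits = []
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # malformed stream; stop scanning
        marker = data[i + 1]
        if marker == 0xDA:  # start-of-scan: metadata segments are over
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker == 0xEB:  # APP11
            hits.append((i, length))
        i += 2 + length  # skip the two marker bytes plus the payload
    return hits

if __name__ == "__main__":
    segments = find_app11_segments(sys.argv[1])
    if segments:
        print(f"{len(segments)} APP11 segment(s) found; file may carry Content Credentials.")
    else:
        print("No APP11 segments; no embedded Content Credentials detected.")
```

The interviewees’ point stands, though: a check like this only matters if cameras write the credentials, publishing systems preserve them, and platforms display them.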
Despite these significant risks and challenges, newsroom workers also imagined ways that the technology could be used in productive and beneficial ways.
Opportunities for deploying AI tools and visuals in newsrooms
Creating illustrations
The newsroom employees interviewed were most comfortable with using generative AI to create illustrations that were not photorealistic. AI can be helpful to illustrate hard-to-visualize stories, like those dealing with bitcoin or with AI itself.
Brainstorming and idea generation
Those interviewed also thought generative AI could be used for story research and inspiration. Instead of just looking at Pinterest boards or conducting a Google Image search, journalists imagined asking a chatbot for help with how to show challenging topics, like visualizing the depth of the Mariana Trench. Interviewees also thought generative AI could be used to create mood boards to quickly and concretely communicate an editor’s vision to a freelancer.
Visualizing the past or future
Journalists also thought the potential existed to help them show the past or future. In one editor’s words:
“We always talk about how like it’s really hard to photograph the past. There’s only so much that you can do in terms of pulling archival images and things like that.”
This editor thought AI could be used in close consultation with relevant sources to narrate and then visualize how something looked in the past. Image-to-video AI tools like Runway can allow you to bring a historical still image to life or to describe a historical scene and receive a video in return.
More guidance (and research) needed
From our research, which also discusses the principles and policies newsrooms have in place to guide the responsible use of AI, it is clear that the media industry finds itself at another major crossroads. As with each evolution of the craft, there are opportunities to explore and risks to be evaluated. But from what we saw, journalists need more guardrails to guide their use of these tools while still allowing for experimentation and innovation in ethically sound and responsible ways.
The environment for collecting and using data on the web has often been compared to the wild west: a place with no rules, where only the strong (and often morally questionable) survive. Unfortunately, generative AI technology is developing in a similar vacuum of governance and ethical leadership.
Since the early days of the Internet, hundreds if not thousands of venture-backed companies have competed to scoop up as much data as possible about consumers, then tried to spin those datasets into a compelling product or service, usually built on a model of data-driven advertising. Nowadays, Meta and Google are the most often cited aggressive data collectors, though arguably that’s because they killed off the competition and strong-armed their way into a dominant market position.
Google’s parent company, Alphabet, collects massive amounts of data from Android devices, Google services and apps (Search, Maps, Gmail, etc.), and Chrome. It has even delayed killing off third-party cookies in Chrome (the last major browser to do so) because it hasn’t developed a good way to maintain its dominant position as a collector of consumer data.
Data vacuum meets governance vacuum
Meta set out to hoover up so much consumer data, directly or indirectly, that it failed to have controls in place around who could collect it or the purposes for which it could be used (see Cambridge Analytica). Lest we think this behavior is a thing of the past, another cringeworthy example recently came to light when court documents were unsealed: Meta was reportedly using Onavo (a VPN it purchased in 2013) as a Trojan horse to gather valuable analytics data on Snapchat, Amazon, and YouTube. Meta is now being sued for violating wiretapping laws.
While regulators and legislative bodies are working to clean up the debris left in the aftermath of the wild west data industry, the race to compete in the Generative AI market might take data collection to a whole new level, likely with unforeseen and potentially catastrophic results.
Large Language Models (LLMs) need data to get better – lots of it. The hockey-stick progress we’ve seen in the last 18 months among generative AI systems is almost completely attributable to the massive increase in the datasets on which the LLMs are trained. The New York Times recently reported on the red-hot competition among AI companies to find new training data, with companies scraping any and all content they can get their hands on. And this is taking place with no regard for copyright law, terms of use, or consumer privacy laws (and without any respect for consumers’ reasonable expectations of privacy).
That said, as The New York Times’ article also notes, AI systems may exhaust all available training data by 2026. In the absence of high-quality original data, they might even turn to synthetic data – data created by AI systems – for training. Who knows what consequences that could yield?
Legal safeguards needed for generative AI
Sure, there are some existing safeguards that could help set a more responsible course. AI companies have been confronted with numerous legal challenges to their unfettered data collection, and they face a number of copyright infringement lawsuits as well. However, these suits could take years to fully play out, given that the AI companies are well funded and would likely appeal any setbacks in court.
There are privacy laws on the books that likely affect data collection by AI companies. But those laws exist only in a handful of states, and it’s not clear exactly how they apply, since AI companies won’t disclose what and whose data they use for training.
Against this bleak backdrop, there have been some promising recent developments around generative AI governance in Congress. This week, a new bipartisan consumer privacy bill was unveiled. While there are some serious concerns and questions to address in that bill, at least the issue is front and center. At the same time, Members of Congress from both parties appear to be actively and constructively wrestling with how best to regulate the emerging AI industry. In fact, nearly every AI bill that has been introduced is bipartisan in nature.
As the wild west of data collection gets even wilder, it’s clear we need basic rules for AI systems and stronger protections for consumers. Without this, we are likely doomed to repeat the mistakes of the previous data collection bonanza – possibly with far more severe consequences.
From Google to Facebook and Instagram to TikTok (and so many more), publishers have spent the last couple of decades chasing their audiences from one platform to another—only to be betrayed by changing algorithms and shifting platform priorities. For years, popular wisdom held that you had to go where the audience is. Now, despite the fact that audiences (particularly younger ones) seek out news and information on social platforms, those platforms are “backing away” from making that content visible. But regardless of a media brand’s position on social media, search has remained the undisputed path to traffic.
Now, publishers face a whole new threat: generative AI search. Years of fine-tuning search engine optimization strategies may all be for naught as Google embraces AI-driven answers in lieu of links to relevant content. Meanwhile, Gartner predicts that traditional search engine volume will drop 25% by 2026 as users shift to AI chatbots and virtual agents for their answers.
The Wall Street Journal reports that publishers expect a 20% to 40% drop in their Google-generated traffic if the search giant rolls out its AI search tool to a broad audience. So, what are media executives supposed to do in the face of yet another shift in the technology landscape that threatens to put them on the outs once again? There’s really only one solution: devise a plan to regain control of their audience relationships once and for all.
Discovery: a problem as old as algorithms
AI search has yet to reach its full potential, but referral traffic is already taking a hit. AI-driven search results that fail to link to the content they scrape are just one part of the problem. Searchers are often satisfied with AI “answers” and have little need to click through for more. And platforms across the web are trying to keep more users inside their walled gardens, which means the likes of Facebook and Google have gone from partners in traffic acquisition to the opposition.
“We’re seeing an industry in real crisis,” says Jim Chisholm, a news media analyst. While Chisholm says he is not seeing evidence that AI is impacting traffic just yet, that does not mean publishers are not already feeling the squeeze from elsewhere.
Liam Andrew, Chief Product Officer at The Texas Tribune, says that while his team expects generative AI to impact search traffic, they are still waiting to see a substantial effect. The bigger problem facing the Tribune right now is social media traffic, or rather the lack of it.
While social platforms across the spectrum are pulling the rug out from under publishers, our old friend search is slowly changing the rules of the game. “Search is still working,” Andrew says. The Texas Tribune sees that explainers and guides still drive traffic and even subscriptions. However, other sites have not been so lucky.
Back in October 2023, Press Gazette found that of 70 leading publishers, half saw their search visibility scores drop—and 24 of those saw double-digit dips. That was the result of one update—more bad news is certain to follow as new updates make their way to the masses.
AI bots: To block or not to block
Publishers may be preparing for a more significant battle when it comes to traffic. However, right now, there’s another fight on their doorsteps: bots are crawling their sites and using their work to train the AI poised to steal their traffic. Some are already taking steps to stop the free—and possibly illegal—use of their content. The Reuters Institute found that 48% of the most widely used news websites across 10 countries blocked OpenAI’s crawlers by the close of 2023. Far fewer—just 24%—blocked Google’s AI crawler.
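The mechanics of blocking are simple, at least for crawlers that identify themselves and honor the rules: AI training bots announce documented user-agent tokens that can be disallowed in a site’s robots.txt. Here is a minimal sketch; GPTBot is OpenAI’s published crawler token and Google-Extended is Google’s published opt-out token for AI training, while other AI crawlers use their own tokens:

```
# robots.txt: opt out of AI training crawlers without touching search

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
```

Note that robots.txt is a request, not an enforcement mechanism: a crawler that ignores it has to be blocked at the network level instead. Google-Extended is also purely a training opt-out; it does not remove a site from Google Search.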
For Andrew and The Texas Tribune, blocking AI crawlers is not a major concern. They already have an open-republishing model and are used to seeing their content scraped and used on other sites (often without the requested attribution). “It improves our readership and impact, but we compete with ourselves for SEO,” he says. He also says they see versions of their stories on news sites where the content is entirely AI-written. However, it is “not affecting our core audience traffic,” according to Andrew. So — at least for now — The Texas Tribune is not planning to block the bots.
Meanwhile, Google is reportedly paying publishers to use its AI tools to write content. In the short term, this may offer (smaller) publishers modest sums and an easier way to create low-lift content. But, as with other Google News Initiative (GNI) projects, there’s an underlying concern that Google is not focused on publisher health over the long term.
Developing a direct-to-reader strategy
As Andrew and the product team at The Texas Tribune look toward a less search-dependent future, they are changing strategies. For 2025 and beyond, “we are not going to be focusing on a really good SERP [Search Engine Results Page] unnecessarily,” Andrew says. Instead, they’ll focus on products built directly for readers.
“Newsletters have been part of our model for over 10 years. It’s nothing new, but we’re continuing to see success with it,” Andrew says. Not only do the newsletters still drive traffic, but they also drive conversions. Subscribers become members at a higher rate, vital to a publication that does not depend on paywalls for revenue.
DCN conducted an informal survey on concerns around the impact of AI search on traffic, and while the sample size may not hold up to scientific scrutiny, it was clear that newsletters are a crucial tactic for publishers looking to own their audiences. Other stats suggest this is a good move. Storydoc research found that 90% of Americans subscribe to at least one newsletter. That number goes up for younger audiences as 95% of Gen Z, Millennials, and even Gen X receive newsletters, compared to 84% of Baby Boomers.
Experiment with engagement approaches
The solutions to the Google problem don’t end at email, though.
“We also have a big robust event system,” Andrew notes. The Texas Tribune holds dozens of events every year, ranging from “pre-gaming the Texas primary” to deep dives into transportation in the Austin/San Antonio area. They gather experts and pundits to share their expertise on topics that interest readers. The team also live-streams these events, an important tactic for engaging younger, more diverse audiences. These events also turn out to be effective for converting casual readers into subscribers and members.
Andrew alluded to products his team is working on that are still under wraps. Still, it’s clear that, like many publishers, The Texas Tribune is preparing for a future when search no longer drives most traffic.
Chisholm thinks mobile apps are another excellent direct-to-reader strategy, and research backs this up. Pew reports that “A large majority of U.S. adults (86%) say they often or sometimes get news from a smartphone, computer or tablet, including 56% who say they do so often. This is more than the 49% who said they often got news from digital devices in 2022 and the 51% of those who said the same in 2021.” Cultivating a relationship with readers through their mobile devices—where you can use push notifications and other native capabilities to grab their attention—will likely be one of the many tools publishers must deploy going forward.
“I’ve been in the news industry – which I love – for 48 years. Now we are at a crossroads,” says Chisholm. “Either we choose the road to recovery, rebuilding relationships with our readers, or we continue down the road we are on, subject to algorithms, more confusion between legitimate news and social media infested with AI nonsense.”
Artificial intelligence (AI) is generating a new era of content creation. However, with this innovation comes the challenge of distinguishing AI-generated content from human-made material. This creates another issue media companies must grapple with to build and maintain audience trust.
Mozilla’s latest report, In Transparency We Trust?, delves into the transformative impact of AI-generated content and its challenges. Ramak Molavi Vasse’I and Gabriel Udoh co-authored this research, exploring disclosure approaches in practice across many platforms and services. The report also raises concerns about AI-generated content and social media’s powerful reach ― intensifying the spread of algorithmic preferences and emotionally charged material.
Where generative AI falls on the synthetic content spectrum
AI-generated content is a subset of synthetic content. It includes images, videos, sounds, or any other form generated, edited, or enabled by artificial intelligence. Synthetic content exists on a spectrum with varying degrees of artificiality. One end of the spectrum features raw content, comprising hand-drawn illustrations, unaltered photographs, and human-written texts. These elements are untouched, representing the most natural form of creative expression. Moving along the spectrum is minimally edited content. Subtle refinements characterize this stage, like using Grammarly for text refinement or adjusting image contrast with photo editing apps. These adjustments enhance the quality and clarity of the content while maintaining its original essence.
Stepping up from minimally processed content is ultra-processed content, where automated methods and software play a more significant role in altering or enhancing human-generated material. Applications like Adobe Photoshop can easily enable intricate image manipulations, such as replacing one person’s face with another’s. This level of processing represents a deeper form of content alteration facilitated by advanced technology. Across this spectrum, synthetic content presents authenticity challenges, and the credibility of digital content comes into question.
AI-generated content can negatively impact society, from spreading misinformation to eroding public trust in digital platforms. This includes concerns like identity theft, security problems, privacy breaches, and the risk of cheating and fraud. The growing use of AI-generated content mandates rules to limit its harm.
Regulatory mechanisms
Mozilla’s report notes that regulatory requirements across the globe mandate clearly identifying and labeling AI-generated content. Current approaches include visible labels and audible warnings to address the challenges of undisclosed synthetic content effectively. However, human-facing disclosure methods are only partially effective due to their vulnerability to manipulation and the potential to increase public mistrust.
Machine-readable methods, such as invisible watermarking, offer relative security and show more promise. To be truly effective, however, they require standardized, robust watermarking techniques and unbiased, reliable detection systems.
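To make that distinction concrete, here is a deliberately naive Python sketch of a machine-readable mark: it hides a short disclosure tag in the least significant bits of an image’s red channel, invisible to a viewer but trivial for software to read. The tag and function names are illustrative, not any standard; the sketch assumes the Pillow imaging library; and unlike the robust watermarks the report calls for, this toy would not survive cropping, resizing, or JPEG re-encoding.

```python
# Toy illustration of an invisible, machine-readable disclosure mark.
# NOT a robust watermark; TAG and function names are hypothetical.
from PIL import Image

TAG = "AI-GEN"  # hypothetical disclosure tag

def embed_tag(in_path: str, out_path: str) -> None:
    """Hide TAG in the least significant bits of the red channel."""
    img = Image.open(in_path).convert("RGB")
    bits = "".join(f"{byte:08b}" for byte in TAG.encode("ascii"))
    px = img.load()
    for i, bit in enumerate(bits):
        x, y = i % img.width, i // img.width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite red LSB
    img.save(out_path, "PNG")  # lossless format, so the LSBs survive

def extract_tag(path: str, n_chars: int = len(TAG)) -> str:
    """Read n_chars back out of the red-channel LSBs."""
    img = Image.open(path).convert("RGB")
    px = img.load()
    bits = [str(px[i % img.width, i // img.width][0] & 1)
            for i in range(n_chars * 8)]
    return bytes(int("".join(bits[i:i + 8]), 2)
                 for i in range(0, len(bits), 8)).decode("ascii")

# embed_tag("photo.png", "photo_marked.png")
# print(extract_tag("photo_marked.png"))  # -> "AI-GEN"
```

The gap between this toy and a dependable disclosure signal is exactly where the report’s call for standardized techniques and unbiased detection systems comes in.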
The authors advocate for a holistic approach to governance that combines technological, regulatory, and educational measures. This includes prioritizing machine-readable methods, investing in “slow AI” solutions that embed corporate social responsibility, and balancing transparency with privacy concerns. Furthermore, they propose reimagining regulatory sandboxes as spaces for testing and refining AI governance strategies in collaboration with citizens and communities.
Ensuring the authenticity and safety of digital content in the age of AI is a complex challenge that demands innovation in governance strategies. As the report points out, navigating the AI content challenge means supporting a trustworthy digital ecosystem by leveraging machine-readable methods, fostering stakeholder collaboration, and investing in user education.
Transparent governance is essential to combat the risks associated with AI-generated content and uphold the integrity of digital platforms. Regulatory frameworks and technological solutions must adapt to safeguard against misinformation in order to promote trust in digital media content.
As November 5 draws increasingly near, the rhetoric surrounding the 2024 election is almost inescapable. Although think pieces, pundits, and poll numbers populate publications across the country, much less is said about how the outcome of this monumental election will affect the future of these outlets.
At first glance, the attitudes of President Joe Biden and former President Donald Trump about digital publishers appear to be diametrically opposed. Former President Trump’s adversarial attitude towards the media industry requires little preamble; after all, the term “fake news” is synonymous with his tenure and persona.
Meanwhile, President Biden has embraced legacy media and publishers, choosing to author op-eds in some of the nation’s most prominent outlets, such as the Washington Post and The New York Times. However, while a return by former President Trump to the White House would probably mean another four years of him snubbing the White House Correspondents’ Dinner, his policies surrounding the industry wouldn’t be all that different from President Biden’s.
Regardless of their personal attitudes towards the media and digital publishers, the winning candidate will have to make important decisions that will have deep implications for the future of the industry. Even amidst the polarization that permeates the halls of Congress, lawmakers have found surprising agreement in their critiques of Big Tech. Proposals to regulate these companies, such as the rolling back of Section 230 protections, would send ripples across the entire industry.
This increased scrutiny has also opened the floodgates of antitrust considerations; if elected, Biden and Trump would both face historic antitrust decisions surrounding the tech industry. Their motives, without doubt, differ: President Biden is spurred by concerns over the effects of the industry’s shifting tectonic plates, while former President Trump is spurred by personal dislike and distrust of internet companies. However, both are likely to take actions to try to halt Big Tech’s domination of most aspects of modern society.
With that being said, we summarize below some of the most pressing policy issues at the intersection of Big Tech and digital media, as well as the actions President Biden and former-President Trump are expected to take that are likely to impact those in the business of digital media.
Regulating Artificial Intelligence (AI)
If there is one topic that has galvanized the industry during the past year, it is without a doubt AI. Its promise and perils have made it the most controversial topic across industry discussions, and the approach taken by whoever is sworn in on January 20, 2025, will send waves across the industry.
For a Biden administration, their top AI priorities would center on ensuring that agencies comply with the directives and deadlines established in the AI Executive Order, published in October of 2023. Earlier this year, the White House touted that all agencies had completed their 90-day deliverables.
However, the most relevant deliverable for digital publishers won’t arrive until mid-2024, when the Director of the United States Patent and Trademark Office (USPTO) is expected to issue recommendations to the President on potential executive actions relating to copyright and AI. Earlier this month, Ben Buchanan, White House Special Advisor for AI, stated that while the Administration does not yet have an official stance on AI and copyright issues, the Administration’s general priorities are “making sure that we have an innovative AI ecosystem and making sure the people who create meaningful content are appropriately compensated for it.”
For a Trump administration, it can be expected that a significant portion of the USPTO’s recommendations will be adopted, given that these are set to arrive only a few months before election day. Although earlier this month former-President Trump stated that AI was “maybe the most dangerous thing out there,” he has given little indication as to what his administration’s policies would be.
Nonetheless, it is expected that a Trump administration would give special focus to AI’s potential discrimination against, or censorship of, conservative or MAGA voices. For example, given the former President’s acrimonious attitude towards his adversaries, he is sure not to be content with ChatGPT’s refusal to produce a poem admiring him.
Data privacy policy and online safety
Following the landmark hearing on Big Tech held earlier this year, children’s online safety has dominated the national discussion surrounding data privacy. A Biden administration can be expected to throw its weight behind the Kids Online Safety Act (KOSA), which has shaped up to be the paramount bill on the topic.
This legislative effort, which has faced several roadblocks since its introduction in 2022, is closer than ever to the finish line now that it has earned the backing of over 60 senators, including Senate Majority Leader Chuck Schumer (D-NY). As presently written, KOSA has limited applicability to digital publishers, although its eventual passage, coupled with more stringent data privacy efforts being proposed across multiple states, might signify stricter privacy requirements for publishers further down the road.
At a broader level, President Biden is expected to announce executive actions aimed at preventing foreign adversaries from illegitimately gaining access to Americans’ data sometime in 2024. Such actions would probably impose new data privacy requirements on digital publishers, although the span of these requirements would be determined by the breadth of the actions announced. However, these new requirements and publishers’ compliance with them could boost consumer trust, at a time when data breaches are more rampant than ever.
It can be expected that if elected, President Trump would also support KOSA, given its broad bipartisan support. However, he can also be expected to reject international data privacy standards, as well as rescind any actions taken by the current FCC on net neutrality or broadband privacy.
Corporate taxes
A Biden administration is expected to continue its support for progressive tax reforms, including increasing the corporate tax rate. As part of his FY2024 budget, President Biden proposed to increase the corporate tax rate from 21% up to 28%. President Biden is also expected to close certain tax loopholes that could impact corporate financial strategies, as well as increase funding for social programs.
Meanwhile, if elected, former-President Trump would at the very least focus on maintaining the corporate tax rate at 21%, although he could very well attempt to lower the rate to 15%, the rate the former-President initially sought for the 2017 Tax Cuts and Jobs Act. When asked about the possibility of lowering the corporate tax rate to 15% during a September interview with NBC’s Meet the Press, former President Trump stated that he’d “like to lower them a little bit.” More broadly, a Trump administration would seek to extend, or even make permanent, a majority of provisions from the 2017 tax cuts, such as the Qualified Business Income Deduction.
What’s next?
Many of the most relevant decisions regarding the future of digital publishing lie outside the purview of the executive branch. As brought up in the Senate Judiciary Committee hearing on AI and Copyright, members of Congress are exploring legislation that would require the implementation of licensing agreements between publishers and AI companies, as well as legislation that would at the very least “clarify” the applicability of Section 230 and copyright law to content scraping carried out by AI companies. Additionally, other major questions surrounding copyright law are in the hands of the courts, as is the case with The New York Times’ lawsuit against Open AI and Microsoft.
Still, a multitude of regulatory decisions will land on the desk of whomever is elected on November 5, 2024. What is of most concern to digital publishers isn’t necessarily which candidate emerges victorious, but how to best advocate for the protection and preservation of the digital publishing industry amidst a confluence of interests and voices that will crowd the White House on these industry-defining issues.
Artificial intelligence (AI) is rapidly integrating into news and content, prompting a necessary reflection on its implications for democratic societies, which rely on trustworthy and diverse media sources. A new report, Artificial intelligence and media policy: Plurality from the meat grinder, from Professor Dr. Rupprecht Podszun, Heinrich Heine University, and Ruth Meyer, Director of the Saarland State Media Authority (LMS), delves into the potential risks AI poses as it is applied in the media industry as well as the need for regulation to govern its usage.
Guardrails for AI
Podszun and Meyer identify three areas needing guidelines and policies to uphold democracies:
Trust in information is about ensuring that the information people receive is accurate and reliable, whether it’s news or other content. Laws like the Saarland Media Act stress the importance of journalists being careful and accurate when reporting. However, when artificial intelligence is involved in creating content, it can be hard to know whether the information is trustworthy, because the processes behind AI are often hidden. This lack of transparency can make people doubt the reliability of AI-generated content.
Public discourse refers to the conversations and discussions in society, especially around important issues. With the rise of AI-powered recommendation systems, people often get information that aligns with their beliefs and interests. This can create “bubbles” where people only hear opinions like their own, leading to societal divisions. The idea of the public sphere, where people from different backgrounds come together to discuss and solve problems, is important for democracy. However, social media platforms, which play a big role in public discourse today, can make it harder for diverse opinions to be heard.
Plurality is under pressure despite the many different media sources available today. The reality is that companies like Alphabet (Google’s parent company) and Meta (formerly Facebook) have too much control over the information people see. This concentration of power can limit the variety of perspectives and ideas people are exposed to, especially when AI algorithms are used to personalize content delivery.
A regulatory approach for AI
Regulation will be needed to address concerns and provide guidance for the most constructive development of the AI industry and its applications. This could include laws to prevent monopolistic companies from having too much control over information. Further, there is a need for transparency about how AI is used in creating content. It’s also important to ensure that the data used to train AI systems is diverse and representative of different viewpoints.
The authors provide recommendations for regulatory approaches to ensure diverse and trustworthy access to information. They suggest that:
Preventing monopolistic concentration is crucial, given the dominance of big tech companies in both data-driven business models and AI control. Media concentration laws should counteract this trend, promoting diverse data pools and open technology.
Ensuring transparency and responsibility is fundamental. Trust in AI-driven media necessitates transparency regarding its usage, training data, and information sources.
Identifying prohibitions is necessary to enforce accountability. If AI crosses ethical boundaries, explicit bans with sanctions are imperative.
Addressing the training data problem is vital. Guaranteeing open and diverse data selection mitigates distortions in AI-generated content. Embracing adaptable AIs capable of correcting errors ensures ongoing development and integrity.
While AI offers unprecedented opportunities for media innovation, its unchecked proliferation poses significant risks to democratic principles. Effective regulation is imperative to harness the potential of AI in media while safeguarding pluralism, transparency, and trust in information dissemination. Only through collaborative efforts between policymakers, media stakeholders, and technologists can the transformative potential of AI be harnessed responsibly in the service of democratic societies.