The public has a knowledge gap around generative artificial intelligence (GenAI), especially when it comes to its use in news media, according to a recent study of residents in six countries. Younger people across countries are more likely to have used GenAI tools and to be more comfortable and optimistic about the future of GenAI than older people. And a higher level of experience using GenAI tools appears to correlate with more positive assessments of their utility and reliability.
Over two thousand residents in each of six countries were surveyed for the May 2024 report What Does the Public in Six Countries Think of Generative AI in News? (Reuters Institute for the Study of Journalism). The countries surveyed were Argentina, Denmark, France, Japan, the UK and the USA.
Younger people more optimistic about GenAI
Overall, younger people had higher familiarity and comfort with GenAI tools. They were also more optimistic about future use and more comfortable with the use of GenAI tools in news media and journalism.
People aged 18-24 in all six countries were much more likely to have used GenAI tools such as ChatGPT, and to use them regularly, than older respondents. Averaging across countries, only 16% of respondents 55+ report using ChatGPT at least once, compared to 56% aged 18 to 24.
Respondents 18-24 are much more likely to expect GenAI to have a large impact on ordinary people in the next five years. Sixty percent of people 18-24 expect this, while only 40% of people 55+ do.
In five out of six countries surveyed, people aged 18-34 are more likely to expect GenAI tools to have a positive impact in their own lives and on society. However, Argentina residents aged 45+ broke ranks, expressing more optimism about GenAI improving both their own lives and society at large than younger generations.
Many respondents believe GenAI will improve scientific research, healthcare, and transportation. However, they express much more pessimism about its impact on news and journalism and job security.
Younger people, while still skeptical, have more trust in responsible use of GenAI by many sectors. This tendency is especially pronounced in sectors viewed with greatest skepticism by the overall public – such as government, politicians, social media, search engines, and news media.
Across all six countries, people 18-24 are significantly more likely than average to say they are comfortable using news produced entirely or partly by AI.
People don’t regularly use GenAI tools
Even the youngest generation surveyed reports infrequent use of GenAI tools. However, if the correlation between using GenAI more and feeling more optimistic and trusting about it holds true on a broader scale, it’s likely that as more people become comfortable using GenAI tools regularly, there will be less trepidation surrounding the technology.
Between 20% and 30% of the online public across countries have not heard of any of the most popular AI tools.
While ChatGPT proved by far the most recognized GenAI tool, only 1% of respondents in Japan, 2% in France and the UK, and 7% in the U.S. say they use ChatGPT daily. Eighteen percent of the youngest age group report using ChatGPT weekly, compared to only 3% of those aged 55+.
Only 5% of people surveyed across the six countries report using GenAI to get the latest news.
It’s worth noting that the populations surveyed were in affluent countries with higher-than-average education and internet connectivity levels. Countries that are less affluent, less free, and less connected likely have even fewer people experienced with GenAI tools.
The jury is out on public opinion of GenAI in news
A great deal of uncertainty prevails around GenAI use among all people, especially those with lower levels of formal education and less experience using GenAI tools. Across all six countries, over half (53%) of respondents answered “neither” or “don’t know” when asked whether GenAI will make their lives better or worse. Most, however, think it will make news and journalism worse.
When it comes to news, people are more comfortable with GenAI tools being used for backroom work such as editing and translation than they are with its use to create information (writing articles or creating images).
There is skepticism about whether humans are adequately vetting content produced using GenAI. Many believe that news produced using GenAI tools is less valuable.
Users are more comfortable with GenAI being used to produce news on “soft” topics such as fashion and sports, and much less comfortable with its use for “hard” news such as international affairs and political topics.
Thirty percent of U.S. and Argentina respondents trust news media to use GenAI responsibly. Only 12% in the UK and 18% in France agree. For comparison, over half of respondents in most of the countries trust healthcare professionals to use GenAI responsibly.
Most of the public believes it is very important to have humans “in the loop” overseeing GenAI use in newsrooms. Almost half surveyed do not believe that is happening. Across the six-country average, only a third believe human editors “always” or “often” check GenAI output for accuracy and quality.
A cross-country average of 41% say that news created mostly by AI will be “less worth paying for,” 32% say it will be worth “about the same,” and 19% answered “don’t know.”
Opportunities to lead
These findings present a rocky road for news leaders to traverse. However, they also offer an opportunity to fill the knowledge gap with information that is educational and reassuring.
Research indicates that the international public overall values transparency in news media as a general practice, and blames news owners and leadership (rather than individual journalists) when it is lacking. However, some research shows users claim to want transparency around GenAI tools in news, but trust news less once they are made aware of its use.
The fact that the public at large is still wavering presents an opportunity for media leaders to get out in front on this issue. Creating policy and providing transparency around the use of GenAI tools in news and journalism is critical. News leaders especially need to educate the public about their standards for human oversight around content produced using GenAI tools.
These days, digital media companies are all trying to figure out how to best incorporate AI into their products, services and capabilities, via partnerships or by building their own. The goal is to gain a competitive edge as they tailor AI capabilities to their audiences, subscribers and clients’ specific needs.
By leveraging proprietary Large Language Models (LLMs), digital media companies have a new tool in their toolboxes. These offerings provide differentiation and added value, enhanced audience engagement and a better user experience. Proprietary LLMs also set these companies apart from those opting for licensing partnerships with general-purpose LLMs, which offer more generalized knowledge bases and draw from a wide range of sources of varying subject matter and quality.
A growing number of digital media companies are rolling out their own LLM-based generative AI features for search and data-based purposes to enhance user experience and create fine-tuned solutions. In addition to looking at several of the offerings media companies are bringing to market, we spoke to Dow Jones, Financial Times and Outside Inc. about the generative AI tools they’ve built and explored the strategies behind them.
Media companies fuel generative AI for better solutions
Digital media companies are harnessing the power of generative AI to unlock the full potential of their own, sometimes vast, stores of proprietary information. These new products allow them to offer valuable, personalized, and accessible content to their audiences, subscribers, customers and clients.
Take, for example, Bloomberg, which released a research paper in March detailing the development of its new large-scale generative AI model called BloombergGPT. The LLM was trained on a wide range of financial data to assist Bloomberg in improving existing financial natural language processing (NLP) tasks, such as sentiment analysis, named entity recognition, news classification, and question answering, among others. In addition, the tool will help Bloomberg customers organize the vast quantities of data available on the Bloomberg Terminal in ways that suit their specific needs.
Fortune partnered with Accenture to create a generative AI product called Fortune Analytics, launched in beta on June 4. The tool delivers ChatGPT-style responses based on 20 years of financial data from the Fortune 500 and Global 500 lists, as well as related articles, and helps customers build graphic visualizations.
Generative AI helps customers speed up processes
A deeper discussion of how digital media companies are using AI provides insights to help others understand the potential to leverage the technology for their own needs. Dow Jones, for example, uses generative AI in a platform that helps customers meet compliance requirements.
Dow Jones Risk & Compliance is a global provider of risk and compliance solutions for banks and corporations, helping organizations perform checks on their counterparties – to comply with anti-money laundering and anti-corruption regulations, and to mitigate supply chain risk and reputational issues. It provides tools that allow customers to search data sets and manage regulatory and reputational risk.
In April, Dow Jones Risk & Compliance launched an AI-powered research platform for clients that enables organizations to build an investigative due diligence report covering multiple sources in as little as five minutes. Called Dow Jones Integrity Check, the research platform is a fully automated solution that goes beyond screening to identify risks and red flags from thousands of data sources.
The planning for Dow Jones Integrity Check goes back a few years, as the company sought to provide its customers with a quicker way to do due diligence on their counterparties, explained Joel Lange, Executive Vice President and General Manager of Risk and Research at Dow Jones.
Lange said that Dow Jones effectively built a platform that automatically creates a report for customers on a person or company, using technology from AI firm Xapien. It brings together Dow Jones’ data with other data sets, corporate registry information, and wider web content, then leverages the platform’s generative AI capability to produce a piece of analysis or a report.
Dow Jones Risk & Compliance customers use their technology to make critical, often complex, business decisions. Often the data collection process can be incredibly time consuming, taking days if not weeks.
The new tool “provides investigations teams, banks and corporations with initial due diligence. Essentially it’s a starting point for them to conduct their due diligence, effectively automating a lot of that data collection process,” according to Lange.
Lange points out that the compliance field is always in need of increased efficiency. However, it carries with it great risk to reputation. Dow Jones Integrity Check was designed to reshape compliance workflows, creating an additional layer of investigation that can be deployed at scale. “What we’re doing here is enabling them to more rapidly and efficiently aggregate, consolidate, and bring information to the fore, which they can then analyze and then take that investigation further to finalize an outcome,” Lange said.
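The aggregation step Lange describes can be pictured in miniature. The sketch below is purely illustrative – the source names, risk keywords, and record format are invented for the example, not Dow Jones’s actual inputs – but it shows the basic pattern of pulling a subject’s records from multiple data sets into one report and flagging entries for human review:

```python
def build_due_diligence_report(subject, sources):
    """Aggregate findings on a subject from several data sources and
    flag entries matching simple risk keywords (illustrative only)."""
    risk_terms = {"sanctions", "fraud", "money laundering"}
    report = {"subject": subject, "findings": [], "red_flags": []}
    for source_name, records in sources.items():
        for record in records:
            # Keep only records about the subject under investigation.
            if record["entity"].lower() != subject.lower():
                continue
            finding = {"source": source_name, "note": record["note"]}
            report["findings"].append(finding)
            # Surface entries a human analyst should look at first.
            if any(term in record["note"].lower() for term in risk_terms):
                report["red_flags"].append(finding)
    return report

# Invented sample data standing in for news archives and registry feeds.
sources = {
    "news_archive": [
        {"entity": "Acme Corp", "note": "Acme Corp investigated for fraud in 2021."},
        {"entity": "Other Co", "note": "Quarterly results announced."},
    ],
    "corporate_registry": [
        {"entity": "Acme Corp", "note": "Registered in 2005; two directors listed."},
    ],
}

report = build_due_diligence_report("Acme Corp", sources)
```

In a real compliance workflow, the flagged findings would feed the generative summarization step and, as Lange stresses below, a human analyst would evaluate them before any decision is made.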
Regardless of the quality of the generated results, most experts believe that it is important to have a human in the loop in order to maintain content accuracy, mitigate bias, and enhance the credibility of the content. Lange also said that it’s critical to have “that human in the loop to evaluate the information and then to make a decision in relation to the action that the customer wants to take.”
In recent months, digital media companies have been launching their own generative AI tools that allow users to ask questions in natural language and receive accurate and relevant results.
The Associated Press created Merlin, an AI-powered search tool that makes searching the AP archive more accurate. “Merlin pinpoints key moments in our videos to the exact second and can be used for older archive material that lacks modern keywords or metadata,” explained AP Editor in Chief Julie Pace at The International Journalism Festival in Perugia in April.
Outside’s Scout: AI search with useful results
Chatbots have become a popular form of search. Originally rule-based and able to answer only a fixed set of pre-programmed questions, chatbots have evolved and increased engagement by providing a conversational interface. Used for everything from organizing schedules and news updates to customer service inquiries, generative AI-based chatbots assist users in finding information more efficiently across a wide range of industries.
Much like The Guardian, The Washington Post, The New York Times and other digital media organizations that blocked OpenAI from using their content to power artificial intelligence, Outside Inc. wasn’t going to let third parties scrape its platforms to train LLMs, CEO Robin Thurston explained.
Instead, they looked at leveraging their own content and data. “We had a lot of proprietary content that we felt was not easily accessible. It’s almost what I’d call the front page problem, which is you put something on the front page and then it kind of disappears into the ether,” Thurston said.
“We asked ourselves: How do we create something leveraging all this proprietary data? How do we leverage that in a way that really brings value to our user?” Thurston said. The answer was Scout, Outside Inc.’s custom-developed AI search assistant and chatbot.
The company could see that generative AI offered a way to make its content accessible and even more useful to readers. Outside’s brands inspire and inform audiences about outdoor adventures, new destinations and gear – much of it evergreen, proprietary content that stopped adding value once it left the front page but still had worth if audiences could easily surface it. The chat interface keeps that content accessible to readers after it is no longer front and center on the website.
Scout gives users a summary answer to their question, leveraging Outside Inc’s proprietary data, and surfaces articles that it references. “It’s just a much more advanced search mechanism than our old tool was. Not only does it summarize, but it then returns the things that are most relevant,” he explained.
Additionally, Outside Inc’s old search function worked within each individual brand. Scout searches across the 20+ properties owned by the parent company – which include Backpacker, Climbing, SKI Magazine, and Yoga Journal, among others – and brings all of the results together in one place, from the best camping destinations and trails to family outdoor activities, gear, equipment and food.
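A cross-brand search of this kind can be sketched with a simple ranking loop. The example below is a toy illustration, not Outside’s implementation: it uses bag-of-words overlap where a production system like Scout would use embeddings and an LLM-generated summary, and the catalog data is invented:

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def score(query, doc):
    # Crude relevance: count overlapping words between query and document.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum((q & d).values())

def search_across_brands(query, catalog, top_k=3):
    """Rank articles from every brand in one pass, tagging each
    result with the brand it came from."""
    ranked = sorted(
        ((score(query, art["title"] + " " + art["body"]), art)
         for brand, articles in catalog.items()
         for art in ({**a, "brand": brand} for a in articles)),
        key=lambda pair: pair[0],
        reverse=True,
    )
    return [art for s, art in ranked[:top_k] if s > 0]

# Invented mini-catalog standing in for the 20+ Outside properties.
catalog = {
    "Backpacker": [{"title": "Best camping destinations",
                    "body": "Top tents and camping trails for families"}],
    "Climbing": [{"title": "Gear picks",
                  "body": "Ropes and harness reviews"}],
    "Yoga Journal": [{"title": "Morning flows",
                      "body": "Yoga routines for beginners"}],
}

results = search_across_brands("family camping trails", catalog)
```

The design point the sketch captures is that ranking happens over one unified index, so a single query surfaces relevant evergreen content regardless of which brand originally published it.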
One aspect that sets Outside Inc.’s model apart is their customer base, which differs from general news media customers. Outside’s customers engage in a different type of interaction, not just a quick transactional skim of a news story. “We have a bit of a different relationship in that they’re not only getting inspiration from us, which trip should I take? What gear should I buy? But then because of our portfolio, they’re kind of looking at what’s next,” Thurston said.
It was important to Thurston to use the LLM in a number of different ways, so Outside Inc launched a local newsletter initiative with the help of AI. “On Monday mornings we do a local running, cycling and outdoor newsletter that goes to people that sign up for it, and it uses that same LLM to pick what types of routes and content for that local newsletter that we’re now delivering in 64,000 ZIP codes in the U.S.”
Thurston said they had a team working on Scout and it took about six months. “Luckily, we had already built a lot of infrastructure in preparation for this in terms of how we were going to leverage our data. Even for something like traditional search, we were building a backend so that we could do that across the board. But this is obviously a much more complicated model that allows us to do it in a completely new way,” he said.
Connecting AI search to a real subscriber need
In late March, The Financial Times released its first generative AI feature for subscribers called Ask FT. Like Scout, the chat-based search tool allows users to ask any question and receive a response using FT content published over the last two decades. The feature is currently available to approximately 500 FT Professional subscribers. It is powered by the FT’s own internal search capabilities, combined with a third-party LLM.
The tool is designed to help users understand complicated issues or topics, like Ireland’s offshore energy policy, rather than just searching for specific information. Ask FT searches through Financial Times (FT) content, generates a summary and cites the sources.
“It works particularly well for people who are trying to understand quite complex issues that might have been going on over time or have lots of different elements,” explained Lindsey Jayne, the chief product officer of the Financial Times.
Jayne explained that they spend a lot of time understanding why people choose the FT and how they use it. People read the FT to understand the world around them, to have a deep background knowledge of emerging events and affairs. “With any kind of technology, it’s always important to look at how technology is evolving to see what it can do. But I think it’s really important to connect that back to a real need that your customers have, something they’re trying to get done. Otherwise it’s just tech for the sake of tech and people might play with it, but not stick with it,” she said.
Trusted sources and GenAI attribution
Solutions like those from Dow Jones, FT and Outside Inc. highlight the power of a trusted brand to deepen authentic audience relationships built on reliability and credibility. Trusted media brands are considered authoritative because their content is based on credible sources and facts, which ensures accuracy.
Generative AI, by contrast, has so far demonstrated low accuracy and poses challenges to sourcing and attribution. Attribution is a central feature for digital media companies rolling out their own generative AI solutions. For Dow Jones compliance customers, attribution is critical: they need to know whether they are making a decision based on information that is available in the media, according to Lange.
“They need to have that attributed to within the solution so that if it’s flowing into their audit trails or they have to present that in a court of law, or if they would need to present it to our internal audit, the attribution is really key. (Attribution) is going to be critical for a lot of the solutions that will come to market,” he said. “The attribution has to be there in order to rely on it for a compliance use case or really any other use case. You really need to know where that fact or that piece of information or data actually came from and be able to source it back to the underlying article.”
The Financial Times’ generative AI tool also offers attribution to FT articles in all of its answers. Ask FT pulls together lots of different source material, generates an answer, and attributes it to various FT articles. “What we ask the large language model to do is to read those segments of the articles and to turn them into a summary that explains the things you need to know and then to also cite them so that you have the opportunity to check it,” Jayne said.
They also ask the FT model to infer from people’s questions what time period it should search. “Maybe you’re really interested in what’s happened in the last year or so, and we also get the model to reread the answer, reread all of the segments and check that, as kind of a guard against hallucination. You can never get rid of hallucination totally, but you can do lots to mitigate it.”
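The “reread and check” pass Jayne describes can be illustrated with a simplified grounding check. The sketch below is a stand-in, not the FT’s implementation: a real system would ask an LLM to verify each claim against the retrieved segments, while this toy version uses word overlap. The shape of the pipeline is the same, though – retrieve segments, draft an answer, keep only supported sentences, attach citations:

```python
import re

def supported(sentence, segments, threshold=0.5):
    """Treat a sentence as grounded if enough of its words appear in
    at least one retrieved segment (a crude hallucination guard)."""
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    if not words:
        return True
    return any(
        len(words & set(re.findall(r"[a-z]+", seg["text"].lower()))) / len(words)
        >= threshold
        for seg in segments
    )

def answer_with_citations(summary_sentences, segments):
    """Drop ungrounded sentences and attach source article ids."""
    kept = [s for s in summary_sentences if supported(s, segments)]
    return {"answer": " ".join(kept),
            "sources": [seg["article_id"] for seg in segments]}

# Invented article segments and a draft answer for illustration.
segments = [
    {"article_id": "ft-101",
     "text": "Ireland expanded its offshore wind energy policy in 2023."},
    {"article_id": "ft-102",
     "text": "The policy targets seven gigawatts of offshore wind by 2030."},
]
draft = [
    "Ireland expanded its offshore wind policy in 2023.",
    "The moon landing was faked.",  # unsupported claim: should be dropped
]
result = answer_with_citations(draft, segments)
```

As the quote notes, no check eliminates hallucination entirely; the verification pass only reduces the chance that an unsupported claim reaches the reader, and the citations let subscribers check the answer themselves.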
The Financial Times is also asking for feedback from the subscribers using the tool. “We’re literally reading all of the feedback to help understand what kinds of questions work, where it falls down, where it doesn’t, and who’s using it, why and when.”
Leaning into media strengths and adding a superpower
Generative AI seems to have created unlimited opportunities, along with considerable challenges, questions and concerns. However, it is clear that an asset many media companies possess is a deep reservoir of quality content, and it is good for business to extract the most value from the investment in its creation. Leveraging their own content to train and program generative AI tools that serve readers seems like a very promising application.
In fact, generative AI can give trustworthy sources a bit of a superpower. Jayne from the FT offered the example of scientists using the technology to read through hundreds of thousands of research papers and surface patterns and connections, a process that would otherwise take years.
While scraped-content LLMs pose risks to authenticity, accuracy and attribution, proprietary learning models offer a promising alternative.
As Jayne put it, the media has “an opportunity to harness what AI could mean for the user experience, what it could mean for journalism, in a way that’s very thoughtful, very clear and in line with our values and principles.” At the same time, she cautions against “getting overly excited because it’s not the answer to everything – even though we can’t escape the buzz at the moment.”
We are seeing many efforts bump up against the limits of what generative AI is able to do right now. However, media companies can avoid some of generative AI’s current pitfalls by employing the technology’s powerful language prediction, data processing and summarization capabilities while leaning into their own strengths of authenticity and accuracy.
New technologies will be critical to the media landscape in 2024, converging with trends towards immersive, personalized experiences and the increased impact of the creator economy, according to Arthur D. Little’s State of the Media Market 2024. The report is subtitled “Back to Balance: A Year of Prudent Economic Expectations,” reflecting the authors’ belief in the sector’s recovery and stabilization following a rocky 2023. Read on for a few takeaways from this extensive report.
The media embraces new technologies
A persistent theme in the ADL report is the need to employ new technologies to improve operations, engage new audiences, and customize experiences.
Artificial Intelligence (AI) and Machine Learning (ML) continue to transform the media landscape, helping to automate manual processes, personalize content and experiences, and enable data-driven decision-making to power industry growth. However, for all its utility and potential, AI is a powder keg of potentially explosive issues, as seen during the WGA strikes (which resulted in greater protections and compensation for writers). The ADL report maintains that early adopters will benefit from AI innovations, even as the regulatory and ethical landscape around AI continues to evolve.
VR and AR add dimension to immersive experiences for customers and will increasingly merge with other new technologies in the development of cutting-edge user experiences.
Cloud computing facilitates agility and reduces costs. Cloud gaming continues to expand globally, driven in part by immersive experience.
Big data and analytics should be wisely employed to discover customer preferences and behavior and inform industry decision-making.
Social media continues to be vital to the overall media industry, with huge capacity to engage audiences, build brand awareness, and boost content discovery. Platforms such as Twitch, Reddit, Discord, and TikTok are enticing content creators with AI tools that facilitate video and music editing, while also developing tools to label AI-generated content.
Audio is a big opportunity
Perhaps it’s a sign of multitasking culture, but the public’s appetite for music, podcasts, and audiobooks has remained robust and is forecast to remain strong.
Music streaming saw almost double-digit growth globally during the pandemic, and that growth is forecast to continue at a somewhat slower but still steady rate. The U.S. was the main driver, contributing about 40% of the growth in the global music streaming market in 2024. Spotify continues to dominate as a platform. Most streaming services increased consumer prices in 2023 but also expanded options such as audiobooks and podcasts.
Podcasts are still climbing in popularity and attracting advertisers. A significant portion of the public are tuning in to news podcasts, especially in the U.S. 19% of U.S. residents surveyed have tuned into a news podcast in the last month, compared to an average of 12% globally. Sweden is just behind the U.S. in news podcast use at 17%, with the UK lagging at only 8%, according to the ADL study.
Audiobooks remain popular overall and will benefit from a boom in education publishing (which is expected to achieve double-digit growth between 2020 and 2025), and in self-publishing. Spotify has moved into the audiobooks business, offering 15 free hours of audiobook listening to paid subscribers in the U.S., UK, and Australia.
Traditional news vs. the “creator” economy
Creator culture and the resulting creator economy have grown, and AI tools are making it even easier for individuals to create and edit content. Brands are recognizing the power of influencer marketing and giving creators more leeway to put forth fresh, albeit less polished, content.
A flipside of the enthusiasm for interactivity and user creation is declining interest in newsprint and linear television. Younger generations are driving this change. In the UK, people aged 55 and older were the only group to cite television as their primary source of news (42%). Those under age 45 showed a strong preference for online sites and apps as news sources, followed by social media. People under 25 relied on social media above all, with 41% of that age group citing it as their main source of news, according to the survey.
A concerning aspect of this trend is the lack of regulation, which makes misinformation much easier to launch and spread. Print news struggles to compete with free but often less reliable digital news platforms. Only a small minority of all age groups (ranging from 6% of people 55+ to 0% of those 45-54) in the ADL’s UK survey cited print as their primary source of news. Bundling and partnerships may be one path to combine more traditional linear media sources with more fluid and creator-friendly platforms.
Recommendations for media companies
In addition to the key theme of embracing and leveraging new technologies, the report’s authors offer a few more recommendations.
Forge strategic partnerships to reach new audiences, pool resources, and share expertise.
Balance user privacy with data-driven decision-making.
Invest in customer relationships, using new technologies to better understand and communicate with users and tailor content accordingly.
Deliver excellent content and experiences. There’s no substitute for outstanding content. Audiences seek high quality, engaging, unique experiences, so media leaders must invest in content that rises above that of competitors.
News has long relied on the power of visuals to tell stories: first through illustrations and more recently through photography and video. The recent rise in access to generative AI tools for making and editing images offers photojournalists, video producers and other journalists exciting new possibilities. However, it also poses unique challenges at each stage of the planning, production, editing, and publication process.
As an example, AI-generated assets can suffer from algorithmic bias, so organizations that use AI carelessly run the risk of reputational damage. Without any demographic or environmental attributes specified, the text-to-image AI generator Midjourney returned four images – all of light-skinned men, all in seemingly urban environments – for the prompt “wide-angle shot of journalist with camera.”
Despite the risks, a recent Associated Press report found that one in five journalists uses generative AI to make or edit multimedia. But how are journalists using these tools, specifically, and what should other journalists and media managers look out for?
I recently undertook a study, with Ryan J. Thomson and Phoebe Matich, of how newsroom workers perceive and use generative visual AI in their organizations. That study, “Generative Visual AI in News Organizations: Challenges, Opportunities, Perceptions, and Policies,” draws on interviews with newsroom personnel at 16 leading news organizations in seven countries, including the U.S. It reveals how newsroom leaders can protect their organizations from the dangers of careless generative visual AI use while also harnessing its possibilities.
Challenges for deploying AI visuals in newsrooms
Mis/disinformation
Those interviewed were most worried about the way in which generative AI tools or outputs can be used to mislead or deceive. This can happen even without ill intent. In the words of one of the editors interviewed:
When it comes to AI-generated photos, regardless of if we go the extra mile and tell everyone, “Hey, this is an AI-generated image” in the caption and things like that, there will still be a shockingly large amount of people who won’t see that part and will only see the image and will assume that it’s real and I would hate for that to be the risk that we put in every time we decide to use that technology.
The World Economic Forum has named the threat of AI-fuelled mis/disinformation as the world’s greatest short-term risk. They rank it above other pressing issues, such as armed conflict and climate change.
Labor concerns
The second biggest challenge, interviewees said, was the threat that generative AI poses to lens-based workers and other visual practitioners within news organizations. AI-generated visual content is much cheaper to produce than bespoke commissioned work, though the interviewees noted that the quality is, of course, different.
An editor in Europe said he didn’t think AI tools would take people’s jobs. Instead, he felt that those who apply these tools well would be hired over those who don’t, since using them makes the newsroom more efficient.
Copyright
The third biggest challenge, according to the interviewees, was copyright concerns around AI-generated visual content. In the words of one of the editors interviewed:
“Programs like Midjourney and DALL-E are essentially stealing images and stealing ideas and stealing the creative labor of these illustrators and they’re not getting anything in return.”
Many text-to-image generators, including Stable Diffusion, Midjourney, and DALL-E, have been accused of training their models on vast swathes of copyrighted content online. Two of the biggest players in the market say they are taking a different approach: Adobe (with its generative AI offering, Firefly) and Getty (with its offering, Generative AI by Getty Images).
Both of these claim they’re only training their generators with proprietary content or with content they have license to use, which makes using them less legally risky. (Although Adobe was later discovered to have trained its model partially on Midjourney images.)
The downside of not indiscriminately scraping the web for training data is that it limits the outputs that are possible. Firefly, for example, wasn’t able to fully render the prompt “Donald Trump on the Steps of the Supreme Court.” It returned four images of the building itself sans Trump, along with the error message: “One or more words may not meet User Guidelines and were removed.”
Adobe Firefly wasn’t able to fully render the prompt “Donald Trump on the Steps of the Supreme Court.” It returned this image of the building itself, instead.
On its help center, Adobe notes, “Firefly only generates images of public figures available for commercial use on the Stock website, excluding editorial content. It shouldn’t generate public figures unavailable in the Stock data.”
Detection issues
The fourth biggest challenge was that journalists themselves didn’t always know when AI had been used to make or edit visual assets. Some of the traditional ways to fact-check images don’t always work for those made by or edited with AI.
Some participants mentioned the Content Authenticity Initiative and its Content Credentials, a kind of tamper-evident metadata used to show the history of an image. However, they also lamented significant barriers to implementation. These included having to buy new cameras equipped with the content credentials technology, as well as re-developing their digital asset management systems and websites to work with and display the credentials. Considering that at least half of all Americans get at least some news from social media, content credentials will only be effective if they are adopted widely both across the news industry and by the big tech platforms.
Despite these significant risks and challenges, newsroom workers also imagined ways that the technology could be used in productive and beneficial ways.
Opportunities for deploying AI tools and visuals in newsrooms
Creating illustrations
This is how text-to-image generator Midjourney responded to a prompt about visualizing generative AI. Journalists said they could see the potential for using generative AI to show difficult-to-visualize topics, such as AI itself.
The newsroom employees interviewed were most comfortable with using generative AI to create illustrations that were not photorealistic. AI can be helpful to illustrate hard-to-visualize stories, like those dealing with bitcoin or with AI itself.
Brainstorming and idea generation
Those interviewed also thought generative AI could be used for story research and inspiration. Instead of just looking at Pinterest boards or conducting a Google Image search, journalists imagined asking a chatbot for help with how to show challenging topics, like visualizing the depth of the Mariana Trench. Interviewees also thought generative AI could be used to create mood boards to quickly and concretely communicate an editor’s vision to a freelancer.
Visualizing the past or future
Journalists also thought the potential existed to help them show the past or future. In one editor’s words:
“We always talk about how like it’s really hard to photograph the past. There’s only so much that you can do in terms of pulling archival images and things like that.”
This editor thought AI could be used in close consultation with relevant sources to narrate and then visualize how something looked in the past. Image-to-video AI tools like Runway can allow you to bring a historical still image to life or to describe a historical scene and receive a video in return.
Image-to-video AI tool Runway allows a user to bring life to a still image from history.
More guidance (and research) needed
From our research, which also discusses principles and policies that newsrooms have in place to guide the responsible use of AI within news organizations, it is clear that the media industry finds itself at another major crossroads. As with each evolution of the craft, there are opportunities to explore and risks to be evaluated. But from what we saw, journalists need more guardrails to guide their use and allow for experimentation and innovation in ethically sound and responsible ways.
The environment for collecting and using data on the web has often been compared to the wild west – a place with no rules and where only the strong (and often morally-questionable) survive. Unfortunately, generative AI technology is developing in a similar vacuum of governance and ethical leadership.
Since the early days of the Internet, hundreds if not thousands of venture-backed companies have competed to scoop up as much data as possible about consumers, then tried to spin those datasets into a compelling product or service, usually built on a model of data-driven advertising. Nowadays, Meta and Google are the most often cited aggressive data collectors, though arguably that’s because they killed off the competition and strong-armed their way into a dominant market position.
Google’s parent company, Alphabet, collects massive amounts of data from Android devices, Chrome, and Google services and apps (Search, Maps, Gmail, etc.). It has even delayed killing off third-party cookies in Chrome (the last major browser to do so) because it hasn’t developed a replacement that preserves its dominant position as a collector of consumer data.
Data vacuum meets governance vacuum
Meta set out to hoover up so much consumer data, directly or indirectly, that it failed to put controls in place around who could collect it or the purposes for which it could be used (see Cambridge Analytica). Lest we think this behavior a thing of the past, another cringey example recently came to light when court documents were unsealed: Meta was reportedly using Onavo (a VPN it purchased in 2013) as a trojan horse to gather valuable analytics data on Snapchat, Amazon, and YouTube. Meta is now being sued for violating wiretapping laws.
While regulators and legislative bodies are working to clean up the debris left in the aftermath of the wild west data industry, the race to compete in the Generative AI market might take data collection to a whole new level, likely with unforeseen and potentially catastrophic results.
Large Language Models (LLMs) need data to get better – lots of it. The hockey stick progress we’ve seen in the last 18 months among generative AI systems is almost completely attributable to the massive increase in datasets upon which the LLMs are trained. The New York Times recently reported on the red hot competition among AI companies to find new data for training with companies scraping any and all content they can get their hands on. And this is taking place with no regard for copyright law, terms of use or consumer privacy laws (and without any respect for consumers’ reasonable expectations of privacy).
That said, as The New York Times’ article also notes, AI systems may exhaust all available data for training by 2026. In the absence of high-quality original data, they might even turn to synthetic data – data that was created by AI systems – for training. Who knows what kind of consequences that could yield?
Legal safeguards needed for generative AI
Sure, there are some existing safeguards that could be helpful in setting a more responsible course forward. AI companies have been confronted with numerous legal challenges to their unfettered data collection, and they face a number of lawsuits around copyright infringement as well. However, these suits could take years to fully play out, given that the AI companies are well-funded and would likely appeal any setbacks in court.
There are privacy laws on the books that likely impact data collection by AI companies. But those laws exist only in a handful of states and it’s not clear exactly how the law applies since AI companies won’t disclose what and whose data they use for training.
Against this bleak backdrop, there have been some promising recent developments around generative AI governance in Congress. This week, a new bipartisan consumer privacy bill was unveiled. While there are some serious concerns and questions to address in that bill, at least the issue is front and center. At the same time, Members of Congress from both parties appear to be actively and constructively wrestling with how best to regulate the emerging AI industry. In fact, nearly every AI bill that has been introduced is bipartisan in nature.
As the wild west of data collection gets even wilder, it’s clear we need basic rules for AI systems and stronger protections for consumers. Without this, we are likely doomed to repeat the mistakes of the previous data collection bonanza – possibly with far more severe consequences.
From Google to Facebook and Instagram to TikTok (and so many more), publishers have spent the last couple of decades chasing their audiences from one platform to another—only to be betrayed by changing algorithms and shifting platform priorities. For years, popular wisdom held that you had to go where the audience is. Now, despite the fact that audiences (particularly younger ones) seek out news and information on social platforms, those platforms are “backing away” from making that content visible. But regardless of a media brand’s position on social media, search has remained the undisputed path to traffic.
Now, publishers face a whole new threat: generative AI search. Years of fine-tuning search engine optimization strategies may all be for naught as Google embraces AI-driven answers in lieu of links to relevant content. Meanwhile, Gartner predicts that traditional search engine volume will drop 25% by 2026 as users shift to AI chatbots and virtual agents for their answers.
The Wall Street Journal reports that publishers expect a 20% to 40% drop in their Google-generated traffic if the search giant rolls out its AI search tool to a broad audience. So, what are media executives supposed to do in the face of yet another shift in the technology landscape that threatens to put them on the outs once again? There’s really only one solution: devise a plan to regain control of their audience relationships once and for all.
Discovery: a problem as old as algorithms
AI search has yet to reach its full potential, but referral traffic is already taking a hit. AI-driven search results that fail to link to the content they scrape are just one part of the problem. Searchers are often satisfied with AI “answers” and have little need to click through for more. Meanwhile, platforms across the web are trying to keep users within their walled gardens, which means the likes of Facebook and Google have gone from partners in traffic acquisition to the opposition.
“We’re seeing an industry in real crisis,” says Jim Chisholm, a news media analyst. While Chisholm says he is not seeing evidence that AI is impacting traffic just yet, that does not mean publishers are not already feeling the squeeze from elsewhere.
Liam Andrew, Chief Product Officer at The Texas Tribune, says that while his team expects generative AI to impact search traffic, they are still waiting to see a substantial effect. The bigger problem facing the Tribune now is social media traffic, or rather the lack of it.
While social platforms across the spectrum are pulling the rug out from under publishers, our old friend search is slowly changing the rules of the game. “Search is still working,” Andrew says. The Texas Tribune sees that explainers and guides still drive traffic and even subscriptions. However, other sites have not been so lucky.
Back in October 2023, Press Gazette found that of 70 leading publishers, half saw their search visibility scores drop—and 24 of those saw double-digit dips. That was the result of one update—more bad news is certain to follow as new updates make their way to the masses.
AI bots: To block or not to block
Publishers may be preparing for a more significant battle when it comes to traffic. However, right now, there’s another fight on their doorsteps: bots are crawling their sites and using their work to train the AI poised to steal their traffic. Some are already taking steps to stop the free—and possibly illegal—use of their content. The Reuters Institute found that 48% of the most widely used news websites across 10 countries blocked OpenAI’s crawlers by the close of 2023. Far fewer—just 24%—blocked Google’s AI crawler.
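In practice, this blocking is usually done through a site’s robots.txt file. A minimal sketch, using the user-agent tokens that OpenAI (GPTBot) and Google (Google-Extended, its AI-training crawler token) have published for opt-outs; the directives below assume a publisher wants to disallow both site-wide:

```text
# robots.txt — opt out of AI training crawlers site-wide

# OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Google's AI-training control token (does not affect regular Googlebot search indexing)
User-agent: Google-Extended
Disallow: /
```

Note that robots.txt is honored voluntarily by compliant crawlers; it is a signal, not an enforcement mechanism.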
For Andrew and The Texas Tribune, blocking AI crawlers is not a major concern. They already have an open-republishing model and are used to seeing their content scraped and used on other sites (often without the requested attribution). “It improves our readership and impact, but we compete with ourselves for SEO,” he says. He also says they see versions of their stories on news sites where the content is entirely AI-written. However, it is “not affecting our core audience traffic,” according to Andrew. So — at least for now — The Texas Tribune is not planning to block the bots.
Meanwhile, Google is reportedly paying publishers to use its AI tools to write content. In the short term, this may offer smaller publishers modest sums and an easier way to create low-lift content, much like other Google News Initiative (GNI) projects. But there’s an underlying concern that Google is not focused on publisher health in the long term.
Developing a direct-to-reader strategy
As Andrew and the product team at The Texas Tribune look toward a less search-dependent future, they are changing strategies. For 2025 and beyond, “we are not going to be focusing on a really good SERP [Search Engine Results Pages] unnecessarily,” Andrew says. Instead, they’ll focus on products built directly for readers.
“Newsletters have been part of our model for over 10 years. It’s nothing new, but we’re continuing to see success with it,” Andrew says. Not only do the newsletters still drive traffic, but they also drive conversions. Subscribers become members at a higher rate, vital to a publication that does not depend on paywalls for revenue.
DCN conducted an informal survey on concerns around the impact of AI search on traffic, and while the sample size may not hold up to scientific scrutiny, it was clear that newsletters are a crucial tactic for publishers looking to own their audiences. Other stats suggest this is a good move. Storydoc research found that 90% of Americans subscribe to at least one newsletter. That number goes up for younger audiences as 95% of Gen Z, Millennials, and even Gen X receive newsletters, compared to 84% of Baby Boomers.
Experiment with engagement approaches
The solutions to the Google problem don’t end at email, though.
“We also have a big robust event system,” Andrew notes. The Texas Tribune holds dozens every year. They range from “pre-gaming the Texas primary” to deep dives into transportation in the Austin/San Antonio area. They gather experts and pundits to share their expertise on topics that interest readers. The team also live-streams these events — a universally important tactic for engaging younger, more diverse audiences. These events also turn out to be effective for converting casual readers into subscribers and members.
Andrew alluded to products his team is working on that are still under wraps. Still, it’s clear that, like many publishers, The Texas Tribune is preparing for a future when search no longer drives most traffic.
Chisholm thinks mobile apps are another excellent direct-to-reader strategy, and research backs this up. Pew reports that “A large majority of U.S. adults (86%) say they often or sometimes get news from a smartphone, computer or tablet, including 56% who say they do so often. This is more than the 49% who said they often got news from digital devices in 2022 and the 51% of those who said the same in 2021.” Cultivating a relationship with readers through their mobile devices—where you can use push notifications and other native capabilities to grab their attention—will likely be one of the many tools publishers must deploy going forward.
“I’ve been in the news industry – which I love – for 48 years. Now we are at a crossroads,” says Chisholm. “Either we choose the road to recovery, rebuilding relationships with our readers, or we continue down the road we are on, subject to algorithms, more confusion between legitimate news and social media infested with AI nonsense.”
Artificial intelligence (AI) is generating a new era of content creation. However, with this innovation comes the challenge of distinguishing AI-generated content from human-made material. This creates another issue media companies must grapple with to build and maintain audience trust.
Mozilla’s latest report, In Transparency We Trust?, delves into the transformative impact of AI-generated content and its challenges. Ramak Molavi Vasse’I and Gabriel Udoh co-authored this research, exploring disclosure approaches in practice across many platforms and services. The report also raises concerns about AI-generated content and social media’s powerful reach ― intensifying the spread of algorithmic preferences and emotionally charged material.
Where generative AI falls on the synthetic content spectrum
AI-generated content is a subset of synthetic content. It includes images, videos, sounds, or any other form generated, edited, or enabled by artificial intelligence. Synthetic content exists on a spectrum with varying degrees of artificiality. One end of the spectrum features raw content, comprising hand-drawn illustrations, unaltered photographs, and human-written texts. These elements are untouched, representing the most natural form of creative expression. Moving along the spectrum is minimally edited content. Subtle refinements characterize this stage, like using Grammarly for text refinement or adjusting image contrast with photo editing apps. These adjustments enhance the quality and clarity of the content while maintaining its original essence.
Stepping up from minimally edited content is ultra-processed content, where automated methods and software play a more significant role in altering or enhancing human-generated material. Applications like Adobe Photoshop can easily enable intricate image manipulations, such as replacing one person’s face with another’s. This level of processing represents a deeper form of content alteration facilitated by advanced technology. Across this spectrum of synthetic content, authenticity challenges arise and the credibility of digital content comes into question.
AI-generated content can negatively impact society, from spreading misinformation to eroding public trust in digital platforms. This includes concerns like identity theft, security problems, privacy breaches, and the risk of cheating and fraud. The growing use of AI-generated content mandates rules to limit its harm.
Regulatory mechanisms
Mozilla’s report notes that regulatory requirements across the globe mandate clearly identifying and labeling AI-generated content. Current approaches include visible labels and audible warnings to address the challenges of undisclosed synthetic content effectively. However, human-facing disclosure methods are only partially effective due to their vulnerability to manipulation and the potential to increase public mistrust.
Machine-readable methods, such as invisible watermarking, offer relative security. However, they require robust detection mechanisms to be truly effective. Machine-readable methods show promise but require standardized, robust watermarking techniques and unbiased detection systems.
The authors advocate for a holistic approach to governance that combines technological, regulatory, and educational measures. This includes prioritizing machine-readable methods, investing in “slow AI” solutions that embed corporate social responsibility, and balancing transparency with privacy concerns. Furthermore, they propose reimagining regulatory sandboxes as spaces for testing and refining AI governance strategies in collaboration with citizens and communities.
Ensuring the authenticity and safety of digital content in the age of AI is a complex challenge that demands innovation in governance strategies. As the report points out, navigating the AI content challenge requires supporting a trustworthy digital ecosystem by leveraging machine-readable methods and fostering stakeholder collaboration and user education.
Transparent governance is essential to combat the risks associated with AI-generated content and uphold the integrity of digital platforms. Regulatory frameworks and technological solutions must adapt to safeguard against misinformation in order to promote trust in digital media content.
As November 5 draws increasingly near, the rhetoric surrounding the 2024 election is almost inescapable. Although think pieces, pundits, and poll numbers populate publications across the country, much less is said about how the outcome of this monumental election will affect the future of these outlets.
At first glance, the attitudes of President Joe Biden and former President Donald Trump about digital publishers appear to be diametrically opposed. Former President Trump’s adversarial attitude towards the media industry requires little preamble; after all, the term “fake news” is synonymous with his tenure and persona.
Meanwhile, President Biden has embraced legacy media and publishers, choosing to author op-eds in some of the nation’s most prominent outlets, such as the Washington Post and The New York Times. However, while a return by former President Trump to the White House would probably mean another four years of him snubbing the White House Correspondents’ Dinner, his policies surrounding the industry wouldn’t be all that different from President Biden’s.
Regardless of their personal attitudes towards the media and digital publishers, the winning candidate will have to make important decisions that will have deep implications for the future of the industry. Even amidst the polarization that permeates the halls of Congress, lawmakers have found surprising agreement in their critiques of Big Tech. Proposals to regulate these companies, such as the rolling back of Section 230 protections, would send ripples across the entire industry.
This increased scrutiny has also opened the floodgates of antitrust considerations; if elected, Biden and Trump would both face historic antitrust questions surrounding the tech industry. Their motives undoubtedly differ: President Biden is spurred by concerns over the effects of the industry’s shifting tectonic plates, while former President Trump is spurred by personal dislike and distrust of internet companies. However, both are likely to take action to try to stop Big Tech from dominating most aspects of modern society.
With that being said, we summarize below some of the most pressing policy issues at the intersection of Big Tech and digital media, as well as the actions President Biden and former-President Trump are expected to take that are likely to impact those in the business of digital media.
Regulating Artificial Intelligence (AI)
If there is one topic that has galvanized the industry during the past year, it is without a doubt AI. Its promise and perils have made it the most controversial topic across industry discussions, and the approach taken by whoever is sworn in on January 20, 2025, will send waves across the industry.
For a Biden administration, their top AI priorities would center on ensuring that agencies comply with the directives and deadlines established in the AI Executive Order, published in October of 2023. Earlier this year, the White House touted that all agencies had completed their 90-day deliverables.
However, the most relevant deliverable for digital publishers won’t arrive until mid-2024, when the Director of the United States Patent and Trademark Office (USPTO) is expected to issue recommendations to the President on potential executive actions relating to copyright and AI. Earlier this month, Ben Buchanan, White House Special Advisor for AI, stated that while the Administration does not yet have an official stance on AI and copyright issues, the Administration’s general priorities are “making sure that we have an innovative AI ecosystem and making sure the people who create meaningful content are appropriately compensated for it.”
For a Trump administration, it can be expected that a significant portion of the USPTO’s recommendations will be adopted, given that these are set to arrive only a few months before election day. Although earlier this month former-President Trump stated that AI was “maybe the most dangerous thing out there,” he has given little indication as to what his administration’s policies would be.
Nonetheless, it is expected that a Trump administration would give special focus to AI’s potential discrimination against, or censorship of, conservative or MAGA voices. For example, given the former President’s acrimonious attitude toward his adversaries, he is sure not to be content with ChatGPT’s refusal to produce a poem in his admiration.
Data privacy policy and online safety
Following the landmark hearing on Big Tech held earlier this year, children’s online safety has dominated the national discussion surrounding data privacy. A Biden administration can be expected to throw its weight behind the Kids Online Safety Act (KOSA), which has shaped up to be the paramount bill on the topic.
This legislative effort, which has faced several roadblocks since its introduction in 2022, is closer than ever to the finish line, now that it has earned the backing of over 60 senators, including Senate Majority Leader Chuck Schumer (D-NY). As presently written, KOSA has limited applicability to digital publishers, although its eventual passage, coupled with more stringent data privacy efforts being proposed across multiple states, might signify stricter privacy requirements for publishers further down the road.
At a broader level, President Biden is expected to announce executive actions aimed at preventing foreign adversaries from illegitimately gaining access to Americans’ data sometime in 2024. Such actions would probably impose new data privacy requirements on digital publishers, although the span of these requirements would be determined by the breadth of the actions announced. However, these new requirements and publishers’ compliance with them could boost consumer trust, at a time when data breaches are more rampant than ever.
It can be expected that if elected, President Trump would also support KOSA, given its broad bipartisan support. However, he can also be expected to reject international data privacy standards, as well as rescind any actions taken by the current FCC on net neutrality or broadband privacy.
Corporate taxes
A Biden administration is expected to continue its support for progressive tax reforms, including increasing the corporate tax rate. As part of his FY2024 budget, President Biden proposed to increase the corporate tax rate from 21% up to 28%. President Biden is also expected to close certain tax loopholes that could impact corporate financial strategies, as well as increase funding for social programs.
Meanwhile, if elected, former-President Trump would at the very least focus on maintaining the corporate tax rate at 21%, although he could very well attempt to lower the rate to 15%, the rate the former-President initially sought for the 2017 Tax Cuts and Jobs Act. When asked about the possibility of lowering the corporate tax rate to 15% during a September interview with NBC’s Meet the Press, former President Trump stated that he’d “like to lower them a little bit.” More broadly, a Trump administration would seek to extend, or even make permanent, a majority of provisions from the 2017 tax cuts, such as the Qualified Business Income Deduction.
What’s next?
Many of the most relevant decisions regarding the future of digital publishing lie outside the purview of the executive branch. As brought up in the Senate Judiciary Committee hearing on AI and Copyright, members of Congress are exploring legislation that would require the implementation of licensing agreements between publishers and AI companies, as well as legislation that would at the very least “clarify” the applicability of Section 230 and copyright law to content scraping carried out by AI companies. Additionally, other major questions surrounding copyright law are in the hands of the courts, as is the case with The New York Times’ lawsuit against OpenAI and Microsoft.
Still, a multitude of regulatory decisions will land on the desk of whomever is elected on November 5, 2024. What is of most concern to digital publishers isn’t necessarily which candidate emerges victorious, but how to best advocate for the protection and preservation of the digital publishing industry amidst a confluence of interests and voices that will crowd the White House on these industry-defining issues.
Artificial intelligence (AI) is rapidly integrating into news and content, prompting a necessary reflection on its implications for democratic societies, which rely on trustworthy and diverse media sources. A new report, Artificial intelligence and media policy: Plurality from the meat grinder, from Professor Dr. Rupprecht Podszun, Heinrich Heine University, and Ruth Meyer, Director of the Saarland State Media Authority (LMS), delves into the potential risks AI poses as it is applied in the media industry as well as the need for regulation to govern its usage.
Guardrails for AI
Podszun and Meyer identify three areas needing guidelines and policies to uphold democracies:
Trust in information is about ensuring that the information people receive is accurate and reliable, whether it’s news or other content. Laws like the Saarland Media Act stress the importance of journalists being careful and accurate when reporting. However, when artificial intelligence is involved in creating content, it can be hard to know whether the information is trustworthy because the processes behind AI are often hidden. This lack of transparency can make people doubt the reliability of AI-generated content.
Public discourse refers to the conversations and discussions in society, especially around important issues. With the rise of AI-powered recommendation systems, people often get information that aligns with their beliefs and interests. This can create “bubbles” where people only hear opinions like their own, leading to societal divisions. The idea of the public sphere, where people from different backgrounds come together to discuss and solve problems, is important for democracy. However, social media platforms, which play a big role in public discourse today, can make it harder for diverse opinions to be heard.
Plurality is under pressure despite the many different media sources available today. The reality is that companies like Alphabet (Google’s parent company) and Meta (formerly Facebook) have too much control over the information people see. This concentration of power can limit the variety of perspectives and ideas people are exposed to, especially when AI algorithms are used to personalize content delivery.
A regulatory approach for AI
Regulation will be needed to address concerns and provide guidance for the most constructive development of the AI industry and its applications. This could include laws to prevent monopolistic companies from having too much control over information. Further, there is a need for transparency about how AI is used in creating content. It’s also important to ensure that the data used to train AI systems is diverse and representative of different viewpoints.
The authors provide recommendations for regulatory approaches to ensure diverse and trustworthy access to information. They suggest that:
Preventing monopolistic concentration is crucial, given the dominance of big tech companies in both data-driven business models and AI control. Media concentration laws should counteract this trend, promoting diverse data pools and open technology.
Ensuring transparency and responsibility are fundamental. Trust in AI-driven media necessitates transparency regarding its usage, training data, and information sources.
Identifying prohibitions is necessary to enforce accountability. If AI crosses ethical boundaries, explicit bans with sanctions are imperative.
Addressing the training data problem is vital. Guaranteeing open and diverse data selection mitigates distortions in AI-generated content. Embracing adaptable AIs capable of correcting errors ensures ongoing development and integrity.
While AI offers unprecedented opportunities for media innovation, its unchecked proliferation poses significant risks to democratic principles. Effective regulation is imperative to harness the potential of AI in media while safeguarding pluralism, transparency, and trust in information dissemination. Only through collaborative efforts between policymakers, media stakeholders, and technologists can the transformative potential of AI be harnessed responsibly in the service of democratic societies.
As the technology continues to develop at breakneck speed, publishers are taking every imaginable stance on AI. From The Telegraph forbidding staff from using AI-generated text in its journalism, to Politico actively optimizing its website for generative AI crawlers, a wait-and-see approach is not an option.
There are still many unknowns around both the use of generative AI technology and the potential legal and ethical implications. So, it is understandable that many media executives are hesitant to take a stance. But in all likelihood, you’ve got staff using it already; one survey found that 70% of workers did not disclose their ChatGPT usage to their boss.
So how can senior leaders work to introduce AI in a way that benefits the organization, or at least empower the experts and evangelists to explore and share solutions? Here, we look at publishers who have created frameworks for experimenting in ways that benefit their businesses, as well as some tips from an innovation expert to help shape your own AI strategies.
“Waddle Inn” at William Reed
William Reed is a data and events publisher focused on the food and drinks sector. They have found success with their “Waddle Inn” project, named lightheartedly after the way ducks and penguins waddle together, with the extra ‘n’ to make it sound “more like a pub,” explained Chief Digital Officer John Barnes.
William Reed has three locations in the UK, as well as offices in France, Chicago and Singapore, not to mention staff working remotely. So they decided that an online forum would be the best way to bring everyone together. They set up a Microsoft Teams group composed not just of techies and experts, but also those who are new to AI, intrigued, or even frightened by it.
“Whatever your starting position is, here’s a group where you can come together,” Barnes explained. “We use it to share articles, pose questions to each other, we use it to discuss and come up with ideas. It helped form our AI statement… What’s come out of it is a whole series of ideas, some of which we’re working on, some of which we’re still debating, and some of which we’ve just put a spike through because it was too crazy to even contemplate!”
A year on, the Teams group has now become a “record” of William Reed’s AI journey. Barnes noted that some of the ideas they discussed when it first started now seem “less crazy” given the speed at which the AI landscape is developing. “It’s such a useful social resource that seems to be well-liked and well-attended,” he said. “It’s very, very active.”
Immediate Media’s AI experimentation days
Magazine and special interest consumer publisher Immediate Media have found that offering staff events focused on AI has driven experimentation and innovation. They started off with an AI Immersion Day in the summer, which Roxanne Fisher, Immediate’s Director of Digital Content Strategy, said was about setting a level playing field of understanding.
“We had a morning of keynote speakers come in and talk about things like the ethical implications and the opportunities, and we also had our CEO stand up and say, ‘This is what we’re doing with AI, and this is what we believe we shouldn’t do,’” she explained. “Then in the afternoon we had really hands-on workshops about prompting, ChatGPT, basic frameworks, Midjourney and how to decide what the right tool is for your use-case.”
But they didn’t just leave it at that. To encourage practical outcomes, Immediate then set up AI Experimentation Days, asking staff to submit projects that they might want to use AI for. From over 70 submissions, they took 27 projects forward to work on at a hackathon-style event.
“The experimentation days were really useful because people were getting to work on something they had submitted, and were quite excited about,” Fisher said. “But also we had expert facilitators going around the business for the two days. So, if you got stuck…we had people to give really good prompting tips.”
The days were seen as a success by staff as well. Fisher noted that having no expectations around experimentation or productivity gains really helped take the pressure off. Although they’re planning more days in the future, she acknowledged that they’re now working on ways to build the outcomes into the business more strategically. “The experimentation is going really well, but building that into a process or ways to use these tools on a regular basis that’s really meaningful is more difficult,” Fisher said.
Have a clear company-wide generative AI statement
Plenty of publishers have put out statements and policy guidelines to clarify their approach to AI tools over the past year or so. This shouldn’t be seen as a frivolous exercise or a PR move; both Barnes and Fisher noted that publishing clear, public guidelines has really helped reassure staff, as well as provide a framework for internal experimentation.
“If I hear somebody in our company saying, ‘I’m really worried about AI, I’m going to lose my job,’ I don’t see that as a problem. It’s something I need to try and address so that they, for their own well-being, aren’t feeling frightened and worried,” William Reed’s John Barnes said. “We’re not going to be doing certain things, hence having a statement that’s transparent and published that everyone can see.”
Immediate has also put together their own manifesto for how they will use AI, and have communicated that to staff. “The first thing we wanted to do is be really clear in the business what our stance was for using it; what we would and wouldn’t do,” Fisher explained. “We really want to stay up to date and understand it, and understand how it’s going to change everything, but we also want to really maintain the trust of the brands.”
For Fisher, it was important to put guardrails and rules in place for experimentation. This includes basics like not putting personal or sensitive data into tools, but also ensuring staff feel secure. “We did a lot of psychological safety work around it, and tried to make sure people felt they could have the conversations and be really honest about how they’re feeling about it,” she said. “But also lots of reassurance and having that clear stance of what we did and didn’t do was helpful. It set out to people that we’re not about to churn out hundreds of news articles or anything. That’s not what we’re about.”
Lead AI innovation from the top
The success of AI initiatives like these is heavily predicated on support from senior executives. Fisher found that, at Immediate, early encouragement to try and use the technology didn’t take off until more structure was provided. “We realized that the testing and usage needed to come from [the leadership team] more,” she explained. “So we were more prescriptive with people about what we thought they should test.”
Immediate now has an “AI champions board” of people from different disciplines around the business who are passionate about, or interested in, the technology. This helps galvanize others. It also means that there are recognized representatives staff can go to with ideas or concerns.
Barnes explained that he sees three types of people who experiment with new technologies in a business: the overenthusiastic early adopters, the pragmatists, and those who are afraid they’ll lose their job to it. “You want to rein in the early adopters who are going to waste lots of time, and you want to make sure that the people that are being pragmatic about what it could or couldn’t do aren’t seen as being the party poopers,” he outlined.
But AI policies and strategies aren’t something that can simply be delegated out to someone else. Media consultant Ian Betteridge, who has worked in leadership positions at publishers like Bauer and Dennis, says that it needs to come right from the CEO. “Firstly, you need to understand this new technology,” he said. “You need to understand how it impacts your ways of working. Because quite often, when it gets to a certain level, everyone just assumes that everybody else knows, or everybody else can just find it all out themselves.”
This means going through several change management processes. “You’ve got to run, as the CEO, a change management process with your executive team,” explained Betteridge. “They then have to run the change management process with the senior leadership team…and they then have to run it with the broader business.”
While that’s going on, media leaders have to “stop people just going off and doing it themselves.” “You’ve got to put guidelines in place from the off,” Betteridge advised.
That doesn’t mean media leaders need to understand the technical detail, or every use case. “But we need to understand it to a point to be able to say to people, that’s not possible, or how to help them unlock things,” Immediate’s Fisher emphasized. “One of the biggest things that people in leadership can do is actually just get hands on with it… therefore you can make sure what you’re asking people in the company to do is realistic and reasonable.”
“Somebody on the leadership team that is looking after digital in some way, shape or form needs to own [it],” Barnes echoed, noting that it can’t simply be delegated to the IT team. “And if they’re not able to lead it on their own, they need to put a group of people around them from their teams or from around the company that can own it.”
Get your AI experimentation going
Some publishers will have teams and individuals eager and ready to experiment with AI. But for those at organizations with leadership teams who are more cautious about the technology, there are still ways to get things moving.
For organizations without strategies in place or a willingness to experiment, “the best way is to always just be able to share results. That’s the thing anybody sitting at the top is going to care about,” Betteridge advised. “So if you can share results and say, we’re now getting five exclusives and more traffic through there, then that matters.”
Barnes suggested simply talking about it as a way to get things started, even if full understanding isn’t there yet. “You want to make sure you’re working within the law, you’re doing something that isn’t going to damage your brand, you’re using tools in the right way to be efficient but not at the expense of humans,” he said, noting that these are points most leaders can agree on.
Whether organizations choose to appoint a board, hire or earmark individual champions, or bring everyone into the conversation, AI is not an issue media leaders can afford to stay silent on. It is a given that there are people within nearly every media organization using generative AI tools in some way. And while you might not hesitate to let staff use Grammarly, you’ll likely feel quite differently about their using ChatGPT for research or writing. Not only is it critical that you get started; the impetus and innovation need to start at the top, with clear leadership driving smart experimentation.
At the 2024 DCN Next:Summit, held February 7th – 9th in Charleston, SC, senior executives from DCN’s member companies discussed the biggest issues and opportunities impacting the future of media. It was fitting that CEO Jason Kint opened the event by reflecting on the human agency, energy and collaboration that drive the industry forward.
“Let’s face it, algorithms can’t write a Pulitzer Prize winning exposé. They can’t tap out a heart-wrenching screenplay or build a company culture that thrives on innovation, inclusion and empathy,” he said. “We all understand that algorithms crunch data. Tech is very much part of our future. But it’s the human creativity spark that ignites us all.”
As the industry navigates the ever-changing media landscape–which continues to be shaped by technology–Kint reminded attendees that it’s not machines, but humans, who will write the next chapter of our collective story. While he pointed out that media organizations must be positioned to best leverage and benefit from emerging technologies, Kint emphasized the critical role of those gathered at the Summit, particularly as they guide the teams they lead.
The impact of generative AI
Unsurprisingly, a broad theme of the event was the many ways publishers are approaching, using, experimenting with and challenged by generative AI. On that note, Todd Krizelman, CEO and co-founder of advertising intelligence platform MediaRadar, said his company approaches AI from two directions: the business and editorial sides of media. For him, AI is the most exciting part of his business, and the media business, right now. “Many companies will be testing the kind of stuff we are, but I do think it is a genuine opportunity,” he said.
Rafael Urbina-Quintero, the chief operating officer of ViX at TelevisaUnivision Inc., told Axios media reporter Sara Fischer that his company is starting to experiment with short-form content, using AI to do translation. “We have certain properties that travel really well.”
Kedar Prabhu, vice president of ad product and technology at Dow Jones, explained that Dow Jones uses AI for audience lookalike models and contextual targeting in ad tech. The company is also looking at use cases for AI including translation, voice transcription, personalization and video production.
POLITICO is focused on using AI to improve its subscriber experience. Goli Sheikholeslami, CEO of POLITICO Media Group explained that, while POLITICO has tens of thousands of legislative bills on their platform, only about 40% of them had summaries. However, “over the last three months, we’ve now taken all of our bills and through AI, now provide summaries for every single bill that’s on the platform,” Sheikholeslami said. POLITICO is also looking into how to use AI tools to help journalists do their jobs better and improve workflows that allow them to do more value-added work.
Andrea Brimmer, chief marketing and public relations officer at Ally Financial said that they are not only testing the Ally.ai platform, a large language model for campaign development, they’re also using AI to keep up with consumers’ “insatiable” demand for content on their Conversationally blog.
However, as many echoed throughout the event, AI is a powerful tool, best employed wisely by people. “We’ve used AI to make our writers more proficient to accelerate the amount of content that we’re able to put out into the atmosphere,” Brimmer said. “But, the difference at Ally is everything’s got to have a human touch. So, everything that we do has a human in the center of the transaction.”
In general, both speakers and attendees made it clear that despite AI’s huge potential value and use cases, they have concerns. “There’s a gap between tech capability and business value that is yet to be crossed, in particular when it comes to generative AI. I expect that, given what we’ve seen, we will eventually cross that gap, but we’re not there yet,” Prabhu said.
AI can be a powerful tool for making decisions, predictions or completing tasks. However, it is critical to keep in mind that it is not immune to problems like racism, sexism, and ableism, explained Meredith Broussard, author of More Than a Glitch and an associate professor at the Arthur L. Carter Journalism Institute of New York University.
Problematic platform partnerships
From Apple and Samsung to Facebook, Google, and Amazon, publishers and brands seek partnerships with platforms because they help drive revenue and build audience and market share. However, these days everyone is approaching platform partnerships with caution.
The Weather Company is very selective about the platforms it will partner with, explained CEO Sheri Bachstein. She added that they have partnered with Apple, Samsung, Facebook and Google, which helps extend their brand. Platforms add value, drive revenue and add the ability to create more segmentation, Bachstein said. “So, you make a decision strategically, ‘I’m going to partner with this person versus for this data’ to drive your business.”
Brimmer explained that Ally Financial has been somewhat tepid on platforms. “We use platforms episodically. Until we feel comfortable that all brand safety measures are in place, there are platforms that we will just stay away from,” she said.
Indeed, as Paramount COO Steve Ellis put it, in many ways “it is not an even playing field when we talk about platforms because we have to abide by a whole different set of standards as a premium company.”
Marketing and advertising to culture
While coping with cookie deprecation was a recurring theme, the subject of equitable and diverse advertising was also a topic of much discussion. As Axios’ Sara Fischer pointed out, the media industry still lacks systems and processes to support marketing and advertising to culture.
TelevisaUnivision COO Rafael Urbina-Quintero, whose over-the-top streaming service ViX targets a market of 600 million Spanish speakers, agreed. It’s a huge opportunity for the company, which has a large share of the broadcast market in Mexico and about 60% of the Spanish-speaking market in the US.
Urbina-Quintero explained that the company has developed an entire creative services unit to help advertisers create pieces that allow them to effectively market to these audiences. “It’s a massive opportunity where you’re talking about over a trillion dollars in purchasing power, and one of the fastest growing parts of the population.”
Ally Financial’s Brimmer said that, over the past several years, the company has been focused on ensuring that women’s sports get a fair deal. While women’s sports have grown in popularity, they still receive a fraction of the marketing investment and sports media coverage. With this in mind, in May 2022 the company committed to spend equally on men’s and women’s sports within five years, and is poised to reach its 50-50 goal ahead of schedule–by the end of this year.
Becoming a champion of women’s sports has been more than a feel-good project for Ally, according to Brimmer. She pointed out that it takes a lot of effort, funding and differentiation to break through in sports. So, leaning into women’s sports gave Ally Financial the opportunity for disruption.
She said that the company could see that “the moment for women’s sports was really coming [and asked] ‘how can we naturally and authentically integrate into that to be one of the brands that can punch above our weight, especially in this highly complex media ecosystem?’” The decision has paid off. Brimmer says that the company “finished last year with the highest level of all of our KPIs in the history of our company, highest level of awareness ever in the history of our company, highest level of positive brand sentiment ever in the history of our company…the earned media impressions alone have been staggering.”
As Paramount COO Ellis described, there’s an opportunity being missed in which marketers can align their messaging and actions with underrepresented communities. “It speaks to how entrenched processes and systems are,” he said. “We have to change processes to change the outcomes and I don’t think we’ve done nearly enough as an industry to change how we market to culture.”
Antitrust action
Platforms are coming under heavy scrutiny in 2024. Just last month, the Federal Trade Commission (FTC) launched an inquiry into big tech companies including Alphabet, Amazon.com, Microsoft Corp, and their investments in AI companies OpenAI and Anthropic PBC.
The inquiry will examine the relationship between some of these big tech companies and newer AI firms that they’ve been investing in, explained FTC Chair Lina Khan in a live-streamed interview with Axios’ Fischer. Khan says the agency is asking “Are there certain expectations of exclusivity? Are there certain rights to board seats, or other mechanisms influencing business strategy or direction of innovation?”
Historically, at technological inflection points, paradigm shifts in the available technologies can be enormously important for opening up markets, injecting competition, and disrupting existing incumbents, Khan said. “We just want to make sure that these relationships and partnerships are not being misused to undermine competition.”
In a session recorded for The Verge editor Nilay Patel’s podcast “Decoder,” Jonathan Kanter, Assistant Attorney General for Antitrust at the Department of Justice, set the stakes even higher. “News and journalism is the raw material of our democracy and the marketplace of ideas is vital to a thriving… democratic free society.” Therefore, his work examines whether monopolies have arisen, or are poised to, which threaten the health and survival of the news media.
As Kanter described it, “if monopolization and harm to competition is harming journalism, it means that companies can’t invest in original journalism in the kind of reporting and infrastructure that is necessary, not just on a national level but on a local level, to keep our country free of corruption, to make sure that our political discourse is well-informed, to make sure that people can learn about exciting new things, to make sure that we can vote in an informed way. It’s hard to imagine something that’s more important, more critical to the fabric of our nation.”
Press freedom, democracy and the human cost of doing journalism
Throughout the program, storytellers from around the world discussed the human cost of journalism, and the need to unify as an industry to protect journalists and the right and responsibility of the media to speak truth to power.
Paul Beckett, assistant editor at The Wall Street Journal, detailed the work he and a wide team from the WSJ and others are doing to release his reporter Evan Gershkovich, who is being detained in Russia. He emphasized that it is up to the entire industry to protect journalists from forces that would stifle the free press. “It has ripple effects and, at its broadest, it is an attack on one of the great freedoms,” Beckett said.
Rappler CEO and 2021 Nobel laureate Maria Ressa discussed her ongoing efforts to hold the line on press freedom with Pivot podcast co-host Kara Swisher. She also explored her 10-point plan to support journalism. “In 2021, I compared what tech did to an atom bomb exploding in our information ecosystem… because it’s changing the cellular level of democracy. It’s only gotten worse since 2021,” Ressa said.
In 2024, at least 64 countries (including the European Union) will hold national elections. That’s almost 50% of the world’s population. “Since we don’t have integrity of facts, how are we going to have integrity of elections?” Ressa pointed out.
She called for publishers to stop competing with each other when it comes to larger issues of misinformation, disinformation, and the impact of dominant platforms on the media industry and society as a whole. While she recognizes that news organizations vie for audiences, Ressa believes there are areas where collaboration is critical. “I know that we cannot continue competing against each other because we’re on the same side of facts,” said Ressa. “We’re on the same side of a shared reality, right? Collaborate, collaborate, collaborate.”
Amid the attacks on the press by those in power, it’s important that the press behave as professionals, not combatants, former Washington Post Executive Editor Marty Baron explained to the audience. “I think one of the critical things for us is to actually persuade the public that we’re giving them information… that we’re not just participating in ideological or partisan warfare,” Baron said.
People shape the future
As digital media progresses with artificial intelligence, navigates platform partnerships, protects democracy, and markets to new cultural opportunities into 2024, it’s clear that the future of the industry lies in its people. As DCN CEO Jason Kint remarked, “it’s in the hearts and minds of extraordinary individuals who make up our membership, it’s the humans who uncover the truths, who amplify silenced voices and who hold the powerful accountable.”
On January 10, 2024, the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law held a hearing titled “Oversight of A.I.: The Future of Journalism,” kickstarting legislative activity on AI for 2024. The central question of the hearing wasn’t whether copyright law covers AI (most witnesses and members of Congress seemed to agree that it does) but whether existing law properly and effectively protects the intellectual property of journalists from infringement by AI. As Subcommittee Chairman Senator Richard Blumenthal (D-CT) stated, rights need remedies, and for these remedies to be effective, they must be enforceable. It was that effectiveness and enforceability that was the true centerpiece of this Congressional discussion.
The witnesses at the hearing were: Danielle Coffey, President and Chief Executive Officer of the News Media Alliance; Jeff Jarvis, Tow Professor of Journalism Innovation at the CUNY Graduate School of Journalism; Curtis LeGeyt, President and Chief Executive Officer of the National Association of Broadcasters; and Roger Lynch, Chief Executive Officer of Condé Nast.
For senators, a sense of urgency
During his opening statement, Senator Blumenthal (D-CT) highlighted the importance of this subject and this hearing, touting it as critical to democracy. Careful not to vilify the possibilities afforded by AI, Senator Blumenthal argued it is essential for reporters and readers to be able to reap the benefits of AI while avoiding its pitfalls. Nonetheless, he clearly called out how the rise of big tech and generative AI has led to the decline of the news industry, with the hard work of authors being utilized without credit or compensation.
Evident in Senator Blumenthal’s remarks was a sense of urgency, as he expressed that it was essential that Congress learn from their mistakes in tackling social media. He also floated several areas of consensus around the topic of AI, such as licensing, transparency, incentive structures for companies to develop trustworthy products, limiting big tech’s monopolistic practices when it comes to advertising, and clarifying that Section 230 does not apply to AI.
As a refresher, Section 230 of the Telecommunications Act of 1996 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Since coming into effect, Section 230 has granted websites and social media companies immunity from liability for content posted on their platforms by others.
It is no surprise that several of these areas of consensus are present in legislative proposals introduced by Senator Blumenthal. In 2023, he, alongside Subcommittee Ranking Member Senator Josh Hawley (R-MO), introduced the “No Section 230 Immunity for AI Act” as well as an AI Legislative Framework which tackled licensing regimes, transparency, and trustworthiness.
In his opening statement, Senator Hawley echoed Senator Blumenthal’s sense of urgency in protecting the work product, data, and information of consumers, at a time when the largest tech companies attempt to monopolize these areas.
For witnesses, a (somewhat) clear solution
Across the board, the hearing’s four witnesses illustrated the invaluable contributions the news industry has made to society. Danielle Coffey, Curtis LeGeyt, and Roger Lynch all agreed that licensing agreements are an essential component in combating the risks AI poses to the industry.
Coffey highlighted that such agreements could help avoid protracted uncertainty in the courts, while LeGeyt and Lynch raised how licensing agreements have become standard practice in the music, radio, and local television industries. Jeff Jarvis was more optimistic about the positive use cases of AI in the industry and advocated for the measured embrace and implementation of AI in journalistic practices.
A fork in the road for the industry
Following witness testimonies, committee members expressed their support of licensing agreements as a solution to some of the copyright issues raised by the interaction between AI and the news industry. Even more so, several committee members expressed their eagerness to tackle the issue directly and immediately.
Senator Mazie Hirono (D-HI) inquired whether Congress needed to enact legislation for these kinds of licensing procedures to be implemented, while Senator Blumenthal stated that when it comes to both licensing and Section 230 issues, Congress has an obligation to clarify current law, ensure that licensing is legally required and reinforce the inapplicability of Section 230. Somewhat surprisingly, it was some of the witnesses who pumped the legislative brakes on these comments. Regarding Senator Hirono’s comments, LeGeyt argued that such Congressional action would be premature, while Coffey stated she believed the industry would prevail in addressing these issues through pending litigation.
What is undeniable is that 2024 is set to be a landmark year for Congressional action on AI, and that copyright issues offer legislators a path to AI “victory” that is targeted, discrete, and not overtly controversial. Because of this, regardless of what was advocated for in this hearing, members of Congress can be expected to at the very least attempt to “clarify” the applicability of existing copyright law to generative AI models. Of course, the distinction between a limited clarification of current law and the outright enforcement of these types of agreements is up to legislators.
While witnesses adamantly made the case that copyright law is on their side, legislators continuously expressed concerns about the efficacy of existing protections. Going back to Senator Blumenthal’s statement about rights needing remedies that are effective and enforceable, participants agreed that the rights of journalists certainly exist in copyright law; but for legislators, efficacy and enforceability need an extra push from Congress to come to fruition.
Looking towards 2024, with copyright litigation in its nascent stages, the digital content industry may certainly find relief in the legal system but would be wise to hedge some of its bets in the hands of legislators who seem keen on engaging with this industry-defining issue.