Today’s headlines about artificial intelligence (AI) describe it as a transformative force reshaping entertainment and technology. Companies across the spectrum are adopting AI to streamline operations, make content production, marketing, and distribution more efficient, and revolutionize the viewer experience. But are consumers keeping pace with this rapid evolution, and what do they think about it?
Popular AI tools like OpenAI’s GPT-4 and DALL-E generate images, music, scripts, games, and even fully realized ads and video content. These advancements blur the lines between “real” and “fake” content. Post-production AI tools can now enhance not just backgrounds and environments but also character identities and storylines. AI algorithms also help to better match viewers with content they love, refining recommendation systems and personalizing viewer experiences.
Consumer awareness and usage of AI
A recent report by HUB, part of the Entertainment & Technology Tracker series, delves into consumer awareness and understanding of AI’s current capabilities. The study reveals that 71% of respondents are familiar with “generative AI,” yet only 18% feel “very confident” in explaining what it is and does. Interestingly, 57% of respondents use one or more generative AI models, highlighting a significant gap between awareness and deep understanding.
Perceptions of AI: good vs. bad
The report finds that nearly half of the respondents view AI as a positive development, compared with about a quarter who see it as potentially negative. On both sides, a common belief is that AI is fundamentally changing how we live and work. However, there are notable concerns, particularly around privacy, employment risks, and the potential misuse of AI to create deepfake content.
Consumer interaction with AI
Consumers are increasingly interested in AI-driven features that enhance content discovery and selection. Despite this interest, there is a clear preference for human creativity in certain domains. For example, more consumers believe humans outperform AI in tasks like writing music or dialogue. Conversely, they view AI as equally capable of generating game dialogue or trailers and superior in tasks like CGI, writing descriptions, or creating subtitles.
The desire for transparency in AI-generated content is also strong. According to the report, 67% of respondents want to know if something is created using AI, underscoring the need for clear labeling and communication from content creators.
AI opportunities and challenges for the entertainment industry
For media executives, the rapid integration of AI presents both significant opportunities and challenges. AI streamlines production processes, reduces costs, and enables the creation of more personalized and engaging content. However, it also raises critical questions about the balance between automation and human creativity, the ethical implications of AI use, and the need to address consumer concerns around transparency and data privacy.
AI is reshaping how content is produced and consumed and redefining the relationship between creators and audiences. As the industry adapts, embracing AI’s potential while addressing its challenges is key to sustaining growth and innovation in the entertainment sector.
The HUB report is useful for media executives looking to stay ahead in this rapidly changing landscape. By tracking consumer sentiment and behavior, companies can better anticipate trends, address concerns, and leverage AI to create a more dynamic and responsive entertainment ecosystem.
As AI continues to evolve, media companies must navigate these complexities strategically. Investing in AI technologies that enhance viewer engagement while maintaining transparency and ethical standards is crucial. Additionally, fostering consumer education around AI capabilities and limitations helps bridge the gap between consumer awareness and understanding.
As news consumption evolves, The New York Times’ R&D Lab department is at the forefront of redefining how stories are told. Their goal? To enhance the narrative experience without sacrificing journalistic integrity.
With a focus on exploring how emerging technologies can be applied in service of journalism, the department is proving that innovation can enrich journalism rather than distract from it. But it’s a fine balance. In a landscape littered with shiny new things, where innovation is often considered a buzzword, the R&D Lab team pioneers new technologies to serve the story.
Scott Lowenstein, director of research and development strategy at The New York Times, oversees the R&D team, leading experimentation with the next generation of storytelling. The R&D group is a mostly technical team that works on applications of emerging technology in the service of journalism, but also for their products and business, he explained.
From the NYT’s R&D Lab’s experiments in mixed reality journalism.
The launch of virtual reality (VR) in 2016 marked a significant achievement, as the R&D Lab created over 30 immersive films that showcased the narrative potential of VR. The films saw high viewer engagement, with over 1 million downloads of the Times’ VR app. In 2017, the team supported newsroom storytelling by integrating emerging technologies like mixed reality and connected home devices, while also improving public figure identification in photos through automation.
In 2021, the team developed the Times’ first AR game and animated sequences for augmented reality, continuing to explore spatial journalism and improve media transmission with 5G technology. By 2022, they emphasized integrating locative data and spatial understanding into journalism, aiming to better reflect and connect with communities, thus enhancing storytelling.
“Never-ending cycle of exciting, interesting problems to tackle”
A lot of the R&D team’s work starts when Times’ staffers come to them with an ambitious project they’d like to do, but don’t know where to start. Lowenstein said the team does their research on technologies, the latest tools in the market, and projects in adjacent industries from architecture to game design.
“We prioritize the ones that feel achievable. We find the technology to match the ambition of the story,” Lowenstein said. “We will oftentimes say, this technology is really cool, but it’s not ready. But, when we do find an idea that works and match it well with the story, then we’ll help that team come up with the first execution.”
And, when the team finds a technology that works, they’ll find ways to make it repeatable, so that future stories within The Times can use it. “We’ll also often open source our work and share it with a broader journalism community to help replicate what we’ve done.”
Much of the Lab’s recent work falls under spatial journalism in a broad sense, which encompasses everything from 3D to mixed reality and virtual reality.
“The mission is to try to use these tools in service of great storytelling and not try to do something gimmicky. Instead, we want to do something where this new tool could help us do what we couldn’t do before,” Lowenstein said.
A lot of the projects are seriously ambitious. They’re not only hard to do, but they also require a large team of technologists, editors, and resources. So, the Lab is invested in how to facilitate this process.
“A lot of our ambition is trying to just make that whole process easier for the teams so that they can just say, we want to do this and that they can do it the same day,” Lowenstein said. “Some of it is building software and templates and things that don’t require a super-specialized 3D engineer to build a 3D story. That’s where a lot of our effort goes: finding those repeatable things and creating the tools that allow editors to do that work with less or no help from us.”
Some of Lowenstein’s favorite stories were during the early days of augmented reality. “We built a lot of stories that we published as 2D stories on the web, and then translated specific slices of them into AR, that people could use on their phones. Some of those were very well received and we learned a ton from that experience.”
Balancing innovation and integrity
The New York Times R&D Lab is not interested in employing the latest technology just for the sake of doing it. Technology is a tool in the service of their mission, not an end in itself. “One of the great things about working with our amazing journalists and editors is that their gimmick meter is really strong,” Lowenstein said.
Timeline of innovation at The New York Times’ R&D Lab
2006 – Launch of R&D Lab
Foundation: The New York Times R&D Lab was established to innovate and experiment with new technologies and storytelling methods.
2009 – Multimedia Applications
Commuter App: An innovative app that combines traffic cameras, Google Maps, and location-specific Times content, enhancing news delivery through geocoded articles and blog posts.
2011 – Interactive Technology
Magic Mirror: A data-bearing mirror using motion sensing and voice recognition to deliver on-demand information via the Times’ APIs.
2014 – Semantic Listening
New Tools: Developed systems for extracting semantic information and crowdsourcing data on cultural artifacts.
2015 – Wearable Technology
Focused on wearables, examining their potential uses in the media landscape.
2016 – Virtual Reality (VR)
Created over 30 immersive films.
2017 – Emerging Technologies
Shifted focus to support newsroom storytelling by integrating technologies such as mixed reality and connected home devices, while also improving public figure identification in photos through automation.
2019 – Computer Vision, Photogrammetry & Spatial Computing
Computer Vision: Revealing hidden narratives within visual content.
Photogrammetry: Creating immersive 3D environments from 2D photos.
Spatial Computing: Exploring new storytelling formats in augmented reality.
Media Transmission & NLP: Enhancing photo and video transmission and extracting insights from the NYT’s extensive archive.
2021 – Augmented Reality (AR)
Developed the Times’ first AR game and animated sequences for augmented reality, while continuing to explore spatial journalism and improving media transmission with 5G technology.
2022 – Spatial Journalism
Emphasized integrating locative data and spatial understanding into journalism to better reflect and connect with communities, thereby enhancing storytelling.
Technology evolves rapidly, and having a solid understanding of emerging tech applications is incredibly valuable. The R&D team began exploring large language models back in 2019. Now that these models are widely adopted, they have a significant advantage in knowing how to apply them responsibly.
From the NYT R&D Labs’ work on media transmission and provenance.
Lowenstein emphasized the importance of transparency in their work. “We’re always going to tell our readers when and how we’re using these emerging technologies,” he said. “I view it as a duty to our readers to explain how we’re using these technologies. And in that process of explanation, that is the process of balancing storytelling and integrity in action.”
Those principles show up in reader-facing ways. This is evident on their website, where stories include methodologies, labels, and sometimes tutorials or explanatory text. Those principles also show up in industry-facing ways. The R&D Lab shares their work as much as they can on their website, and through open source libraries for others in media.
“I think that’s one of the great benefits of having this kind of open source mentality is that we can see, we put our stuff out there and we can see how the community is using it,” Lowenstein said.
A lot of people are using the photogrammetry, the spatial journalism and 3D storytelling, using the libraries and contributing to them, he explained. “It’s super gratifying to see this little community built up around some of the photogrammetry, the spatial journalism. It’s awesome to see how people have taken that work and evolved it and made it their own.”
From the NYT’s R&D Lab’s experiments in spatial journalism.
Technology should be in service of the story
As technology and journalism continue to evolve, other organizations can glean valuable insights from The New York Times R&D Lab’s approach to experimenting with emerging technologies. For one, Lowenstein emphasizes that technology should be in service of the story. “That’s first and foremost. We don’t do it for the sake of doing it. The story should come first and you should really look through if the ambition of the story warrants it.”
Embracing experimentation is equally vital, and even though trial may lead to error, each misstep could lead to unexpected insights. “We often try things that don’t work the first time or the fifth time, and then you find the perfect fit,” Lowenstein said.
Lowenstein suggested keeping a record of what you’ve tried and what you’ve learned along the way. Oftentimes, he says, things you didn’t know would be useful later will pop up. Keeping records allows you to adapt to new possibilities.
Finally, the director suggested that it is important for media companies to have principles about how they use technology responsibly and in service of their work. It helps guide decision making.
By establishing clear principles for the responsible application of emerging tech, organizations can carve out a focused space to innovate while delivering quality journalism that resonates with audiences. Ultimately, it’s this blend of technology, experimentation, and integrity that will empower the industry to navigate into the future.
“In some ways, having those clear principles about when and how you use technology is a great creative constraint. It allows you to define the parameters of how you want to use the technology in service of quality journalism. And, working within those boundaries from the beginning makes it achievable and makes the space of things that you could do smaller so that you can actually focus on the right stuff,” Lowenstein said.
Generative AI enables the production of content at scale. It has been found to generate narratives with essential elements like concrete characters, logical plot development, and coherence, which are all key to the story quality. Narratives play a vital role in conveying culture, enforcing norms, and sharing knowledge. But what happens to authenticity, emotion, and persuasive power in AI-generated stories?
A recent report by Haoran Chu and Sixiao Liu, Can AI Tell Good Stories?, examines AI’s potential to craft narratives that engage audiences and influence beliefs and behaviors. Through a new three-part series of studies, the researchers build on earlier work highlighting tools like ChatGPT as promising for AI-driven storytelling. However, they emphasize that impactful storytelling relies on authenticity, which goes beyond factual accuracy. Chu and Liu question ChatGPT’s ability to evoke narrative immersion and persuasion, elements often anchored in human creativity and personal experience.
The researchers also investigate whether labeling a story as AI-generated affects reader engagement. Overall, the study examines AI’s potential to tell compelling stories and influence audiences and how AI-generated stories compare to traditional human storytelling.
Study 1: Can AI tell good stories?
In the first study, Chu and Liu explore the question, “Can AI tell good stories?” They examine whether AI-generated narratives can captivate and impact audiences as deeply as human-created ones. This study emphasizes AI’s technical abilities in crafting stories. The researchers program AI to create stories, from short fictional tales to complex, multi-layered narratives, and then analyze how these AI-generated stories affect people emotionally and intellectually.
The study finds that AI can adhere to structural storytelling elements like character development, conflict, and resolution. Yet it often lacks the depth of human experience and emotion that makes stories compelling. Despite this, AI-generated narratives sometimes capture audience interest and influence certain beliefs or attitudes. The study concludes that AI storytelling has limitations but is advancing quickly, with promising potential for more engaging narratives. The researchers suggest that enhancing AI’s emotional intelligence and contextual understanding could further improve its storytelling capabilities.
Study 2: The influence of AI-generated stories on beliefs and behavior
The second study goes beyond the question of AI’s ability to tell a good story to ask whether AI-generated stories can genuinely influence beliefs and behaviors. This research investigates whether AI-crafted stories can impact audiences as effectively as human-created ones. It focuses on the potential to shape opinions, foster empathy, or inspire actions.
In this study, participants read stories created by either humans or AI. The participants are unaware of the source of each story, allowing the researchers to assess the impact of the content without bias. The researchers find that AI-generated stories can somewhat influence readers’ opinions and emotions, especially on straightforward or factual topics. However, human-created stories tend to have a more substantial and lasting effect on audiences for more nuanced or emotionally charged subjects.
The study suggests that while AI-generated stories have the potential to influence behavior, they often lack the subtlety and emotional complexity required to create deep, long-lasting connections. This limitation likely stems from AI’s lack of personal experience and understanding of human emotions, which can make its narratives feel less relatable or profound. Despite these limitations, the study notes that AI storytelling could still be useful in specific contexts, such as educational or informative storytelling.
Study 3: Comparing AI and human storytelling
The third study directly compares AI storytelling with human storytelling, examining their strengths and weaknesses. This study seeks to understand whether AI can tell a good story and where it falls short compared to human creativity. The researchers analyze the storytelling components, including character development, emotional appeal, and the ability to engage audiences.
The findings show that AI excels at consistency and structure. However, it struggles with the abstract aspects of storytelling, such as creating realistic emotions and building complex character relationships. For example, AI can write a well-organized story with a clear beginning, middle, and end, but it often struggles with subtleties like irony, humor, or empathy.
On the other hand, human storytellers can draw from personal experiences and emotions to create stories that resonate more deeply. This is particularly important for genres that rely on empathy, like drama or romance, where the audience connects with the characters’ emotional journeys.
AI can create content. It can’t tell impactful stories (yet)
The study concludes that while AI might be useful for quickly producing structured content, it cannot yet replicate the depth of human storytelling. However, it also points out that AI can be a powerful tool when paired with human creativity. For instance, AI can generate story ideas, help with content organization, or even create drafts that human writers can refine and deepen.
These three studies provide a nuanced understanding of AI storytelling. The first study finds that AI has the structural capability to create stories but lacks the depth of human experiences. The second study shows that AI-generated stories can influence beliefs and behaviors but are less impactful than human stories for emotionally complex topics. The third study compares AI storytelling to human storytelling, showing that while AI is effective in structured storytelling, it struggles with emotional depth and complexity. Together, these studies suggest that AI storytelling is still most effective when complemented by human creativity and emotional insight.
Two small letters – AI – are proving major disruptors for the digital news industry. Already, 70% of journalists and newsroom leaders use generative AI in some capacity. While we hear much excitement over its potential, opinion remains divided. Senior editorial representatives at titles such as UK newspaper The Sun have expressed concern that AI in journalism will deliver a “tsunami of beige” content. And readers say they prefer AI-free news.
As such, there is a balancing act to be performed for AI and human input in newsrooms, pitting its potential to revolutionize the industry against the risks of over-reliance and ethical concerns to find a middle ground that benefits all. Against a backdrop of budget and team cuts, however, it’s crucial to realize that AI can enhance workflow efficiency and let journalists do more with less. That, in turn, frees teams to focus on what matters most: creating human-centered, authentic content that engages the reader.
Using AI to bridge the gap with users
Primary motivations among journalists for using AI technologies include automating mundane tasks. But as Ranty Islam, Professor of Digital Journalism, pointed out at the 2024 DW Global Media Forum, this isn’t the be-all and end-all of AI. The key lies in integrating AI into a holistic strategy that brings journalism closer to readers. Using AI to perform necessary but time-intensive tertiary tasks means that journalists—notably those in local newsrooms with smaller budgets and teams—can get more actual journalism done. They can get out there to connect with real people and stories.
Moreover, as audience needs change, AI can help newsrooms track and enhance the stories and formats that perform for them in an audience-first strategy. Using AI alongside tracking means newsrooms can harness suggestions about when to write stories and the kinds of topics and formats that audiences want to see. Content suggestions can be made based on AI tagging systems, while algorithms that suggest stories based on user behavior or interests can help tailor content for different audiences. This enhances the reader experience for greater engagement and retention while also helping boost subscription offerings through data-driven personalization.
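The tag-based suggestion idea described above can be sketched minimally. The snippet below is an illustrative toy under stated assumptions, not any newsroom’s actual system: the article names and tags are hypothetical, and a real pipeline would derive tags from an AI tagging model rather than hard-code them.

```python
from collections import Counter

# Hypothetical articles with tags; in practice, tags would come from
# an AI tagging model applied to each story.
articles = {
    "city-budget-2024": {"local", "politics", "budget"},
    "stadium-funding": {"local", "sports", "budget"},
    "transit-delays": {"local", "transport"},
    "election-guide": {"politics", "elections"},
}

def suggest(read_history, k=2):
    """Rank unread articles by tag overlap with the reader's history."""
    # Build a reader profile: how often each tag appears in their history.
    profile = Counter(tag for a in read_history for tag in articles[a])
    # Score every unread article by summed tag-overlap with the profile.
    scores = {
        a: sum(profile[t] for t in tags)
        for a, tags in articles.items()
        if a not in read_history
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(suggest(["city-budget-2024"]))
```

Real recommenders also weigh recency and collaborative signals, but even this overlap heuristic shows how behavior-driven tailoring works.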
Doing more with less
There is a wealth of opportunities for newsrooms to use AI to help with everyday tasks, and the benefits for understaffed newsrooms are clear. The local news sector has particularly felt the impact, losing nearly two-thirds of its journalists since 2005. AI can serve as a helping hand for tasks that would otherwise require multiple staff members. We have integrated AI into our liveblogging software, for example, to enable users to generate liveblog content summaries in seconds, assimilate live sports results, adjust tone and language, create social media posts, and generate image captions for enhanced SEO.
AI’s potential to localize content and engage new audiences is widely recognized. FT Strategies has highlighted AI translation as “truly disruptive in the context of news”, particularly for multilingual communities or multinational publishers seeking to replicate regional content across multiple sites.
Indeed, AI excels at summarizing and extracting information, making it extremely useful for summary generation. While most reporters aren’t keen on full AI copy generation, enabling teams to recycle their content quickly and easily or suggest headlines based on keywords can be a huge help. Moreover, since the training data is their own, the summaries reflect the author’s original, authentic style.
This summarization can be carried through to data analysis to create charts and infographics. AI can even create text descriptions for supporting media to help with search engine optimization or social media posts and matching hashtags to promote stories. Journalists aren’t typically SEO or social media managers, but small teams sometimes need to wear multiple hats. Using AI as a virtual assistant allows reporters to focus more energy on their reporting.
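As a toy illustration of the summarization idea above, here is a classic frequency-based extractive summarizer: score each sentence by how often its words appear in the whole text, then keep the top sentences in their original order. This is a sketch under simple assumptions, not the tooling any newsroom actually uses; production systems typically rely on LLMs rather than word counts.

```python
import re
from collections import Counter

def extractive_summary(text, n=1):
    """Keep the n highest-scoring sentences, preserving original order.

    A sentence's score is the summed corpus frequency of its words,
    so sentences dense in the text's dominant vocabulary rank highest.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: sum(freq[w] for w in re.findall(r"[a-z']+", sentences[i].lower())),
        reverse=True,
    )
    keep = sorted(ranked[:n])  # restore document order
    return " ".join(sentences[i] for i in keep)

print(extractive_summary(
    "AI helps newsrooms. AI helps newsrooms summarize stories quickly. Weather is nice."
))
```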
AI can also be harnessed to support a journalist’s development, or to augment the collaboration and brainstorming that might once have happened in a newsroom among a large team. It can identify story gaps or flaws, suggest improvements, proofread, make composition suggestions, and adapt tone to the situation at hand. This is particularly useful when addressing various user needs or even different age groups.
Liveblogs offer a prime example of how AI can be harnessed to enhance reporting, helping manage and update live content in real-time, automatically pulling in relevant information, images, and social media posts. This allows journalists to focus on providing valuable insights and context, delivering a dynamic and engaging experience for readers.
Trust and transparency while using AI in journalism
Using AI behind the scenes in this multitude of ways chimes with reader comfort levels. The Reuters Institute for the Study of Journalism revealed that while readers are skeptical of AI-generated content, they are generally happier with AI handling behind-the-scenes tasks under human supervision—as long as newsrooms are transparent about AI usage.
While AI can help to detect disinformation, fact-check, or moderate online comments, adding to the integrity of journalistic content, its tendency to hallucinate and invent also means that human oversight is vital. We must train journalists to work alongside AI, using it to enhance, not supersede, their skills, striking a balance between AI utility and the preservation of the human element in news reporting.
AI is a tool, not a substitute. It can automate mundane tasks, save time and assist research and brainstorming processes, but its power lies in complementing human effort, not replacing or overshadowing it.
The future of journalism lies in a hybrid approach in which AI supports, not replaces, the essential human touch that defines quality journalism. Whatever the medium—print, online, liveblog—by fostering collaboration between technology and editorial expertise, newsrooms can navigate this evolving landscape, ensuring that AI enhances, rather than diminishes, the integrity and creativity of journalistic work.
The shifting dynamics of the digital news industry are reshaping how outlets connect with audiences, and the definition of “journalist” is changing. Influencers on platforms like TikTok and YouTube successfully engage audiences in ways that traditional newsrooms sometimes struggle to achieve. As media consumption habits shift, the creator community offers a valuable vehicle for traditional news organizations.
By collaborating with influential creators, news outlets can access new, often younger, audiences they might overlook. Evolving its newsroom strategies, the Baltimore Banner surpassed subscription goals and expanded its newsroom to 80 staff members. Similarly, the 133-year-old Seattle Times reached record circulation levels, reflecting a broader trend of local and nonprofit outlets successfully adapting to new challenges.
The Poynter Institute’s new report highlights these trends. It identifies how the journalism industry increasingly relies on innovative strategies to adapt, with content creators and influencers playing a critical role in this transformation. While traditional news organizations face ongoing disruptions, this report shows that the demand for credible news remains strong.
Evolving newsroom
Poynter sees the rise of digital content creators, influencers, and the “creator economy” as an opportunity to redefine journalism. Rather than viewing them as purely competition for attention, traditional news organizations can collaborate with a new generation of content creators who bring fresh perspectives and innovative formats.
Social media influencers often cover viral stories, reaching those who get their news primarily from platforms like TikTok or Instagram Reels. Working with content creators can help newsrooms diversify their storytelling formats and engage with younger, digitally savvy audiences. Journalists and creators both serve essential roles in the evolving news ecosystem.
Engaging audiences with news fatigue
Audiences face news fatigue. Wars, political instability, climate change, and economic uncertainty contribute to this exhaustion. However, it’s essential to recognize that news fatigue does not equate to a lack of interest in journalism. On the contrary, data from the Pew Research Center shows that news consumption remains steady, with audiences following major events like the 2024 elections more closely than in previous years.
News organizations must present these topics in ways that resonate with audiences. Context, relevance, and credibility are key to engaging readers and viewers. This is where the rise of the creator economy becomes highly useful. Content creators, with their ability to present news in relatable and entertaining formats, play a unique role in combating news fatigue.
Audiences today are fragmented, consuming information from various platforms and influencers. Journalists and content creators can embrace this reality by delivering tailored, high-quality stories serving distinct audiences. Poynter recommends that rather than diluting content out of fear that audiences will turn away, the focus should be on creating stories that provide context and actionable insights.
Newsrooms must innovate and adapt
Despite ongoing challenges in the news industry, organizations are finding ways to adapt. Collaboration between traditional journalists and digital content creators is key in this evolving landscape, each bringing distinct strengths. The Poynter report highlights the growing influence of creators and influencers in news delivery and building trust with younger audiences. It also explores how they are reshaping the broader media ecosystem.
As the industry transforms, one constant remains: high-quality journalism—whether produced by a traditional newsroom or a smartphone-wielding influencer—retains its crucial role in society. Together, these forces reshape how people consume and trust news. This partnership will shape journalism’s future, ensuring that reliable information reaches audiences in a digital-first world.
Creativity fuels innovation and expression across various media disciplines. With the advent of generative artificial intelligence (AI) and large language models (LLMs), many question how these technologies will influence human creativity. While generative AI may effectively enhance productivity across sectors, its impact on creative processes remains an open question. New research, Generative AI enhances individual creativity but reduces the collective diversity of novel content, investigates AI’s effect on the creative output of short (or micro) fiction. While the research focuses on short stories, its findings about how generative AI influences creative expression have larger implications.
Creative assessment
The study evaluates creativity based on two main dimensions: novelty and usefulness. Novelty refers to the story’s originality and uniqueness. Usefulness relates to its potential to develop into a publishable piece. The study randomly assigns participants to one of three experimental conditions for writing a short story: Human-only, Human with one Generative AI idea, and Human with five Generative AI ideas.
The AI-assisted conditions include three-sentence story ideas to inspire creative narratives. This design allows the researchers to assess how AI-generated prompts affect the creativity, quality, and enjoyment of the stories produced. Both writers and 600 independent evaluators assessed these dimensions, providing a comprehensive view of the stories’ creative merit across different conditions.
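The between-subjects design described above can be sketched as a balanced random assignment of writers to the three conditions. The condition names and helper below are illustrative assumptions, not the researchers’ actual code.

```python
import random
from collections import Counter

# Illustrative labels for the study's three experimental conditions.
CONDITIONS = ["human_only", "human_plus_1_ai_idea", "human_plus_5_ai_ideas"]

def assign(participants, seed=0):
    """Randomly assign participants to conditions in (near-)equal groups.

    Shuffling first, then dealing round-robin, guarantees group sizes
    differ by at most one while keeping membership random.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return {p: CONDITIONS[i % 3] for i, p in enumerate(shuffled)}

groups = assign([f"writer_{i}" for i in range(99)])
print(Counter(groups.values()))  # three groups of 33
```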
Usage and creativity
In the two generative AI conditions, 88.4% of participants used AI to generate an initial story idea. In the “Human with one Gen AI idea” group, 82 out of 100 writers did this, while in the “Human with five Gen AI ideas” group, 93 out of 98 writers did the same. When given the option to use AI multiple times in the “Human with five GenAI ideas” group, participants averaged 2.55 requests, with 24.5% requesting the maximum of five ideas.
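The headline 88.4% figure is consistent with the per-condition counts, assuming it pools the two AI-assisted groups. A quick check:

```python
# Per-condition counts reported in the study.
one_idea_users, one_idea_total = 82, 100
five_idea_users, five_idea_total = 93, 98

# Pooling both AI-assisted groups recovers the headline usage rate:
# (82 + 93) / (100 + 98) = 175 / 198.
pooled_rate = (one_idea_users + five_idea_users) / (one_idea_total + five_idea_total)
print(round(pooled_rate * 100, 1))  # → 88.4
```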
The findings from the independent evaluators show that access to generative AI significantly enhances the creativity and quality of short stories. Stories written with AI-generated ideas consistently rate higher in creativity and writing quality than those written without AI assistance. The effect is particularly pronounced in the “Human with five GenAI ideas” condition, suggesting that greater exposure to AI-generated prompts leads to greater creative output.
However, the study also uncovers notable similarities among stories generated with AI assistance. This suggests that while AI can enhance individual creativity, it also homogenizes creative outputs, diminishing the diversity of innovative elements and perspectives. The benefits of generative AI for individual writers may come at the cost of reduced collective novelty in creative outputs.
Implications for stakeholders
The research has notable limitations: it restricts the creative task by length (eight sentences), medium (writing), and type of output (short story), and it allows no interaction with the LLM or variation in prompts. Even so, it highlights the complex interplay between AI and creativity across different artistic domains. Future studies should explore longer and more varied creative tasks, different AI models, and the ethical considerations surrounding AI’s role in creative text and video production. Examining the cultural and economic implications for creative media sectors and balancing innovation with the preservation of artistic diversity is essential.
Generative AI can enhance human creativity by providing novel ideas and streamlining the creative processes. However, its integration into creative media processes must be thoughtful to safeguard the richness and diversity of artistic expression. This study sets the stage for further exploration into generative AI’s evolving capabilities and ethical implications in fostering creativity across diverse artistic domains. As AI technologies evolve, understanding their impact on human creativity is crucial for harnessing their full potential while preserving the essence of human innovation and expression.
The introduction of AI-generated search results is just the next step in a long line of platforms moving more audience interactions behind their walled gardens. This is an accelerating trend that’s not going to reverse. Google began answering common questions itself in 2012, Meta deepened its deprioritization of news in 2023, and now some analysts predict that AI search will cut traffic to media sites by 40% within the next couple of years.
It’s a dire prediction. Panic is understandable. The uncertainty is doubled by the sheer pace of AI developments and the fracturing of the attention economy.
However, this is another situation in which it is critical to focus on the fundamentals. Media companies need to develop direct relationships with audiences, double down on quality content, and use new technology to remove inefficiencies from their publishing operations. Yes, the industry has already been doing this for decades. But there are new approaches in 2024 that allow publishers to improve experiences and attract direct audiences.
All-in on direct relationships
When there’s breaking news, is the first thought in your audience’s mind opening your app, or typing in your URL? Or are they going to take the first answer they can get – likely from someone else’s channel?
Some media companies view direct relationships as a “nice to have” or as a secondary objective. If that’s the case, it’s time to make them a priority.
Whether direct relationships are already the top priority or not, now’s a good time to take a step back to re-evaluate the website’s content experience and the business model that supports it. Does it emphasize—above all else—providing an audience experience that encourages readers to create a direct relationship with your business?
When the cost to produce content is zero, quality stands out
This brings us to the avenue that drives direct relationships: your website and your app. Particularly as search declines as a traffic source, these become the primary interaction space with audiences. We’ll follow up next month with frameworks your product team can use to make your website and apps more engaging and further build direct audience traffic.
It’s no longer about competing for attention on a third-party platform—for example through a sheer quantity of content about every possible keyword. It’s about making the owned platform compelling. Quality over quantity has never been more important.
Incorporating AI into editorial workflow
As large language models (LLMs) push the cost of creating content toward zero, the internet will fill up with generic noise—even more so than it already has.
Genuinely high-quality content will be valued all the more, both by readers and by the search engines that deliver traffic to them. Google is already penalizing low-quality content. So are audiences. Teams using LLMs to generate entire articles, whole-cloth, are being downgraded by Google (and this approach is unlikely to drive readers to you directly, either).
But AI does have its uses. One big challenge in generating quality content is time. Ideally, technology gives time back to journalists. They’ll have extra time to dig into their research. They may gain another hour to interview more sources and find that killer quote. Editors have more time to really make the copy pop. The editorial team has more time for collaborating on the perfect news package. The list goes on.
AI is perfect for automating all the non-critical busywork that consumes so much time: generating titles, tags, excerpts, backlinks, A/B tests, and more. This frees up researchers, writers, and creatives to do the work that audiences value most, and deliver the content that drives readers to return to websites and download apps.
This approach has been emerging for a while now. For example, ChatGPT is great at creating suggestions for titles, excerpts, tags, and so on. However, there’s a new approach that’s really accelerating results: Retrieval Augmented Generation (RAG).
RAG is the difference maker when it comes to quality
The base-model LLMs are trained on the whole internet, not on a specific business. RAG brings an organization’s own data to AI generation. Journalists using ChatGPT out of the box will get “ok” results that they then need to spend time fixing. With RAG, they can ground the results in the organization’s own content so the output matches its particular style. That’s important for branding, and it also frees up creatives’ time for other things.
The next level not only uses content data, but also performance data to optimize RAG setups. This way, AI is not just generating headline suggestions or excerpts that match a particular voice, it’s also basing them on what has historically generated the most results.
In other words, instead of giving a newsroom ChatGPT subscriptions and saying “have at it,” media companies can use middleware that intelligently prompts LLMs using their own historical content and performance data.
Do this right and journalists, editors, and content optimizers can effortlessly generate suggestions for titles, tags, links, and more. These generations will be rooted in brand and identity, instead of being generic noise. This means the team doesn’t need to spend time doing all that manually, and can focus on content quality.
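As a rough sketch of what such middleware might look like, the toy Python example below retrieves an outlet’s best-performing headlines on similar past stories and folds them into the prompt sent to an LLM. Everything here is hypothetical: the archive data is invented, and a simple bag-of-words cosine similarity stands in for the vector search a production RAG pipeline would use.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between two token lists, via term-count vectors."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

def build_rag_prompt(draft, archive, top_k=2):
    """Retrieve the archive headlines most similar to the draft
    (favoring higher pageviews on ties) and prepend them as style context."""
    scored = sorted(
        archive,
        key=lambda a: (cosine_similarity(tokenize(draft),
                                         tokenize(a["headline"])),
                       a["pageviews"]),
        reverse=True,
    )
    context = "\n".join(f"- {a['headline']}" for a in scored[:top_k])
    return (
        "Our best-performing headlines on similar stories:\n"
        f"{context}\n\n"
        f"Suggest three headlines in the same house style for this draft:\n{draft}"
    )

# Invented archive of past headlines with their performance data.
archive = [
    {"headline": "City council approves new transit budget", "pageviews": 12000},
    {"headline": "Transit riders face fare hike next spring", "pageviews": 45000},
    {"headline": "Local bakery wins national award", "pageviews": 3000},
]
prompt = build_rag_prompt("Transit agency proposes weekend service cuts", archive)
print(prompt)
```

The key design point is that the prompt the LLM finally sees is grounded in the organization’s own content and performance data rather than in the open internet.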
Using RAG to leverage the back catalog
Media companies have thousands upon thousands of articles published going back years. Some of them are still relevant. But the reality is that leveraging the back catalog effectively has been a difficult undertaking.
Humans can’t possibly remember the entirety of everything an organization has ever published. But machines can.
A machine plugged into the CMS can use Natural Language Processing (NLP) to understand the content currently being worked on—what is it about? Then it can check the back catalog for every other article on the topic. It can also rank each of those historical articles by which generated the most attention and which floundered. Then it can help staff insert the highest-performing links into current pieces.
Similarly, imagine the same process, just in reverse. By automating the updating of historical evergreen content with fresh links, new articles can immediately jump-start with built-in traffic.
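A minimal sketch of this back-catalog linking idea, assuming a hypothetical archive in which topic keywords and pageview counts have already been extracted (a real system would derive the keywords with NLP from the CMS):

```python
def suggest_backlinks(current_keywords, archive, top_k=3):
    """Rank archive articles by topic overlap with the current piece,
    breaking ties with historical pageviews."""
    ranked = []
    for article in archive:
        overlap = len(set(current_keywords) & set(article["keywords"]))
        if overlap:  # skip unrelated articles entirely
            ranked.append((overlap, article["pageviews"], article["url"]))
    ranked.sort(reverse=True)  # most overlap first, then most pageviews
    return [url for _, _, url in ranked[:top_k]]

# Invented archive entries for illustration.
archive = [
    {"url": "/2021/transit-study", "keywords": {"transit", "budget"}, "pageviews": 9000},
    {"url": "/2019/fare-history", "keywords": {"transit", "fares", "history"}, "pageviews": 30000},
    {"url": "/2020/bakery-award", "keywords": {"bakery"}, "pageviews": 2000},
]
print(suggest_backlinks({"transit", "fares", "service"}, archive))
```

The reverse direction works the same way: treat an evergreen archive piece as the “current” article and suggest links out to the newest coverage on its topic.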
Removing silos between creation and analysis
While Google traffic might be declining, it will nonetheless remain important in this new world. And in this period of uncertainty, media organizations need to convert as much as possible of the traffic from this channel while it is still operating.
We call this “Leaving the platforms behind.” Media companies should focus on funneling as much traffic as possible from search and other channels into first-party data collection. This way, they build enough of a moat to stay afloat if any or all of these traffic channels disappear completely.
Most teams today have dedicated SEO analysts who are essentially gatekeepers between SEO insights and content production. The SEO analysts aren’t going anywhere any time soon. But the new table stakes are that every journalist needs to be able to self-serve keyword insights.
It is important to use analytics tools that bring search console data directly to the approachable/easy article analytics page that the editorial team already knows how to use. Ideally, analytics tools should connect keywords and other platform traffic to conversions, so everyone on your team can understand their impact on leaving the platforms behind.
Done well, you’ll create a feedback loop that evolves and improves your content in a way that resonates with readers and machines.
Quality is all that matters
This is not the first “all hands on deck” moment for the media industry. That said, what we’re seeing is that the barometer of success is a truly aligned strategy and execution that brings product, business development, and editorial teams together to pursue first-party relationships with audiences. Organizations with little brand identity that pursue traffic instead of subscriptions are suffering—and will likely continue to do so.
The public has a knowledge gap around generative artificial intelligence (GenAI), especially when it comes to its use in news media, according to a recent study of residents in six countries. Younger people across countries are more likely to have used GenAI tools and to be more comfortable and optimistic about the future of GenAI than older people. And a higher level of experience using GenAI tools appears to correlate with a more positive assessment of their utility and reliability.
Over two thousand residents in each of six countries were surveyed for the May 2024 report What Does the Public in Six Countries Think of Generative AI in News? (Reuters Institute for the Study of Journalism). The countries surveyed were Argentina, Denmark, France, Japan, the UK and the USA.
Younger people more optimistic about GenAI
Overall, younger people had higher familiarity and comfort with GenAI tools. They were also more optimistic about future use and more comfortable with the use of GenAI tools in news media and journalism.
People aged 18-24 in all six countries were much more likely to have used GenAI tools such as ChatGPT, and to use them regularly, than older respondents. Averaging across countries, only 16% of respondents 55+ report using ChatGPT at least once, compared to 56% aged 18 to 24.
Respondents 18-24 are much more likely to expect GenAI to have a large impact on ordinary people in the next five years. Sixty percent of people 18-24 expect this, while only 40% of people 55+ do.
In five out of six countries surveyed, people aged 18-34 are more likely to expect GenAI tools to have a positive impact on their own lives and on society. However, Argentina residents aged 45+ broke rank, expressing more optimism about GenAI improving both their own lives and society at large than younger generations.
Many respondents believe GenAI will improve scientific research, healthcare, and transportation. However, they express much more pessimism about its impact on news and journalism and job security.
Younger people, while still skeptical, have more trust in responsible use of GenAI by many sectors. This tendency is especially pronounced in sectors viewed with greatest skepticism by the overall public – such as government, politicians, social media, search engines, and news media.
Across all six countries, people 18-24 are significantly more likely than average to say they are comfortable using news produced entirely or partly by AI.
People don’t regularly use GenAI tools
Even the youngest generation surveyed reports infrequent use of GenAI tools. However, if the correlation between greater GenAI use and greater optimism and trust holds at a broader scale, trepidation will likely ease as more people become comfortable using GenAI tools regularly.
Between 20% and 30% of the online public across countries have not heard of any of the most popular AI tools.
While ChatGPT proved by far the most recognized GenAI tool, only 1% of respondents in Japan, 2% in France and the UK, and 7% in the U.S. say they use ChatGPT daily. Eighteen percent of the youngest age group report using ChatGPT weekly, compared to only 3% of those aged 55+.
Only 5% of people surveyed across the six countries report using GenAI to get the latest news.
It’s worth noting that the populations surveyed were in affluent countries with higher-than-average education and internet connectivity levels. Countries that are less affluent, less free, and less connected likely have even fewer people with experience of GenAI tools.
The jury is out on public opinion of GenAI in news
A great deal of uncertainty prevails around GenAI use among all people, especially those with lower levels of formal education and less experience using GenAI tools. Across all six countries, over half (53%) of respondents answered “neither” or “don’t know” when asked whether GenAI will make their lives better or worse. Most, however, think it will make news and journalism worse.
When it comes to news, people are more comfortable with GenAI tools being used for backroom work such as editing and translation than they are with its use to create information (writing articles or creating images).
There is skepticism about whether humans are adequately vetting content produced using GenAI. Many believe that news produced using GenAI tools is less valuable.
Users have more comfort around GenAI use to produce news on “soft” topics such as fashion and sports, much less to produce “hard” news such as international affairs and political topics.
Thirty percent of U.S. and Argentina respondents trust news media to use GenAI responsibly. Only 12% in the UK and 18% in France agree. For comparison, over half of respondents in most of the countries trust healthcare professionals to use GenAI responsibly.
Most of the public believes it is very important to have humans “in the loop” overseeing GenAI use in newsrooms. Almost half surveyed do not believe that is happening. Across the six-country average, only a third believe human editors “always” or “often” check GenAI output for accuracy and quality.
A cross-country average of 41% say that news created mostly by AI will be “less worth paying for,” 32% say it will be worth “about the same,” and 19% “don’t know.”
Opportunities to lead
These findings present a rocky road for news leaders to traverse. However, they also offer an opportunity to fill the knowledge gap with information that is educational and reassuring.
Research indicates that the international public overall values transparency in news media as a general practice, and blames news owners and leadership (rather than individual journalists) when it is lacking. However, some research shows users claim to want transparency around GenAI tools in news, but trust news less once they are made aware of its use.
The fact that the public at large is still wavering presents an opportunity for media leaders to get out in front on this issue. Creating policy and providing transparency around the use of GenAI tools in news and journalism is critical. News leaders especially need to educate the public about their standards for human oversight around content produced using GenAI tools.
Public trust in the news is dwindling, with three in 10 UK adults admitting they don’t trust the news very much and 6% confessing they don’t trust it at all. Unfortunately, this phenomenon is not limited to the UK but affects media audiences globally. A recent Gallup Poll, for example, showed a similar reality among Americans, with only 32% saying that they trust news a “great deal” or a “fair amount.”
What’s more, publishers are grappling with the fact that audiences increasingly turn to social media to get their news fix. In its annual Digital News Report, the Reuters Institute for the Study of Journalism at the University of Oxford found that 30% of respondents say social media is the main way they come across news, surpassing the 22% who access it directly. Unfortunately, social media provides a fertile breeding ground for misinformation, which (somewhat ironically) further erodes people’s trust in news.
Today’s media companies need strategies and tools that will help them re-engage audiences whose expectations have been shaped by social media. By understanding the behaviors and preferences of today’s audiences and incorporating the right tools and tactics, publishers have the ability to attract audiences and satisfy their need for a well-rounded information diet in a more social setting.
More than passing news updates
Notably, the shift to social news consumption is particularly acute among younger consumers, with people aged 18-24 less likely to use a news website or app and more dependent on social media for news. And these young consumers’ information preferences have been molded by their use of social media and mobile content consumption. Our own research finds consumers want easily understandable and readily available content. In fact, 26% of 18-34-year-olds say that they prefer news updates in short, bite-sized segments.
One of the strategies publishers can implement to replicate the social media experience–while continuing to provide quality news and information–is through the use of live blogs. Live blogs allow media companies to provide readers with an enriched and authentic experience that replicates the benefits of social media while addressing key challenges such as lack of engagement, misinformation, and declining trust.
A live blog allows publishers to provide real-time commentary, updates, and coverage of breaking news or unfolding events. Despite their rise in popularity during the Covid-19 pandemic – when they served as a valuable tool for disseminating rapidly emerging critical information – live blogs have been around for quite some time.
However, publishers around the world are now refining their live blog strategies to capture the best aspects of the social media experience while serving as more than just a format for the latest superficial updates. Through this more social way of authentic storytelling, these publishers build trust and credibility among their audiences.
The style of a live blog resembles a mobile-friendly social media timeline, giving consumers news in the format they crave. It caters to the habits and preferences of users accustomed to consuming content by scrolling on their mobile phones.
Interactivity and engagement
To increase audience engagement, publishers can also incorporate interactive elements such as polls, videos, and live comment blocks into their live blogs. These mirror many popular features found on social media platforms. For example, journalists from the New Zealand publisher Stuff interacted directly with readers as millions of people attempted to get tickets to Taylor Swift’s Eras Tour in Australia. With over 150 comments on their live blog, the journalists built a community with their readers as they shared their triumphs and frustrations with one another in real time.
Some publishers even use live blogs to give their audiences direct access to experts in various fields. MDR, a German public broadcaster, did this particularly well during the Covid-19 pandemic and the cost of living crisis. It encouraged readers to post their questions in a comment block within the live blog, and experts then answered those questions directly in the chat. This tactic increases trust by giving readers access to experts and reinforcing the expertise of a media outlet’s team. It also helps provide a more balanced view of events through the inclusion of a variety of perspectives, reducing the perception of spin.
With live blogs, individual personalities can come out, which allows journalists to foster better relationships with their audience. For example, reporters covering sports at Süddeutsche Zeitung engage with their audience using a lighter tone than their formal journalism. This injects personality into their coverage and makes it more relatable and enjoyable for readers, mirroring the conversational style often seen in social media interactions.
Another key advantage of live blogs is their ability to prevent endless doomscrolling by providing a curated and limited amount of verified information and data. This way, readers can choose the most relevant information to them based on their own needs and preferences without becoming overwhelmed with too much content.
Looking ahead
It’s a challenging time for publishers and newsrooms around the world. The emergence of generative AI search results, along with audiences’ increasing frustration with the news (not to mention social media platforms distancing themselves from news), creates higher barriers to engagement.
In the year ahead, publishers should turn their attention to incorporating strategies that replicate the elements audiences love most about social media to keep consumers engaged and coming back for more. Implementing this approach can help publishers meet the needs of the modern consumer, who favors receiving their news in short, bite-sized segments. Live blogs allow media companies to capture the essence of the social media experience while addressing lack of engagement, misinformation, and declining trust.
Trust – and the lack of it – has become the metric of choice when discussing the alienation individuals feel regarding news organizations. When we consider what the metric is telling us, the picture is undeniably grim.
In an October 2023 poll, Gallup found that more people said they had no trust at all in the media (39%) than those who said they had a great deal of trust (32%). Increasing accountability and transparency are oft-cited prescriptions news organizations focus on to build trust.
Getting to the root of trust issues
However, many people say that the reasons they don’t trust the media include a failure to cover both sides of an issue and the perception that journalists have a political bias, particularly a liberal bias. The Gallup poll reflected a 47% trust gap between Democrats (58%) and Republicans (11%). That said, trust among Democrats is falling significantly for the same reason that it has plummeted for Republicans: a perception of bias, in this case, a conservative bias. The nation’s political polarization is further driving down media trust.
It is understandable that media organizations believe that audience perception of bias can be addressed through transparency efforts focused on the way journalists report and disseminate the news. Unfortunately, there’s a fundamental element of storytelling that may have a much bigger impact on the appearance of bias: word choice.
It’s difficult to address issues of bias when people fail to see themselves reflected in the words journalists use. Language is not merely a tool for communication but a reflection of positioning and perspective, bias and blame. Academic studies show that trust and distrust are encoded in the very language choices we make.
Research we’ve been conducting at the University of Florida’s College of Journalism and Communications is identifying patterns of common language usage in coverage of controversial and potentially divisive subjects that could drive wedges and further damage trust. It is possible that, by recoding words away from inherent biases and towards authentic language people use to describe their experiences, we may find a pathway that engenders trust.
While the pursuit of trust is indeed a noble one, it feels more ambitious than the current climate allows. Therefore, journalists should ask the question: Is trust entirely in my control? And if not, what is? Our work has steered us toward focusing on what can be controlled: authenticity, intentionality and precision. We believe these elements can serve as the building blocks that lead to greater trust.
Based on that work, we’re developing a machine-learning tool journalists can use to identify potentially biased language and use that feedback to make more intentional word choices. The tool, called Authentically, is aimed at equipping journalists with the insights to make informed decisions in their writing. Authentically is currently in the alpha stage of development and we’re working with newsroom partners to test functionality.
When complete, the tool will operate in real time to flag words that merit more careful consideration. By providing a more robust context to the connotations of language, journalists are given the opportunity to ask themselves: Is this really what I meant to say? Does this accurately represent the events I’m describing? Is this language biased?
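As a toy illustration of that real-time flagging, the Python sketch below checks a draft against a small lexicon of loaded terms. The lexicon and its annotations are invented for this example; Authentically itself derives such associations from large news corpora with machine learning rather than from a hand-written list.

```python
import re

# Hypothetical mini-lexicon: word -> why it may merit reconsideration.
LOADED_TERMS = {
    "erupt": "fire/destruction framing often applied to protests",
    "spark": "fire/destruction framing often applied to protests",
    "unapologetically": "morality framing",
    "relentlessly": "urgency framing",
}

def flag_loaded_language(draft):
    """Return (word, note) pairs for draft words found in the lexicon."""
    words = re.findall(r"[a-z]+", draft.lower())
    return [(w, LOADED_TERMS[w]) for w in words if w in LOADED_TERMS]

flags = flag_loaded_language(
    "Protests erupt downtown as organizers relentlessly push for change."
)
for word, note in flags:
    print(f"Consider: {word} ({note})")
```

The flag is a prompt for reflection, not a correction: the journalist decides whether the word is really what they meant to say.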
Word choices and perceived bias
Throughout our investigation of multiple news topics, common patterns of use emerged. In our analysis of abortion coverage, the data indicated that words conveying a sense of pride, such as “proudly,” “unapologetically” and “adamantly,” frequently preceded the pro-life label, whereas the pro-choice label was frequently preceded by words indicating a sense of necessity or urgency, such as “necessarily,” “increasingly” and “relentlessly.” While these differences might appear subtle, they raise critical questions: what is being communicated when the language used around one position consistently denotes an undertone of morality while the other suggests one of urgency?
In examining coverage of racial justice protests, specifically regarding the murder of George Floyd in 2020, the findings spoke for themselves. The verbs used to describe protest actions repeatedly drew comparisons to fire or destruction, such as “spark,” “fuel,” “erupt,” “ignite,” “trigger” and “flare.” Is the recurrent use of this fiery language a deliberate choice, or is it a subconscious pattern of bias? What impact does that have on the perception of these demonstrations and of the people participating in them?
As concern and polarization regarding the climate grow, so does the importance of being conscious of our language choices. Verbs used with the term “global warming” tend toward a neutral focus on general effects, such as “occur” and “bring,” while verbs used with the term “climate change” delve deeper into the speed, intensity and potential ramifications of ongoing environmental shifts, such as “alter,” “fuel” and “accelerate.” Does the language journalists use – even when the differences are subtle – help convey the urgency of a climate emergency, and therefore shape perceptions?
While the foundational tenets of journalism remain core to audience trust, words matter. Our research has illustrated the pivotal role of authenticity, intentionality and precision in beginning to bridge the gap between the intention of the journalist and the way their stories are received by the public.
About the authors
Janet Coats is the Managing Director of the Consortium on Trust in Media and Technology at the University of Florida’s College of Journalism and Communications. She spent 25 years as a journalist and a decade as a media consultant before moving to higher education.
Kendall Moe is the Senior Project Manager and Researcher for the Authentically project and has conducted the language analysis described in this story. She has an undergraduate degree in linguistics and a master’s degree in special education from the University of Florida.
AI is more than just a trendy buzzword; it’s a transformative force that will shape the future of media. From content creation to personalization and automation, AI is on the brink of revolutionizing how we both produce and consume news, and media organizations that are prepared to embrace AI stand to benefit significantly. In a recent webinar, I spoke with Arc XP Chief Technology Officer, Matt Monahan, to learn how the emergent technology of generative AI is unlocking new workflows, presenting and solving unique business challenges, and creating opportunities for growth within the digital media industry.
Understanding generative AI
Today, when we talk about AI, we often mean generative AI, a subset of deep learning that teaches computers to recognize complex patterns in data much as humans do. “Generative AI is really a transformer model and the models behind them, or what we call large language models (LLMs),” says Monahan. These models, such as ChatGPT, can handle tasks ranging from text generation to code creation, image generation, and even 3D modeling.
The AI industry is currently in a hype cycle, marked by high expectations and significant investments. However, awareness of the limitations of LLMs like ChatGPT is growing, particularly within the media industry. AI is not a magic wand capable of creating content from scratch with flawless accuracy. Automated story publication without human oversight presents significant challenges because these models are not designed for fact-checking or introducing new information; their core competency lies in predicting language.
Despite these limitations, experimenting with generative AI provides invaluable insights into the evolving media landscape. Monahan stated, “The companies that spend time in experimentation today are going to be the ones who accrue benefit when they are ready to take advantage of it as the technology matures.” AI is a journey, and its potential is unlocked gradually as teams experiment, learn, and build competency.
While integrating generative AI may initially feel like stepping through a one-way door, there are experiments that allow for exploration without irreversible commitments. By integrating human review processes alongside AI, companies can achieve a harmonious blend of efficiency and accuracy. Human editors bring essential elements such as context, fact-checking and ethical judgment to the table — qualities that AI lacks.
Adopting AI in the newsroom
Recognizing AI as not a distant future technology but a viable solution right now, many news organizations have already embraced LLMs in their newsrooms. With human review processes, they utilize AI for tasks including creating AI-assisted graphics and diagrams, drafting written content, and even generating turnkey content at scale, such as translations, financial reporting, sports coverage, and large dataset analysis. This integration of AI enhances their capabilities while preserving the integrity of their news reporting.
One example of AI in action is its role in translation. Some media companies are already using AI to quickly create high-quality translations, needing little to no editing. This has allowed journalists to reach global audiences more efficiently by tailoring the same story to different readers. By implementing AI into their workflows, journalists are able to minimize their time spent on repetitive and time-consuming tasks, enabling them to focus on what matters – producing compelling and meaningful content that resonates with their readers.
What to expect by 2030
As news organizations take their first steps into the realm of AI, Monahan envisions a future where AI becomes the standard. “If you examine the pace of LLM development over the past three to four years, it becomes quite evident that the quality will improve at a rate beyond most people’s imagination,” he says.
Today, less than 1% of online content is AI-generated. However, he predicts that within a decade, at least 50% of online content will be generated by AI. This raises important questions: What does it mean for content to be 50% AI-generated? Does it represent content created entirely from scratch, content edited by AI, or content that has received AI assistance? These are questions that the media industry will need to address and define in the coming years.
Looking ahead to 2030, Monahan anticipates several key developments:
AI will significantly cut the costs of content creation, encompassing written content, graphics, and video explainers. However, this shift won’t eliminate the need for human involvement, especially in crucial areas like fact-checking and quality assurance.
Content formats and user experiences will shift significantly, with personalized content becoming the norm. Media companies will need to adapt and innovate to meet these new demands.
Sports content will gain immense value as one of the few remaining sources of “original content” resistant to full automation.
Advertising will become hyper-personalized, delivering unique ads and commercials tailored to individual users.
With automated workflows and most of the code being generated by AI tools, every developer is expected to become an AI-assisted developer.
Monahan emphasizes that embracing AI isn’t just about staying ahead; it’s about spearheading a future where AI elevates content creation, enriches user experiences, and reshapes the media landscape. By automating tasks in the newsroom, such as content creation and translation, AI empowers journalists to concentrate on their core mission: crafting engaging and meaningful content for their readers. The future of media is powered by AI, and those who harness its capabilities will lead the way in this transformative journey.
It’s hard to believe, but ChatGPT was only released to the public late last year (November 2022), sparking an AI arms race and spurring adoption across a range of sectors, including the media.
So, how can media leaders best harness these developments? What are the steps they need to have in place to make the most of these advances? Here are seven things you need to consider:
1. Don’t just jump on the bandwagon
The media has long been guilty of shiny object syndrome, chasing after the next big thing in the hope that it will solve multiple short-term and long-term structural issues. All the noise being made about AI can make media leaders fear that they are behind the curve. Among the publishers I have spoken to recently, the FOMO (fear of missing out) is very real.
Yet at the same time, there's a wariness too. After all, the media landscape is littered with other developments (the Metaverse, VR/AR, the pivot to video, blockchain, et al.) that have been simultaneously held up as saviors and disrupters.
Will AI be any different? I think it will be, not least because elements of this technology have already been deployed at many media businesses for a while. Developments in Generative AI are the next stage in this evolution, rather than a wholesale revolution.
2. Take time to determine the best approach
Findings from a new global survey published by LSE seem to reinforce this. They found that although 80% of respondents expect an increase in the use of AI in their newsrooms, four in ten companies have not greatly changed their approach to AI since 2019, the date of LSE’s last report.
Adoption of new tools at this time may therefore be lower than you think. Perhaps that may give you the confidence to take a beat. Rather than jumping on the bandwagon too quickly, take the time to determine what you want AI to help you achieve.
This approach can help to lay the foundations for long-term success. Strategies should start with the end in mind. Set goals and ascertain how you’ll know when they have been achieved.
3. Set up a taskforce to understand what success looks like
To help them determine their own approaches to the latest wave of AI innovation, companies like TMB (Trusted Media Brands) and others have set up internal task forces to understand the risks, as well as the benefits that AI may unlock.
In doing this, media businesses can learn from the mistakes of those who’ve arguably rushed into this technology too quickly. CNET, Gannett and MSN are just some of those who have recently had embarrassing public experiences as a result of publishing (unchecked) AI-written content.
4. Bring the whole company with you
Given the breadth of activities that can be impacted by AI, these internal bodies need to be diverse and include people from across the business. This matters because media firms should see AI as more than just a cost-saver.
Harnessed correctly, it may help to create fresh revenue streams and to reach new audiences. To realize this value, publishers need to cultivate company-wide expertise and carefully assess where AI can drive efficiencies, enhance existing offerings, or enable entirely new products and services.
Tools like Newsroom AI and Jasper can help to increase the volume, speed and breadth of content being offered, while AI-produced newsletters like ARLnow and news apps like Artifact demonstrate how AI can deliver content in fresh ways. Developing internal training programs, and encouraging take-up of industry-wide opportunities to learn how AI works and what it makes possible, will help with buy-in and culture change.
As Louise Story, a former executive at The New York Times and The Wall Street Journal, recently put it, “AI will reshape the media landscape, and the organizations that use it creatively will thrive.”
5. Have clear guidelines for AI usage
Alongside having a clear strategic approach, and a robust understanding of how to measure success, how these efforts are implemented also matters.
One way to reduce implementation risk is to upskill your staff and ensure that representatives from across the company are involved in setting your AI strategy. A further practical step is to create a clear set of guidelines about how AI will be used in your company – and, indeed, what it will not be used for.
There are also opportunities to engage your audience in this process. Ask them for input on your guidelines, and be clear (e.g., through effective labeling) about when AI has, or has not, been used. This matters at a time when trust in the media remains near record lows. AI missteps risk exacerbating these trust issues, which is why this technology needs to be deployed with an element of caution.
6. Understand how to protect your IP
Together with labor concerns, another major issue that publishers and content creators are contending with relates to copyright and IP. It is important to understand how you can avoid your content being cannibalized – and in some cases anonymized – by Generative AI.
Although tools like the chat/converse function in Google Search and Microsoft’s Bing provide links to sources, ChatGPT does not. That’s a major source of concern for media companies who risk being deprived of clickthrough traffic and attribution.
As Olivia Moore at the venture capital firm a16z (Andreessen Horowitz) has pointed out, ChatGPT is by far the most widely used of these tools. Its monthly web and app traffic is around the same size as that of platforms like Reddit, LinkedIn, and Twitch.
This summer, the Associated Press agreed to license its content to OpenAI, the company behind ChatGPT, making it the first publisher to do so. Not every company can replicate this. How many outlets have the reach, brand and depth of content that AP has? Nevertheless, it will be interesting to see if other major publishers – as well as consortia of other companies – follow suit.
The media industry has learned from past experience that relying too heavily on tech companies can undermine their long-term sustainability. Short-term financial grants and shifting algorithmic priorities may provide temporary relief but fail to address deeper impacts on creative business models.
Creating quality content comes at a cost. Having seen revenues eroded and journalism undercut previously, publishers are rightfully wary about how this will pan out. So, it will be critical to weigh any payment schemes and financial relationships against the larger industry-wide impact these tools will have on content creators.
Addressing this issue is not easy, given how nascent this AI technology is and how quickly it is developing. However, the potential risk to publishers is understandably focusing a lot of minds on identifying and implementing solutions. For now, as this issue plays out, it’s one that needs to be firmly on your radar.
Moving forward: diversification and compensation
The rapid evolution of AI presents a heady mixture of both promise and peril. The companies that are most likely to flourish will have to balance the opportunities that AI offers while avoiding its pitfalls and threats.
That’s not going to be easy. However, the relationship between AI developers and content creators will remain a deeply symbiotic one.
“Media companies have an opportunity to become a major player in the space,” argues Francesco Marconi, the author of Newsmakers: Artificial Intelligence and the Future of Journalism. “They possess some of the most valuable assets for AI development: text data for training models and ethical principles for creating reliable and trustworthy systems,” he adds.
Given this, arguably it’s all the more important that the media industry is rewarded for this value. “We should argue vociferously for compensation,” News Corporation’s chief executive Robert Thomson says.
At the same time, media companies also need to be cognizant of the fact that AI-driven changes in areas such as search and SEO, as well as consumer behaviors, are likely to impact traffic and digital ad revenues. This is akin to “dropping a nuclear bomb on an online publishing industry that’s already struggling to survive,” contends the technology reporter Matt Novak.
With regulation unlikely to come any time soon, arguably it will be up to publishers, perhaps working together collectively, to navigate the best solutions to this thorny financial issue. That may include collective bargaining and licensing agreements with AI companies using their materials, as well as creative partnerships like the new AI image generator recently announced by Getty Images and Nvidia.
In the meantime, it will be more important than ever for media companies to diversify their revenues, as well as step up their efforts to rethink their business models, operations, and products to ensure that they are fit for the age of AI.
Professor Charlie Beckett argues that fundamental to this will be content that stands out from the crowd. “In a world of AI-driven journalism, your human-driven journalism will be the standout,” he told us recently. Differentiation will be key, concurs the former BBC and Yahoo! executive David Caswell. Meanwhile, as Juan Señor, President of Innovation Media Consulting, recently reminded us, “we cannot rely on someone else’s platform to build our business.”
This means that publishers will need to focus on originality, value, in-house knowledge and skills, as well as the ability to bring their organization – and audience – along with them.
These are major challenges, and AI presents media companies with risks as well as opportunities. Steering through this uncertain period will require making smart strategic decisions and keeping abreast of a rapidly changing landscape. The AI-driven future is hard to predict, and navigating this transformation will require both vision and vigilance. But one thing is certain: it’s going to be a bumpy, creative and fascinating journey.