Creativity fuels innovation and expression across media disciplines. With the advent of generative artificial intelligence (AI) and large language models (LLMs), many question how these technologies will influence human creativity. While generative AI may effectively enhance productivity across sectors, its impact on creative processes remains in question. New research, "Generative AI enhances individual creativity but reduces the collective diversity of novel content," investigates AI's effect on the creation of short (or micro) fiction. While the study focuses on short stories, its findings about how generative AI shapes creative expression carry larger implications.
Creative assessment
The study evaluates creativity based on two main dimensions: novelty and usefulness. Novelty refers to the story’s originality and uniqueness. Usefulness relates to its potential to develop into a publishable piece. The study randomly assigns participants to one of three experimental conditions for writing a short story: Human-only, Human with one Generative AI idea, and Human with five Generative AI ideas.
The AI-assisted conditions include three-sentence story ideas to inspire creative narratives. This design allows the researchers to assess how AI-generated prompts affect the creativity, quality, and enjoyment of the stories produced. Both writers and 600 independent evaluators assessed these dimensions, providing a comprehensive view of the stories’ creative merit across different conditions.
Usage and creativity
In the two generative AI conditions, 88.4% of participants used AI to generate an initial story idea: 82 of 100 writers in the "Human with one GenAI idea" group and 93 of 98 writers in the "Human with five GenAI ideas" group. When given the option to use AI multiple times in the five-idea group, participants averaged 2.55 requests, with 24.5% asking for the maximum of five ideas.
The findings from the independent evaluators show that access to generative AI significantly enhances the creativity and quality of short stories. Writers who used AI-generated ideas consistently rated higher in creativity and writing quality than those who wrote without AI assistance. This effect was particularly noticeable in the "Human with five GenAI ideas" condition, suggesting that increased exposure to AI-generated prompts leads to greater creative output.
However, the study also uncovers notable similarities among stories generated with AI assistance. This suggests that while AI can enhance individual creativity, it also homogenizes creative outputs, diminishing the diversity of innovative elements and perspectives. The benefits of generative AI for individual writers may come at the cost of reduced collective novelty in creative outputs.
Implications for stakeholders
The research has several limitations: it restricts the creative task by length (eight sentences), medium (writing), and type of output (short story), and it allows no interaction with the LLM or variation in prompts. Even so, it highlights the complex interplay between AI and creativity across artistic domains. Future studies should explore longer and more varied creative tasks, different AI models, and the ethical considerations surrounding AI's role in creative text and video production. Examining the cultural and economic implications for creative media sectors, and balancing innovation with the preservation of artistic diversity, is essential.
Generative AI can enhance human creativity by providing novel ideas and streamlining the creative processes. However, its integration into creative media processes must be thoughtful to safeguard the richness and diversity of artistic expression. This study sets the stage for further exploration into generative AI’s evolving capabilities and ethical implications in fostering creativity across diverse artistic domains. As AI technologies evolve, understanding their impact on human creativity is crucial for harnessing their full potential while preserving the essence of human innovation and expression.
The introduction of AI-generated search results is just the next step in a long line of platforms moving more audience interactions behind their walled gardens. This is an accelerating trend that is not going to reverse. Google began answering common questions itself in 2012, Meta increased its deprioritization of news in 2023, and now some analysts predict that AI search will cut traffic to media sites by 40% in the next couple of years.
It’s a dire prediction. Panic is understandable. The uncertainty is doubled by the sheer pace of AI developments and the fracturing of the attention economy.
However, this is another situation in which it is critical to focus on the fundamentals. Media companies need to develop direct relationships with audiences, double down on quality content, and use new technology to remove inefficiencies in their publishing operations. Yes, the industry has pursued these goals for decades. But there are new approaches in 2024 that can help publishers improve experiences and attract direct audiences.
All-in on direct relationships
When there’s breaking news, is the first thought in your audience’s mind opening your app, or typing in your URL? Or are they going to take the first answer they can get – likely from someone else’s channel?
Some media companies view direct relationships as a “nice to have” or as a secondary objective. If that’s the case, it’s time to make them a priority.
Whether direct relationships are already the top priority or not, now’s a good time to take a step back to re-evaluate the website’s content experience and the business model that supports it. Does it emphasize—above all else—providing an audience experience that encourages readers to create a direct relationship with your business?
When the cost to produce content is zero, quality stands out
This brings us to the avenues that drive direct relationships: your website and your app. Particularly as search declines as a traffic source, these become the primary interaction space with audiences. We'll follow up next month with frameworks your product team can use to make your website and apps more engaging and further build direct audience traffic.
It’s no longer about competing for attention on a third-party platform—for example through a sheer quantity of content about every possible keyword. It’s about making the owned platform compelling. Quality over quantity has never been more important.
Incorporating AI into editorial workflow
As content creation is increasingly commoditized via large language models (LLMs), the internet will fill up with generic noise—even more so than it already has.
Genuinely high-quality content will rise in appreciation, both among readers and among the search engines that deliver traffic to them. Google is already punishing low-quality content. So are audiences. Teams using LLMs to generate entire articles whole-cloth are being downgraded by Google (and this approach is unlikely to drive readers to you directly, either).
But AI does have its uses. One big challenge in generating quality content is time. Ideally, technology gives time back to journalists. They’ll have extra time to dig into their research. They may gain another hour to interview more sources and find that killer quote. Editors have more time to really make the copy pop. The editorial team has more time for collaborating on the perfect news package. The list goes on.
AI is perfect for automating all the non-critical busywork that consumes so much time: generating titles, tags, excerpts, backlinks, A/B tests, and more. This frees up researchers, writers, and creatives to do the work that audiences value most, and deliver the content that drives readers to return to websites and download apps.
This approach has been emerging for a while now. For example, ChatGPT is great at creating suggestions for titles, excerpts, tags, and so on. However, there’s a new approach that’s really accelerating results: Retrieval Augmented Generation (RAG).
RAG is the difference maker when it comes to quality
Base-model LLMs are trained on the whole internet, not on a specific business. RAG brings an organization's own data into AI generation. Journalists prompting a generic ChatGPT will get "OK" results that they then need to spend time fixing. With RAG, they can focus the results so they are fine-tuned to a particular house style. That matters for branding, and it also saves creatives time for other work.
The next level uses not only content data but also performance data to optimize RAG setups. This way, AI is not just generating headline suggestions or excerpts that match a particular voice; it is also basing them on what has historically performed best.
In other words, instead of giving a newsroom ChatGPT subscriptions and saying “have at it,” media companies can use middleware that intelligently prompts LLMs using their own historical content and performance data.
Do this right and journalists, editors, and content optimizers can effortlessly generate suggestions for titles, tags, links, and more. These generations will be rooted in brand and identity, instead of being generic noise. This means the team doesn’t need to spend time doing all that manually, and can focus on content quality.
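To make the middleware idea concrete, here is a minimal sketch of how a RAG-style prompt for headline suggestions might be assembled. Everything in it is an assumption for illustration: the archive fields, the keyword-overlap scoring, and the prompt wording are invented, and a real system would hand the resulting prompt to an LLM API rather than stop here.

```python
# Sketch of RAG-style prompt assembly for headline suggestions.
# The archive format, the keyword-overlap scoring, and the prompt
# wording are illustrative assumptions, not a real product's API.

def top_performers(archive, draft_keywords, k=3):
    """Rank past articles by keyword overlap with the draft,
    breaking ties with historical click-through rate (CTR)."""
    def score(article):
        overlap = len(draft_keywords & set(article["keywords"]))
        return (overlap, article["ctr"])
    return sorted(archive, key=score, reverse=True)[:k]

def build_headline_prompt(draft_text, draft_keywords, archive):
    """Assemble an LLM prompt grounded in the brand's own
    best-performing headlines (the 'retrieval' step of RAG)."""
    examples = top_performers(archive, draft_keywords)
    lines = [f'- "{a["headline"]}" (CTR {a["ctr"]:.1%})' for a in examples]
    return (
        "You write headlines in our house style.\n"
        "Our best-performing headlines on similar topics:\n"
        + "\n".join(lines)
        + "\n\nSuggest three headlines for this draft:\n"
        + draft_text
    )

archive = [
    {"headline": "Why Local News Still Wins", "keywords": ["local", "news"], "ctr": 0.041},
    {"headline": "Inside the Subscription Surge", "keywords": ["subscriptions"], "ctr": 0.038},
    {"headline": "A Week Without Search Traffic", "keywords": ["search", "traffic"], "ctr": 0.052},
]
prompt = build_headline_prompt("Draft about search traffic declines...",
                               {"search", "traffic"}, archive)
```

The retrieval step is deliberately crude (keyword overlap plus historical CTR); production systems typically use embeddings, but the shape of the pipeline is the same: retrieve your own best material, then prompt.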
Using RAG to leverage the back catalog
Media companies have thousands upon thousands of articles published going back years. Some of them are still relevant. But the reality is that leveraging the back catalog effectively has been a difficult undertaking.
Humans can’t possibly remember the entirety of everything an organization has ever published. But machines can.
A machine plugged into the CMS can use natural language processing (NLP) to understand the content currently being worked on—what is it about? Then it can check the back catalog for every other article on the topic, rank those historical articles by which generated the most attention and which floundered, and help staff insert the highest-performing links into current pieces.
Similarly, imagine the same process, just in reverse. By automating the updating of historical evergreen content with fresh links, new articles can immediately jump-start with built-in traffic.
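As a rough sketch of that match-and-rank step, under the simplifying assumption that the topic model is plain keyword overlap (the field names and data are invented; a real system would use proper NLP such as entity extraction or embeddings):

```python
# Toy sketch of back-catalog link suggestions: match a draft's topic
# terms against archived articles, then rank the matches by historical
# pageviews. The field names are invented; a production system would
# use real NLP (entities, embeddings) rather than plain word overlap.

def suggest_links(draft_terms, back_catalog, max_links=2):
    """Return URLs of the best-performing on-topic archive articles."""
    matches = [a for a in back_catalog if draft_terms & a["terms"]]
    matches.sort(key=lambda a: a["pageviews"], reverse=True)
    return [a["url"] for a in matches[:max_links]]

catalog = [
    {"url": "/2021/streaming-wars", "terms": {"streaming", "tv"}, "pageviews": 90_000},
    {"url": "/2023/streaming-bundles", "terms": {"streaming", "pricing"}, "pageviews": 140_000},
    {"url": "/2022/podcast-boom", "terms": {"podcasts"}, "pageviews": 200_000},
]
links = suggest_links({"streaming"}, catalog)
```

The reverse direction described above is the same query run the other way: treat each evergreen archive piece as the "draft" and surface new articles worth linking from it.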
Removing silos between creation and analysis
While Google traffic might be declining, it will nonetheless remain important in this new world. And in this period of uncertainty, media organizations need to convert as much as possible of the traffic from this channel while it is still operating.
We call this "leaving the platforms behind." Media companies should focus on getting as much traffic as possible from search and other channels into first-party data collection funnels. This way, they can build enough of a moat to stay afloat if any or all of these traffic channels disappear.
Most teams today have dedicated SEO analysts who are essentially gatekeepers between SEO insights and content production. The SEO analysts aren’t going anywhere any time soon. But the new table stakes are that every journalist needs to be able to self-serve keyword insights.
It is important to use analytics tools that bring Search Console data directly into the approachable article analytics page the editorial team already knows how to use. Ideally, analytics tools should connect keywords and other platform traffic to conversions, so everyone on the team can understand their impact on leaving the platforms behind.
Done well, you’ll create a feedback loop that evolves and improves your content in a way that resonates with readers and machines.
Quality is all that matters
This is not the first "all hands on deck" moment for the media industry. That said, what we're seeing is that the barometer of success is a truly aligned strategy and execution that brings product, business development, and editorial teams together to pursue first-party relationships with audiences. Organizations with little brand identity that pursue traffic instead of subscriptions are suffering, and will likely continue to do so.
The public has a knowledge gap around generative artificial intelligence (GenAI), especially when it comes to its use in news media, according to a recent study of residents in six countries. Younger people across countries are more likely to have used GenAI tools and to be more comfortable and optimistic about the future of GenAI than older people. And a higher level of experience using GenAI tools appears to correlate with more positive assessments of their utility and reliability.
Over two thousand residents in each of six countries were surveyed for the May 2024 report What Does the Public in Six Countries Think of Generative AI in News? (Reuters Institute for the Study of Journalism). The countries surveyed were Argentina, Denmark, France, Japan, the UK and the USA.
Younger people more optimistic about GenAI
Overall, younger people had higher familiarity and comfort with GenAI tools. They were also more optimistic about future use and more comfortable with the use of GenAI tools in news media and journalism.
People aged 18-24 in all six countries were much more likely to have used GenAI tools such as ChatGPT, and to use them regularly, than older respondents. Averaging across countries, only 16% of respondents aged 55+ report having used ChatGPT at least once, compared to 56% of those aged 18 to 24.
Respondents 18-24 are much more likely to expect GenAI to have a large impact on ordinary people in the next five years. Sixty percent of people 18-24 expect this, while only 40% of people 55+ do.
In five out of six countries surveyed, people aged 18-34 are more likely to expect GenAI tools to have a positive impact on their own lives and on society. However, Argentine residents aged 45+ broke rank, expressing more optimism about GenAI improving both their own lives and society at large than younger generations.
Many respondents believe GenAI will improve scientific research, healthcare, and transportation. However, they express much more pessimism about its impact on news and journalism and job security.
Younger people, while still skeptical, have more trust in responsible use of GenAI by many sectors. This tendency is especially pronounced in sectors viewed with greatest skepticism by the overall public – such as government, politicians, social media, search engines, and news media.
Across all six countries, people 18-24 are significantly more likely than average to say they are comfortable using news produced entirely or partly by AI.
People don’t regularly use GenAI tools
Even the youngest generation surveyed reports infrequent use of GenAI tools. However, if the correlation between GenAI experience and optimism holds on a broader scale, trepidation will likely fade as more people become comfortable using GenAI tools regularly.
Between 20% and 30% of the online public across countries have not heard of any of the most popular AI tools.
While ChatGPT proved by far the most recognized GenAI tool, only 1% of respondents in Japan, 2% in France and the UK, and 7% in the U.S. say they use ChatGPT daily. Eighteen percent of the youngest age group report using ChatGPT weekly, compared to only 3% of those aged 55+.
Only 5% of people surveyed across the six countries report using GenAI to get the latest news.
It's worth noting that the populations surveyed live in affluent countries with higher-than-average education and internet connectivity. Countries that are less affluent, less free, and less connected likely have even fewer people experienced with GenAI tools.
The jury is out on public opinion of GenAI in news
A great deal of uncertainty prevails around GenAI use among all people, especially those with lower levels of formal education and less experience using GenAI tools. Across all six countries, over half (53%) of respondents answered “neither” or “don’t know” when asked whether GenAI will make their lives better or worse. Most, however, think it will make news and journalism worse.
When it comes to news, people are more comfortable with GenAI tools being used for backroom work such as editing and translation than they are with its use to create information (writing articles or creating images).
There is skepticism about whether humans are adequately vetting content produced using GenAI. Many believe that news produced using GenAI tools is less valuable.
Users have more comfort around GenAI use to produce news on “soft” topics such as fashion and sports, much less to produce “hard” news such as international affairs and political topics.
Thirty percent of U.S. and Argentina respondents trust news media to use GenAI responsibly. Only 12% in the UK and 18% in France agree. For comparison, over half of respondents in most of the countries trust healthcare professionals to use GenAI responsibly.
Most of the public believes it is very important to have humans “in the loop” overseeing GenAI use in newsrooms. Almost half surveyed do not believe that is happening. Across the six-country average, only a third believe human editors “always” or “often” check GenAI output for accuracy and quality.
A cross-country average of 41% say that news created mostly by AI will be "less worth paying for," while 32% say it will be worth "about the same" and 19% "don't know."
Opportunities to lead
These findings present a rocky road for news leaders to traverse. However, they also offer an opportunity to fill the knowledge gap with information that is educational and reassuring.
Research indicates that the international public overall values transparency in news media as a general practice, and blames news owners and leadership (rather than individual journalists) when it is lacking. However, some research shows users claim to want transparency around GenAI tools in news, but trust news less once they are made aware of its use.
The fact that the public at large is still wavering presents an opportunity for media leaders to get out in front on this issue. Creating policy and providing transparency around the use of GenAI tools in news and journalism is critical. News leaders especially need to educate the public about their standards for human oversight around content produced using GenAI tools.
Public trust in the news is dwindling, with three in 10 UK adults admitting they don’t trust the news very much and 6% confessing they don’t trust it at all. Unfortunately, this phenomenon is not limited to the UK but affects media audiences globally. A recent Gallup Poll, for example, showed a similar reality among Americans, with only 32% saying that they trust news a “great deal” or a “fair amount.”
What's more, publishers are grappling with the fact that audiences increasingly turn to social media to get their news fix. In its annual Digital News Report, the Reuters Institute for the Study of Journalism at the University of Oxford found that 30% of respondents say social media is the main way they come across news, surpassing the 22% who access it directly. Unfortunately, social media provides a fertile breeding ground for misinformation, which (somewhat ironically) further erodes people's trust in news.
Today’s media companies need strategies and tools that will help them re-engage audiences whose expectations have been shaped by social media. By understanding the behaviors and preferences of today’s audiences and incorporating the right tools and tactics, publishers have the ability to attract audiences and satisfy their need for a well-rounded information diet in a more social setting.
More than passing news updates
Notably, the shift to social news consumption is particularly acute among younger consumers, with people aged 18-24 less likely to use a news website or app and more dependent on social media for news. And these young consumers’ information preferences have been molded by their use of social media and mobile content consumption. Our own research finds consumers want easily understandable and readily available content. In fact, 26% of 18-34-year-olds say that they prefer news updates in short, bite-sized segments.
One of the strategies publishers can implement to replicate the social media experience–while continuing to provide quality news and information–is through the use of live blogs. Live blogs allow media companies to provide readers with an enriched and authentic experience that replicates the benefits of social media while addressing key challenges such as lack of engagement, misinformation, and declining trust.
A live blog allows publishers to provide real-time commentary, updates, and coverage of breaking news or unfolding events. Although they rose to prominence during the Covid-19 pandemic, when they served as a valuable tool for disseminating rapidly emerging critical information, live blogs have been around for quite some time.
However, publishers around the world are now refining their live blog strategies to capture the best aspects of the social media experience while serving as more than a stream of superficial updates. Through this more social, authentic style of storytelling, these publishers build trust and credibility with their audiences.
The style of live blogs resembles a mobile-friendly social media timeline. Therefore, it gives consumers news in the format they crave. It caters to the habits and preferences of users accustomed to consuming content through scrolling on their mobile phones.
Interactivity and engagement
To increase audience engagement, publishers can also incorporate interactive elements such as polls, videos, and live comment blocks into their live blogs. These mirror many popular features found on social media platforms. For example, journalists from the New Zealand publisher Stuff interacted directly with readers as millions of people attempted to get tickets to Taylor Swift's Eras Tour in Australia. With over 150 comments on their live blog, the journalists built a community with their readers as everyone shared their triumphs and frustrations in real time.
Some publishers even use live blogs to provide their audiences with direct access to experts in various fields. MDR, a German public broadcaster, did this particularly well during the Covid-19 pandemic and the cost-of-living crisis. It encouraged readers to post their questions in a comment block within the live blog, and experts then answered those questions directly in the chat. This tactic increases trust by giving readers access to subject-matter experts and reinforcing the expertise of a media outlet's team. It also helps provide a more balanced view of events through the inclusion of a variety of perspectives, reducing the perception of spin.
With live blogs, individual personalities can come out, which allows journalists to foster better relationships with their audience. For example, reporters covering sports at Süddeutsche Zeitung engage with their audience using a lighter tone than their formal journalism. This injects personality into their coverage and makes it more relatable and enjoyable for readers, mirroring the conversational style often seen in social media interactions.
Another key advantage of live blogs is their ability to curb endless doomscrolling by providing a curated, limited amount of verified information. This way, readers can choose the information most relevant to them based on their own needs and preferences without becoming overwhelmed by too much content.
Looking ahead
It's a challenging time for publishers and newsrooms around the world. The emergence of generative AI search results, audiences' increasing frustration with the news, and social media platforms' distancing themselves from news all create higher barriers to engagement.
In the year ahead, publishers should turn their attention to incorporating strategies that replicate the elements audiences love most about social media to keep consumers engaged and coming back for more. Implementing this approach can help publishers meet the needs of the modern consumer, who favors receiving their news in short, bite-sized segments. Live blogs allow media companies to capture the essence of the social media experience while addressing lack of engagement, misinformation, and declining trust.
Trust – and the lack of it – has become the metric of choice when discussing the alienation individuals feel regarding news organizations. When we consider what the metric is telling us, the picture is undeniably grim.
In an October 2023 poll, Gallup found that more people said they had no trust at all in the media (39%) than those that said they had a great deal of trust (32%). Increasing accountability and transparency are oft-cited prescriptions news organizations focus on to build trust.
Getting to the root of trust issues
However, many people say that the reasons they don't trust the media include a failure to cover both sides of an issue and the perception that journalists have a political bias, particularly a liberal bias. The Gallup poll reflected a 47-percentage-point trust gap between Democrats (58%) and Republicans (11%). That said, trust among Democrats is falling significantly for the same reason it has plummeted among Republicans: a perception of bias, in this case a conservative bias. The nation's political polarization is further driving down media trust.
It is understandable that media organizations believe that audience perception of bias can be addressed through transparency efforts focused on the way journalists report and disseminate the news. Unfortunately, there’s a fundamental element of storytelling that may have a much bigger impact on the appearance of bias: word choice.
It’s difficult to address issues of bias when people fail to see themselves reflected in the words journalists use. Language is not merely a tool for communication but a reflection of positioning and perspective, bias and blame. Academic studies show that trust and distrust are encoded in the very language choices we make.
Research we’ve been conducting at the University of Florida’s College of Journalism and Communications is identifying patterns of common language usage in coverage of controversial and potentially divisive subjects that could drive wedges and further damage trust. It is possible that, by recoding words away from inherent biases and towards authentic language people use to describe their experiences, we may find a pathway that engenders trust.
While the pursuit of trust is indeed a noble one, it feels more ambitious than the current climate allows. Therefore, journalists should ask the question: Is trust entirely in my control? And if not, what is? Our work has steered us toward focusing on what can be controlled: authenticity, intentionality and precision. We believe these elements can serve as the building blocks that lead to greater trust.
Based on that work, we’re developing a machine-learning tool journalists can use to identify potentially biased language and use that feedback to make more intentional word choices. The tool, called Authentically, is aimed at equipping journalists with the insights to make informed decisions in their writing. Authentically is currently in the alpha stage of development and we’re working with newsroom partners to test functionality.
When complete, the tool will operate in real time to flag words that merit more careful consideration. By providing a more robust context to the connotations of language, journalists are given the opportunity to ask themselves: Is this really what I meant to say? Does this accurately represent the events I’m describing? Is this language biased?
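For illustration only, a drastically simplified version of that flagging pass might look like the following. The watchlist entries and the crude suffix handling are invented for this example and are not how Authentically actually works; the real tool relies on trained language analysis, not a static word list.

```python
# Illustrative word-flagging pass. The watchlist and crude suffix
# handling are invented for this example; the real Authentically tool
# relies on trained language models, not a static word list.
import re

WATCHLIST = {
    "erupt":  "fire/destruction framing often applied to protests",
    "ignite": "fire/destruction framing often applied to protests",
    "spark":  "fire/destruction framing often applied to protests",
}

def flag_words(text):
    """Return (word, note) pairs for terms that merit a second look."""
    flags = []
    for raw in re.findall(r"[A-Za-z']+", text):
        word = raw.lower()
        # try the word itself, a plural-stripped form, and an -ed-stripped form
        candidates = (word, word.rstrip("s"),
                      word[:-2] if word.endswith("ed") else word)
        for form in candidates:
            if form in WATCHLIST:
                flags.append((raw, WATCHLIST[form]))
                break
    return flags

flags = flag_words("Protests erupted downtown as anger sparked new marches")
```

Even this toy version shows the interaction the authors describe: the writer is not corrected automatically but is handed a note and asked whether the word is really what they meant.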
Word choices and perceived bias
Throughout our investigation of multiple news topics, common patterns of use emerged. In our analysis of abortion coverage, the data indicated that words conveying a sense of pride, such as "proudly," "unapologetically" and "adamantly," frequently preceded the pro-life label, whereas the pro-choice label was frequently preceded by words indicating a sense of necessity or urgency, such as "necessarily," "increasingly" and "relentlessly." While these differences might appear subtle, they raise critical questions: what is being communicated when the language used around one position consistently denotes an undertone of morality while the other suggests one of urgency?
In examining coverage of racial justice protests, specifically regarding the murder of George Floyd in 2020, the findings spoke for themselves. The verbs used to describe protest actions repeatedly drew comparisons to fire or destruction, such as “spark,” “fuel,” “erupt,” “ignite,” “trigger” and “flare.” Is the recurrent use of this fiery language a deliberate choice, or is it a subconscious pattern of bias? What impact does that have on the perception of these demonstrations and of the people participating in them?
As concerns and polarizations regarding the climate grow, so does the importance to be conscious of our language choices. Verbs used with the term “global warming” appear to have a more neutral focus on the general effects, such as “occur” and “bring,” while verbs used with the term “climate change” delve deeper into the speed, intensity and potential ramifications of ongoing environmental shifts, such as “alter,” “fuel” and “accelerate.” Does the language journalists use – even when the differences are subtle – help convey the urgency of a climate emergency, and therefore shape perceptions?
While the foundational tenets of journalism remain core to audience trust, words matter. Our research has illustrated the pivotal role of authenticity, intentionality and precision in beginning to bridge the gap between the intention of the journalist and the way stories are received by the public.
About the authors
Janet Coats is the Managing Director of the Consortium on Trust in Media and Technology at the University of Florida’s College of Journalism and Communications. She spent 25 years as a journalist and a decade as a media consultant before moving to higher education.
Kendall Moe is the Senior Project Manager and Researcher for the Authentically project and has conducted the language analysis described in this story. She has an undergraduate degree in linguistics and a master’s degree in special education from the University of Florida.
AI is more than just a trendy buzzword; it's a transformative force that will shape the future of media. From content creation to personalization and automation, AI is on the brink of revolutionizing how we both produce and consume news, and media organizations that are prepared to embrace AI stand to benefit significantly. In a recent webinar, I spoke with Arc XP Chief Technology Officer Matt Monahan to learn how the emergent technology of generative AI is unlocking new workflows, presenting and solving unique business challenges, and creating opportunities for growth within the digital media industry.
Understanding generative AI
Today, when we talk about AI, we often mean generative AI, a subset of deep learning that teaches computers to recognize complex patterns in data. "Generative AI is really a transformer model and the models behind them, or what we call large language models (LLMs)," says Monahan. These models, such as ChatGPT, can handle tasks ranging from text generation to code creation, image generation, and even 3D modeling.
The AI industry is currently in a hype cycle, marked by high expectations and significant investments. However, a growing awareness of the limitations of LLMs like ChatGPT, particularly within the media industry, has emerged. AI is not a magic wand capable of creating content from scratch with flawless accuracy. Automated story publication, without human oversight, presents significant challenges because these models are not designed for fact-checking or the introduction of new content; their core competency lies in predicting language.
Despite these limitations, experimenting with generative AI provides invaluable insights into the evolving media landscape. Monahan stated, “The companies that spend time in experimentation today are going to be the ones who accrue benefit when they are ready to take advantage of it as the technology matures.” AI is a journey, and its potential is unlocked gradually as teams experiment, learn, and build competency.
While integrating generative AI may initially feel like stepping through a one-way door, there are experiments that allow for exploration without irreversible commitments. By integrating human review processes alongside AI, companies can achieve a harmonious blend of efficiency and accuracy. Human editors bring essential elements such as context, fact-checking and ethical judgment to the table — qualities that AI lacks.
Adopting AI in the newsroom
Recognizing AI as not a distant future technology but a viable solution right now, many news organizations have already embraced LLMs in their newsrooms. With human review processes, they utilize AI for tasks including creating AI-assisted graphics and diagrams, drafting written content, and even generating turnkey content at scale, such as translations, financial reporting, sports coverage, and large dataset analysis. This integration of AI enhances their capabilities while preserving the integrity of their news reporting.
One example of AI in action is its role in translation. Some media companies are already using AI to quickly create high-quality translations, needing little to no editing. This has allowed journalists to reach global audiences more efficiently by tailoring the same story to different readers. By implementing AI into their workflows, journalists are able to minimize their time spent on repetitive and time-consuming tasks, enabling them to focus on what matters – producing compelling and meaningful content that resonates with their readers.
What to expect by 2030
As news organizations take their first steps into the realm of AI, Monahan envisions a future where AI becomes the standard. “If you examine the pace of LLM development over the past three to four years, it becomes quite evident that the quality will improve at a rate beyond most people’s imagination,” he says.
Today, less than 1% of online content is AI-generated. However, he predicts that within a decade, at least 50% of online content will be generated by AI. This raises important questions: What does it mean for content to be 50% AI-generated? Does it represent content created entirely from scratch, content edited by AI, or content that has received AI assistance? These are questions that the media industry will need to address and define in the coming years.
Looking ahead to 2030, Monahan anticipates several key developments:
AI will significantly cut the costs of content creation, encompassing written content, graphics, and video explainers. However, this shift won’t eliminate the need for human involvement, especially in crucial areas like fact-checking and quality assurance.
Content formats and user experiences will shift significantly, with personalized content becoming the norm. Media companies will need to adapt and innovate to meet these new demands.
Sports content will gain immense value as one of the few remaining sources of “original content” resistant to full automation.
Advertising will become hyper-personalized, delivering unique ads and commercials tailored to individual users.
With automated workflows and AI tools generating much of the code, every developer is expected to become an AI-assisted developer.
Monahan emphasizes that embracing AI isn’t just about staying ahead; it’s about spearheading a future where AI elevates content creation, enriches user experiences, and reshapes the media landscape. By automating tasks in the newsroom, such as content creation and translation, AI empowers journalists to concentrate on their core mission: crafting engaging and meaningful content for their readers. The future of media is powered by AI, and those who harness its capabilities will lead the way in this transformative journey.
It’s hard to believe, but ChatGPT was only released to the public late last year (November 2022), sparking an AI arms race and spurring adoption across a range of sectors, including the media.
So, how can media leaders best harness these developments? What are the steps they need to have in place to make the most of these advances? Here are seven things you need to consider:
1. Don’t just jump on the bandwagon
The media has long been guilty of shiny object syndrome, chasing after the next big thing in the hope that it will help solve multiple short-term and long-term structural issues. All of the noise that’s being made about AI can make media leaders fear that they are behind the curve. From the publishers I have spoken to recently, the FOMO (fear of missing out) is very real.
Yet at the same time, there’s a wariness too. After all, the media landscape is littered with many other developments (the Metaverse, VR/AR, pivot to video, blockchain et al) that have been simultaneously held up as saviors and disrupters.
Will AI be any different? I think it will be, not least because elements of this technology have already been deployed at many media businesses for a while. Developments in Generative AI are the next stage in this evolution, rather than a wholesale revolution.
2. Take time to determine the best approach
Findings from a new global survey published by LSE seem to reinforce this. It found that although 80% of respondents expect an increase in the use of AI in their newsrooms, four in ten companies have not greatly changed their approach to AI since 2019, when LSE published its last report.
Adoption of new tools at this time may therefore be lower than you think. Perhaps that may give you the confidence to take a beat. Rather than jumping on the bandwagon too quickly, take the time to determine what you want AI to help you achieve.
This approach can help to lay the foundations for long-term success. Strategies should start with the end in mind. Set goals and ascertain how you’ll know when they have been achieved.
3. Set up a taskforce to understand what success looks like
To help them determine their own approaches to the latest wave of AI innovation, companies like TMB (Trusted Media Brands) and others have set up internal task forces to understand the risks, as well as the benefits that AI may unlock.
In doing this, media businesses can learn from the mistakes of those who’ve arguably rushed into this technology too quickly. CNET, Gannett and MSN are just some of those who have recently had embarrassing public experiences as a result of publishing (unchecked) AI-written content.
4. Bring the whole company with you
Given the breadth of activities that can be impacted by AI, these internal bodies need to be diverse and include people from across the business. This matters because media firms should see AI as more than just a cost-saver.
Harnessed correctly, it may help to create fresh revenue streams and to reach new audiences. To realize this value, publishers need to cultivate company-wide expertise and carefully assess where AI can drive efficiencies, enhance existing offerings, or enable entirely new products and services.
Tools like Newsroom AI and Jasper can help to increase the volume, speed and breadth of content being offered, while AI-produced newsletters like ARLnow and news apps like Artifact demonstrate how AI can deliver content in fresh ways. Developing internal training programs and encouraging take-up of industry wide opportunities to gain more knowledge about how AI works and its possibilities will help with buy-in and culture change.
As Louise Story, a former executive at The New York Times and The Wall Street Journal recently put it, “AI will reshape the media landscape, and the organizations that use it creatively will thrive.”
5. Have clear guidelines for AI usage
Alongside having a clear strategic approach, and a robust understanding of how to measure success, how these efforts are implemented also matters.
One way to address these implementation concerns is to upskill your staff and ensure that representatives from across the company are involved in setting your AI strategy. A further practical step involves creating a clear set of guidelines about how AI will be used in your company. And, indeed, what it will not be used for.
There are also opportunities to engage your audience in this process too. Ask them for input on your guidelines, as well as being clear (e.g., through effective labeling) about when AI has, or has not, been used. This matters at a time when trust in the media remains near record lows. AI missteps only risk exacerbating some of these trust issues, emphasizing why elements of this technology need to be used with an element of caution.
6. Understand how to protect your IP
Together with labor concerns, another major issue that publishers and content creators are contending with relates to copyright and IP. It is important to understand how you can avoid your content being cannibalized – and in some cases anonymized – by Generative AI.
Although tools like the chat/converse function in Google Search and Microsoft’s Bing provide links to sources, ChatGPT does not. That’s a major source of concern for media companies who risk being deprived of clickthrough traffic and attribution.
As Olivia Moore at the venture capital firm a16z (Andreessen Horowitz) has pointed out, ChatGPT is by far the most widely used of these tools. Its monthly web and app traffic is around the same size as that of platforms like Reddit, LinkedIn, and Twitch.
This summer, the Associated Press agreed to license its content to OpenAI, the company behind ChatGPT, making it the first publisher to do so. Not every company can replicate this. How many outlets have the reach, brand and depth of content that AP has? Nevertheless, it will be interesting to see if other major publishers – as well as consortia of other companies – follow suit.
The media industry has learned from past experience that relying too heavily on tech companies can undermine their long-term sustainability. Short-term financial grants and shifting algorithmic priorities may provide temporary relief but fail to address deeper impacts on creative business models.
Creating quality content comes at a cost. Having seen revenues eroded and journalism undercut previously, publishers are rightfully wary about how this will pan out. So, it will be critical to weigh any payment schemes and financial relationships against the larger industry-wide impact these tools will have on content creators.
Addressing this issue is not easy, given how nascent this AI technology is and how quickly it is developing. However, the potential risk to publishers is understandably focusing a lot of minds on identifying and implementing solutions. For now, as this issue plays out, it’s one that needs to be firmly on your radar.
Moving Forward: diversification and compensation
The rapid evolution of AI presents a heady mixture of both promise and peril. The companies that are most likely to flourish will have to balance the opportunities that AI offers while avoiding its pitfalls and threats.
That’s not going to be easy. However, the relationship between AI developers and content creators will remain a deeply symbiotic one.
“Media companies have an opportunity to become a major player in the space,” argues Francesco Marconi, the author of Newsmakers: Artificial Intelligence and the Future of Journalism. “They possess some of the most valuable assets for AI development: text data for training models and ethical principles for creating reliable and trustworthy systems,” he adds.
Given this, arguably it’s all the more important that the media industry is rewarded for this value. “We should argue vociferously for compensation,” News Corporation’s chief executive Robert Thomson says.
At the same time, media companies also need to be cognizant of the fact that AI-driven changes in areas such as search and SEO, as well as consumer behaviors, are likely to impact traffic and digital ad revenues. This is akin to “dropping a nuclear bomb on an online publishing industry that’s already struggling to survive,” contends the technology reporter Matt Novak.
With regulation unlikely to come any time soon, arguably it will be up to publishers, perhaps working together collectively, to navigate the best solutions to this thorny financial issue. That may include collective bargaining and licensing agreements with AI companies using their materials, as well as creative partnerships like the new AI image generator recently announced by Getty Images and Nvidia.
In the meantime, it will be more important than ever for media companies to diversify their revenues, as well as step up their efforts to rethink their business models, operations, and products to ensure that they are fit for the age of AI.
Professor Charlie Beckett argues that fundamental to this will be content that stands out from the crowd. “In a world of AI-driven journalism, your human-driven journalism will be the standout,” he told us recently. Differentiation will be key, concurs the former BBC and Yahoo! executive David Caswell. Meanwhile, as Juan Señor, President of Innovation Media Consulting, recently reminded us, “we cannot rely on someone else’s platform to build our business.”
This means that publishers will need to focus on originality, value, in-house knowledge and skills, as well as the ability to bring their organization – and audience – along with them.
These are major challenges, and we need to acknowledge that AI offers both challenges and opportunities to media companies. Steering through this uncertain period will require making smart strategic decisions and keeping abreast of a rapidly changing landscape. The AI-driven future is hard to predict and navigating this transformation will require both vision and vigilance. But one thing is certain. It’s going to be a bumpy, creative and fascinating journey.
You wouldn’t know it from the breathless coverage of new AI tools like ChatGPT, but the use of AI in the media is old news. While many media companies have been using AI-generated content for years, only the closest media watchers knew they might be reading news generated by a bot. A new report digs into the reality of how AI is — and has been — used by media companies, how AI’s limits necessarily constrain its use, and where new media models may be able to put this technology to profitable use.
News media and AI are old pals
A recent report “AI and the media. Now.” from Di5rupt and FIPP collects media leaders’ thoughts on how AI is impacting global publishers. In exploring media companies’ relationship with AI through the years, it’s clear that this symbiosis began earlier than even those in the know may have realized. According to the report, the earliest use of automated content “dates back to the early 1960s, when computer systems were first used to generate news articles.” More recently, news organizations have been using AI tools to churn out data-driven content — like sports recaps, stock price updates, or weather reports — often creating their own solutions to do so.
From the Associated Press to Axel Springer to China Daily, automated content has been on the editorial calendar for years. These tools have been used largely to free up human resources for reporting, analysis, and other in-depth work. Meanwhile, other types of AI tools make light work for people by doing the front work of transcribing interview recordings, analyzing data, looking for copyright infringements, or even helping with audience segmentation. But these uses all belie the continued limitations of AI, which are many.
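The data-driven automation described above is, at heart, structured data slotted into prose templates. The sketch below shows the general idea for a sports recap; the template wording, team names, and scores are invented examples, not any organization's actual system.

```python
# Minimal sketch of template-driven automated content: structured data
# (scores, dates) slotted into prose templates, the kind of pipeline
# long used for sports recaps, stock updates, and weather reports.

RECAP_TEMPLATE = (
    "{winner} beat {loser} {winner_score}-{loser_score} on {day}, "
    "{margin_phrase}."
)

def margin_phrase(margin):
    """Pick a canned phrase based on the final margin of victory."""
    if margin >= 20:
        return "cruising to a blowout win"
    if margin >= 10:
        return "pulling away late"
    return "holding on in a close finish"

def write_recap(game):
    """Render one game record into a one-sentence recap."""
    margin = game["winner_score"] - game["loser_score"]
    return RECAP_TEMPLATE.format(margin_phrase=margin_phrase(margin), **game)

game = {"winner": "Rivertown", "loser": "Lakeside",
        "winner_score": 98, "loser_score": 91, "day": "Friday"}
print(write_recap(game))
# → Rivertown beat Lakeside 98-91 on Friday, holding on in a close finish.
```

Systems like this produce accurate, readable copy precisely because the data is structured and the prose is constrained, which is also why their output rarely rises above the formulaic.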
The limits of AI for the media
While publicly available tools like ChatGPT may have put AI-generated content squarely in the limelight, the truth is that not much has changed about AI’s capabilities or how media organizations use it. As the report puts it, “Essentially think of ChatGPT as a search engine on steroids. You ask it a question and it skims the data that it has stored to construct a response for you.” So, the report cautions, any company planning to replace writers with machines must first consider the limits of AI.
It cannot actually write breaking news (yet!) as it was only trained on data through 2021.
Not only can AI make mistakes, but it has no real way of fact-checking its sources.
It does not cite its sources — and is often found to be plagiarizing.
AI is only as good as the data it has access to, so it is often biased, wrong, or lacking important context.
Analysis is not AI’s strong suit, at least when it comes to ChatGPT and its ilk. The report says, “It presents what it sees as fact and leaves the reader to make their own mind up.” This is why it works well for basic, data-driven content creation and not much else.
Ironically, the algorithms rely on journalists to create new content — or maybe we should call it data — to train on, ultimately making AI dependent on humans, at least in its current state. In fact, The AP and ChatGPT’s parent company, OpenAI, struck a deal to let its large language model use The AP’s content to train. These limitations mean AI will likely continue to be relegated to simple content creation tasks that allow humans to do more important work.
The potential for AI-driven news
ChatGPT’s limits aside, some startups have already launched AI-only news sites. Despite these sites being subject to the limitations above, the report suggests that the trend may be more than just a novelty.
John Barnes, Chief Digital Officer, William Reed Business Media, explains what these new media outlets could look like: “Model around advertising and affiliate revenue and gobble up traffic by being quick and comprehensive, or find highly-targeted subscription and B2B channels where speed of information matters much more than grammatical flair or particularly special insight — results, events, and data-driven types of products.”
However, when one stops to consider the kind of content where speed matters more than insight, one will likely arrive at where AI and media first started their relationship: at financial, sports, and other data-driven news. As these sites proliferate, they will face the same challenges as other media organizations: differentiation. And when you don’t have original, investigative reporting — or even celebrity profiles — to set your publication apart, it all comes down to who can be the most accurate while maintaining speed.
Ultimately, AI may help the next Woodward and Bernstein dig through piles of data to find a story or transcribe the Watergate tapes, but it will never be able to meet Deep Throat in the parking garage.
TikTok and media companies across the globe are teaming up to write the next chapter in the epic saga of the media and its complex relationship with social platforms. In early May, TikTok announced Pulse Premiere, a program designed to place advertisers alongside the premium content created by trusted media brands — and share the revenue 50/50 with those content creators.
The first question that might bubble to the surface for media companies is, “How is this different from all the other revenue-sharing schemes with social platforms?”
“Some publishers have had good relationships with some platforms,” says Nic Newman, Senior Research Associate at the Reuters Institute for the Study of Journalism. He points to Snapchat as an example of a platform that has led to reasonable revenue for some media companies. But revenue is not the main impetus for them to embrace TikTok. “Cultural genres are things that premium publishers are pushing into, especially to reach younger users,” says Newman. And TikTok has been a key partner in that strategy all along.
This strategy aims to combat news avoidance and engage new audiences, says Newman. Add to the mix a decline in brand revenue and media buyers who find it easier to buy through platforms, and it’s clear why premium publishers are eager to partner with a wildly popular platform like TikTok. From TikTok’s point of view, Newman adds, it also makes sense to reward media companies for creating content that helps fuel the platform’s ongoing success.
But as it turns out, there are a few key differences between TikTok and its social media competitors that have media companies feeling bullish.
Embracing TikTok as a content creation platform
TikTok is unique. Facebook and X/Twitter, at least for publishers, are places where publications and audiences share links to content, which drives traffic back to a website. This dynamic has been a source of tension over the years, and it continues to play out. TikTok, however, is more of a content-creation tool than a distribution platform. Publishers who are successful on TikTok understand and embrace its peculiarities, creating content designed specifically for its audience.
Reuters Institute’s 2022 Digital News Report cited TikTok as the fastest-growing social network for news consumption. This is driven, in part, by publishers’ desire to reach new, younger audiences. Reuters found 40% of people 18 to 24 use TikTok, and 15% of them use it for news. It only makes sense that news brands with robust TikTok audiences would seek to cash in on that engagement through advertising.
One of those media brands is Vox Media, which joins Condé Nast, Buzzfeed, Dotdash Meredith, Hearst Magazines, MLS, NBCUniversal, UFC, and WWE as one of the initial Pulse Premiere partners. “Our editorial brands and video franchises reach millions of highly engaged followers on TikTok, and we are now excited to work with TikTok on a new ad product,” Ray Chao, SVP & GM of Audio and Digital Video at Vox Media, said in a statement to DCN. “We are hopeful that this partnership will help us strengthen our digital video revenue model through a new revenue stream.”
Similarly, Deb Brett, Condé Nast’s Chief Business Officer, Digital, says, “Our audience demands us to be on TikTok.” That’s not a problem for Condé Nast’s publications, which have been “digital first” for years, according to Brett.
Importantly, Condé Nast also avoids taking a one-size-fits-all approach to its social media channels. “We tailor what we create for every platform. We want to hear from our brands in different ways,” says Brett. Before TikTok introduced Pulse Premiere, that often meant creating branded content integrations and custom programs for advertisers who wanted to reach the brands’ audiences. Condé Nast even tried live streaming and shoppable content. Pulse Premiere is now making it easier for media organizations to monetize their presence on the platform.
Thinking about brand safety
Pulse Premiere comes on the heels of TikTok Pulse, launched in 2022, which places brand advertisers next to the top 4% of trending videos on TikTok across a number of categories. Throughout the history of social media advertising, brands have run into the problem of unpredictability. Historically, this problem was only addressed once it boiled over, sometimes in the form of a boycott. As ads follow users around one social platform or another, there has been no telling what content an ad will be placed beside. At this stage of the social game, it’s no surprise that TikTok is being proactive about providing a solution to that ongoing issue.
In its announcement, TikTok said, “Pulse Premiere gives brands the control to choose where their ads are placed, adjacent to content from our premium publishing partners in lifestyle and education, sports, and entertainment categories for specific tentpole events as well as evergreen, ongoing content.” In other words, TikTok is giving advertisers the ability not only to avoid being placed next to the latest viral conspiracy theory video — or even just dance trends irrelevant to a brand’s messaging — but to easily continue existing publisher relationships.
Extending existing brand relationships
“As a publishing community, we love [that TikTok Pulse Premiere is] turnkey, it’s low lift, and it’s scalable,” says Brett. Rather than devising ad hoc solutions to advertiser demands to be part of the TikTok conversation that Condé Nast’s brands create, the publisher can now take advantage of a native ad platform.
Condé Nast will kick off its Pulse Premiere participation by aligning a campaign with what it calls one of its “cultural calendar moments”: Vogue World. The event takes place on September 14 in London, and Brett says Vogue will align its TikTok content with the event. “It’s an exciting intersection of their ad product and our cultural calendar moments,” she adds.
While it’s still too early to tell whether the economics add up for publishers, Brett says, “It has indicators for a huge potential.” Newman, however, points to a problem publishers have come to know well: “Fundamentally, people’s attention is the problem. The first few publishers do well, then other pubs follow, and the rates go down.” Only time will tell if TikTok proves to be different from its competitors in the ways that matter most for publishers.
At the 2023 Collision Conference, held June 26-30 in Toronto, Canada, DCN’s editorial director Michelle Manafy sat down with three media executives to discuss the ethics of using generative AI in journalism. The conversation covered the evolution of AI and its usage in the media, up to today’s much-discussed generative AI tools. Panelists weighed in on a range of use cases and where they would – or would not – permit (or even encourage) the use of generative AI in their media organizations. They also discussed whether or not generative AI is an existential threat to journalism, journalists — and even humanity as a whole. Listen to the discussion here and/or read a few highlights below.
Navigating the ethical landscape of generative AI and journalism
Featuring:
Gideon Lichfield – Global Editorial Director, Wired
Harry McCracken – Global Technology Editor, Fast Company
Traci Mabrey – Head of News, Factiva
Michelle Manafy – Editorial Director, Digital Content Next
A few highlights from the panel discussion:
Traci Mabrey: We’ve been using [machine learning and AI] forever and that’s a really important component as we look at this. This new horizon is going to be something, and I don’t think any of us know exactly what that is yet. But we have been using the building blocks of it for quite some time…
Gideon Lichfield: I think what’s changed is that it now has the capability to produce something that looks like something humans would create from scratch. And I emphasize looks like because it’s very clear that what’s going on is imitation… the fact that it became available as an easy to use interface was really crucial… this technology was around already for a few years, but it wasn’t that easy to access. The big change last year was just that it became easy to access…
Michelle Manafy: We’ve heard of late that some big tech leaders, some really smart folks call generative AI an existential threat. Are we afraid? Should we be afraid? And I don’t just mean as the media. You guys all think about larger issues in society. Is this good? Is it bad? Should we all be scared?
Harry McCracken: I think the worrying about it blowing up the world or killing us all is a little overwrought, particularly because there’s a pretty long list of genuine concerns that are either an issue right now or pretty clearly will be over the next few years involving things like misinformation. There are huge privacy concerns with a handful of large companies grabbing all our data and synthesizing that for their own benefit. I’d say there are plenty of things to worry about with A.I… but destroying the world might be more like the way that social media has, in a lot of ways, degraded the human experience…
Gideon Lichfield: …the increasing volume of just sheer garbage that is out there that is going to be generated by AI: that’s a real worry. And the job displacement part is also a thing that I worry about. But I think there is a way to use A.I. that empowers people, gives them extra tools. But it’s also a great temptation for companies, for employers to simply look at it as a way to save costs…
Harry McCracken: …Journalism is unusual in that the writing is the product. Most of the writing that exists in the world is not the product, just the byproduct. There are a lot of cases where having a computer draft your internal memo or whatever makes a lot of sense and will free you up to do more important things…
Traci Mabrey: …I think if we look at our journalists and our editors around the world, there’s a very personal scope that goes into everything somebody is writing and somebody is speaking about. And I think that’s a really big component when you look at it. The technology, as Gideon was saying, it is bringing up a set of words. It’s able to make 500 words on X topic regarding this. But that is not the way that I would infuse that information into the world. And it’s not those types of things that make organic journalism and all of the real nuggets that we get from it… I think for the drafting process and the information gathering, certainly saving a lot of time. But we’re certainly on the path of that being still a very personal end product.
Learn more about how media leaders are developing their policies around the usage of AI and generative AI in their organizations:
When it comes to video storytelling, tools are a critical factor in determining the quality of your product. Reliance on outdated video-production tools can affect your ability to compete with other brands battling for audience attention and loyalty.
Video is becoming an incredibly valuable tool for disseminating information quickly to audiences. In fact, 43.4% of internet users watch online videos as a source of learning each week. For media organizations seeking to build loyalty and report on breaking news, it’s important to have a suite of tools that makes it easy to create brand-consistent videos with just a few clicks.
Given trend shifts in the video-production industry, read on to evaluate if your tool suite could use an upgrade.
What goes into video production?
The world of modern video production is more important than ever, given that companies need to put tools in the hands of as many reporting teams and video editors as possible. While some videos may require high production costs and complex systems, it is beneficial to also have tools that make it easy to respond to timely local news and major events. For content that needs to be published quickly, companies need to have resources available that allow reporters to share content at any time, from anywhere with the click of a button.
Today, publishers producing the best videos customize their themes, ensure brand recognition, publish directly from advanced interfaces and much more. The videos that stand out in today’s crowded marketplace live on multiple platforms and speak directly to viewers while getting online quickly.
Video production challenges
There’s no doubt that today’s video storytelling is better than ever. That quality comes at a cost, though. Modern video storytellers face a variety of challenges with video production, including the following:
1. Time
In today’s fast-paced news environment, publishers need to be able to post top-quality videos in less time than ever before. This time crunch is a real problem for many organizations and can easily separate the winning digital publications from those who fall behind.
To combat time-related issues, teams benefit from platforms that allow them to automate many of the repetitive tasks of video production. Tools like templating and branding packages help publishers push to-the-minute video content without missing crucial deadlines or falling behind the news cycle.
2. Incorrect formatting
Many storytelling teams have tried to get around time-crunch problems by creating one-size-fits-all content. Unfortunately, this kind of content often feels unoriginal and duplicative. Additionally, it may not be properly sized or formatted for the intended platform. As consumers increasingly gather information from alternative sources such as social media, email and websites, it’s important to have content that can be adjusted and reformatted for each location.
To combat this issue, teams have two choices: invest far more time and energy in each video or use a platform that makes it easy to create diverse material that remains on-brand. The right digital publishing tools provide comprehensive customization options, including the ability to swap out themes while applying branding across videos and publishing them in the correct sizing.
3. Software limitations
Relying on limited, low-capacity software to produce critical video content can be a big pitfall for publishers that want to stand out. Software limitations make it virtually impossible to remain competitive in the video-storytelling environment and may even create unnecessary bottlenecks during breaking news events.
To combat software limitations, today’s digital publishers must find video-production software that provides powerful tools, including theme customization, personalization, publishing flexibility and more.
4. Team size
Historically, teams have needed to grow larger to support video storytelling efforts. That’s because video storytelling tools were often complex, and not everyone was equipped to use them. As a result, teams needed to create entire publishing and production departments full of people who knew how to use advanced tools.
Today, however, large teams can actually make it harder to be agile and adaptable. Therefore, the most competitive publishing teams out there are paring down, opting for more intuitive software (that doesn’t require specialized skills to use) rather than larger teams. This allows your teams to be effective in a variety of key focus areas, while still producing engaging content.
Critical focus
Video production software can help increase competitiveness across the market. With this in mind, let’s look at a couple of recent advancements in video-production software that will help you optimize your strategy.
Automating branded content
To be recognizable, a brand should use the same logos and color schemes across all videos, while still producing quality material quickly and easily. Modern video-production software can help you reuse branded assets and themes where they are needed. The ability to save brand themes and layouts across multiple platforms allows media teams to rapidly implement them across the board. This gives teams more time to focus on making the content stand out while maintaining brand consistency.
Anywhere, anytime video production
As teams have pared down and become leaner and more agile, video-production software has morphed to support the needs of small teams. In other words, your teams must be able to easily create videos from anywhere, at any time, with any device. This makes it easy for reporters in the field to cover breaking news across your social channels and your website.
Ease, automation, and always on-brand content
Media teams must be able to easily create quality video-storytelling materials without sacrificing speed, accuracy or branding. This is critical for any company that wants to remain competitive, creative and innovative in today’s fast-paced video-publication environment.
Great video production serves several purposes: it spreads breaking news, educates audiences and promotes viewership. It also builds brand trust, which media companies should reinforce across platforms. Streamlining the production process allows teams to focus on creating the kind of content that promotes brand recognition and audience retention.
For a decade, artificial intelligence (AI) has enabled digital media companies to create and deliver news and content faster, to find patterns in large amounts of data, and to engage with audiences in new ways. However, with much-hyped recent announcements, including ChatGPT, Microsoft’s next-gen Bing, and Meta’s LLaMA, media outlets recognize that they face significant challenges as they explore the opportunities the latest wave of AI brings.
In this second story in our two-part series on the evolution of AI applications in the media business, we explore six challenges that media outlets face around AI tools, from the misuse of AI to generate misinformation, to errors and accuracy, to worries about journalistic job losses.
Misinformation
While AI has been used by media companies for various purposes over the last 10 years, implementations still face challenges. One of the biggest is the risk of creating and spreading misinformation and disinformation and of promoting bias. Generative AI could make misinformation and disinformation cheaper and easier to produce.
“AI language models are notorious bullshitters, often presenting falsehoods as facts. They are excellent at predicting the next word in a sentence, but they have no knowledge of what the sentence actually means,” wrote Melissa Heikkilä for MIT Technology Review.
Generative AI can be used to create new content including audio, code, images, text, simulations, and videos—in mere seconds. “The problem is, they have absolutely no commitment to the truth,” wrote Emily Bell in the Guardian. “Just think how rapidly a ChatGPT user could flood the internet with fake news stories that appear to have been written by humans.”
AI could also be used to create networks of fake news sites and news staff to spread disinformation. Just ask Alex Mahadevan, the director of MediaWise at the Poynter Institute, who used ChatGPT to create a fake newspaper, stories and code for a website in a few hours and wrote about the process. “Anyone with minimal coding ability and an ax to grind could launch networks of false local news sites—with plausible-but-fake news items, staff and editorial policies—using ChatGPT,” he said.
Errors and accuracy
Julia Beizer, chief digital officer at Bloomberg Media, says the biggest challenge she sees around AI is accuracy.
“At journalism companies, our duty is to provide our readers with fact-based information. We’ve seen what happens to our discourse when our society isn’t operating from a shared set of facts. It’s clear AI can provide us with a lot of value and utility. But it’s also clear that it isn’t yet ready to be an accurate source on the world’s information,” she said.
Thus far, AI content generators are prone to making factually inaccurate claims. Microsoft acknowledged that its AI-enhanced Bing might make errors, saying: “AI can make mistakes … Bing will sometimes misrepresent the information it finds, and you may see responses that sound convincing but are incomplete, inaccurate, or inappropriate.”
That hasn’t stopped media companies from experimenting with ChatGPT and other generative AI. Sports Illustrated publisher Arena Group Holdings partnered with AI startups Jasper and Nota to generate stories from its own library of content, which were then edited by humans. However, there were “many inaccuracies and falsehoods” in the pieces. CNET also published AI-written articles and came under scrutiny for factual errors and plagiarism in those pieces.
Francesco Marconi, longtime media AI advocate and co-founder of AppliedXL, said that though AI technologies can reduce media production costs, they also pose a risk to both news media and society as a whole.
“Unchecked algorithmic creation presents substantial pitfalls. Despite the current uncertainties, newsrooms should monitor the evolution of the technology by conducting research, collaborating with academic institutions and technology firms, and implementing new AI workflows to identify inaccuracies and errors,” he said.
“The introduction of generative summaries on search engines like Google and Bing will likely affect the traffic and referral to publishers,” Marconi said. “If search engine users can receive direct answers to their queries, what motivation do they have to visit the publisher’s website? This can impact news organizations in terms of display ads and lead generation for sites that monetize through subscriptions.”
Filter and context
The amount of data and information created every day is estimated at around 2.5 quintillion bytes, according to futurist Bernard Marr. With the rise of generative AI models, the growth of information available to digital media companies and the public is exponential. Some experts predict that by 2026, 90% of online content could be AI-generated.
This presents a new challenge, according to Marconi. The explosion of data from connected devices has created a world with too much of it. “We are now producing more information than at any other point in history, making it much more challenging to filter out unwanted information.”
A significant challenge for journalism today is filtering and contextualizing information. News organizations and journalism schools must incorporate computational journalism practices, so that journalists are also responsible for writing editorial algorithms in addition to stories.
“This marks an inflection point, where we now must focus on building machines that filter out noise, distinguish fact from fiction, and highlight what is significant,” Marconi said. “These systems are developed with journalistic principles and work 24/7 to filter out irrelevant information and uncover noteworthy events.”
Replacing journalists
AI-powered text generation tools may threaten journalism jobs, which has been a concern for the industry for years. On the other side is the longstanding argument that automation will free journalists to do more interesting and intensive work. It is clear, however, that given the financial pressures faced by media companies, the use of AI to streamline staffing is a serious consideration.
Digital media companies across the U.S. and Europe are grappling with what the potential of generative AI may mean for their businesses. Buzzfeed recently shared that it planned to explore AI-generated content to create quizzes, while cutting a percentage of its workforce. Last week, Mathias Doepfner, CEO of German media company Axel Springer, candidly admitted that journalists could be replaced by AI, as the company prepared to cut costs.
There is a valid concern regarding job displacement when considering the impact of AI on employment, Marconi agreed—with a caveat. “Some positions may disappear entirely, while others may transform into new roles,” he said. “However, it is also important to note that the integration of AI into newsrooms is creating new jobs: Automation & AI editors, Computational journalists, Newsroom tool managers, and AI ethics editors.”
Potential legal and ethical implications
One of the biggest challenges digital media companies and publishers will face with the rise of AI in the newsroom is the set of issues around copyright and intellectual property ownership.
ChatGPT and other generative AI are trained by scraping content from the internet, including open-source databases but also copyrighted articles and images created by publishers. “This debate is both fascinating and complex: fair use can drive AI innovation (which will be critical for long-term economic growth and productivity). However, at the same time it raises concerns about the lack of compensation or attribution for publishers who produced the training data,” according to Marconi.
Under European law, AI cannot own copyright, as it cannot be recognized as an author. Under U.S. law, copyright protection only applies to content authored by humans; the U.S. Copyright Office will therefore not register works created by artificial intelligence.
“AI’s legal and ethical ramifications, which span intellectual property (IP) ownership and infringement issues, content verification, and moderation concerns and the potential to break existing newsroom funding models, leave its future relationship with journalism far from clear-cut,” wrote lawyer JJ Shaw for PressGazette.
Questions remain
While AI is not new, it is clearly making an evolutionary leap at present. Media companies may have been slow to adopt technology in the early days of the internet, but today’s media executives are keen to embrace tools that improve their businesses and streamline operations. Given the pace at which AI is evolving, though, there’s still much to learn about the opportunities and challenges it presents.
Currently, there are some practical concerns for digital media companies and large questions still to be answered, according to Bloomberg’s Beizer. She questions how the advancement of these tools will affect relationships: “If we use AI in our own content creation, how should we disclose that to users to gain their trust?”
Wired has already taken the first step by publishing a policy that places clear limits on what it will use AI for and how the editorial process will be handled to ensure a quality product.
Beizer also poses the question of “how publishers and creators should be compensated for their role in sourcing, writing and making the content that’s now training these large machines?”
While in some eras media companies have been swept along by the tide of technological change, with AI, media executives are clearly grappling with how to embrace the promise while better managing the impact on their businesses.