An inside look at the business of digital content
Media execs weigh risks, challenges of generative AI
March 14, 2023 | By Jessica Patterson – Independent Media Reporter

For a decade, artificial intelligence (AI) has enabled digital media companies to create and deliver news and content faster, to find patterns in large amounts of data, and to engage with audiences in new ways. However, with much-hyped recent announcements including ChatGPT, Microsoft’s next-gen Bing, and Meta’s LLaMA, media outlets recognize that they face significant challenges as they explore the opportunities the latest wave of AI brings.
In this second story in our two-part series on the evolution of AI applications in the media business, we explore six challenges that media outlets face around AI tools, from the misuse of AI to generate misinformation and concerns about errors and accuracy to worries about journalistic job losses.
Misinformation
While AI has been used by media companies for various purposes over the last 10 years, implementations still face challenges. One of the biggest is the risk of creating and spreading misinformation and disinformation and of promoting bias. Generative AI could make misinformation and disinformation cheaper and easier to produce.
“AI language models are notorious bullshitters, often presenting falsehoods as facts. They are excellent at predicting the next word in a sentence, but they have no knowledge of what the sentence actually means,” wrote Melissa Heikkilä for MIT Technology Review.
Generative AI can be used to create new content, including audio, code, images, text, simulations, and video, in mere seconds. “The problem is, they have absolutely no commitment to the truth,” wrote Emily Bell in the Guardian. “Just think how rapidly a ChatGPT user could flood the internet with fake news stories that appear to have been written by humans.”
AI could also be used to create networks of fake news sites and news staff to spread disinformation. Just ask Alex Mahadevan, the director of MediaWise at the Poynter Institute, who used ChatGPT to create a fake newspaper, stories and code for a website in a few hours and wrote about the process. “Anyone with minimal coding ability and an ax to grind could launch networks of false local news sites—with plausible-but-fake news items, staff and editorial policies—using ChatGPT,” he said.
Errors and accuracy
Julia Beizer, chief digital officer at Bloomberg Media, says the biggest challenge she sees around AI is accuracy.
“At journalism companies, our duty is to provide our readers with fact-based information. We’ve seen what happens to our discourse when our society isn’t operating from a shared set of facts. It’s clear AI can provide us with a lot of value and utility. But it’s also clear that it isn’t yet ready to be an accurate source on the world’s information,” she said.
Thus far, AI content generators are prone to making factually inaccurate claims. Microsoft acknowledged that its AI-enhanced Bing might make errors, saying: “AI can make mistakes … Bing will sometimes misrepresent the information it finds, and you may see responses that sound convincing but are incomplete, inaccurate, or inappropriate.”
That hasn’t stopped media companies from experimenting with ChatGPT and other generative AI. Sports Illustrated publisher Arena Group Holdings partnered with AI startups Jasper and Nota to generate stories from its own library of content, which were then edited by humans; even so, there were “many inaccuracies and falsehoods” in the pieces. CNET also produced AI-written articles and came under scrutiny for factual errors and plagiarism in those pieces.
Francesco Marconi, longtime media AI advocate and co-founder of AppliedXL, said that though AI technologies can reduce media production costs, they also pose a risk to both news media and society as a whole.
“Unchecked algorithmic creation presents substantial pitfalls. Despite the current uncertainties, newsrooms should monitor the evolution of the technology by conducting research, collaborating with academic institutions and technology firms, and implementing new AI workflows to identify inaccuracies and errors,” he said.
Search traffic
Generative AI applications like ChatGPT have the potential to eat into publishers’ search traffic by generating answers without requiring a user to visit a news website.
“The introduction of generative summaries on search engines like Google and Bing will likely affect the traffic and referral to publishers,” Marconi said. “If search engine users can receive direct answers to their queries, what motivation do they have to visit the publisher’s website? This can impact news organizations in terms of display ads and lead generation for sites that monetize through subscriptions.”
Filter and context
The amount of data and information created every day is estimated at around 2.5 quintillion bytes, according to futurist Bernard Marr. With the rise of generative AI models, the volume of information available to digital media companies and the public is growing exponentially. Some experts predict that by 2026, 90% of online content could be AI-generated.
This presents a new challenge, according to Marconi. The explosion of data from the Internet of Things has created a world with too much information. “We are now producing more information than at any other point in history, making it much more challenging to filter out unwanted information.”
A significant challenge for journalism today is filtering and contextualizing information. News organizations and journalism schools must incorporate computational journalism practices, so that journalists are also responsible for writing editorial algorithms in addition to stories.
“This marks an inflection point, where we now must focus on building machines that filter out noise, distinguish fact from fiction, and highlight what is significant,” Marconi said. “These systems are developed with journalistic principles and work 24/7 to filter out irrelevant information and uncover noteworthy events.”
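To make the idea of an “editorial algorithm” concrete, here is a minimal sketch of the kind of noise filter Marconi describes. It is a hypothetical illustration only: the signals (source reliability, corroboration, recency) and their weights are invented for this example, not drawn from AppliedXL or any newsroom’s actual system.

```python
from dataclasses import dataclass

@dataclass
class NewsItem:
    headline: str
    source_reliability: float   # 0.0-1.0, e.g. from a vetted source list (hypothetical signal)
    corroborating_sources: int  # independent outlets reporting the same event
    hours_old: float

def newsworthiness(item: NewsItem) -> float:
    """Toy editorial score: favor reliable, corroborated, recent items."""
    recency = max(0.0, 1.0 - item.hours_old / 48.0)          # decays to zero over two days
    corroboration = min(item.corroborating_sources, 5) / 5   # cap the benefit at five outlets
    # Invented weights for illustration, not a calibrated model
    return 0.5 * item.source_reliability + 0.3 * corroboration + 0.2 * recency

def filter_noise(items: list[NewsItem], threshold: float = 0.6) -> list[NewsItem]:
    """Keep only items that clear the editorial bar, highest-scoring first."""
    kept = [i for i in items if newsworthiness(i) >= threshold]
    return sorted(kept, key=newsworthiness, reverse=True)
```

A production system would layer on fact-checking signals and keep a human editor in the loop; the point is simply that editorial judgment can be encoded as explicit, inspectable rules rather than left to an opaque model.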
Replacing journalists
AI-powered text generation tools may threaten journalism jobs, which has been a concern for the industry for years. On the other side is the longstanding argument that automation will free journalists to do more interesting and intensive work. It is clear, however, that given the financial pressures faced by media companies, the use of AI to streamline staffing is a serious consideration.
Digital media companies across the U.S. and Europe are grappling with what generative AI may mean for their businesses. BuzzFeed recently shared that it planned to explore AI-generated content to create quizzes, even as it cut a percentage of its workforce. Last week, Mathias Doepfner, CEO of German media company Axel Springer, candidly admitted that journalists could be replaced by AI as the company prepared to cut costs.
There is a valid concern about job displacement when considering AI’s impact on employment, Marconi agreed, with a caveat. “Some positions may disappear entirely, while others may transform into new roles,” he said. “However, it is also important to note that the integration of AI into newsrooms is creating new jobs: Automation & AI editors, Computational journalists, Newsroom tool managers, and AI ethics editors.”
Potential legal and ethical implications
Another of the biggest challenges digital media companies and publishers will face with the rise of AI in the newsroom is copyright and intellectual property ownership.
ChatGPT and other generative AI are trained by scraping content from the internet, including open-source databases but also copyrighted articles and images created by publishers. “This debate is both fascinating and complex: fair use can drive AI innovation (which will be critical for long-term economic growth and productivity). However, at the same time it raises concerns about the lack of compensation or attribution for publishers who produced the training data,” according to Marconi.
Under European law, AI cannot own copyright because it cannot be recognized as an author. Under U.S. law, copyright protection applies only to content authored by humans, so the U.S. Copyright Office will not register works created by artificial intelligence.
“AI’s legal and ethical ramifications, which span intellectual property (IP) ownership and infringement issues, content verification and moderation concerns, and the potential to break existing newsroom funding models, leave its future relationship with journalism far from clear-cut,” wrote lawyer JJ Shaw for PressGazette.
Questions remain
AI is not new, but it is clearly making an evolutionary leap. And while media companies may have been slow to adopt technology in the early days of the internet, today’s media executives are keen to embrace tools that improve their businesses and streamline operations. Given the pace at which AI is evolving, though, there’s still much to learn about the opportunities and challenges it presents.
Currently, there are some practical concerns for digital media companies and large questions still to be answered, according to Bloomberg’s Beizer. She questions how the advancement of these tools will affect relationships with audiences: “If we use AI in our own content creation, how should we disclose that to users to gain their trust?”
Wired has already taken a first step, publishing a policy that places clear limits on what it will use AI for and how the editorial process will be handled to ensure a quality product.
Beizer also asks how publishers and creators should be compensated “for their role in sourcing, writing and making the content that’s now training these large machines.”
In past eras, media companies were swept along by the tide of technological change. With AI, media executives are clearly grappling with how to embrace the promise while better managing the impact on their businesses.