
Generative visual AI in news organizations: challenges and opportunities

Generative AI tools offer the potential to positively impact photojournalism and news video creation and production. But news professionals have serious concerns.

April 30, 2024 | By Dr. T. J. Thomson, Senior Lecturer, RMIT University | @Cenevox

News has long relied on the power of visuals to tell stories: first through illustrations and more recently through photography and video. The recent rise in access to generative AI tools for making and editing images offers photojournalists, video producers and other journalists exciting new possibilities. However, it also poses unique challenges at each stage of the planning, production, editing, and publication process.

For example, AI-generated assets can suffer from algorithmic bias, so organizations that use AI carelessly run the risk of reputational damage.

Without specifying any demographic or environmental attributes, the text-to-image AI generator Midjourney returned four images, all of light-skinned men and all in seemingly urban environments, for the prompt “wide-angle shot of journalist with camera.”
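Midjourney offers no public API, but a similar informal bias audit can be run against an open text-to-image model. Below is a minimal Python sketch using Hugging Face’s diffusers library with Stable Diffusion as a stand-in; the model choice, parameters, and file names are illustrative, not what the study used.

```python
import torch
from diffusers import StableDiffusionPipeline

# Open-weights stand-in for Midjourney, which has no public API.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# A prompt with no demographic or environmental attributes,
# mirroring the test described above.
prompt = "wide-angle shot of journalist with camera"
images = pipe(prompt, num_images_per_prompt=4).images

for i, img in enumerate(images):
    # Review the saved outputs for skews in skin tone, gender, and setting.
    img.save(f"bias_audit_{i}.png")
```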

Despite the risks, a recent Associated Press report found that one in five journalists uses generative AI to make or edit multimedia. But how, specifically, are journalists using these tools, and what should other journalists and media managers look out for?

With Ryan J. Thomson and Phoebe Matich, I recently studied how newsroom workers perceive and use generative visual AI in their organizations. That study, “Generative Visual AI in News Organizations: Challenges, Opportunities, Perceptions, and Policies,” draws on interviews with newsroom personnel at 16 leading news organizations in seven countries, including the U.S. It reveals how newsroom leaders can protect their organizations from the dangers of careless generative visual AI use while also harnessing its possibilities.

Challenges for deploying AI visuals in newsrooms

Mis/disinformation

Those interviewed were most worried about how generative AI tools and their outputs can be used to mislead or deceive. This can happen even without ill intent. In the words of one of the editors interviewed:

When it comes to AI-generated photos, regardless of if we go the extra mile and tell everyone, “Hey, this is an AI-generated image” in the caption and things like that, there will still be a shockingly large amount of people who won’t see that part and will only see the image and will assume that it’s real and I would hate for that to be the risk that we put in every time we decide to use that technology.

The World Economic Forum has named the threat of AI-fueled mis/disinformation as the world’s greatest short-term risk, ranking it above other pressing issues such as armed conflict and climate change.

Labor concerns

The second biggest challenge, interviewees said, was the threat that generative AI poses to lens-based workers and other visual practitioners within news organizations. AI-generated visual content is much cheaper to produce than commissioning bespoke work, though the interviewees noted that the quality is, of course, different.

An editor in Europe said he didn’t think AI tools themselves would take people’s jobs. Rather, he felt that those who apply these tools well would be hired over those who don’t, because newsrooms can be more efficient by using them.

Copyright concerns

The third biggest challenge, according to the interviewees, involved copyright concerns around AI-generated visual content. In the words of one of the editors interviewed:

“Programs like Midjourney and DALL-E are essentially stealing images and stealing ideas and stealing the creative labor of these illustrators and they’re not getting anything in return.”

Many text-to-image generators, including Stable Diffusion, Midjourney, and DALL-E, have been accused of training their models on vast swathes of copyrighted content online. Two of the biggest players in the market say they are taking a different approach: Adobe (with its generative AI offering, Firefly) and Getty (with its offering, Generative AI by Getty Images).

Both claim they train their generators only on proprietary content or on content they are licensed to use, which makes these tools less legally risky. (Adobe, however, was later found to have trained its model partly on Midjourney images.)

The downside of not indiscriminately scraping the web for training data is that it limits the outputs that are possible. Firefly, for example, wasn’t able to fully render the prompt “Donald Trump on the Steps of the Supreme Court.” It returned four images of the building itself, sans Trump, along with the error message: “One or more words may not meet User Guidelines and were removed.”

Adobe Firefly wasn’t able to fully render the prompt “Donald Trump on the Steps of the Supreme Court.” It returned this image of the building itself, instead.

On its help center, Adobe notes, “Firefly only generates images of public figures available for commercial use on the Stock website, excluding editorial content. It shouldn’t generate public figures unavailable in the Stock data.”
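Firefly’s error message hints at how such guardrails can work: the service screens prompts against terms it won’t generate and strips any matches before rendering. The Python sketch below is a hypothetical illustration of that denylist pattern, not Adobe’s actual implementation; the list contents and function names are invented.

```python
# Hypothetical denylist, e.g., public figures not licensed for generation.
DENYLIST = {"donald trump"}

def filter_prompt(prompt: str) -> tuple[str, bool]:
    """Strip denylisted terms and report whether anything was removed."""
    cleaned = prompt
    removed = False
    for term in DENYLIST:
        while term in cleaned.lower():
            # Case-insensitive removal of the offending term.
            start = cleaned.lower().index(term)
            cleaned = cleaned[:start] + cleaned[start + len(term):]
            removed = True
    return " ".join(cleaned.split()), removed

cleaned, removed = filter_prompt("Donald Trump on the Steps of the Supreme Court")
if removed:
    print("One or more words may not meet User Guidelines and were removed.")
print(cleaned)  # -> "on the Steps of the Supreme Court"
```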

Detection issues

The fourth biggest challenge was that journalists themselves didn’t always know when AI had been used to make or edit visual assets. Some of the traditional ways to fact-check images don’t always work for those made by or edited with AI.

Some participants mentioned the Content Authenticity Initiative and its Content Credentials, a kind of tamper-evident metadata used to show the history of an image. However, they also lamented significant barriers to implementation, including having to buy new cameras equipped with the content credentials technology and redevelop their digital asset management systems and websites to work with and display the credentials. Considering that at least half of all Americans get at least some news from social media platforms, content credentials will only be effective if they are adopted widely across the industry and by big tech giants alike.
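Content Credentials work by cryptographically binding a record of an asset’s origin and edit history to the asset itself, so that any undisclosed change breaks the seal. The toy Python sketch below illustrates that general principle only; it uses a shared-secret HMAC in place of the certificate-based signatures the C2PA standard actually specifies, and every name in it is illustrative.

```python
import hashlib
import hmac
import json

# Stand-in shared secret; real Content Credentials use certificate-based signing.
SECRET_KEY = b"newsroom-signing-key"

def issue_credential(image_bytes: bytes, history: list[str]) -> dict:
    """Bind an edit history to an image by signing the two together."""
    payload = {
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "history": history,
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_credential(image_bytes: bytes, credential: dict) -> bool:
    """Re-derive the signature; any change to pixels or history invalidates it."""
    body = json.dumps(
        {"content_hash": credential["content_hash"], "history": credential["history"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(credential["signature"], expected) and (
        credential["content_hash"] == hashlib.sha256(image_bytes).hexdigest()
    )

image = b"...raw image bytes..."
cred = issue_credential(image, ["captured 2024-04-30", "cropped", "tone adjusted"])
print(verify_credential(image, cred))         # True: history intact
print(verify_credential(image + b"x", cred))  # False: undisclosed edit detected
```

In the real standard, the signed manifest travels inside the file’s metadata and is validated against the signer’s certificate, which is why cameras, asset management systems, and websites all need upgrades to create and display it.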

Despite these significant risks and challenges, newsroom workers also imagined ways that the technology could be used in productive and beneficial ways.

Opportunities for deploying AI tools and visuals in newsrooms

Creating illustrations

This is how text-to-image generator Midjourney responded to a prompt about visualizing generative AI. Journalists said they could see the potential for using generative AI to show difficult-to-visualize topics, such as AI itself.

The newsroom employees interviewed were most comfortable with using generative AI to create illustrations that were not photorealistic. AI can be helpful to illustrate hard-to-visualize stories, like those dealing with bitcoin or with AI itself.

Brainstorming and idea generation

Those interviewed also thought generative AI could be used for story research and inspiration. Instead of just looking at Pinterest boards or conducting a Google Image search, journalists imagined asking a chatbot for help with how to show challenging topics, like visualizing the depth of the Mariana Trench. Interviewees also thought generative AI could be used to create mood boards to quickly and concretely communicate an editor’s vision to a freelancer.
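As a concrete example of what that brainstorming workflow could look like in code, here is a minimal sketch using OpenAI’s Python SDK; the model choice and prompt are illustrative, and any chatbot with an API would serve the same purpose.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

# Ask the model for visual treatments of a hard-to-photograph subject.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Suggest five ways a news graphic could visualize "
                       "the depth of the Mariana Trench for general readers.",
        }
    ],
)
print(response.choices[0].message.content)
```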

Visualizing the past or future

Journalists also saw potential for AI to help them show the past or the future. In one editor’s words:

“We always talk about how like it’s really hard to photograph the past. There’s only so much that you can do in terms of pulling archival images and things like that.”

This editor thought AI could be used, in close consultation with relevant sources, to narrate and then visualize how something looked in the past. Image-to-video AI tools like Runway allow users to bring a historical still image to life or to describe a historical scene and receive a video in return.

Image-to-video AI tool Runway allows a user to bring life to a still image from history.

More guidance (and research) needed

Our research, which also discusses the principles and policies newsrooms have in place to guide responsible AI use, makes clear that the media industry finds itself at another major crossroads. As with each evolution of the craft, there are opportunities to explore and risks to evaluate. But from what we saw, journalists need more guardrails to guide their use of these tools while still allowing for experimentation and innovation in ethically sound and responsible ways.
