
Journalists confront the reality of media use of AI

Journalists and other members of media organizations' workforces want to be involved in how generative AI is used, in order to ensure quality, transparency, and trust

March 17, 2025 | By Rande Price, Research VP – DCN

The rapid adoption of Generative AI (Gen AI) in newsrooms sparks important discussions among journalists and media professionals, especially about transparency and trust. Across the industry, publishers vary in how they communicate their AI strategies to their workforce. Reports suggest that some journalists seek more transparency around management’s AI implementation efforts and agreements with AI companies. This lack of clarity also applies to content, as some publishers explore AI-generated articles without consistently informing staff or readers.

A lack of transparency around AI fuels distrust

Mike Ananny and Jake Karr examine how news media unions are trying to manage and stabilize the use of Generative AI. Their analysis, How Media Unions Stabilize Technological Hype, draws from a review of industry reports, expert interviews, and case studies of newsrooms integrating generative AI. The methodology emphasizes qualitative insights to understand AI’s impact on editorial processes, ethics, and audience trust.

According to the authors’ analysis, many employees only learn about AI licensing deals through sudden announcements, often without prior consultation. Some must rely on external reporting to understand their company’s AI initiatives. Union representatives consistently face resistance when requesting information, reinforcing a broader mistrust of employer intentions.

However, solutions are emerging. Some unions are pushing for contractual guarantees of greater transparency. Unions at The Associated Press and at certain Gannett-owned publications have proposed contract language requiring 90 days' notice before new AI-related technology is introduced in the newsroom. Similarly, The Onion and Wirecutter unions have successfully bargained for advance notice and transparency requirements around AI procurement. These efforts signal a path toward restoring trust through openness and accountability.

Journalists defend creativity and quality

News professionals ensure accuracy, provide context, and uphold ethical standards that AI alone cannot fulfill. Ananny and Karr conclude that AI’s so-called “creativity” is a remix of existing human work, lacking the depth, insight, and contextual awareness that define quality journalism. No matter how advanced AI becomes, skilled journalists must verify facts, interpret events, and shape narratives with integrity.

Recognizing this, some media organizations are implementing safeguards. The Associated Press has committed to using Gen AI only with direct human oversight to maintain compliance with journalistic standards. At The Onion, The A.V. Club, Deadspin, and The Takeout, editorial employees review AI content before publication. And MinnPost treats AI-generated material as a source that requires human editing and fact-checking.
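As a rough illustration of how such a safeguard could be enforced in tooling, the sketch below models a publication gate that blocks any AI-generated draft until a human editor has reviewed and fact-checked it, in the spirit of treating AI output as a source. This is a minimal, hypothetical example: the `Draft` class, the `publish` function, and the review fields are invented for illustration and do not describe any publisher's actual system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    """A story draft moving through an editorial workflow (hypothetical schema)."""
    headline: str
    body: str
    ai_generated: bool = False          # was any Gen AI used to produce this?
    reviewed_by: Optional[str] = None   # human editor who signed off on the draft
    fact_checked: bool = False          # has a human verified the claims?

def publish(draft: Draft) -> str:
    """Refuse to publish AI-generated material that lacks human oversight,
    mirroring policies that treat AI output as a source needing editing."""
    if draft.ai_generated and (draft.reviewed_by is None or not draft.fact_checked):
        raise ValueError(
            f"Blocked: '{draft.headline}' is AI-generated and must be "
            "edited and fact-checked by a human before publication."
        )
    return f"Published: {draft.headline}"

# An AI-assisted draft is blocked until an editor signs off
# and fact-checking is recorded.
story = Draft("Council approves budget", "…", ai_generated=True)
try:
    publish(story)
except ValueError as err:
    print(err)

story.reviewed_by = "J. Editor"
story.fact_checked = True
print(publish(story))
```

Making the gate fail loudly rather than publish silently mirrors the policies above: AI material simply cannot reach readers without a recorded human sign-off.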

But beyond oversight, journalists are pushing for the right to decide whether to use AI at all. Many unions argue that workers, as experts in their field, should determine the appropriate role of AI in journalism. The CNET Media Workers Union demands the right to opt out of using AI when it fails to meet publishing standards. The Atlantic Union similarly insists that journalists may use AI within ethical guidelines, but that no one should be forced to do so.

These demands reflect a broader principle: journalism is, at its core, a human-driven endeavor. AI may assist but cannot replace the judgment, creativity, and accountability that define quality reporting. The analysis concludes that to integrate AI responsibly, newsrooms must prioritize transparency, trust, and human journalists’ role in safeguarding the profession’s integrity.

Beyond transparency and journalistic integrity, the rise of Gen AI raises significant ethical and legal questions. One of the most pressing concerns is intellectual property: Who owns the content produced by AI models trained on vast amounts of copyrighted material? Many publishers argue that AI-generated work lacks originality and merely regurgitates existing human-created content. This also raises potential plagiarism and copyright infringement issues. In response, some media companies are taking legal action.

Additionally, there is concern about AI’s ability to spread misinformation. Unlike human journalists, AI lacks the critical thinking skills to discern fact from fiction. Without rigorous oversight, AI-generated content can amplify biases, fabricate sources, and misinterpret data, posing a direct threat to public trust in news media.

Regulatory bodies are beginning to take notice. Governments worldwide are considering policies to ensure AI transparency and ethical implementation in journalism. For example, the European Union’s AI Act includes provisions requiring companies to disclose AI-generated content and implement safeguards against misinformation. The Federal Trade Commission warns companies against deceptive AI practices, signaling potential regulatory intervention in the media industry.
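To make the disclosure requirement concrete, here is a small hypothetical sketch of how a publisher might attach a machine-readable AI-disclosure label to an article's metadata. The `ai_disclosure` function and its field names are assumptions for illustration; the EU AI Act imposes disclosure obligations but does not prescribe any particular data format.

```python
import json
from datetime import datetime, timezone

def ai_disclosure(ai_assisted: bool, tools: list[str], human_reviewed: bool) -> dict:
    """Build a machine-readable disclosure record for a published article.
    The field names here are illustrative, not a standard schema."""
    return {
        "ai_assisted": ai_assisted,
        "tools_used": tools,                 # e.g., drafting or translation tools
        "human_reviewed": human_reviewed,    # oversight applied before publication
        "disclosed_at": datetime.now(timezone.utc).isoformat(),
    }

# Attach the label to article metadata so readers (and regulators)
# can see whether and how AI contributed to the piece.
article_meta = {
    "headline": "Council approves budget",
    "disclosure": ai_disclosure(True, ["summarization-assistant"], True),
}
print(json.dumps(article_meta, indent=2))
```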

Balancing innovation and integrity for journalism and AI

Despite the challenges, AI's presence in newsrooms is likely to grow. Some publishers are taking a proactive approach by developing AI policies that prioritize ethical considerations. Reuters, for example, offers internal guidelines to ensure journalists use AI tools responsibly and transparently. The BBC has made similar commitments to maintain human oversight of AI-generated content and to clearly label AI-assisted reporting.

Ultimately, the future of AI in journalism will depend on striking the right balance between technological innovation and journalistic integrity. The authors concur that if publishers prioritize transparency, enforce accountability, and uphold journalists’ fundamental role, AI can be a valuable tool rather than a disruptive force.
