As artificial intelligence gains traction in American newsrooms, much of the public harbors growing concerns about its long-term effects on journalism. While AI may promise efficiency and innovation, audiences fear the loss of editorial quality, factual accuracy, and jobs.
Michael Lipka’s report for the Pew Research Center reveals that half of U.S. adults (50%) believe AI will negatively impact the news people receive over the next 20 years, while only 10% foresee a positive effect. The rest see a mixed impact (23%) or aren’t sure (16%).
Concerns for journalists’ jobs
One of the starkest findings from Pew’s nationally representative survey is the projected toll on employment in journalism. Nearly six in 10 Americans (59%) believe that AI will reduce journalism jobs over the next two decades. Only 5% expect AI to create more roles, which underscores widespread anxiety about automation displacing skilled labor in newsrooms.
This echoes themes from other industry-focused studies. The 2024 European Broadcasting Union (EBU) News Report reinforces the importance of human oversight of AI-generated content. While acknowledging that AI technologies can enhance newsroom efficiency and content creation, the report stresses that human editorial judgment remains essential to ensure accuracy and uphold accountability. It also highlights transparency as a critical factor in maintaining public trust, emphasizing that the integration of AI in journalism must not come at the expense of ethical standards and journalistic integrity.
Performance gaps and trust deficits
The public also doubts AI’s current ability to match the standards employed by human journalists. In Pew’s survey, 41% of Americans say AI would do a worse job than professional journalists at writing a news story, compared with only 19% who believe it would do better. Another 20% say the quality would be about the same.
These concerns are not just hypothetical. Developers of leading generative AI systems, including OpenAI’s ChatGPT and Google’s Gemini, publicly acknowledge the ongoing issue of “hallucinations,” or confident-sounding but factually incorrect statements. Two-thirds of respondents (66%) in the Pew study say they are extremely or very concerned about people getting inaccurate information from AI-generated content, with another 26% somewhat concerned.
The Reuters Institute’s Journalism, Media and Technology Trends and Predictions 2025 report supports this sentiment. While 80% of news executives report testing generative AI, most remain cautious. Only a minority (around 20%) use AI to produce original stories. Instead, common applications include headline optimization, translation, transcription, and tagging—low-risk tasks that minimize the threat of spreading misinformation or editorial errors.
Bipartisan concern despite a divided media environment
One surprising aspect of the Pew findings is the bipartisan nature of AI skepticism. Despite sharply differing levels of trust in mainstream media, Democrats and Republicans largely agree about AI’s potential risks in journalism. For instance, 68% of Democrats and 67% of Republicans say they are extremely or very concerned about AI-generated misinformation. And when it comes to AI’s long-term impact on news, the gap is narrow: 54% of Republicans compared with 49% of Democrats expect a negative outcome. In an era of political polarization, this rare alignment suggests that concern over AI’s role in news transcends ideological divides.
Educational disparities in AI perceptions
Pew’s data also reveals differences in perceptions based on education level. Americans with at least some college education are more likely to view AI’s impact on journalism negatively. For example, 56% of college graduates and 54% of those with some college experience expect harm to news quality, versus 44% of those with a high school diploma or less.
Those more familiar with the nuances of journalism may better understand the stakes around issues like source vetting, bias detection, and ethical storytelling. For news organizations, these findings highlight the need for greater public education about how AI is integrated into the editorial process and the human oversight that accompanies it.
Balancing innovation and integrity
As AI grows more capable, its role in media will continue to expand; however, its success will depend on maintaining audience trust.
Strategies to consider include:
- Transparent disclosure of AI use in news production (e.g., “AI-assisted reporting” tags).
- Human-in-the-loop editorial models that pair AI speed with journalistic judgment.
- Clear internal guidelines around the ethical deployment of generative tools.
- Investments in staff training to help journalists leverage AI while maintaining standards.
As artificial intelligence becomes more deeply embedded in society, its impact on journalism is just one example of broader public concerns about automation, trust, and accountability. Skepticism surrounding AI in newsrooms reflects unease about how these technologies might reshape essential human-centered fields. The challenge lies in balancing innovation with ethical responsibility, so that progress enhances, rather than undermines, the media institutions the public relies upon.