AI assistants error-prone when it comes to news
People increasingly turn to AI assistants and chatbots for answers to their questions, but BBC research finds that these tools deliver particularly problematic results when it comes to news
February 24, 2025 | By Rande Price, Research VP – DCN
Artificial intelligence is rapidly transforming the way people access and consume news. With AI assistants increasingly serving as intermediaries between audiences and trusted news sources, it is essential to understand how accurately and reliably they present information. Unfortunately, recent research from the BBC finds that these tools frequently fail to deliver news accurately.
In new research, the BBC evaluated how well leading AI assistants (ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity) deliver news-related answers. By granting these AI models access to its website, the BBC sought to assess their ability to accurately reference and represent its journalism.
The study examined the quality of AI-generated responses to 100 news-related questions, with BBC journalists evaluating them against seven key criteria, including accuracy, attribution, and impartiality. The reviewers then determined whether the responses contained minor, significant, or no issues across these areas.
Significant errors in AI news
The results show that over half (51%) of AI-generated responses contained significant issues, while 91% exhibited some inaccuracy, bias, or misrepresentation. Specific issues included factual errors, misattribution of sources, and missing or misleading context. When evaluating how the AI assistants represented BBC content, the study found that Gemini (34%), Copilot (27%), Perplexity (17%), and ChatGPT (15%) produced responses with errors in their use of BBC sources.
Accuracy and misinformation
AI-generated responses frequently contain factual inaccuracies, even when citing BBC sources:
- Gemini incorrectly states that the NHS discourages vaping as a smoking cessation method, despite BBC coverage explicitly confirming that the NHS supports vaping for smokers who want to quit.
- Copilot misrepresents the case of rape survivor Gisèle Pelicot, falsely claiming that blackouts and memory loss led her to uncover the crimes against her.
- Multiple assistants incorrectly report figures, such as significantly underestimating the number of UK prisoners released and misattributing Chrome’s market share statistics.
- ChatGPT erroneously reports that Ismail Haniyeh, who was assassinated in July 2024, is still an active Hamas leader.
Attribution and sourcing errors
AI assistants frequently misattribute or incorrectly source information. Some rely on outdated articles, leading to misleading conclusions. In several instances, assistants claim to summarize BBC reporting but include details that do not appear in the cited BBC articles.
Impartiality and editorialization
In addition to prevalent factual errors, AI assistants struggle to maintain journalistic impartiality. The study flags multiple instances where opinions are presented as facts, sometimes with the BBC falsely cited as the source. For example, Perplexity characterized Iran’s actions in the Middle East conflict as “restrained” and described Israel’s response as “aggressive,” despite no such characterization appearing in the BBC article it cited.
AI errors in news pose a risk to public trust
These findings highlight serious risks in AI-generated news summaries. Whether it stems from factual errors, misleading context, or editorialized conclusions, misinformation can erode public trust in news media. Distortion of the BBC’s content can have significant consequences: if these errors continue, audiences may begin to question the credibility of the BBC’s reporting itself.
AI assistants are set to play a growing role in how people access news. Because these tools generate little meaningful traffic to media websites, it appears that most people who use them do not explore further to verify the accuracy of the news they deliver. Thus, it is critical that AI agents and chatbots uphold the information ecosystem’s rigorous and trusted editorial standards.
Ultimately, AI developers are responsible for ensuring their products align with fundamental journalistic principles, including accuracy, impartiality, and reliable sourcing. The BBC warns that if these challenges go unaddressed, AI risks undermining the news organizations it depends on for credible information. As AI continues to evolve, the BBC emphasizes the need for the media industry to champion responsible AI integration to safeguard audiences and preserve journalism’s integrity.