Artificial intelligence is reshaping news consumption, but can it be trusted? A recent BBC investigation reveals that AI-generated news summaries from leading chatbots—ChatGPT, Google Gemini, Microsoft Copilot, and Perplexity—are riddled with inaccuracies. The study found that more than half of these AI-generated summaries contained errors, raising concerns about misinformation and the reliability of AI-driven news delivery.
The AI News Accuracy Problem
The BBC’s findings indicate that AI-powered news summaries are far from perfect. Among the key issues identified:
- Factual Inaccuracies: 19% of AI-generated summaries contained incorrect facts, misreported figures, or misrepresented events. For instance, Google Gemini falsely claimed that the UK’s National Health Service (NHS) advised against vaping when, in reality, the NHS promotes vaping as a smoking cessation tool.
- Misquotations: 13% of the summaries featured quotes that were altered or misattributed. ChatGPT, for example, described Ismail Haniyeh as a current member of Hamas's leadership months after his assassination in July 2024.
- Distorted Context: Many summaries altered the original meaning of news articles, leading to misleading narratives.
- Platform-Specific Issues: Google Gemini had the highest rate of inaccuracies, with 46% of its summaries flagged for factual errors.
The Risk of AI-Distorted News
The consequences of misleading AI-generated news go beyond minor errors. In an era where rapid information dissemination shapes public opinion, flawed summaries can distort narratives and fuel misinformation. Deborah Turness, CEO of BBC News and Current Affairs, warned of the risks, stating:
“We live in troubled times, and how long will it be before an AI-distorted headline causes significant real-world harm?”
Tech Companies Under Pressure
Following the investigation, the BBC has called on tech giants to take responsibility for AI-generated misinformation. The organization is advocating for greater transparency, accountability, and collaboration between media companies and AI developers to ensure that AI-driven journalism does not become a vehicle for spreading inaccuracies.
AI vs. Human Oversight
While AI excels at processing vast amounts of data quickly, its tendency to generate misleading or incorrect summaries underscores the need for human oversight. Fact-checking, editorial review, and clear disclaimers must be integral to AI-driven news services to prevent misinformation from becoming mainstream.
A Broader AI Reliability Crisis?
This issue isn’t isolated to chatbots. In a separate controversy, Apple came under fire for AI-powered news notifications that inaccurately rewrote BBC headlines. Following public backlash, Apple temporarily paused the feature and committed to refining its approach to AI-generated content.
The Future of AI-Generated News
AI is here to stay, but its role in journalism remains contentious. Can AI truly replace human journalists? Not yet. While it offers speed and efficiency, the BBC’s findings highlight a crucial flaw—AI lacks the discernment needed to ensure news accuracy. Until significant improvements are made, relying on AI-generated summaries without verification could do more harm than good.
For now, readers should remain cautious. AI-generated news can be a useful tool, but cross-checking information from credible sources remains essential to separating fact from fiction.