GENEVA (Reuters) – Leading AI assistants misrepresent news content in nearly half their responses, according to new research published on Wednesday by the European Broadcasting Union (EBU) and the BBC.
The international research studied 3,000 responses to questions about the news from leading artificial intelligence assistants – software applications that use AI to understand natural language commands to complete tasks for a user.
It assessed AI assistants including ChatGPT, Copilot, Gemini and Perplexity in 14 languages for accuracy, sourcing and the ability to distinguish opinion from fact.
Overall, 45% of the AI responses studied contained at least one significant issue, with 81% having some form of problem, the research showed.
Reuters has contacted the companies for comment on the findings.
Gemini, Google’s AI assistant, has stated previously on its website that it welcomes feedback so that it can continue to improve the platform and make it more helpful to users.
OpenAI and Microsoft have previously said hallucinations – when an AI model generates incorrect or misleading information, often due to factors such as insufficient data – are an issue that they are seeking to resolve.
Perplexity says on its website that one of its “Deep Research” modes has 93.9% accuracy in terms of factuality.
SOURCING ERRORS
A third of AI assistants’ responses showed serious sourcing errors such as missing, misleading or incorrect attribution, according to the study.
Some 72% of Gemini's responses had significant sourcing issues, compared with below 25% for all other assistants, it said.
Accuracy issues, including outdated information, were found in 20% of responses across all the AI assistants studied, it said.
With AI assistants increasingly replacing traditional search engines for news, public trust could be undermined, the EBU said.
“When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation,” EBU Media Director Jean Philip De Tender said in a statement.
Some 7% of all online news consumers and 15% of those aged under 25 use AI assistants to get their news, according to the Reuters Institute’s Digital News Report 2025.
The new report called for AI companies to be held accountable and to improve how their assistants respond to news-related queries.
