Unveiling the Truth: How AI Chatbots Distort Current Affairs

Published On Tue Feb 11 2025

AI chatbots distort and mislead when asked about current affairs.

Researchers have discovered that leading artificial intelligence assistants, including ChatGPT, Copilot, Gemini, and Perplexity, often create distortions, factual inaccuracies, and misleading content when responding to questions about news and current affairs.


According to a study by the BBC, more than half of the AI-generated answers provided by these assistants were found to have "significant issues." These issues ranged from misidentifying political figures to misrepresenting health advice and mistaking opinions for facts.

Key Findings:

  • About a fifth of the answers introduced factual errors on numbers, dates, or statements.
  • 13% of quotes sourced to the BBC were either altered or did not exist in the articles cited.

Examples of Distortions:

In one example highlighted in the research, Gemini responded to a question about Lucy Letby, a nurse convicted of murder and attempted murder, by suggesting that her innocence or guilt was up to individual interpretation, omitting the crucial context of her court convictions.


Other distortions included Microsoft’s Copilot falsely attributing actions to a French rape victim and ChatGPT incorrectly reporting that the politicians Rishi Sunak and Nicola Sturgeon were still in office.

Response from the BBC:

The findings have led the BBC's chief executive for news, Deborah Turness, to caution that "Gen AI tools are playing with fire" and could erode public trust in facts. Turness urged AI companies to collaborate with the BBC to ensure more accurate responses.

In a blogpost about the research, Turness questioned the readiness of AI to deliver news without distorting facts, emphasizing the need for accuracy to prevent confusion.

Impact on Apple:

Following similar issues, Apple had to suspend sending BBC-branded news alerts to iPhone users after inaccuracies were detected in the summaries. These errors included false information about a legal case and an individual's actions.


Conclusion:

The research underscores how frequently popular AI tools produce inaccuracies about current affairs. Collaboration between AI companies and media organizations is crucial to improve the accuracy of information provided by AI assistants and to preserve the integrity of news content.

The BBC and other publishers should have control over how their content is used by AI assistants, and transparency regarding the processing of news data is essential to mitigate errors and inaccuracies.

The companies responsible for the AI assistants mentioned in the research have been contacted for their input on the findings.