Hey chatbot, is this true? AI 'factchecks' sow misinformation
Research shows that 10 leading chatbots often repeat falsehoods and fabricate answers when uncertain. Experts warn that AI responses can be deliberately biased or altered through programming, an issue that has become even more prominent during times of conflict, such as India's recent four-day conflict with Pakistan.
Unreliable Fact-Checking Tool
As misinformation spread rapidly during the conflict, social media users turned to AI chatbots for verification, only to encounter more falsehoods. With tech platforms cutting back on human fact-checkers, users are increasingly relying on AI-powered chatbots such as xAI's Grok, OpenAI's ChatGPT, and Google's Gemini in search of reliable information.
"Hey @Grok, is this true?" has become a common query on Elon Musk’s platform X, where the AI assistant is built-in, reflecting the growing trend of seeking instant debunks on social media. However, the responses provided by these chatbots are often themselves riddled with misinformation.
Concerns and Findings
Research has found that 10 leading chatbots are prone to repeating falsehoods, including Russian disinformation narratives and false claims related to recent events such as the Australian election. Reports indicate that chatbots are generally poor at declining to answer questions they cannot answer accurately, which has cast doubt on their reliability.
Furthermore, there have been instances where AI chatbots confirmed the authenticity of fake images and even fabricated details about the content. This has raised concerns as online users increasingly rely on AI chatbots for information gathering and verification.
Impact of Misinformation
The spread of misinformation through AI chatbots has significant implications. As major tech companies scale back investment in human fact-checkers, reliance on AI-powered tools has grown, yet their effectiveness in combating falsehoods remains in question.
Concerns have also been raised about the potential for political influence or control over AI chatbot outputs. Instances of AI assistants fabricating results or giving biased answers have ignited discussions about the responsibility and accuracy of such tools.
Conclusion
The rise of AI 'factchecking' tools has highlighted the challenges of combating misinformation in the digital age. While these tools offer quick answers for verification, their susceptibility to bias and misinformation underscores the importance of critical thinking and human oversight in the fight against fake news.
Published in Dawn, June 3rd, 2025