Can You Trust AI Chatbot Responses? | Social Media Today
As more and more people place their trust in AI bots to answer whatever query they may have, questions are being raised about how those bots are influenced by their owners, and what that could mean for the accurate flow of information across the web.
Last week, X’s Grok chatbot was in the spotlight, after reports that internal changes to Grok’s code base had led to controversial errors in its responses.

As you can see in this example, one of several shared by journalist Matt Binder on Threads, Grok began inserting information about “white genocide” in South Africa into entirely unrelated queries.
Understanding the Errors
Why did that happen? A few days later, xAI explained the error, noting that: “On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot's prompt on X. This change, which directed Grok to provide a specific response on a political topic, violated xAI's internal policies and core values.”
Impact of Bias
On Tuesday last week, Elon Musk responded to a user's concerns about Grok citing The Atlantic and the BBC as credible sources, saying that it was “embarrassing” that his chatbot referred to these specific outlets. Because, as you might expect, they're both among the many mainstream media outlets that Musk has decried as amplifying fake reports.

So Elon has seemingly built in a new measure to avoid the embarrassment of citing mainstream sources, one that's more in line with his own views on media coverage. But will Grok's accuracy now suffer because it's being instructed to avoid certain outlets, based, seemingly, on Elon's own personal bias?
Transparency and Trust
xAI is leaning on the fact that Grok's code base is openly available, and that the public can review and provide feedback on any change. But that relies on people actually reviewing those changes, and the published code may not be fully transparent.
Issues with AI Bias
At the same time, xAI isn’t the only AI provider that’s been accused of bias. OpenAI’s ChatGPT has also censored political queries at certain times, as has Google’s Gemini.
Trust in AI
Yes, you can now get more specific information faster, in simplified, conversational terms. But whoever controls the flow of data dictates responses, and it’s worth considering where your AI replies are being sourced from when assessing their accuracy.

Because while artificial “intelligence” is the term these tools are labeled with, they're not actually intelligent at all. There's no thinking, no conceptualization going on behind the scenes. It's just large-scale statistical matching, pairing likely responses with the terms included in your query.
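To make that last point concrete, here's a minimal sketch of the kind of statistical matching involved, vastly simplified into a toy bigram model in Python. The corpus and function names are purely illustrative, and real chatbots operate over billions of learned probabilities rather than raw counts, but the underlying principle is the same: the system returns the statistically likely continuation, with no understanding involved.

```python
from collections import defaultdict

# Toy bigram "language model": count which word follows which
# in a tiny (hypothetical) corpus, then return the most frequent
# follower. No reasoning happens here -- only frequency lookup.
corpus = "the model predicts the next word the model saw most often".split()

follow_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = follow_counts.get(word)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

print(predict_next("the"))  # -> "model" (its most frequent follower here)
```

Whoever curates the corpus, or the sources the model is allowed to draw on, shapes what "most likely" means, which is exactly the concern raised above.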