Is ChatGPT reliable? A look into the accuracy of AI chatbots

Published On Sat May 13 2023

Calling ChatGPT on its 'bulls***'

Artificial intelligence has made significant strides in recent years, and tools like ChatGPT are gaining popularity around the world. These AI-powered chatbots are used for various purposes, such as crafting emails, writing essays, and doing basic research. However, there are increasing reports that such chatbots are less than accurate, which raises the question of how much we can trust artificial intelligence.

Meredith Whittaker, the president of Signal, an encrypted messaging service, believes that AI will be misused for “social control and oppression.” A former Google researcher, she has become one of AI’s most outspoken critics. To put AI's reliability to the test, the author flew to Rio de Janeiro to interview Whittaker at the Web Summit tech conference.

The conference was filled with tech executives touting AI’s potential to solve humanity’s problems. However, Whittaker was an outlier, warning about AI's potential threat to the future of humanity. To prepare for the interview, the author went on ChatGPT for the first time and asked a simple question: What should I ask Meredith Whittaker about AI?

ChatGPT suggested a few questions, but only one seemed helpful. The chatbot told the author, “Signal recently published a report on the role of AI in content moderation. Can you tell us a bit more about the key findings from that report and what it means for the future of content moderation?”

The author Googled this report but couldn’t find it, leading them to conclude that ChatGPT knew something that Google’s search engine didn’t. During the interview, the author asked Whittaker about the findings of the report, to which she replied, “That’s a lie. There was no report.”

According to Whittaker, tools like ChatGPT are really a “bullshit engine.” AI frequently gets things wrong, leading her to question why we are using a "bullshit engine" for anything serious. In her view, we need access to some form of shared reality, especially in an information ecosystem overrun by falsehoods, half-truths, and misinformation.

When the author went back to ChatGPT for comment, it apologized for the incorrect information it provided. The chatbot emphasized that its responses are generated based on the data and information available to it at the time of the question. It does not have the ability to fact-check or verify the accuracy of the information presented.

In conclusion, ChatGPT's incorrect responses and its inability to fact-check or verify information raise questions about AI's reliability. As Whittaker pointed out, we need access to some form of shared reality, and tools like ChatGPT may not provide it.