Chatbots Are Primed to Warp Reality
A growing body of research shows how AI can subtly mislead users—and even implant false memories.

More and more people are learning about the world through chatbots and the software’s kin, whether they mean to or not. Google has rolled out generative AI to users of its search engine on at least four continents, placing AI-written responses above the usual list of links; as many as 1 billion people may encounter this feature by the end of the year. Meta’s AI assistant has been integrated into Facebook, Messenger, WhatsApp, and Instagram, and is sometimes the default option when a user taps the search bar. And Apple is expected to integrate generative AI into Siri, Mail, Notes, and other apps this fall. Less than two years after ChatGPT’s launch, bots are quickly becoming the default filters for the web.
The Issue with AI Chatbots and Assistants
Yet AI chatbots and assistants, however impressively they appear to answer even complex queries, are prone to confidently spouting falsehoods—and the problem is likely more pernicious than many people realize. A sizable body of research, alongside conversations with several experts, suggests that the solicitous, authoritative tone AI models take—combined with the fact that they are legitimately helpful and correct in many cases—could lead people to place too much trust in the technology.
That credulity, in turn, could make chatbots a particularly effective tool for anyone seeking to manipulate the public through the subtle spread of misleading or slanted information. No one person, or even government, can tamper with every link displayed by Google or Bing. Engineering a chatbot to present a tweaked version of reality is a different story.
Impact on Misinformation and Elections
As the election approaches, some people will use AI assistants, search engines, and chatbots to learn about current events and candidates’ positions. Indeed, generative-AI products are being marketed as a replacement for typical search engines—and risk distorting the news or a policy proposal in ways big and small. Others might even depend on AI to learn how to vote.
Manipulating Understanding through AI
With the entire tech industry shifting its attention to these products, it may be time to pay more attention to the persuasive form of AI outputs, and not just their content. Chatbots and AI search engines can be false prophets, vectors of misinformation that are less obvious, and perhaps more dangerous, than a fake article or video. “The model hallucination doesn’t end” with a given AI tool, one researcher pointed out. “It continues, and can make us hallucinate as well.”
Implanting False Memories
Researchers recently sought to understand how chatbots could manipulate our understanding of the world by implanting false memories. In the study, participants watched video footage of a crime and then answered questions about what they had seen, playing the role of witnesses—some of them questioned by a chatbot powered by generative AI and designed to mislead. The idea was to see whether a witness could be led to assert false details about the video, such as that the robbers had tattoos and arrived by car, even though they did not. The study found that the generative AI successfully induced false memories, misleading more than a third of participants—a higher rate than both a misleading questionnaire and a simpler, pre-scripted chatbot interface that asked only the same fixed survey questions.
Conclusion
The false-memory findings echo an established human tendency to trust automated systems and AI models even when they are wrong. People expect computers to be objective and consistent. Large language models provide authoritative, rational-sounding explanations in bulleted lists; cite their sources; and can almost sycophantically agree with human users—which can make them more persuasive when they err. The subtle insertions that can implant false memories are precisely the sorts of incidental errors that large language models are prone to.