Opinion - Ghosted by ChatGPT: How I was first defamed and then ...
It is not every day that you achieve the status of "he-who-must-not-be-named." But that curious distinction has been bestowed upon me by OpenAI's ChatGPT, according to the New York Times, the Wall Street Journal, and other publications.
The Tale of Disappearance
For more than a year, people who have tried to research my name using ChatGPT have been met with an immediate error warning. It turns out that I am among a small group of individuals who have been effectively disappeared by the AI system. How we came to this Voldemortian status is a chilling tale not just about the rapidly expanding role of artificial intelligence, but about the power of companies like OpenAI.
In support of its false and defamatory claim, ChatGPT cited a Washington Post article that had never been written and quoted from a statement that had never been issued by the newspaper. The Washington Post investigated the false story and discovered that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.”
Joining me in this dubious distinction are Harvard Professor Jonathan Zittrain, CNBC anchor David Faber, Australian mayor Brian Hood, English professor David Mayer, and a few others. The common thread appears to be the false stories generated about us all by ChatGPT in the past. The company appears to have corrected the problem not by erasing the error but by erasing the individuals in question.
The Ghosting Phenomenon
Thus far, the ghosting is limited to ChatGPT sites, but the controversy highlights a novel political and legal question in the brave new world of AI.
Legal Actions and Consequences
Although some of those defamed in this manner chose to sue these companies for defamatory AI reports, I did not. I assumed that the company, which has never reached out to me, would correct the problem.
And it did, in a manner of speaking: apparently by digitally erasing me, at least to some extent. In some algorithmic universe, the logic is simple: there is no false story if there is no discussion of the individual.
As with Voldemort, even death is no guarantee of closure. Professor Mayer was a respected Emeritus Professor of Drama and Honorary Research Professor at the University of Manchester who passed away last year. And ChatGPT reportedly will still not utter his name.
Conclusion
There is no reason to see these exclusions or erasures as some dark corporate conspiracy or robot retaliation. It appears to be a default response when the system commits egregious, potentially expensive errors, which might be even more disturbing. It raises the prospect of algorithms sending people into the Internet abyss with little recourse or response. You are simply ghosted because the system made a mistake, and your name now triggers an error.
This is all well short of HAL 9000 saying "I'm sorry, Dave. I'm afraid I can't do that" in an AI homicidal rage. Thus far, this is a small haunt of digital ghosts. However, it is an example of the largely unchecked power of these systems and the relatively uncharted waters ahead.