ChatGPT Search Fails Attribution Test, Misquotes News Sources
OpenAI's ChatGPT Search is facing criticism for misattributing news content, according to a study by Columbia University's Tow Center for Digital Journalism. The study found instances of inaccurate quotes and misattributed sources, raising concerns among publishers about their brand visibility and control over their content.
Background
OpenAI recently introduced ChatGPT Search, claiming extensive collaboration with the news industry and incorporation of publisher feedback. The launch follows ChatGPT's 2022 debut, after which many publishers discovered that their content had been used to train OpenAI's models without their consent.

Study Findings
The Tow Center evaluated ChatGPT Search by testing its ability to identify the sources of quotes drawn from 20 publications. Key findings from the study include:
- ChatGPT prioritizes user satisfaction over accuracy, potentially misleading readers and harming publisher reputations.
- ChatGPT Search returns inconsistent answers when asked the same question repeatedly, likely due to the inherent randomness of its underlying language model.

Concerns and Reactions
These findings raise significant concerns about how OpenAI filters and attributes news content, and they suggest the tool could drive audiences away from original publishers. In response to the Tow Center's report, OpenAI emphasized its efforts to support publishers by providing clear attribution and by helping users discover quality content through summaries, quotes, and links.
Conclusion
As generative search technologies like ChatGPT Search reshape how people find and read news, OpenAI must prioritize accurate sourcing to earn and maintain user trust. Representing publisher content faithfully in ChatGPT Search is essential to sustaining positive relationships with news organizations.