The Future of AI Chatbots in Light of the Supreme Court Ruling on YouTube

Published on May 13, 2023

Implications of Supreme Court Ruling in YouTube Case for AI Chatbot Lawsuits

A ruling by the U.S. Supreme Court on whether YouTube can be sued over its video recommendations to users could have far-reaching implications for other technologies, including AI chatbots such as ChatGPT and Bard. The court is due to rule by the end of June on whether Section 230 of the Communications Decency Act, the U.S. law that shields technology platforms from legal responsibility for content their users post online, also applies when companies use algorithms to target users with recommendations. If that shield is weakened, developers of generative AI chatbots could face legal claims such as defamation or privacy violations.

While the case does not directly concern generative AI, Justice Neil Gorsuch noted that AI tools that generate "poetry" and "polemics" likely would not enjoy such legal protections. The question is one facet of an emerging debate over whether Section 230 immunity should extend to AI models that are trained on existing online data but capable of producing original works. Democratic Senator Ron Wyden, who helped draft the law, has said the liability shield should not apply to generative AI tools because such tools "create content."

Carl Szabo, vice president and general counsel of NetChoice, a tech industry trade group, said a weakened Section 230 would expose AI developers to a flood of litigation that could stifle innovation. Some experts forecast that courts may take a middle ground, examining the context in which an AI model generated a potentially harmful response. In cases where the model appears to paraphrase existing sources, the shield may still apply; but chatbots like ChatGPT, which can produce wholly fictional responses, likely would not be protected.

Hany Farid, a technologist and professor at the University of California, Berkeley, said it stretches the imagination to argue that AI developers should be immune from lawsuits over models they "programmed, trained and deployed." Holding companies liable for harms caused by their products would encourage them to build safer ones, he added.