YouTube case at High Court could shape protections for ChatGPT, AI
The U.S. Supreme Court is expected to rule by the end of June on whether YouTube, owned by Alphabet Inc., can be sued over its video recommendations to users. The case will test whether a U.S. law that shields technology platforms from legal responsibility for content their users post online also applies when companies use algorithms to target users with recommendations. The outcome will matter not only for social media platforms but also for rapidly developing technologies such as generative AI chatbots like ChatGPT and Bard.
OpenAI’s ChatGPT and Alphabet’s Bard are powered by algorithms that work in a way similar to those that suggest videos to YouTube users. Because of that similarity, if the court finds that YouTube can be held liable for its recommendations, generative AI chatbots could likewise be exposed to legal claims such as defamation or privacy violations. While the case does not directly involve generative AI, it is an important facet of the emerging debate over whether Section 230 immunity should apply to AI models that are trained on troves of existing online data yet capable of producing original works.
Section 230 protections generally apply to third-party content posted by users of a technology platform, not to information the company itself helped develop. Courts have not yet ruled on whether responses generated by an AI chatbot would be covered. Justice Neil Gorsuch noted during oral arguments in February that AI tools that generate “poetry” and “polemics” likely would not enjoy such legal protections.