OpenAI CEO Sam Altman recently announced that the organization will no longer train its GPT large language models on customer data. The decision follows criticism from customers who raised data-privacy concerns and objected to the company training on their data. Altman stated, "Customers clearly want us not to train on their data, so we've changed our plans: We will not do that."
It's worth noting that OpenAI's revised privacy and data-protection policies apply only to customers who use the company's API services. ChatGPT is not covered: text entered into OpenAI's chatbot, along with data from its other non-API services, may still be used for training.
The update arrives as numerous businesses raise concerns about the use of large language models. One example is the Writers Guild of America, whose strike was driven in part by concerns over tools like ChatGPT being used to write and edit scripts. Media companies, meanwhile, are worried about their intellectual property and may sue AI firms for training on their original content.
As the use of large language models grows, data privacy and protection have become increasingly pressing concerns. AI companies are under pressure to safeguard customer privacy and be transparent about how customer data is used, and businesses, for their part, should reassess their own policies and procedures to ensure their clients' data remains protected.