OpenAI Tightens ChatGPT Privacy with New Data Controls
OpenAI has announced that it is improving the privacy controls of its AI chatbot, ChatGPT. Users can now turn off their chat history to prevent their input from being used as training data. The controls are available in ChatGPT's user settings, under a new section titled Data Controls.
According to OpenAI, even when history and training features are turned off, the chatbot will still retain chats for 30 days so the company can monitor for abuse; it says it will review them only when necessary. After 30 days, the chats are permanently deleted.
The company is also launching a ChatGPT Business subscription, a plan aimed at professionals and enterprises that want more control over their data. The plan will follow the same data-usage policies as OpenAI's API, meaning subscribers' data will not be used for training by default. It is expected to become available in the coming months.
OpenAI is also releasing a new export option that emails users a copy of the data ChatGPT stores about them. The company says the option will make it easier to move that data elsewhere and to understand what information it keeps.
Earlier this month, OpenAI was in the spotlight after three Samsung employees leaked sensitive data to the chatbot, including recorded meeting notes. OpenAI uses its customers' prompts to train its models by default and urges users not to share sensitive information with the bot. The company has also said it cannot delete specific prompts from a user's history. Strengthening privacy transparency and controls is a welcome change for OpenAI, especially as ChatGPT and other AI writing assistants have grown in popularity in recent months.