OpenAI Announces New Privacy Options for ChatGPT to Ensure Users’ Data Safety
OpenAI has introduced a new privacy feature that lets users prevent their ChatGPT conversations from being used to train the company's models. The safeguard can prove useful when users discuss sensitive information with the AI chatbot. By flipping a toggle switch in their account settings, ChatGPT users can now turn off their chat history, so their conversations no longer appear in ChatGPT's history sidebar and are no longer used to improve OpenAI's models over time.
The San Francisco-based startup announced the changes in a recent blog post. The company aims to take a more user-friendly approach to the chatbot's services, letting users decide how their data is used. OpenAI's Chief Technology Officer, Mira Murati, said the company wants to move further in this direction.
The company filters personally identifiable information out of the data that comes in from users. Although the startup will still train its models on user data by default, it now gives users the option to opt out. However, OpenAI will still retain the data, including conversations from users who have turned off chat history, for 30 days before deleting it, in order to monitor for abusive behavior.
OpenAI is also planning to introduce a business subscription plan in the near future with stronger data protections. The company said its models would not be trained on these subscribers' data by default.