Corporate ChatGPT Applications Grow Amid Legal Scrutiny
Although ChatGPT has been under regulatory scrutiny since its launch in November, several companies are still developing applications built on it. The legal uncertainty surrounding how data is used in such applications is a major concern for those companies.
Privacy and Security Issues
Companies that are developing ChatGPT applications must navigate privacy and security questions about how the artificial-intelligence (AI) bot handles potentially sensitive data. Zalando, a German online fashion retailer, plans to introduce a shopping assistant that uses the enterprise version of OpenAI’s ChatGPT to analyze customers’ search queries.
Zalando’s head of privacy, Jan Wittrodt, said the retailer decided not to share customer data to train ChatGPT’s algorithm. Any data users enter when they search for items is deleted after 30 days, Mr. Wittrodt said. These safeguards protect customers’ privacy, even if they accidentally enter private information when using the assistant, he said.
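Zalando hasn't described its implementation, but a 30-day deletion rule like the one Mr. Wittrodt describes is typically enforced with a scheduled purge job over the stored queries. The sketch below is a minimal illustration of that idea; the `store` interface and its record fields are hypothetical and don't come from Zalando or OpenAI.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # matches the 30-day window described above

def purge_expired_queries(store) -> int:
    """Delete stored search queries older than the retention window.

    `store` is a hypothetical query-log interface with `all_queries()`
    and `delete(id)` methods; a job like this would run periodically,
    for example nightly.
    """
    cutoff = datetime.now(timezone.utc) - RETENTION
    deleted = 0
    for record in store.all_queries():
        if record.created_at < cutoff:
            store.delete(record.id)
            deleted += 1
    return deleted
```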
Opt-In for User Searches
OpenAI introduced several changes to its public ChatGPT service last week, allowing users to choose whether they want their searches to be used to train the tool’s algorithm. Unsaved searches are deleted after 30 days. The changes don’t affect the enterprise service available to corporate customers, which already had similar features, according to Mr. Wittrodt.
Regulatory Scrutiny
ChatGPT has been in regulators’ crosshairs in Europe and the U.S. since its debut in November. The Biden administration is considering drafting new rules to regulate AI tools such as ChatGPT amid criticism over their potential use for discrimination or harmful disinformation. Italy’s data protection regulator temporarily blocked the tool from operating in the country last month, citing violations of the European Union’s General Data Protection Regulation, and demanded changes to how OpenAI handles data and prevents children under 13 from using the bot.
The European Data Protection Board, the umbrella group of regulators from the 27 European Union countries, has set up a task force to look into ChatGPT. Some privacy experts argue that legislation tailored to AI risks such as bias, inaccuracy, and discrimination would regulate the area better than the EU's broader GDPR.
Legal Uncertainty
Michael Lamb, RELX’s global chief privacy officer, said there is legal uncertainty over how ChatGPT uses data. “It’s about the overall training, the use of the algorithm and the impact on society, whether or not it involves personal data,” Mr. Lamb said.
Companies might need to issue disclosures to users, or to anyone whose data they collect, for an application built with ChatGPT. Those disclosures could state that the system was built with the tool and that anything it produces should be independently verified, he added.
Conclusion
Amid the regulatory scrutiny, companies using or considering experimenting with ChatGPT should take precautions such as drafting policies that prevent employees from entering sensitive or proprietary data. For certain applications, companies could also strip out personal data so that anything sent to ChatGPT can't be used to identify individuals. Many companies, however, still don't fully understand these risks.
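As one illustration of stripping personal data before a prompt leaves the company, the sketch below masks email addresses and phone numbers with regular expressions. It is a minimal example, not a production redaction pipeline: the `strip_pii` helper and its patterns are hypothetical, and a real deployment would rely on a dedicated PII-detection tool with far broader coverage.

```python
import re

# Simple patterns for two common identifiers. Real deployments need
# much broader coverage (names, addresses, account numbers, etc.).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def strip_pii(text: str) -> str:
    """Replace obvious personal identifiers with placeholders
    before the text is sent to an external service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(strip_pii("Contact Jane at jane.doe@example.com or +1 555-123-4567."))
# -> Contact Jane at [email removed] or [phone removed].
```

Running a filter like this at the network boundary means that even if an employee pastes raw customer text into a prompt, the most obvious identifiers never reach the external service.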