OpenAI's ChatGPT penalized in Italy for data privacy breaches
Italy's data protection watchdog, known as the Garante, fined US-based artificial intelligence company OpenAI €15 million (US$15.6 million) on Friday, December 20, according to reports.
Privacy Breach and Investigation
The penalty follows a probe into ChatGPT's collection and processing of personal information. According to the Italian authority, OpenAI did not have "an adequate legal basis" for using individuals' data to train its popular chatbot, and it violated both transparency principles and the obligation to fully inform users.
In response, OpenAI called the decision "disproportionate" and confirmed its intention to appeal.
Findings and Directives
The Garante's investigation, launched last year, also concluded that OpenAI failed to implement an "adequate age verification system" to prevent children under 13 from accessing potentially inappropriate AI-generated content. Additionally, the watchdog has directed OpenAI to carry out a six-month public awareness campaign across Italian media channels to inform citizens about ChatGPT's data collection practices.
Regulatory Scrutiny on AI
The fine comes amid growing scrutiny on both sides of the Atlantic over generative artificial intelligence systems such as ChatGPT. Regulators in the United States and Europe have been examining OpenAI and other key players in the fast-developing AI sector. Governments worldwide are also taking steps to regulate AI, led by the European Union's AI Act—a comprehensive rulebook for artificial intelligence.