ChatGPT Scandal: Italy Imposes €15 Million Fine on OpenAI

Published On Fri Dec 27 2024

Italy Fines OpenAI €15 Million Over ChatGPT User Data Breach

The Italian Data Protection Authority (GPDP) has concluded its investigation into OpenAI’s ChatGPT, imposing a €15 million fine and mandating a six-month information campaign to address transparency and data protection issues. This development follows a probe launched in March 2023 after significant concerns were raised about the AI chatbot’s compliance with the EU’s General Data Protection Regulation (GDPR).

The GPDP identified several breaches by OpenAI, including:

  • Unauthorized data processing
  • Failure to inform data subjects
  • Insufficient security measures

Enforcement Actions

The GPDP invoked new powers under Article 166(7) of the Italian Privacy Code, underscoring the gravity of these violations. To address the breaches, the GPDP has ordered OpenAI to run a six-month public information campaign, broadcast across various media channels, that will educate users about:

  • How ChatGPT collects and processes their data
  • Their GDPR rights, including how to object to processing and how to rectify or delete personal data
  • How to opt out of having their data used to train generative AI models

Jurisdiction Transfer to Ireland

During the investigation, OpenAI established its European headquarters in Ireland, transferring jurisdiction for ongoing GDPR compliance matters to the Irish Data Protection Commission (DPC). The GPDP has forwarded all procedural documents to the DPC under the GDPR’s “one-stop-shop” mechanism.

Penalty and Regulatory Scrutiny

OpenAI’s cooperative attitude during the investigation influenced the GPDP’s decision to levy a €15 million fine, well below the GDPR maximum of €20 million or 4% of annual global turnover, whichever is higher. Even so, the enforcement action signals increasing regulatory scrutiny of AI technologies in Europe.

This action marks an important step in GDPR enforcement and responds to the growing demand for transparency and accountability in AI-driven services. Ultimately, however, users themselves should be mindful of the data they share with platforms like ChatGPT: avoid inputting sensitive information, and if you need to work with sensitive datasets, prefer locally hosted generative AI models.
