The Risks and Rewards of Artificial Intelligence: Lessons from Italy's Chatbot Ban

Published On Sat May 13 2023

Lessons learned from Italy's temporary ban on an AI chatbot

In March 2023, Italy became the first Western country to block ChatGPT, the advanced chatbot developed by OpenAI. The ban was based on personal data protection concerns raised by Garante, the Italian data protection authority. Garante found that ChatGPT collects personal data in a way that is incompatible with data protection law and lacks age verification, potentially exposing children to harmful content. It therefore used an emergency procedure to temporarily suspend the processing of personal data by OpenAI, the California-based company behind ChatGPT. News of the temporary ban spread around the world, raising concerns about how decisions like this affect the development of new artificial intelligence (AI) applications. This article looks at some of the lessons learned from the temporary ban.

Proportionality and effectiveness of bans on developing technologies

The measures adopted by the Italian data protection authority raise questions about both their effectiveness and their proportionality. A general ban does not strike a balance between the conflicting constitutional interests at stake: the temporary measure does not explain how it accounts for other interests, such as users' freedom to access ChatGPT. Even though the ban was temporary, the situation might have benefited from the earlier involvement of other board members of the Italian data protection authority. A preliminary exchange with OpenAI might have avoided a ban altogether by prompting the implementation of further safeguards to comply with data protection law.

Coordination between member states at the European level

More European coordination is needed around AI technology in general. The EU's proposed AI Act is only one step towards a harmonised framework for ensuring that AI technologies develop in line with European values. As Italy's ban has shown, the EU regulatory model risks fragmentation if national authorities go their own ways. In particular, the connection between AI and data protection empowers national authorities to react unilaterally to new AI technology, which underlines the need for more coordination between European member states on regulation of all kinds.

Protection of children from accessing harmful content

Finally, the decision raises questions about how best to protect children from harmful content generated by these applications. Introducing an age verification system or alerts about harmful content could have been topics for discussion had the parties been engaged in an ongoing dialogue. A preventative, collaborative approach with OpenAI would have minimized the risk of the service being blocked in Italy. Continued discussion between OpenAI and the Italian authorities is critical.

This case highlights important lessons about the proportionality and effectiveness of bans on emerging technologies, the need for coordination between member states at the European level, and the protection of children from harmful content. It also shows that general bans on new technological applications are usually quick reactions that lack a deep assessment of a measure's effectiveness and proportionality. A more collaborative approach could have avoided a ban altogether and minimized the risks for all parties involved.