ChatGPT: EU AI regulation spurs controversy and debates

Published on Friday, May 12, 2023

EU debates AI regulation after ChatGPT controversy

The use of artificial intelligence (AI) in chatbots has become a subject of debate in the European Union (EU) following the temporary ban of ChatGPT, a chatbot developed by OpenAI, in Italy. The Italian data protection authority, Garante, demanded that OpenAI improve the transparency of its data processing practices and obtain user consent for data use to further develop the software. 

Aside from Italy, similar concerns have been raised by Spain and France. However, there is currently no EU-wide regulation for AI use, and legislation proposed two years ago by the European Commission is still being debated. The law is not expected to take effect until early 2025, once it has been approved by EU member states.

Concerns and Debates

The EU is striving to strike a balance between consumer protection, regulation, and the free development of the economy and research. Some legislators believe AI must be regulated to protect basic rights, but they are also wary of stifling developers.

German MEP Axel Voss highlighted that AI had advanced significantly over the past two years and was likely to develop much further over the next two years. Thus, many provisions may no longer be applicable by the time the law comes into effect. Furthermore, there is uncertainty as to whether ChatGPT or similar products would even be covered by the EU regulation.

Implementing AI Regulation

The proposed AI legislation would subject only programs classified as "high risk" or "limited risk" to special rules on algorithm documentation, transparency, and data disclosure. Applications that document and evaluate people's social behavior to predict certain actions would be banned, as would social scoring by governments and certain facial recognition technologies.

Companies that develop AI products would also bear responsibility for compliance. They would have to monitor the risks of each individual application and publish the results, and they would be required to implement risk management measures to prevent unforeseen complications.

Conclusion

The EU is trying to promote the development of AI within its borders while also monitoring and regulating its use to protect user privacy and basic rights. However, the lack of uniform regulation across EU member states creates uncertainty for organizations that develop AI products. This uncertainty may prompt them to build their products outside EU jurisdiction, ultimately hindering the potential benefits that AI could offer within the bloc.