ChatGPT Sparks Debate over AI Regulation and Bans
The use of artificial intelligence (AI) has become a hot topic for nations around the world, as seen in the concerns raised by Italy, Spain, and France over ChatGPT. The chatbot, which uses AI to generate text that reads as if written by a human, was temporarily banned by Italy's data protection authority, Garante, for breaching data protection legislation. The watchdog did not object to the use of AI itself but demanded that OpenAI, the company behind ChatGPT, be more transparent with users about how their data is processed and obtain their permission before using that data to improve the software. The Italian authority said it would lift the ban if OpenAI met these conditions by April 30.
The case has fueled broader concerns about how to regulate the use of AI in products such as self-driving cars, medical technology, and surveillance systems. There is currently no EU-wide regulation in place, and the legislation proposed by the European Commission two years ago is still being debated in the European Parliament. It is expected to come into effect no earlier than early 2025, once EU member states have agreed on it. German MEP Axel Voss, one of the main drafters of the EU's Artificial Intelligence Act, warned that AI will continue to develop over the next two years and that much of the legislation may no longer be appropriate by the time it takes effect.
The EU regulation defines several levels of AI risk, ranging from "minimal or no risk" to "unacceptable." Systems classified as "high risk" or "limited risk" will be subject to specific rules on documenting algorithms, transparency, and disclosure of data use. Applications that record and evaluate people's social behavior in order to predict their future actions will be prohibited, as will social scoring by governments and certain facial recognition technologies.
Legislators are still debating how AI that records or simulates emotions should be treated and how the risk levels should be drawn. The data protection commissioners of the EU member states have pushed for independent monitoring of AI and for the existing data protection legislation to be amended accordingly. Within the European Parliament and Commission, the goal is to strike a balance between consumer protection, regulation, and the free development of the economy and research. As EU Commissioner for the Internal Market Thierry Breton has pointed out, AI offers immense potential for a digital society and economy.
Mark Brakel of the US-based nonprofit Future of Life Institute argued that developers must hold themselves accountable, and that regulators should require companies to carry out risk management and publish the results. Companies may not be able to predict their products' full capabilities, and, as MEP Voss warned, they may move elsewhere to develop their algorithms if the rules in Europe become too complex.
Notably, ChatGPT was developed in the United States for a global audience, and it faces growing competition from other US companies such as Google and Tesla CEO Elon Musk's Twitter. Major Chinese tech firms are also in the race, with chatbots such as Baidu's Ernie. For now, however, no European chatbot is on the horizon.