EU Lawmakers Push for AI Regulation
A group of European Union lawmakers recently published an open letter calling for rules governing the development of so-called “General Purpose AI systems,” such as OpenAI’s ChatGPT and Google’s Bard. The lawmakers stressed the need to steer AI development in a direction that is human-centric, safe, and trustworthy, especially as fears grow about the risks of unrestrained development, and they are determined to provide a set of rules specifically tailored to foundation models. The EU officials suggested their legislation “could serve as a blueprint for other regulatory initiatives in different regulatory traditions and environments around the world.”
Echoes of Concern
Their call for regulation echoes concerns recently raised by billionaire Elon Musk and more than 1,000 AI experts, who called for a six-month pause in the development of advanced AI systems, warning of the risks that could emerge if AI continues its rapid advance without guardrails in place, including the possible “loss of control of our civilization.” The lawmakers referenced that letter in their own, saying they share some of its concerns while disagreeing with its more alarmist statements.
Moving Forward
The EU officials are tasked with crafting an updated version of the proposed Artificial Intelligence Act, a piece of EU legislation that has been in development for over two years. The bill is expected to go to a vote in the European Parliament in May and, if approved, to negotiations with the Council of the EU, which has adopted its own version of the legislation. Meanwhile, US authorities have begun to consider potential regulations for the AI sector, with the Biden administration’s Commerce Department issuing a request for public comment on accountability measures that would help ensure AI tools “work as claimed – and without causing harm.”
Conclusion
As AI continues to advance rapidly, lawmakers are moving to provide regulations tailored to foundation models such as ChatGPT, with the goal of ensuring that powerful AI is developed in a way that is human-centric, safe, and trustworthy. The concerns raised by Musk and the experts underscore the need for serious political attention. Decisive action now can guide us toward realizing AI’s potential, whereas inaction risks widening the gap between AI development and our ability to steer it, leaving the door open to more challenging future scenarios.