Why Microsoft Thinks ChatGPT Needs to Be Regulated

Published On Sat May 13 2023

Artificial intelligence (AI) chatbots such as ChatGPT, built by Microsoft-backed OpenAI, have been fascinating and alarming people in almost equal measure. But in a surprising turn of events, Microsoft is now calling on governments to take action and regulate AI before things spiral out of control.

The call for regulation was made by BSA, a trade group representing numerous business software companies, including Microsoft, Adobe, Dropbox, IBM, and Zoom. The group is advocating for the US government to integrate rules governing the use of AI into national privacy legislation.

BSA's proposal rests on four main tenets. First, Congress should clearly set out when companies need to assess the potential impact of AI. Second, those requirements should come into effect when the use of AI leads to "consequential decisions," which Congress should also define. Third, Congress should ensure company compliance through an existing federal agency. Fourth, any company dealing with high-risk AI should be required to develop a risk-management program.

Craig Albright, vice president of U.S. government relations at BSA, said, "We're an industry group that wants Congress to pass this legislation, so we're trying to bring more attention to this opportunity. We feel it just hasn't gotten as much attention as it could or should."

BSA believes that the American Data Privacy and Protection Act, a bipartisan bill that is yet to become law, is the right legislation to codify its ideas on AI regulation. The trade group has already been in touch with the House Energy and Commerce Committee regarding its views.

The lightning-fast development of AI tools has caused alarm among many about the potential consequences for society and culture. This has been heightened by the numerous scandals and controversies in the field. In March 2023, a group of prominent tech leaders called on AI firms to pause research on anything more advanced than GPT-4, stating that "AI systems with human-competitive intelligence can pose profound risks to society and humanity" and that society at large needs to catch up and understand what AI development could mean for the future of civilization.

As scammers have been quick to take an interest in ChatGPT, the advanced AI-powered chatbot from Microsoft-backed OpenAI, the company has put safeguards in place to prevent it from doing things it shouldn't. The chatbot also has inherent limitations stemming from its design, its training data, and the constraints of a text-based AI.

While ChatGPT is an amazing tool, a modern marvel of natural language artificial intelligence that can do incredible things, ChatGPT developer OpenAI acknowledges that with great power comes great responsibility. It is clear that AI legislation will become law sooner rather than later, especially as even Microsoft has suggested its own AI products should be regulated.

Microsoft was heavily criticized when it shut down its artificial intelligence (AI) Ethics & Society team in March 2023. In a post on Microsoft's On the Issues blog, Natasha Crampton, the Redmond firm's Chief Responsible AI Officer, explained that the ethics team was disbanded because "A single team, or a single discipline tasked with responsible or ethical AI, was not going to meet our objectives."
