OpenAI dissolves team focused on long-term AI risks
OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.
The team was led by OpenAI co-founder Ilya Sutskever and Jan Leike, both of whom resigned recently. Their departures are a significant setback for OpenAI's safety efforts; Leike wrote that the company's "safety culture and processes have taken a backseat to shiny products."
This development comes amid mounting concerns about AI safety. Governments in the U.S., the U.K., and the European Union have introduced laws and regulations intended to ensure that AI development protects human rights and that the technology is not misused by bad actors.
There is also widespread concern that AI could displace millions of jobs in the near future, which is one more reason governments around the world are cautious about the pace of innovation and deployment.
Leike, who oversaw safety at OpenAI, wrote in his post: "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point." This suggests that OpenAI's current leadership is more focused on commercial goals and AGI than on the safety of these innovations.
OpenAI's goal should be to be a "safety-first AGI company," not merely the first AGI company.
As Leike put it, OpenAI is "shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products."
Sutskever had focused on ensuring that artificial intelligence would not harm humans, while others, including CEO Sam Altman, were more eager to push ahead with delivering new technology.
Sutskever and Leike led OpenAI's "Superalignment" team, which was dedicated to AI safety. With their departure, the team has been disbanded, and, it seems, much of the safety culture along with it.
Why it matters
The world's major countries are already concerned about the speed of AI innovation and its impact on humanity. The departure of the very people who led safety efforts at OpenAI is not good news for safe and ethical AI.
Love reading about safe and ethical AI? Don't forget to subscribe to this newsletter and share it with your community.