OpenAI Ignoring Fatal Threat Posed by AI – Insider
Former OpenAI researcher Daniel Kokotajlo has warned that the company is turning a blind eye to the grave risks of developing artificial general intelligence (AGI). AGI is a theoretical form of artificial intelligence capable of understanding, learning, and reasoning across a wide range of tasks at or beyond a human level.
Risks of AGI Development
Kokotajlo, who left OpenAI's governance team in April, puts the odds that advanced AI will catastrophically harm humanity at 70%. Despite that risk, he said, the San Francisco-based developer is pressing ahead enthusiastically with AGI development, an approach he described as reckless.
During his time at OpenAI, the 31-year-old Kokotajlo was responsible for forecasting progress in AI technology. He predicted that AGI could arrive by 2027 and that it would, with high probability, cause catastrophic harm to humanity. He urged OpenAI CEO Sam Altman to shift the company's focus toward safety and devote more resources to mitigating the risks associated with AI.
Call for Transparency and Safety Measures
Kokotajlo and several other OpenAI insiders recently signed an open letter urging AI developers, including OpenAI, to commit to greater transparency and stronger whistleblower protections. In the face of criticism from employees and the public, OpenAI has defended its safety practices, stressing its commitment to developing advanced AI systems responsibly.
According to the New York Times, OpenAI stands by its safety achievements and scientific approach to risk management. The company acknowledges the importance of robust debate surrounding AI technology and pledges to engage with various stakeholders globally.