OpenAI forms a new safety committee and hints at new models
OpenAI recently announced the formation of a Safety and Security Committee led by prominent figures such as Sam Altman and Bret Taylor. The move follows the departure of key scientists, including those involved in the "superalignment" project focused on mitigating long-term AI risks.
The newly established committee, made up of board members and industry experts, will conduct a comprehensive review of OpenAI's existing safety protocols within 90 days. The effort coincides with OpenAI's preparations to train its next AI model, which the company expects to mark a significant step forward in capabilities.
Key Members of the Committee
Alongside the board members, the committee includes leaders from OpenAI's technical and policy teams, such as Aleksander Madry, Lilian Weng, John Schulman, Matt Knight, and Jakub Pachocki.
The committee will also be advised by former government officials Rob Joyce and John Carlin, known for their expertise in cybersecurity and national security.
Given OpenAI's influential position in the AI sector, its decisions on safety measures could set a benchmark for other companies in the industry. At the same time, former employees have raised concerns about OpenAI's commitment to safety following the departure of key members of its safety team.
Prioritizing Safety in Technological Advancements
The community now awaits the committee's findings, hoping for more than a superficial fix. How OpenAI addresses these concerns and prioritizes safety as its technology advances will be watched closely.