10 Titles for the Independent Safety Board Launch by OpenAI

Published on Tue, Sep 17, 2024
OpenAI has announced that it is transforming its Safety and Security Committee into an independent "Board oversight committee" with the power to delay model releases over safety concerns. According to an OpenAI blog post, the decision follows a 90-day review of the company's safety- and security-related processes and safeguards.

Key Recommendations

The committee, chaired by Zico Kolter and including Adam D'Angelo, Paul Nakasone, and Nicole Seligman, recommended establishing the independent board. That board will work with company leadership on safety evaluations for major model releases and, together with the full board, oversee launches, with the authority to delay a release until safety concerns are addressed, per OpenAI's statement. The full OpenAI board will also receive periodic briefings on safety and security matters.
Ensuring Independence

Notably, the members of OpenAI's safety committee also sit on the company's broader board of directors, which raises questions about how independent the committee really is and how that independence is structured. OpenAI has been contacted for clarification on this point.

Comparison with Meta's Oversight Board

By establishing an independent safety board, OpenAI appears to be taking a page from Meta's Oversight Board, which reviews some of Meta's content policy decisions and can issue binding rulings that Meta must follow. Unlike OpenAI's arrangement, none of the Oversight Board's members sit on Meta's board of directors.

Enhancing Industry Collaboration

The Safety and Security Committee's review also identified opportunities for broader industry collaboration to advance safety across the AI sector. The company says it is committed to sharing and explaining its safety work more openly and to pursuing more opportunities for independent testing of its systems.