Former OpenAI director details ousting of CEO Sam Altman
On the podcast The TED AI Show, Toner gave her fullest account to date of the events that prompted her and other board members to fire Sam Altman in November of last year.
“When ChatGPT came out in November 2022, the board was not informed in advance about that,” Toner said on the podcast. “We learned about ChatGPT on Twitter.” The company’s launch of ChatGPT was relatively quiet: OpenAI simply called the chatbot an artificial intelligence (AI) model that “interacts in a conversational way”. But over the following days and weeks, ChatGPT’s ability to generate human-sounding text made it a massive hit, and helped pave the way for the current boom in AI. OpenAI did not immediately provide a comment.
Board Statements
In a statement provided to the TED podcast, OpenAI’s current board chair, Bret Taylor, said, “We are disappointed that Ms. Toner continues to revisit these issues.” He also said that an independent review of Altman’s firing “concluded that the prior board’s decision was not based on concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.” Taylor added that “over 95 per cent of employees” asked for Altman’s reinstatement, and that the company remains focused on its “mission to ensure AGI benefits all of humanity”.
The board’s reasons for firing Altman have been the source of intense speculation in Silicon Valley. At the time, the board said only that Altman had not been “consistently candid” in his interactions with directors. In the months that followed, new details came to light about tensions between Altman, the board, and some employees.
Disclosures and Criticisms
On the podcast, Toner also said that Altman had not disclosed his involvement with OpenAI’s start-up fund, and she criticised his leadership on safety. “On multiple occasions, he gave us inaccurate information about the formal safety processes that the company did have in place,” she said, “meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.” Toner said that after years of such incidents, “all four of us came to the conclusion that we just couldn’t believe things that Sam was telling us”.
Regulation and Government Intervention
In an article in The Economist over the weekend, Toner and Tasha McCauley, another former director, expounded on their thinking, arguing that OpenAI was not positioned to regulate itself and that governments should intervene to ensure that powerful AI is developed safely.