OpenAI recently announced the creation of a Safety and Security Committee, ostensibly aimed at addressing one of its most controversial issues: AI safety. The announcement came just days after the dissolution of OpenAI's Superalignment team, which had been focused on mitigating the risks associated with AI. The move raises eyebrows and questions about the true intentions and effectiveness of the committee, especially given the turbulent backdrop of leadership changes and shifting priorities within OpenAI.
Leadership Turmoil: Altman's Brief Ousting and Swift Reinstatement
Last year, OpenAI's CEO, Sam Altman, faced a brief ousting by the company's board. The board cited a lack of transparency and a loss of confidence in Altman's leadership as the reasons for his removal. However, this decision was quickly reversed following an employee revolt and pressure from investors, leading to the departure of the three board members who had initially voted for his removal. This incident marked a significant shift in OpenAI's governance, highlighting internal conflicts and raising questions about the company's direction and transparency.
The Controversy: Disbanding the Superalignment Team
Amidst this leadership turmoil, OpenAI made the controversial decision to dissolve its Superalignment team. This team had been dedicated to mitigating long-term risks associated with artificial general intelligence (AGI). The departure of prominent specialists such as Jan Leike and co-founder Ilya Sutskever, who left citing OpenAI's drift from its founding humanitarian vision, further exacerbated concerns. Leike, now with competitor Anthropic, accused OpenAI of prioritizing "shiny products" over genuine AI safety, marking a stark departure from the company's original mission.
The Irony: The Safety and Security Committee
In an attempt to restore confidence, OpenAI unveiled a new Safety and Security Committee, with Sam Altman among its leaders. This move has been met with skepticism and irony, as it essentially positions Altman to oversee and rectify the very issues that emerged under his leadership. The AI and tech community has not been silent on this apparent conflict of interest.
AI policy expert Tolga Bilge remarked on Twitter, “Sam Altman (OpenAI board member) appointing himself to be a member of the Safety and Security Committee to mark his own homework as Sam Altman (CEO).” Gartner analyst Michael Gartenberg quipped, “Mr. Fox, could I trouble you to watch this henhouse for me please?”
Meanwhile, tech journalist Parmy Olson pointed out, “OpenAI just created an oversight board that’s filled with its own executives and Altman himself. This is a tried and tested approach to self-regulation in tech that does virtually nothing in the way of actual oversight.”
Given the context and the composition of the new committee, it is difficult to see how it will effectively address the AI safety concerns that have been raised. The irony of having Altman, who has been at the center of the controversies, lead the committee tasked with mitigating risks is not lost on observers. It appears more a strategic move to placate critics than a genuine attempt at reform, and the prospects of this committee instilling real confidence or delivering meaningful oversight seem dim. As OpenAI continues to navigate its complex landscape, it remains to be seen whether this latest move will truly benefit AI safety or simply serve as another layer of corporate maneuvering.