OpenAI Resignation Trend Snowballs as Senior Advisor Miles Brundage Bows Out with Ambiguous Letter of Departure
Yet Another OpenAI Executive Departs
Miles Brundage, OpenAI's Senior Advisor for AGI Readiness, has stepped down after six years with the organisation.
In a post on X (formerly known as Twitter) and an essay shared through his newsletter, Brundage expressed his belief that he can make a greater impact as a researcher and advocate within the nonprofit sector, where he feels he can "publish freely."
He plans to focus on AI policy research outside the tech industry.
Brundage explained:
“Part of what made this a hard decision is that working at OpenAI is an incredibly high-impact opportunity, now more than ever. OpenAI needs employees who care deeply about the mission and who are committed to sustaining a culture of rigorous decision-making about development and deployment (including internal deployment, which will become increasingly important over time).”
I just sent this message to my colleagues, and elaborate on my decision and next steps in a blog post (see next tweet): pic.twitter.com/NwVHQJf8hM
— Miles Brundage (@Miles_Brundage) October 23, 2024
His move comes at a time when Sam Altman's OpenAI is grappling with internal upheaval even as it pushes out new products, such as consistency models designed to speed up AI sampling.
Brundage's Time at OpenAI
Brundage joined OpenAI in 2018, where he played a pivotal role in addressing policy and safety concerns related to advanced AI systems like ChatGPT.
His efforts focused on the responsible management and deployment of these technologies.
Before his tenure at OpenAI, Brundage was a research fellow at the University of Oxford's Future of Humanity Institute.
Throughout his time at OpenAI, Brundage significantly contributed to the development of the company's red teaming programme and was instrumental in creating "system card" reports that evaluate the strengths and weaknesses of its AI models.
As part of the AGI readiness team, he emphasised the responsible deployment of language generation systems and advised executives, including Altman, on ethical issues associated with AI.
Brundage helped foster a robust safety culture within the organisation during a critical period of growth.
✨ Notable exit alert: Miles Brundage, stalwart policy researcher & senior advisor at OpenAI, bids farewell to the tech giant. 👋 The reason? A calling to the nonprofit sector for greater impact and freedom in publishing. 📚💡
Brundage, who stepped into OpenAI in 2018 as a…
— IntermixTech (@IntermixTech) October 23, 2024
Reflecting on his experience, Brundage described his time at OpenAI as a high-impact opportunity, acknowledging the difficulty of his decision to step down.
He praised the firm's mission while advocating for the inclusion of more independent researchers in AI policy discussions.
Resignations at OpenAI Have Become the Norm
Brundage's resignation comes amidst a significant leadership shift at OpenAI, which has seen the departures of CTO Mira Murati, Chief Research Officer Bob McGrew, and Research VP Barret Zoph in recent weeks.
Earlier this year, prominent research scientist Andrej Karpathy left, followed by co-founder and former Chief Scientist Ilya Sutskever, and ex-safety leader Jan Leike.
In August, co-founder John Schulman announced his exit, and Greg Brockman, the company's president, is currently on extended leave.
#OpenAI and CEO Sam Altman have hidden problems. CTO Mira Murati is leaving, after co-founders Ilya Sutskever and John Schulman left.
I do not know Sam Altman, but I do know executive teams.
— Ethan Evans (@EthanEvansVP) September 25, 2024
Adding to the tumult, a recent New York Times profile highlighted former OpenAI researcher Suchir Balaji, who left due to concerns that the technologies he was helping to develop could cause more societal harm than good.
Balaji wrote a blog post on 23 October about the nitty-gritty details of fair use and generative AI.
Controversy at OpenAI! Former employee Suchir Balaji, who spent four years with the company, claims their use of copyrighted data violates the law and that ChatGPT is damaging the internet. He exited in August 2024—what’s your take on this bold accusation? #OpenAI #ChatGPT… pic.twitter.com/IzLLHn9gyB
— PUPUWEB Blog (@cheinyeanlim) October 24, 2024
In response to Brundage's departure, Altman expressed support for his decision, suggesting that Brundage's upcoming work in external policy research would benefit OpenAI.
However, recent criticisms from former employees and board members point to a growing sentiment that OpenAI has increasingly prioritised commercial interests over AI safety.
In his post on X, Brundage encouraged OpenAI employees to "speak their minds" about how the organisation can improve, raising important questions about the company's future direction and commitment to responsible AI development.
He wrote:
“Some people have said to me that they are sad that I'm leaving and appreciated that I have often been willing to raise concerns or questions while I'm here … OpenAI has a lot of difficult decisions ahead, and won't make the right decisions if we succumb to groupthink.”
What's Next for Brundage?
Following Brundage's departure, OpenAI's economic research division, formerly part of the AGI readiness team, will now operate under the leadership of Ronnie Chatterji, the newly appointed chief economist.
As the AGI readiness team winds down, its remaining projects will be reallocated to other divisions within OpenAI, with Joshua Achiam, head of mission alignment, set to oversee some of these initiatives.
An OpenAI spokesperson expressed full support for Brundage's decision to pursue policy research outside the industry, emphasising gratitude for his significant contributions.
The spokesperson said in a statement:
“Brundage's plan to go all-in on independent research on AI policy gives him the opportunity to have an impact on a wider scale, and we are excited to learn from his work and follow its impact. We're confident that in his new role, Miles will continue to raise the bar for the quality of policymaking in industry and government.”
However, the spokesperson did not disclose who will assume Brundage's responsibilities.
Brundage plans to focus on AI regulation, its economic impact, and the future safety of AI.
He believes these areas are crucial for addressing the challenges posed by rapidly advancing AI systems, including new architectures such as consistency models.
I really recommend this read from @Miles_Brundage, who's moving from OAI to focus on AI non-profit independent research. The level of his thoughtfulness and non-divisive approach is truly commendable. https://t.co/ZDXTJ0Sg5b
— AI Watchtower ⏸️ (@FurtherAwayPL) October 23, 2024
His shift underscores the growing importance of responsible AI governance and its implications for the industry at large.
Altman's OpenAI Unveils Consistency Models Amidst Ongoing Unrest
In parallel, Altman's OpenAI has unveiled consistency models, an approach aimed at accelerating sampling in generative AI.
These models are engineered to produce high-quality samples far more quickly than traditional diffusion models, which typically need dozens to hundreds of sequential denoising passes to generate a single output.
Introducing sCMs: our latest consistency models with a simplified formulation, improved training stability, and scalability.
sCMs generate samples comparable to leading diffusion models but require only two sampling steps. https://t.co/rHHSE95sjo
— OpenAI (@OpenAI) October 23, 2024
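To make that efficiency claim concrete, below is a minimal illustrative sketch of the two sampling regimes. The model functions are hypothetical placeholders standing in for large trained networks; this is not OpenAI's sCM implementation, only the control flow that distinguishes the two approaches.

import torch

def denoise_step(x, t):
    # Hypothetical placeholder for one learned denoising step of a diffusion model.
    return x - 0.02 * x * t

def consistency_model(x, t):
    # Hypothetical placeholder for a consistency model, which maps a noisy
    # sample at noise level t directly toward a clean sample in a single call.
    return x / (1.0 + 5.0 * t)

noise = torch.randn(1, 3, 64, 64)  # start from pure Gaussian noise

# Diffusion sampling: many sequential network evaluations (often 20-1,000).
x = noise.clone()
for t in torch.linspace(1.0, 0.0, steps=50):
    x = denoise_step(x, t)

# Consistency sampling: a comparable result in roughly two evaluations.
sample = consistency_model(noise, t=1.0)  # one jump from noise to an estimate
sample = consistency_model(sample + 0.5 * torch.randn_like(sample), t=0.5)  # optional refinement

The point of the sketch is the loop: a diffusion sampler calls the network once per step for every sample, while a consistency model reaches a usable sample in one or two calls, which is where the claimed speed-up comes from.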
This launch aligns with OpenAI's broader strategy of enhancing its capabilities while tackling efficiency challenges, particularly after raising $6.6 billion in new funding.
However, the introduction of these models comes amidst increasing scrutiny regarding the company's practices, particularly allegations of copyright violations linked to the training of its models.
Former OpenAI employees, including Balaji, have voiced concerns about the company's methods, sparking a wider debate about the governance of AI technologies.
Balaji has specifically accused OpenAI of infringing copyright by utilising IP-protected data for training without obtaining permission, a claim echoed by others in ongoing class action lawsuits against the organisation.
I recently participated in a NYT story about fair use and generative AI, and why I'm skeptical "fair use" would be a plausible defense for a lot of generative AI products. I also wrote a blog post (https://t.co/xhiVyCk2Vk) about the nitty-gritty details of fair use and why I…
— Suchir Balaji (@suchirbalaji) October 23, 2024
Will these shifts, including the steady exodus of high-profile leaders, reshape OpenAI's future trajectory?