A High-Profile Reshuffle
In a surprising move that has drawn further scrutiny to the tech giant, OpenAI recently reassigned Aleksander Madry, one of its most senior AI safety executives, to a role centred on AI reasoning.
On 24 July 2024, Sam Altman, CEO of OpenAI, confirmed on X (formerly known as Twitter) that Madry had been reassigned from his role overseeing the Preparedness team to a new position focused on AI reasoning.
Altman described Madry's new assignment as a "very important" research project, though he did not disclose its specifics.
In the interim, OpenAI executives Joaquin Quiñonero Candela and Lilian Weng will take over the Preparedness team's responsibilities.
This shift in roles highlights a strategic realignment within OpenAI's safety operations as the company continues to adapt to emerging challenges in AI development.
Aleksander Madry’s Reassignment – Who Is He?
Aleksander Madry, who joined the company in May 2023, is a prominent figure in AI safety and led OpenAI's Preparedness team until his reassignment.
His team was tasked with identifying, assessing, and mitigating catastrophic risks posed by AI models.
Madry's expertise is widely respected, and his removal from the role raised concerns about the company's commitment to safety.
While Altman emphasised the importance of Madry's new research project, the decision to remove him from a critical safety role has sparked debate.
Madry also maintains his affiliations with MIT, where he directs the Center for Deployable Machine Learning, leads the CSAIL-MSR Trustworthy and Robust AI Collaboration, and is involved with the AI Policy Forum.
He earned his Ph.D. from MIT in 2011 and has held positions at Microsoft Research New England and EPFL.
His research focuses on algorithmic graph theory, optimisation, and machine learning, aiming to develop reliable decision-making tools for real-world applications.
OpenAI Under Fire
The decision to reassign Madry comes amid a mounting wave of scrutiny and criticism directed at OpenAI.
Less than a week prior to the announcement, a group of Democratic senators sent a letter to CEO Sam Altman, expressing concerns about the company's approach to safety and demanding answers to specific questions by 13 August 2024.
The senators' letter highlights the growing pressure on AI companies to demonstrate their commitment to public safety.
OpenAI's safety practices have been under an intense spotlight for several months.
In June 2024, a group of current and former OpenAI employees published an open letter outlining concerns about the rapid advancement of AI without adequate oversight.
The letter emphasised the need for greater transparency and accountability in the industry.
Furthermore, the Federal Trade Commission (FTC) and the Department of Justice (DOJ) are reportedly investigating OpenAI, Microsoft, and Nvidia for potential antitrust violations related to their AI activities.
This investigation underscores the heightened regulatory scrutiny facing the AI industry.
Madry's reassignment also coincides with a broader reshaping of OpenAI's safety organisation.
OpenAI’s Safety Department and Recent Changes
OpenAI's trust and safety operations have undergone significant changes over the past year.
In July 2023, following the departure of Dave Willner, then head of its trust and safety department, the company dissolved its previous safety team and restructured the work into three distinct units:
- Safety Systems Team
- Superalignment Team
- Preparedness Team
The Safety Systems Team focuses on the secure deployment of AI models and on reducing abuse; the Superalignment Team works on the safety of future, more capable systems; and the Preparedness Team, formerly led by Madry, identifies and mitigates catastrophic risks posed by cutting-edge models.
In May 2024, the company disbanded the Superalignment Team, its long-term AI risk group, a decision that sparked criticism from both inside and outside the organisation.
The disbanding followed the departures of co-founder Ilya Sutskever and Jan Leike, who co-led the team, with Leike citing disagreements over the company's priorities.
One month later, Ilya Sutskever announced his new startup – Safe Superintelligence Inc. (SSI).
Microsoft's Diminished Role
Beyond these internal changes, OpenAI's relationship with key stakeholders such as Microsoft, its largest investor, has also evolved.
Earlier this month, Microsoft relinquished its observer seat on OpenAI's board, citing the company's progress in establishing its governance structure.
However, the withdrawal also comes as the partnership between the two companies draws regulatory attention of its own.
A Weakening of Safety Focus?
The reassignment of Madry and the broader restructuring within the safety division have fueled speculation about a potential shift in OpenAI's priorities.
Critics argue that this move may indicate a weakening of the company's commitment to safety in favour of other objectives, such as product development and market competition.
However, OpenAI has maintained that Madry will continue to contribute to core AI safety work in his new role. The company has also emphasised the importance of safety and its ongoing efforts to address potential risks.
As the AI industry continues to accelerate, the stakes for safety are higher than ever. The decisions made by companies like OpenAI will have far-reaching consequences for society.
The coming months will be crucial in determining whether OpenAI can effectively balance innovation with responsibility.