13 tweets in a row!
Jan Leike, the head of OpenAI's Super Alignment team who just followed Ilya out of the company, has revealed the real reason for his resignation, along with more inside information.
First, there was not enough computing power: the 20% promised to the Super Alignment team was never fully delivered, leaving the team swimming against the current, and it only got harder.
Second, safety was not taken seriously: the safety governance of AGI was given lower priority than launching "shiny products."
After that, others dug up even more gossip.
For example, everyone who leaves OpenAI has to sign an agreement promising not to criticize OpenAI after departure; refusing to sign is treated as forfeiting one's company equity.
Even so, a few hard-liners refused to sign and came forward with big revelations anyway (lol), saying the core leadership has long been divided over how much priority safety deserves.
Since last year's boardroom coup, the clash of ideas between the two factions has reached a breaking point, and the split has at least been kept outwardly civil.
So even though Altman has sent a co-founder to take over the Super Alignment team, outsiders are still not optimistic.
Twitter users watching from the front row thanked Jan for having the courage to speak out about this bombshell, and sighed:
Wow, it seems OpenAI really doesn't take safety that seriously!
Looking back at Altman, who is now firmly in charge of OpenAI, he can still keep his composure for the time being.
He stepped forward to thank Jan for his contributions to OpenAI's super alignment and safety work, and said he was actually very sad and reluctant to see Jan go.
Of course, the key point is actually this sentence:
Hang on, in a couple of days I will post something longer than this.
The promised 20% of computing power was pie in the sky
From last year's OpenAI boardroom coup until now, its soul figure and former chief scientist Ilya has hardly appeared or spoken in public.
Even before he publicly announced his resignation, there was already plenty of speculation. Many people believed Ilya had seen something terrifying, such as an AI system capable of destroying humanity.
△ Netizen: The first thing I do when I wake up every day is wonder what Ilya saw
This time Jan has laid it out plainly: the core reason is that the technical faction and the market faction disagree on how much priority safety deserves.
The disagreement is serious, and the consequences... well, everyone has seen them.
According to Vox, sources familiar with OpenAI revealed that employees focused on safety have lost confidence in Altman: "This is a process of trust collapsing bit by bit."
But as you can see, few former employees are willing to talk about this on public platforms.
Part of the reason is that OpenAI has long required departing employees to sign an exit agreement containing a non-disparagement clause. Refusing to sign means giving up the OpenAI equity already granted, so an employee who speaks out could lose a huge sum of money.
Even so, the dominoes fell one after another anyway:
Ilya's resignation has intensified the recent wave of departures from OpenAI.
Following his resignation, besides Jan, the head of the Super Alignment team, at least five members of the safety team have resigned.
Among them is one holdout who refused to sign the non-disparagement agreement: Daniel Kokotajlo (hereinafter, DK).
△ Last year, DK wrote that he believed there was a 70% chance of AI causing an existential catastrophe
DK joined OpenAI in 2022 and worked on the governance team; his main job was to guide OpenAI toward deploying AI safely.
But he also resigned recently and gave an interview:
OpenAI is training more powerful AI systems with the goal of eventually surpassing human intelligence in all aspects.
This may be the best thing that has ever happened to mankind, but it may also be the worst thing if we don't act carefully.
DK explained that when he joined OpenAI he was full of hope and ambition for safety governance, expecting OpenAI to act more responsibly the closer it got to AGI. But many on the team gradually realized that OpenAI would not turn out that way.
"I gradually lost confidence in OpenAI's leadership and their ability to handle AGI responsibly": that is why DK resigned.
Disappointment with the outlook for AGI safety work is one reason so many people left in the wave of resignations that Ilya's departure intensified.
Another is that the Super Alignment team may never have had the abundant research resources the outside world imagined.
Even running at full tilt, the team could count on at most the 20% of computing power that OpenAI had promised.
And some of the team's requests were often denied.
Of course, this is partly because computing resources are extremely precious to an AI company and every bit has to be allocated carefully, and partly because the Super Alignment team's job is to "solve the different kinds of safety problems that will actually arise if the company succeeds in building AGI."
In other words, the Super Alignment team deals with the safety problems OpenAI would face in the future. The key point is that they are future problems, and whether they will ever materialize is unknown.
As of press time, Altman had not yet posted his promised "longer tweet (than Jan's revelations)."
But he did briefly acknowledge that Jan's concerns about safety are right: "We still have a lot to do; we are committed to doing it."
On this point, everyone can pull up a chair and wait; we will bring you the follow-up as soon as it drops.
To sum up, many people have now left the Super Alignment team, most notably Ilya and Jan, leaving this storm-battered team effectively leaderless.
The follow-up arrangement is that co-founder John Schulman will take over, but there will no longer be a dedicated team.
The new super alignment effort will be a more loosely connected group, with members distributed across the company's departments; an OpenAI spokesperson described it as "deeper integration."
This arrangement has also drawn outside skepticism, because John's original full-time job was ensuring the safety of OpenAI's current products.
Can John handle the sudden expansion of responsibility and lead both efforts, one focused on present safety issues and one on future ones?
Ilya-Altman dispute
Zoom out on the timeline, and today's breakup is really a sequel to the Ilya-Altman dispute during OpenAI's boardroom coup.
Back in November last year, when Ilya was still there, he worked with the OpenAI board in an attempt to fire Altman.
The reason given at the time was that Altman was not sufficiently candid in his communications. In other words: we don't trust him.
The final outcome is well known. Altman threatened to take his "allies" to Microsoft, the board caved, and the removal failed. Ilya left the board, and Altman filled it with members more favorable to him.
After that, Ilya disappeared from social media again, until the official announcement of his resignation a few days ago; reportedly he had not been seen in the OpenAI office for about six months.
At the time, he also left an intriguing tweet, which was quickly deleted.
In the past month, I have learned many lessons. One of them is that the saying "the beatings will continue until morale improves" applies more often than it should.
But according to insiders, Ilya has been co-leading the Super Alignment Team remotely.
On Altman's side, the employees' biggest accusation is that his words and actions don't match: he claims he wants to prioritize safety, yet his behavior contradicts that.
Besides never delivering the computing resources originally promised, there was also, for instance, his recent effort to raise money from Saudi Arabia to build chips.
The safety-focused employees were baffled:
If he really cared about building and deploying AI in the safest possible way, would he be stockpiling chips this frantically to accelerate the technology?
Earlier, OpenAI had also ordered chips from a startup Altman invested in, a deal worth as much as 51 million US dollars (about 360 million RMB).
And the description of Altman in the whistleblower letter former OpenAI employees circulated during the boardroom coup now seems to be confirmed once again.
It is precisely this pattern of "saying one thing and doing another" from start to finish that caused employees to gradually lose confidence in OpenAI and in Altman.
That is true for Ilya, for Jan Leike, and for the Super Alignment team.
Some attentive netizens have compiled the key moments of related events over the past few years. A friendly reminder: the P(doom) mentioned below refers to "the probability that AI triggers a doomsday scenario."
In 2021, the head of the GPT-3 team left OpenAI over "safety" concerns and founded Anthropic; one of them put P(doom) at 10-25%;
In 2021, a head of RLHF safety research resigned, with a P(doom) of 50%;
In 2023, the OpenAI board fired Altman;
In 2024, OpenAI fired two safety researchers;
In 2024, an OpenAI researcher with a particular focus on safety resigned, believing P(doom) had already reached 70%;
In 2024, Ilya and Jan Leike left.
Technical or market-oriented?
With large models where they are today, "how do we reach AGI?" really boils down to two routes.
The technical faction wants the technology to be mature and controllable before it is deployed; the market faction believes an "incremental" approach of deploying while developing is the way to get there.
This is also the fundamental difference in the Ilya-Altman dispute, that is, the mission of OpenAI:
Is it focused on AGI and super alignment, or is it focused on expanding ChatGPT services?
The larger the ChatGPT service grows, the more computing power it needs, and that also eats into the time available for AGI safety research.
If OpenAI is a nonprofit organization dedicated to research, it should be spending more time on super alignment.
And judging from some of OpenAI's public moves, that is clearly not the case: it wants to take the lead in the large-model race and provide more services to businesses and consumers.
In Ilya's view, this is very dangerous. Even though we don't know what will happen as the scale expands, the best approach is to put safety first:
openness and transparency, so that we humans can ensure AGI is built safely, rather than in some secretive way.
Under Altman's leadership, however, OpenAI seems to pursue neither open source nor super alignment; instead, it just wants to charge toward AGI while building a moat.
So in the end, will AI scientist Ilya be proven right, or will Silicon Valley businessman Altman be the one who goes the distance?
There is no answer yet. But at the very least, OpenAI now faces a critical choice.
Industry insiders have summed up two key signals.
One: ChatGPT is OpenAI's main source of revenue; without a better model in reserve, it would not be giving GPT-4 away to everyone for free.
The other: if the departing team members (Jan, Ilya, and the rest) did not expect far more powerful capabilities to arrive soon, they would not care about the alignment problem... if AI stays at this level, it basically doesn't matter.
But OpenAI's fundamental contradiction remains unresolved: on one side, the fire-stealing AI scientists anxious about developing AGI responsibly; on the other, a Silicon Valley market faction eager to keep the technology going through commercialization.
The two sides are irreconcilable, the scientific faction is now entirely out of OpenAI, and the outside world still doesn't know how far GPT has actually progressed.
The onlookers eager to know the answer are getting a bit tired.
A sense of powerlessness sets in, much like what Hinton, Ilya's mentor and one of the three Turing Award giants, said:
I am old, I am worried, but I can't do anything.
Reference links:
[1] https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence
[2] https://x.com/janleike/status/1791498174659715494
[3] https://twitter.com/sama/status/1791543264090472660