In mid-May, just as OpenAI was launching its new model GPT-4o, Chief Scientist Ilya Sutskever unexpectedly announced his departure on the social platform X:
"I have left OpenAI. I'm excited about what's next for me, which will be a project of great personal significance. I'll share details at the appropriate time."
While the news of Ilya Sutskever's departure was still reverberating, machine learning researcher Jan Leike also announced his resignation, criticizing OpenAI for no longer prioritizing safety.
The following day, Evan Morikawa, who led engineering work behind ChatGPT, GPT-4, DALL·E, and the API, also announced his departure. Although neither Sutskever nor the others publicly cited conflicts with Altman, the near-simultaneous exit of several senior figures inevitably fueled speculation about serious internal disagreements within OpenAI.
Former OpenAI Board Member: Altman Was Fired for Deception
On Monday (the 28th), former OpenAI board member Helen Toner, appearing on a podcast, revisited the board's reasons for dismissing Sam Altman at the end of last year, alleging that Altman had repeatedly lied to the board:
"For years, Sam Altman has been concealing information, distorting various situations within the company, and sometimes even outright lying to the board, which has made it very difficult for the board to fulfill its duties."
For example, when ChatGPT launched in November 2022, the board received no advance notice; according to Toner, board members first learned of it on Twitter.
Additionally, Helen Toner added that Altman had concealed his ownership of the OpenAI Startup Fund:
"Despite his claims of being an independent board member with no economic interests in the company, he did not inform the board that he owned the OpenAI Startup Fund."
Community Exposes Multiple Charges Against Sam Altman
Against this backdrop, a user posted on Reddit stating that under Sam Altman's leadership, OpenAI has been involved in numerous scandals, including:
OpenAI Faces Backlash Over Equity-Threatening Agreements
Earlier reports revealed that departing OpenAI employees were asked to sign exit agreements containing harsh non-disparagement terms; those who refused to promise never to criticize OpenAI risked forfeiting their vested equity.
This one-sided clause sparked an outcry in the community. Altman hastily issued an apology, claiming he had been unaware that OpenAI's agreements threatened anyone's equity and pledging that such terms would not be enforced.
However, he was soon contradicted when more internal documents surfaced, showing that OpenAI retained near-total discretion to claw back vested equity from former employees or block them from selling it. Those documents bore Sam Altman's signature, dated April 10, 2023.
OpenAI's Controversial Partnership with News Corp Sparks Backlash
Despite widespread opposition, OpenAI announced a partnership with News Corp, calling it a "landmark multi-year global partnership." Under the agreement, OpenAI gains access to content from News Corp's news publications.
It's worth noting that News Corp's media outlets include The Wall Street Journal, The Times, and the New York Post, outlets critics accuse of treating right-wing propaganda as a business model: stoking political discord and pushing narratives by any means necessary, including denying the results of the 2020 presidential election via Fox News and allegedly hacking more than 600 phones for information. As one netizen commented:
"Just look at this list, and you'll know OpenAI chose a very bad partner."
OpenAI Aligns with Microsoft: Advocating for Closed-Source AI Models
In the debate between open-source and closed-source AI, OpenAI has firmly sided with Microsoft and is lobbying governments accordingly; their camp advocates strict safety restrictions and licensing requirements.
In contrast, companies such as Meta and IBM, which rely on open-source AI models rather than the closed-source approach pursued by OpenAI and Google, are pushing for a less regulated, more open ecosystem.
OpenAI Lifts Ban: Military Explores ChatGPT for Non-Lethal Tasks
This year, OpenAI quietly removed its ban on using ChatGPT for "military and warfare" purposes, opening the door for militaries to adopt the technology.
For now, OpenAI's services apparently cannot be used directly for killing, such as controlling drones or launching missiles, but they can support many adjacent tasks, such as writing code or processing procurement orders.
There is evidence that US military personnel have been using ChatGPT to expedite paperwork, and the US National Geospatial-Intelligence Agency has publicly considered using ChatGPT to assist human analysts.
Undeniably, OpenAI remains at the global forefront of chatbot development. But with competitors closing in and scandals piling up, whether OpenAI can stay cohesive and keep innovating is worth watching.