OpenAI just hinted at the release of a new open-source language model this summer, one that would allow developers to run the model on their own hardware.
The move would mark the first time the company has released an open model since the launch of GPT-2 in 2019, seemingly reversing its shift to closed models in recent years.
But experts speculate that the new model will not be 100% open: as with other companies offering "open" AI models, including Meta and Mistral, OpenAI is not expected to release the data used to train the model. Still, the usage license would allow researchers, developers, and other users to access the underlying code and weights of the new model to use, modify, or improve it.
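For readers unfamiliar with what access to weights means in practice, here is a minimal sketch of how developers typically load an open-weights model on their own hardware using the Hugging Face Transformers library. The model identifier below is purely hypothetical; OpenAI has not announced a name or distribution channel for the new model.

```python
# Minimal sketch: loading a hypothetical open-weights model locally with
# Hugging Face Transformers. The model ID is a placeholder, not a real repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "openai/hypothetical-open-model"  # placeholder identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Explain why open-weight models matter for on-premise deployment."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights live on the user's own machine, this kind of workflow is also what makes fine-tuning or otherwise modifying the model possible, which is the access the license is expected to grant.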
CEO Sam Altman announced in an X post on March 31 that the new model would feature reasoning capabilities and is set to be released in the coming months. He added that while this is something he had been thinking about for a long time, he does not want to release the model hastily.
Instead, the company wants to first gather feedback on how to make it maximally useful. To that end, it is hosting developer events where it will collect feedback and experiment with different prototypes before releasing the new model.
The developer events will start in the U.S., followed by sessions in Europe and the Asia-Pacific region.
Why the sudden change of heart?
Many have suggested that OpenAI's sudden embrace of open-source language models stems from pressure from China, especially following the recent explosion of open-source AI models like DeepSeek R1, which flipped the AI script in favour of open source in January.
For the first time in recent years, open-source models offered performance comparable to the most advanced proprietary AI. DeepSeek's success has revitalized the open-source AI scene, especially in China, giving it new momentum after a period in which closed research dominated.
But there appears to be a more nuanced and deeper reason behind Sam Altman's change of heart on open source. As AI technology makes its way into business, more and more customers are demanding the greater flexibility and transparency that open-source models offer for many uses.
And as the gap between OpenAI and its competitors shrinks, it is becoming increasingly difficult for OpenAI to justify its fully closed approach, something Altman acknowledged in January when he admitted that DeepSeek has lessened OpenAI's lead in AI and that OpenAI has been "on the wrong side of history" when it comes to open-sourcing its technologies.
OpenAI adapting to a very different AI era with unique needs
Apart from pressure from its Chinese competitors, the move also reflects how the AI landscape is evolving. Unlike in the past, users today are shifting their focus away from the model itself and toward the application or system an organisation builds around the model to meet its specific needs.
While a large share of users may still want a state-of-the-art LLM, broadening its horizons to offer an open-source model would give OpenAI a presence in scenarios where customers don't want to use ChatGPT or the company's developer API.
Rowan Curran, a senior analyst at Forrester Research, explained that OpenAI's return to open source speaks to AI's increasingly diverse ecosystem, spanning OpenAI, Google, Anthropic, Amazon, and Meta.
He added that enterprise companies are excited about open-source AI models not because of how accurate they are, but because of how flexible they are: they can run on different cloud platforms, or even in a company's own data center, on a workstation, or on a laptop, instead of being tied to one provider.
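To make the "not tied to one provider" point concrete, here is a hedged sketch of a pattern enterprises commonly use today: serving an open-weights model on in-house hardware with vLLM's OpenAI-compatible server, then querying it with the standard OpenAI Python client pointed at the local endpoint. The model name is again a placeholder, not a real release.

```python
# Sketch of the provider-agnostic pattern Curran describes: an open-weights
# model served on in-house hardware behind an OpenAI-compatible API.
# First, on the local machine (model name is a placeholder):
#   vllm serve openai/hypothetical-open-model --port 8000
from openai import OpenAI

# Point the standard client at the self-hosted endpoint instead of a cloud provider.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="openai/hypothetical-open-model",  # placeholder identifier
    messages=[{"role": "user", "content": "Summarize our internal deployment options."}],
)
print(response.choices[0].message.content)
```

The same client code works whether the endpoint lives on a laptop, in a private data center, or on any cloud, which is exactly the flexibility enterprises are after.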
A delicate balancing act
By shifting back to open source, OpenAI also leaves itself vulnerable to Chinese competitors copying and improving on its technology.
OpenAI has previously cited this concern as the main reason it has kept its language models closed. Back in January, OpenAI released a statement noting, "It is critically important that we are working closely with the U.S. government to best protect the most capable models from efforts by adversaries and competitors to take U.S. technology."
It was also later reported that while DeepSeek did not release the data it used to train its R1 model, there are indications that it may have used outputs from OpenAI's o1 to kick-start the training of the model's reasoning abilities.
Now, the United States is once again at an important crossroads where it must strike a delicate balance between closed-source and open-source approaches: while open-source models put powerful tools into the hands of developers all around the world, advancing democratic AI principles and driving economic growth, closed models incorporate important safeguards that protect America's strategic advantage and prevent misuse.