Sam Altman Tells Fans to Chill & Lower Expectations of OpenAI
Amid rising speculation and considerable excitement among fans, particularly on X (formerly Twitter), that OpenAI might be ready to deploy Artificial General Intelligence (AGI) next month, CEO Sam Altman took to social media to temper expectations and clarify the company's current capabilities.
Responding to the online buzz about OpenAI’s proximity to superintelligence, Altman pushed back against these exaggerated claims.
OpenAI staff have been actively promoting their advancements in AI reasoning models, alongside a series of updates released last month.
The rumours gained further traction when AI writer Gwern Branwen suggested that the company was on the brink of a major breakthrough with its new reasoning models.
Some fans acknowledged their own excitement about the company's progress but appreciated Altman's grounded approach and supported his call to manage expectations.
OpenAI’s reasoning expert, Noam Brown, also weighed in, warning of the “vague AI hype” circulating on social media.
While he acknowledged grounds for optimism about AI advancements, Brown emphasised that significant research challenges remain and that OpenAI has not yet achieved superintelligence.
This statement contradicts earlier comments from within the company, such as researcher Stephen McAleer's January remarks hinting that OpenAI had identified a path to Artificial Superintelligence (ASI).
Prior to joining OpenAI, Brown built game-playing AI systems such as Libratus, which outperformed top human players at poker, and, at Facebook AI Research (FAIR), CICERO, which reached human-level performance in Diplomacy, underscoring his expertise in advancing AI capabilities.
Are Altman’s Words at Odds?
Altman’s recent statements seem to contradict his earlier remarks on the value of questioning in the age of AI.
In a December appearance on Wharton organisational psychologist Adam Grant's Re:Thinking podcast, Altman emphasised that memorising facts, or knowing where to find them, is less valuable than learning how to ask insightful questions:
“There will be a kind of ability we still really value, but it will not be raw, intellectual horsepower to the same degree. Figuring out what questions to ask will be more important than figuring out the answer.”
This could be interpreted in two ways: either mastering the art of asking others great questions or refining prompt engineering—crafting questions for AI to yield the desired answers.
Communication expert Matt Abrahams explained that asking clear, thoughtful questions can demonstrate empathy and credibility.
Prompt engineers, who specialise in this skill, are in high demand, with some roles offering salaries exceeding $100,000 annually, according to IBM’s Lydia Logan.
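For readers curious what the second interpretation looks like in practice, here is a minimal, illustrative sketch of prompt engineering using the OpenAI Python SDK; the model name, prompts, and figures are assumptions made for the example, not details from the article:

# A minimal prompt-engineering sketch using the OpenAI Python SDK.
# The model name, prompt text, and figures below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# A bare question leaves the model guessing; an engineered prompt supplies
# a role, relevant context, and the shape of the desired answer.
vague_prompt = "Tell me about our sales."  # shown only for contrast
engineered_prompt = (
    "You are a financial analyst. Using the quarterly figures below, "
    "identify the biggest drivers of the revenue change and suggest "
    "one follow-up question to ask the sales team.\n\n"
    "Q1 revenue: $1.2M\nQ2 revenue: $0.9M"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": engineered_prompt}],
)
print(response.choices[0].message.content)

The difference between the two prompts, not the model itself, is the skill Altman and Abrahams are describing: framing the question so the answer is actually useful.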
While Altman believes AI may automate many administrative tasks, he predicts it will not replace human intellect.
Instead, he envisions people playing a key role in teaching AI critical thinking skills to strengthen reasoning and foster innovation.
He noted:
“I have certainly gotten the greatest professional joy from having to creatively reason through a problem and figure out an answer that no one has figured out before.”
Altman added:
“What I expect to happen in reality is, there is going to be a new way we work on the hard problems.”