A Quest for Safe Superintelligence
Ilya Sutskever, a name synonymous with innovative advancements in artificial intelligence, made headlines in the AI community in May 2024.
Having co-founded OpenAI, a research powerhouse dedicated to the ethical development of artificial general intelligence (AGI), Sutskever abruptly departed the organisation he helped create.
His departure was announced on the same day as that of Jan Leike, his colleague and a co-lead of OpenAI's "superalignment" team.
Whispers of a rift between Sutskever and OpenAI leadership over the prioritisation of safety in AI development began to circulate.
This dramatic exit wasn't the end of Sutskever's journey, however. Just one month later, he announced his next venture – Safe Superintelligence Inc. (SSI).
This new company marked a bold departure from OpenAI, with a laser focus on building a superintelligence that prioritises safety above all else.
Who Is Ilya Sutskever?
Ilya Sutskever is an Israeli-Canadian computer scientist who has made significant contributions to the field of artificial intelligence, particularly in deep learning.
He is most well-known for co-inventing AlexNet, a convolutional neural network that achieved groundbreaking results in the 2012 ImageNet competition and helped propel deep learning into the mainstream.
Sutskever was also a co-founder and former chief scientist at OpenAI, a research company dedicated to developing safe artificial general intelligence.
While at OpenAI, he played a leading role in the development of the GPT series of large language models.
In June 2024, Sutskever co-founded Safe Superintelligence Inc., where he serves as Chief Scientist, aiming to focus solely on creating a safe and beneficial superintelligence.
Why Did He Split with OpenAI and Build SSI?
Sutskever's departure from OpenAI stemmed from a fundamental disagreement about AI research priorities.
Sutskever (right) with Sam Altman, OpenAI CEO (left).
He believed OpenAI was prioritising rapid advancement in capabilities over the safety of increasingly powerful AI systems, a concern underscored by the release of models such as GPT-4o.
Sutskever, along with other safety researchers, felt that robust safety protocols were crucial to develop alongside advancements.
This misalignment with OpenAI's leadership, focused on "shiny products" according to Sutskever, ultimately led him to establish Safe Superintelligence Inc. (SSI) to focus solely on safe AI development.
About Safe Superintelligence Inc.
SSI's mission statement is refreshingly clear and concise – to develop a safe superintelligence.
This singular focus permeates every aspect of the company's structure and operation.
Unlike traditional tech companies with multiple product lines and commercial pressures, SSI operates with a streamlined approach.
Management overhead and product cycles are minimised, ensuring that resources and focus remain firmly on the core objective – building a safe superintelligence.
Additionally, SSI's newly created X account surged past 68,400 followers within just two weeks of its first post.
This rapid growth reflects the high level of anticipation and interest surrounding the project.
SSI's Approach to Building a Safe Future with Superintelligence
Sutskever envisioned SSI as a revolutionary entity, one unlike any AI research lab before it.
Here are the cornerstones of SSI's approach:
▸ Singular Focus
One might say SSI isn't merely a company; it's a mission statement come to life.
The company's entire identity revolves around its core objective – building safe superintelligence. This translates to a streamlined operation, free from the distractions of product cycles or profit margins.
Every decision and resource allocation is meticulously directed towards achieving their paramount goal.
▸ Safety and Capability: A Symbiotic Dance
At the heart of SSI's philosophy lies the notion that safety and capability are not mutually exclusive.
They envision a future where advancements in AI capabilities are accompanied by ironclad safety measures, developed in tandem.
This ensures that superintelligence doesn't become an uncontrollable force but a powerful tool wielded for good.
▸ Cracking the Code with a 'Lean, Cracked' Team
Recognising the immense challenge they face, SSI isn't seeking to build an army of researchers.
Instead, they're meticulously assembling a select group of the world's most brilliant minds – a "lean, cracked" team, as Sutskever himself described them.
This elite group will focus solely on the development of safe superintelligence, fostering a collaborative environment where the best ideas can flourish.
▸ Strategic Location
SSI understands that geographical location plays a crucial role in attracting top talent.
They've strategically established offices in Palo Alto and Tel Aviv, both hubs brimming with cutting-edge research and a deep pool of qualified engineers and researchers.
But why Tel Aviv?
An X user shared a possible reason for SSI choosing the city.
Translation:
Everyone has overlooked a detail about Ilya's new company SSI: their office is not only in Silicon Valley, but also in Tel Aviv, Israel. This is because Ilya and Daniel Gross both spent their childhood in Jerusalem, Israel, and they also value Israel's talent density.
▸ Business Model Built for the Long Haul
SSI recognises that the pursuit of safe superintelligence is a marathon, not a sprint.
Their business model is designed to insulate them from the short-term pressures of commercialisation.
This allows them to focus on long-term research and development, free from the constraints of quarterly profits.
The Guiding Force Behind SSI
Sutskever isn't alone in this ambitious venture. He is joined by two accomplished figures in the AI landscape – Daniel Gross and Daniel Levy.
Daniel Gross: The Strategic Helmsman
A veteran of the AI world, Gross brings a wealth of experience to SSI. Prior to co-founding and serving as CEO of SSI, Gross held the prestigious position of AI lead at Apple.
His journey began in Jerusalem, Israel, where he was born in 1991.
In 2010, Gross made headlines by becoming the youngest founder accepted into the Y Combinator program, launching Greplin (later renamed Cue), a pioneering search engine for consolidating online accounts.
Recognised for his entrepreneurial prowess, Gross was named to Forbes' "30 Under 30" in Technology and Business Insider's "25 Under 25" in Silicon Valley, both in 2011.
His success continued with Cue's acquisition by Apple in 2013.
Following this, Gross joined Y Combinator as a partner, focusing on AI and launching the "YC AI" program in 2017. In 2018, he founded Pioneer, an early-stage startup accelerator and fund.
Gross's deep understanding of AI, coupled with his entrepreneurial track record, positions him as a pivotal figure in shaping SSI's strategic direction.
His insights will be critical as SSI navigates the complexities of AI safety and development.
Investors can be confident in Gross’s ability to secure funding, given his proven success in attracting capital for groundbreaking research initiatives like the AI Grant and Andromeda Cluster.
Daniel Levy: The Technical Virtuoso
Levy's reputation as a leading AI researcher precedes him.
His expertise in training large AI models, honed during his tenure at OpenAI, makes him an invaluable asset to SSI.
As both co-founder and Principal Scientist, Levy brings technical depth that extends beyond his credentials.
His experience working alongside Sutskever at OpenAI should make for seamless collaboration as they pursue this ambitious project.
Levy's role reflects SSI's unwavering commitment to pushing the boundaries of what's possible in AI safety and capability.
The Potential Impact on the AI Sector
SSI's mission has the potential to redefine the AI sector in several ways.
Firstly, by prioritising safety, SSI sets a new standard for responsible AI development.
Their success could encourage other companies to adopt similar safety-first approaches.
Secondly, SSI's breakthroughs in safety protocols could be applicable to a wide range of AI systems, not just superintelligence.
This could lead to significant advancements in the overall safety and trustworthiness of AI technology.
Challenges and Counterarguments
Despite its ambitious goals, SSI faces several challenges.
Critics argue that developing superintelligence itself is fraught with technical difficulty, and integrating robust safety measures further complicates the process.
The concurrent development of both capabilities and safety mechanisms might be overly optimistic and difficult to achieve within projected timelines.
Additionally, some argue that SSI's singular focus on safety might limit its ability to adapt to the ever-changing dynamics of the AI market.
Focusing solely on superintelligence development could restrict SSI's ability to respond to emerging trends or unforeseen obstacles.
Furthermore, there's a potential risk associated with relying on a small, elite team.
If key members leave or fail to deliver, the concentration of knowledge and expertise within the group could become a vulnerability.
Is SSI Publicly Funded?
As of 8 July 2024, Safe Superintelligence Inc. (SSI) has not disclosed any information about its funding or backers.
There has been speculation about potential investors based on the founders' backgrounds, but nothing has been confirmed, and SSI itself remains tight-lipped about its finances.
A Daring Pursuit of Safe Superintelligence
The quest to achieve safe superintelligence is an audacious undertaking, one fraught with technical hurdles and philosophical quandaries.
SSI, with its laser focus and "lean, cracked" team, embodies a daring approach to this challenge.
Their success, if achieved, could usher in a new era of AI development, prioritising safety and setting a high bar for responsible research.
However, the road ahead is strewn with uncertainties.
Can a small team effectively navigate the complexities of superintelligence and safety?
Will their singular focus limit their ability to adapt in this rapidly evolving field?
SSI's journey will be closely watched, with the potential to redefine the future of AI and its impact on humanity.