Ilya Sutskever, a name synonymous with innovative advances in artificial intelligence, made headlines in the AI community in May 2024.
Having co-founded OpenAI, a research powerhouse dedicated to the ethical development of artificial general intelligence (AGI), Sutskever abruptly departed the organisation he helped create.
His departure was announced on the same day as that of Jan Leike, a former colleague who co-led OpenAI's "superalignment" team.
Whispers of a rift between Sutskever and OpenAI leadership over the prioritisation of safety in AI development began to circulate.
This dramatic exit wasn't the end of Sutskever's journey, however. Just one month later, he announced his next venture – Safe Superintelligence Inc. (SSI).
I am starting a new company: https://t.co/BG3K3SI3A1
— Ilya Sutskever (@ilyasut) June 19, 2024
This new company marked a bold departure from OpenAI, with a laser focus on building a superintelligence that prioritises safety above all else.
Ilya Sutskever is an Israeli-Canadian computer scientist who has made significant contributions to the field of artificial intelligence, particularly in deep learning.
He is most well-known for co-inventing AlexNet, a convolutional neural network that achieved groundbreaking results in the 2012 ImageNet competition and helped propel deep learning into the mainstream.
Sutskever was also a co-founder and former chief scientist at OpenAI, a research company dedicated to developing safe artificial general intelligence.
While at OpenAI, he played a leading role in the development of the GPT series of large language models.
After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the…
— Ilya Sutskever (@ilyasut) May 14, 2024
In June 2024, Sutskever co-founded Safe Superintelligence Inc., where he serves as Chief Scientist, aiming to focus solely on creating a safe and beneficial superintelligence.
Sutskever's departure from OpenAI stemmed from a fundamental disagreement about AI research priorities.
Sutskever (right) with Sam Altman, OpenAI CEO (left).
He believed OpenAI was prioritising rapid advancement in capabilities over ensuring the safety of increasingly powerful AI systems, a concern underscored by the release of GPT-4o, which coincided with his departure.
Sutskever, along with other safety researchers, felt that robust safety protocols were crucial to develop alongside advancements.
This misalignment with OpenAI's leadership, which Jan Leike accused of prioritising "shiny products" over safety, ultimately led him to establish Safe Superintelligence Inc. (SSI) to focus solely on safe AI development.
SSI's mission statement is refreshingly clear and concise – to develop a safe superintelligence.
This singular focus permeates every aspect of the company's structure and operation.
Unlike traditional tech companies with multiple product lines and commercial pressures, SSI operates with a streamlined approach.
Management overhead and product cycles are minimised, ensuring that resources and focus remain firmly on the core objective – building a safe superintelligence.
Additionally, SSI's newly created X account surged past 68,400 followers within just two weeks of its first post.
This rapid growth reflects the high level of anticipation and interest surrounding the project.
Sutskever envisioned SSI as a revolutionary entity, one unlike any AI research lab before it.
Here are the cornerstones of SSI's approach:
One might say SSI isn't merely a company; it's a mission statement come to life.
The company's entire identity revolves around its core objective – building safe superintelligence. This translates to a streamlined operation, free from the distractions of product cycles or profit margins.
Every decision and resource allocation is meticulously directed towards achieving their paramount goal.
At the heart of SSI's philosophy lies the notion that safety and capability are not mutually exclusive.
They envision a future where advancements in AI capabilities are accompanied by ironclad safety measures, developed in tandem.
This ensures that superintelligence doesn't become an uncontrollable force but a powerful tool wielded for good.
Recognising the immense challenge they face, SSI isn't seeking to build an army of researchers.
Instead, they're meticulously assembling a select group of the world's most brilliant minds – a "lean, cracked" team, as the founders themselves described it.
This elite group will focus solely on the development of safe superintelligence, fostering a collaborative environment where the best ideas can flourish.
SSI understands that geographical location plays a crucial role in attracting top talent.
They've strategically established offices in Palo Alto and Tel Aviv, both hubs brimming with cutting-edge research and a deep pool of qualified engineers and researchers.
But why Tel Aviv?
An X user shared a possible reason for SSI's presence there.
Ilya 的新公司 SSI 大家漏掉一个细节,他们的办公室除了在硅谷以外,还在以色列的特拉维夫市。因为 Ilya 和 Daniel Gross 都在以色列的耶路撒冷度过了儿童时期,同时以色列的人才密度也是他们所看重的。 pic.twitter.com/dXLtdqMlvg
— Glowin (@glow1n) June 21, 2024
Translation:
Everyone has overlooked a detail about Ilya's new company SSI: their office is not only in Silicon Valley, but also in Tel Aviv, Israel. This is because Ilya and Daniel Gross both spent their childhood in Jerusalem, Israel, and they also value Israel's talent density.
SSI recognises that the pursuit of safe superintelligence is a marathon, not a sprint.
Their business model is designed to insulate them from the short-term pressures of commercialisation.
This allows them to focus on long-term research and development, free from the constraints of quarterly profits.
Sutskever isn't alone in this ambitious venture. He is joined by two accomplished figures in the AI landscape – Daniel Gross and Daniel Levy.
A veteran of the AI world, Gross brings a wealth of experience to SSI. Prior to co-founding and serving as CEO of SSI, Gross held the prestigious position of AI lead at Apple.
His journey began in Jerusalem, Israel, where he was born in 1991.
In 2010, Gross made headlines by becoming the youngest founder accepted into the Y Combinator program, launching Greplin (later renamed Cue), a pioneering search engine for consolidating online accounts.
Recognised for his entrepreneurial prowess, Gross was named to Forbes' "30 Under 30" in Technology and Business Insider's "25 Under 25" in Silicon Valley, both in 2011.
His success continued with Cue's acquisition by Apple in 2013.
Following this, Gross joined Y Combinator as a partner, focusing on AI and launching the "YC AI" program in 2017. In 2018, he founded Pioneer, an early-stage startup accelerator and fund.
Gross's deep understanding of AI, coupled with his entrepreneurial track record, positions him as a pivotal figure in shaping SSI's strategic direction.
His insights will be critical as SSI navigates the complexities of AI safety and development.
Investors can be confident in Gross's ability to secure funding, given his track record of attracting capital for initiatives such as AI Grant and the Andromeda Cluster.
It is a great pleasure and honor to cofound this new endeavor with @ilyasut and @daniellevy__: https://t.co/Wd9V5BP3Rn
— Daniel Gross (@danielgross) June 19, 2024
Levy's reputation as a leading AI researcher precedes him.
His expertise in training large AI models, honed during his tenure at OpenAI, makes him an invaluable asset to SSI.
As both co-founder and Principal Scientist, Levy brings technical depth that extends well beyond his credentials.
Having worked alongside Sutskever at OpenAI, he is well placed to collaborate seamlessly on this ambitious project.
Levy's role reflects SSI's unwavering commitment to pushing the boundaries of what's possible in AI safety and capability.
Beyond excited to be starting this company with Ilya and DG! I can't imagine working on anything else at this point in human history. If you feel the same and want to work in a small, cracked, high-trust team that will produce miracles, please reach out. https://t.co/Hm0qutNoP8
— Daniel Levy (@daniellevy__) June 19, 2024
SSI's mission has the potential to redefine the AI sector in several ways.
Firstly, by prioritising safety, SSI sets a new standard for responsible AI development.
Their success could encourage other companies to adopt similar safety-first approaches.
Secondly, SSI's breakthroughs in safety protocols could be applicable to a wide range of AI systems, not just superintelligence.
This could lead to significant advancements in the overall safety and trustworthiness of AI technology.
Despite its ambitious goals, SSI faces several challenges.
Critics argue that developing superintelligence itself is fraught with technical difficulty, and integrating robust safety measures further complicates the process.
The concurrent development of both capabilities and safety mechanisms might be overly optimistic and difficult to achieve within projected timelines.
Additionally, some argue that SSI's singular focus on safety might limit its ability to adapt to the ever-changing dynamics of the AI market.
Focusing solely on superintelligence development could restrict SSI's ability to respond to emerging trends or unforeseen obstacles.
Furthermore, there's a potential risk associated with relying on a small, elite team.
If key members leave or fail to deliver, the concentration of knowledge and expertise within the group could become a vulnerability.
As of 8 July 2024, Safe Superintelligence Inc. (SSI) has not disclosed any information about its funding or backers.
There has been speculation about potential investors based on the company's founders' backgrounds, but nothing confirmed.
SSI itself has chosen to remain tight-lipped about its financial situation.
The quest to achieve safe superintelligence is an audacious undertaking, one fraught with technical hurdles and philosophical quandaries.
SSI, with its laser focus and "lean, cracked" team, embodies a daring approach to this challenge.
Their success, if achieved, could usher in a new era of AI development, prioritising safety and setting a high bar for responsible research.
However, the road ahead is strewn with uncertainties.
Can a small team effectively navigate the complexities of superintelligence and safety?
Will their singular focus limit their ability to adapt in this rapidly evolving field?
SSI's journey will be closely watched, with the potential to redefine the future of AI and its impact on humanity.
Superintelligence is within reach.
— SSI Inc. (@ssi) June 19, 2024
Building safe superintelligence (SSI) is the most important technical problem of our time.
We've started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence…