Ilya Sutskever, a name synonymous with innovative advancements in artificial intelligence, made headlines in the AI community in May 2024.
Having co-founded OpenAI, a research powerhouse dedicated to the ethical development of artificial general intelligence (AGI), Sutskever abruptly departed the organisation he helped create.
His departure was announced on the same day as that of his colleague Jan Leike, one of the leaders of the "superalignment" team.
Whispers of a rift between Sutskever and OpenAI leadership over the prioritisation of safety in AI development began to circulate.
This dramatic exit wasn't the end of Sutskever's journey, however. Just one month later, he announced his next venture – Safe Superintelligence Inc. (SSI).
I am starting a new company: https://t.co/BG3K3SI3A1
— Ilya Sutskever (@ilyasut) June 19, 2024
This new company marked a bold departure from OpenAI, with a laser focus on building a superintelligence that prioritises safety above all else.
Ilya Sutskever is an Israeli-Canadian computer scientist who has made significant contributions to the field of artificial intelligence, particularly in deep learning.
He is most well-known for co-inventing AlexNet, a convolutional neural network that achieved groundbreaking results in the 2012 ImageNet competition and helped propel deep learning into the mainstream.
Sutskever was also a co-founder and former chief scientist at OpenAI, a research company dedicated to developing safe artificial general intelligence.
While at OpenAI, he played a leading role in the development of the GPT series of large language models.
After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the…
— Ilya Sutskever (@ilyasut) May 14, 2024
In June 2024, Sutskever co-founded Safe Superintelligence Inc., where he serves as Chief Scientist, aiming to focus solely on creating a safe and beneficial superintelligence.
Sutskever's departure from OpenAI stemmed from a fundamental disagreement about AI research priorities.
Sutskever (right) with Sam Altman, OpenAI CEO (left).
He believed OpenAI was prioritising rapid advancement in capabilities over ensuring the safety of increasingly powerful AI systems, a concern heightened by the release of the GPT-4 language model.
Sutskever, along with other safety researchers, felt that robust safety protocols were crucial to develop alongside advancements.
This misalignment with OpenAI's leadership, focused on "shiny products" according to Sutskever, ultimately led him to establish Safe Superintelligence Inc. (SSI) to focus solely on safe AI development.
SSI's mission statement is refreshingly clear and concise – to develop a safe superintelligence.
This singular focus permeates every aspect of the company's structure and operation.
Unlike traditional tech companies with multiple product lines and commercial pressures, SSI operates with a streamlined approach.
Management overhead and product cycles are minimised, ensuring that resources and focus remain firmly on the core objective – building a safe superintelligence.
Additionally, despite being newly created, SSI's X account surged past 68,400 followers within just two weeks of its first post.
This rapid growth reflects the high level of anticipation and interest surrounding the project.
Sutskever envisioned SSI as a revolutionary entity, one unlike any AI research lab before it.
Here are the cornerstones of SSI's approach:
One might say SSI isn't merely a company; it's a mission statement come to life.
The company's entire identity revolves around its core objective – building safe superintelligence. This translates to a streamlined operation, free from the distractions of product cycles or profit margins.
Every decision and resource allocation is meticulously directed towards achieving their paramount goal.
At the heart of SSI's philosophy lies the notion that safety and capability are not mutually exclusive.
They envision a future where advancements in AI capabilities are accompanied by ironclad safety measures, developed in tandem.
This ensures that superintelligence doesn't become an uncontrollable force but a powerful tool wielded for good.
Recognising the immense challenge they face, SSI isn't seeking to build an army of researchers.
Instead, they're meticulously assembling a select group of the world's most brilliant minds – a "lean, cracked" team, as Sutskever himself described them.
This elite group will focus solely on the development of safe superintelligence, fostering a collaborative environment where the best ideas can flourish.
SSI understands that geographical location plays a crucial role in attracting top talent.
They've strategically established offices in Palo Alto and Tel Aviv, both hubs brimming with cutting-edge research and a deep pool of qualified engineers and researchers.
But why Tel Aviv?
An X user shared a possible reason for SSI being located there.
Ilya 的新公司 SSI 大家漏掉一个细节,他们的办公室除了在硅谷以外,还在以色列的特拉维夫市。因为 Ilya 和 Daniel Gross 都在以色列的耶路撒冷度过了儿童时期,同时以色列的人才密度也是他们所看重的。 pic.twitter.com/dXLtdqMlvg
— Glowin (@glow1n) June 21, 2024
Translation:
Everyone has overlooked a detail about Ilya's new company SSI: their office is not only in Silicon Valley, but also in Tel Aviv, Israel. This is because Ilya and Daniel Gross both spent their childhood in Jerusalem, Israel, and they also value Israel's talent density.
SSI recognises that the pursuit of safe superintelligence is a marathon, not a sprint.
Their business model is designed to insulate them from the short-term pressures of commercialisation.
This allows them to focus on long-term research and development, free from the constraints of quarterly profits.
Sutskever isn't alone in this ambitious venture. He is joined by two accomplished figures in the AI landscape – Daniel Gross and Daniel Levy.
A veteran of the AI world, Gross brings a wealth of experience to SSI. Prior to co-founding and serving as CEO of SSI, Gross held the prestigious position of AI lead at Apple.
His journey began in Jerusalem, Israel, where he was born in 1991.
In 2010, Gross made headlines by becoming the youngest founder accepted into the Y Combinator program, launching Greplin (later renamed Cue), a pioneering search engine for consolidating online accounts.
Recognised for his entrepreneurial prowess, Gross was named one of Forbes' "30 Under 30" in Technology in 2011 and Business Insider's "25 under 25" in Silicon Valley in 2011.
His success continued with Cue's acquisition by Apple in 2013.
Following this, Gross joined Y Combinator as a partner, focusing on AI and launching the "YC AI" program in 2017. In 2018, he founded Pioneer, an early-stage startup accelerator and fund.
Gross's deep understanding of AI, coupled with his entrepreneurial track record, positions him as a pivotal figure in shaping SSI's strategic direction.
His insights will be critical as SSI navigates the complexities of AI safety and development.
Investors can be confident in Gross’s ability to secure funding, given his proven success in attracting capital for groundbreaking research initiatives like the AI Grant and Andromeda Cluster.
It is a great pleasure and honor to cofound this new endeavor with @ilyasut and @daniellevy__: https://t.co/Wd9V5BP3Rn
— Daniel Gross (@danielgross) June 19, 2024
Levy's reputation as a leading AI researcher precedes him.
His expertise in training large AI models, honed during his tenure at OpenAI, makes him an invaluable asset to SSI.
As both co-founder and Principal Scientist, Levy brings more than impressive credentials.
His experience working alongside Sutskever at OpenAI should make for seamless collaboration as they pursue this ambitious project.
Levy's role reflects SSI's unwavering commitment to pushing the boundaries of what's possible in AI safety and capability.
Beyond excited to be starting this company with Ilya and DG! I can't imagine working on anything else at this point in human history. If you feel the same and want to work in a small, cracked, high-trust team that will produce miracles, please reach out. https://t.co/Hm0qutNoP8
— Daniel Levy (@daniellevy__) June 19, 2024
SSI's mission has the potential to redefine the AI sector in several ways.
Firstly, by prioritising safety, SSI sets a new standard for responsible AI development.
Their success could encourage other companies to adopt similar safety-first approaches.
Secondly, SSI's breakthroughs in safety protocols could be applicable to a wide range of AI systems, not just superintelligence.
This could lead to significant advancements in the overall safety and trustworthiness of AI technology.
Despite its ambitious goals, SSI faces several challenges.
Critics argue that developing superintelligence itself is fraught with technical difficulty, and integrating robust safety measures further complicates the process.
The concurrent development of both capabilities and safety mechanisms might be overly optimistic and difficult to achieve within projected timelines.
Additionally, some argue that SSI's singular focus on safety might limit its ability to adapt to the ever-changing dynamics of the AI market.
Focusing solely on superintelligence development could restrict SSI's ability to respond to emerging trends or unforeseen obstacles.
Furthermore, there's a potential risk associated with relying on a small, elite team.
If key members leave or fail to deliver, the concentration of knowledge and expertise within the group could become a vulnerability.
As of today, 8 July 2024, Safe Superintelligence Inc. (SSI) hasn't disclosed any information about their funding or who their backers are.
There has been speculation about potential investors based on the company's founders' backgrounds, but nothing confirmed.
However, SSI itself has chosen to remain tight-lipped about their financial situation.
The quest to achieve safe superintelligence is an audacious undertaking, one fraught with technical hurdles and philosophical quandaries.
SSI, with its laser focus and "lean, cracked" team, embodies a daring approach to this challenge.
Their success, if achieved, could usher in a new era of AI development, prioritising safety and setting a high bar for responsible research.
However, the road ahead is strewn with uncertainties.
Can a small team effectively navigate the complexities of superintelligence and safety?
Will their singular focus limit their ability to adapt in this rapidly evolving field?
SSI's journey will be closely watched, with the potential to redefine the future of AI and its impact on humanity.
Superintelligence is within reach.
— SSI Inc. (@ssi) June 19, 2024
Building safe superintelligence (SSI) is the most important technical problem of our time.
We've started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence…