March 1, 2024 - A legal storm is brewing in Silicon Valley: Elon Musk is taking OpenAI CEO and co-founder Sam Altman to court. In a detailed lawsuit, Musk accuses Altman and other defendants of violating OpenAI's founding agreement, which committed the company to developing non-profit AI technology that benefits all humanity.
According to the filing, Musk and his co-plaintiffs accuse OpenAI's management of straying from its nonprofit mission in 2023 by entering an exclusive partnership with Microsoft and keeping its advanced AI technology, GPT-4, secret to serve Microsoft's business interests. The filing describes this shift as a blatant betrayal of OpenAI's non-profit character, one that defeats the purpose of its founding.
Musk claims in the lawsuit that when he co-founded OpenAI with Altman and Greg Brockman, the venture was grounded in shared concerns about the potential risks of AI and a promise that the technology's development would benefit all of humanity. However, as OpenAI solidified its leadership in the field, especially after developing GPT-4, a model considered by some to be an early form of AGI, the company's direction underwent a fundamental shift.
The lawsuit details the formation of OpenAI, including Musk's early financial investment in the company, his suggestions for research directions, and his efforts to recruit top scientists and engineers. These efforts are presented as key factors in OpenAI's rapid rise. However, Musk now believes his contributions were used for purposes contrary to his original intentions.
The following are the main contents of the lawsuit:
Musk's Concerns about AGI
In 2012, Elon Musk met Demis Hassabis, co-founder of the for-profit AI company DeepMind. Around this time, the two met at SpaceX's facility in Hawthorne, California, and discussed the biggest threats facing society. In that conversation, Hassabis highlighted the potential dangers that advances in AI could pose to society.
After this conversation, Musk became increasingly concerned that AI could become superintelligent, surpass human intelligence, and threaten humanity. In fact, Musk was not alone in fearing DeepMind's AI research and the dangers of AI. After meeting with Hassabis and DeepMind's investors, one investor reportedly commented that the best thing he could do for humanity was to shoot Hassabis on the spot.
Musk began discussing AI and DeepMind with people in his circle, such as Larry Page, then CEO of Google. Musk often raised the dangers of AI in his conversations with Page, but to Musk's shock, Page was not worried. In 2013, for example, Musk had a heated exchange with Page about the dangers of AI. He warned that unless safety measures were put in place, "AI systems could replace humans, rendering our species irrelevant or even extinct." Page responded that this was just the "next stage of evolution" and accused Musk of being a "speciesist," that is, of preferring the human species to intelligent machines. Musk responded, "Yes, I'm pro-human."
By late 2013, Musk was deeply concerned about Google's planned acquisition of DeepMind, then one of the most advanced AI companies in the industry. He worried that DeepMind's AI technology would end up in the hands of someone so cavalier about its power, someone who might hide its design and capabilities behind closed doors.
To prevent this powerful technology from falling into the hands of Google, Musk and PayPal co-founder Luke Nosek tried to raise funds to buy DeepMind. The effort culminated in an hour-long phone call in which Musk and Nosek made a last-ditch effort to convince Hassabis not to sell DeepMind to Google. Musk told Hassabis: "The future of AI should not be controlled by Larry [Page]."
Musk and Nosek's efforts failed, and in January 2014 it was reported that Google would acquire DeepMind. The acquisition, however, did not stop Musk from continuing to push for the safe development and practice of AI.
After Google acquired DeepMind, Musk began "hosting his own series of dinner discussions on ways to fight Google and promote AI safety." Musk also reached out to then-President Barack Obama to discuss AI and AI safety. In 2015, Musk met with Obama to explain the dangers of AI and advocate for regulation. Musk believes Obama understood those dangers, but regulation never materialized.
Despite these setbacks, Musk continued to advocate for safe AI practices. In 2015, he seemed to have found someone who shared his concerns about AI and his desire to keep the first AGI out of the hands of private companies like Google: defendant Sam Altman.
At the time, Altman was president of Y Combinator, a Silicon Valley startup accelerator. Prior to that, Altman was involved in various entrepreneurial ventures.
Altman appeared to share Musk's concerns about AI. In a public blog post from 2014, Altman claimed that if AGI were built, "it would be the biggest development ever in technology." He noted that many companies were moving toward AGI, but acknowledged that the good companies kept that work very confidential.
On February 25, 2015, Altman also expressed his concerns about the development of "superhuman machine intelligence," which he believed "is probably the greatest threat to the continued existence of humanity," stressing that "as a human programmed to survive and reproduce, I feel we should fight it." He further criticized those who consider "superhuman machine intelligence" dangerous but dismiss it as something that "will never happen or is certainly very far off," accusing them of "dangerously lax thinking."
Indeed, in early 2015, Altman endorsed government regulation as a means of ensuring the safe creation of AI, and suggested that a group of "very smart people with a lot of resources," likely involving "US companies in some way," would be the most likely to achieve "superhuman machine intelligence" first.
Later that month, Altman contacted Musk to ask whether he would be interested in drafting an open letter to the U.S. government about AI. The two began preparing the letter and reaching out to influential people in technology and AI to sign it. Soon, rumors of the letter spread throughout the industry.
For example, in April 2015, Hassabis contacted Musk to say he had heard from multiple sources that Musk was drafting a letter to the president calling for AI regulation. Musk defended the idea to Hassabis: "If done right, this could accelerate the development of AI in the long term. Without the public peace of mind provided by regulatory oversight, there is a high likelihood that AI causes great harm and AI research is then banned as a danger to public safety."
Five days after Hassabis contacted Musk about the open letter, Hassabis announced the first meeting of the Google DeepMind AI Ethics Board, which Google and DeepMind had promised to establish when Google acquired DeepMind. Musk was invited to become a member and proposed that the first meeting be held at SpaceX in Hawthorne, California. After that first meeting, it was clear to Musk that the board was not a serious effort but a guise intended to slow down AI regulation.
The open letter was later released on October 28, 2015, and signed by more than eleven thousand people, including Musk, Stephen Hawking and Steve Wozniak.
OpenAI's Founding Agreement
On May 25, 2015, Sam Altman sent an email to Elon Musk, reflecting on whether it was possible to stop humanity from developing AI. Altman thought the answer was almost certainly no: if AI development was inevitable, it would be best for someone other than Google to do it first. Altman had an idea: Y Combinator could launch a "Manhattan Project" for AI (a name he thought might be apt). He proposed that the technology could belong to the world through some kind of non-profit organization, and that if the project succeeded, those involved could receive startup-like compensation. Obviously, they would comply with and actively support all regulation. Musk responded that it was "well worth talking about."
After further exchanges, on June 24, 2015, Altman sent Musk a detailed proposal for the new "AI lab." "The mission would be to create the first general AI and use it for individual empowerment, i.e., the distributed version of the future that seems the safest. More generally, safety should be a first-class requirement." "The technology would be owned by the foundation and used 'for the good of the world.'" He proposed starting with a group of 7-10 people and expanding from there, and also proposed a governance structure. Musk responded, "Agreed on all counts."
Shortly thereafter, Altman began recruiting others for the project. Notably, he turned to Greg Brockman for help, connecting him with Musk by email in November 2015. Regarding the project, Brockman told Musk, "I hope for us to enter the field as a neutral group, looking to collaborate widely and shift the dialogue towards being about humanity winning rather than any particular group or company. (I think that's the best way to bootstrap ourselves into being a leading research institution.)" Optimistic about the possibility of a neutral AI research group focused on humanity rather than the interests of any particular person or company, Musk told Brockman he would commit the funding.
Musk proposed a name for the new lab, reflecting the founding agreement: "Open AI Institute," or "OpenAI" for short.
Guided by the principles of the founding agreement, Musk joined forces with Altman and Brockman to formally launch the project and move it forward. Musk was actively involved even before the public announcement, for example advising Brockman on employee compensation packages and sharing his strategies for compensation and retention.
On December 8, 2015, OpenAI, Inc.'s certificate of incorporation was filed with the Delaware Secretary of State. The certificate records the founding agreement in writing: "The Company is a nonprofit corporation organized exclusively for charitable and/or educational purposes within the meaning of section 501(c)(3) of the Internal Revenue Code of 1986, as amended, or the corresponding provision of any future United States Internal Revenue law. The Company's specific purpose is to provide funding for research, development and distribution of technology related to artificial intelligence. The resulting technology will benefit the public, and the Company will seek to open source technology for the public benefit when applicable."
OpenAI, Inc. was publicly announced on December 11, 2015. In the announcement, Musk and Altman were named co-chairs and Brockman chief technology officer. The announcement emphasized that OpenAI aimed to "benefit humanity" and that its research would be "free from financial obligations": "OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact."
Musk’s early contributions to OpenAI
Elon Musk played a key role in the successful launch of OpenAI, Inc. On the day of the public announcement, Musk wrote via email that "our most important consideration is recruiting the best talent," and promised that helping with recruiting would be his "absolute priority around the clock." "We are outnumbered in people and resources by a ridiculous amount by the organizations you know, but we are on the side of justice, and that means a lot," he admitted. "I like the odds." Musk leveraged his connections, status, and influence to assist recruiting. The fact that OpenAI, Inc. was a project backed by Elon Musk, with Musk serving as co-chair, played a key role in its hiring, especially in the face of Google/DeepMind's counter-recruiting. Without Musk's involvement, substantial support and resources, and the emphasis he placed on the founding agreement, OpenAI, Inc. likely would never have gotten off the ground.
One of the most important early hires was the chief scientist. Altman, Brockman, and Musk all wanted Google research scientist Ilya Sutskever for the role. Sutskever hesitated to leave Google, but a phone call from Musk on the day of OpenAI, Inc.'s public announcement ultimately convinced him to commit and become OpenAI, Inc.'s chief scientist.
In the following months, Musk actively recruited for OpenAI, Inc. Google/DeepMind made increasingly lucrative counteroffers to OpenAI's recruits in an attempt to stifle the fledgling venture. In late February 2016, Musk emailed Brockman and Altman, reiterating: "We need to do what's necessary to get top talent. Let's raise our offers. If at some point we need to revisit what existing people are paid, that's fine too. Either we get the best people in the world or we will get beaten by DeepMind. Whatever it takes to bring on top talent is fine with me. DeepMind is causing me extreme mental stress. If they win, it would be really bad news given their world-domination philosophy. They are clearly making major progress given the level of talent there."
Musk wasn't just using his connections and influence to recruit on OpenAI, Inc.'s behalf; when he told Brockman and Altman to raise their offers and "do what's necessary to get top talent," he was funding those higher offers. In 2016 alone, Musk contributed more than $15 million to OpenAI, Inc., more than any other donor, and his funding enabled it to assemble a team of top talent. Likewise, in 2017, Musk contributed nearly $20 million, again more than any other donor. In total, Musk contributed more than $44 million to OpenAI, Inc. between 2016 and September 2020.
In addition, through Musk Industries LLC, Musk leased and paid the monthly rent for OpenAI, Inc.'s initial office space in San Francisco's Pioneer Building. He visited OpenAI, Inc. regularly and attended important company milestones, such as the donation of the first DGX-1 AI supercomputer to OpenAI, Inc. in 2016. Musk received updates on OpenAI, Inc.'s progress and offered his feedback and suggestions.
Altman and Brockman repeatedly reaffirmed the founding agreement
In 2017, Greg Brockman and others proposed converting OpenAI, Inc. from a non-profit to a for-profit company. After weeks of discussion, Elon Musk told Brockman, Ilya Sutskever, and Sam Altman: "Either go do something on your own or continue with OpenAI as a nonprofit. I will no longer fund OpenAI until you have made a firm commitment to stay, or I'm just being a fool who is essentially providing free funding to a startup. Discussions are over."
In response, Altman told Musk he was "still enthusiastic about the nonprofit structure!" Eventually, Brockman and Sutskever made clear to Musk that they, too, were determined to keep the nonprofit structure and would spend the next year raising funds for it.
On February 21, 2018, Musk resigned as co-chair of OpenAI, Inc. Even so, he continued funding OpenAI, Inc. under the founding agreement: in 2018, he donated approximately $3.5 million. He also continued to receive updates on OpenAI, Inc. from Brockman, Sutskever, and Altman.
In April 2018, Altman sent Musk a draft of an "OpenAI Charter" and asked for his feedback. The draft stated that OpenAI's mission was to ensure that AGI "benefits all of humanity," continuing: "We commit to use any influence we obtain over AGI's deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power. Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest that could compromise broad benefit."
On March 11, 2019, OpenAI, Inc. announced the creation of a for-profit subsidiary: OpenAI, L.P. Potential investors were given an "important warning" at the top of the summary term sheet: the for-profit entity "exists to advance OpenAI Inc.'s (the Non-Profit's) mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The General Partner's duty to this mission and the principles advanced in the OpenAI Inc. Charter take precedence over any obligation to generate a profit." Accordingly, investors were expressly advised to treat any investment in OpenAI, L.P. "in the spirit of a donation."
After the announcement, Musk contacted Altman and asked him to "make it clear that I have no financial interest in the for-profit arm of OpenAI." Nevertheless, Musk continued to support OpenAI, Inc., the non-profit, donating another $3.48 million in 2019.
On September 22, 2020, OpenAI announced that it had exclusively licensed certain of its pre-AGI technology to Microsoft. Consistent with the founding agreement, OpenAI's website stated that AGI, "a highly autonomous system that outperforms humans at most economically valuable work," is "excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology." However, it is OpenAI's board that "determines when we've attained AGI."
OpenAI's Development: From AI to AGI
In its initial work, OpenAI borrowed heavily from DeepMind's playbook, but rather than chess it competed at Dota 2, a strategy video game with far more moving parts than chess. OpenAI's team quickly built a new model that beat the world-champion team, demonstrating that "self-play reinforcement learning can achieve superhuman performance on a difficult task."
Meanwhile, at Google, an architecture called the Transformer was solving many of the problems deep learning had with understanding long sequences of text. This model, an example of a "large language model," translated text by forming connections between words in the source language and mapping those connections onto the target language.
OpenAI researchers built on this work and soon produced another striking result: the first half of Google's Transformer architecture could be pre-trained as a deep neural network on a large corpus of text, then used to generate new text. In January 2018, OpenAI released the source code and trained model of this Generative Pre-trained Transformer (GPT), along with a paper describing the model and its capabilities in detail.
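The core idea, pre-training on a body of text and then sampling new text from the learned statistics, can be illustrated with a deliberately tiny sketch. This is a toy bigram model for intuition only, not the Transformer architecture GPT actually uses:

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """'Pre-train' by recording, for each word, the words observed to follow it."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=6, seed=0):
    """Generate new text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: this word was never seen with a successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the model reads text and the model writes new text"
model = train_bigram(corpus)
print(generate(model, "the"))
```

A real GPT replaces the word-count table with a neural network trained on billions of words, but the generation loop is the same: predict a likely next token, append it, repeat.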
In 2019, OpenAI released the second-generation model, GPT-2, again accompanied by a detailed paper, which noted that unlike previous models, this one did not need to be trained for a specific task: a large language model trained on a sufficiently large and diverse dataset is able to perform well across many domains and datasets. These models proved very different from earlier AI systems. Rather than training a system to perform a specific task, one could simply "ask" it to perform new tasks in natural language.
As the founding agreement anticipated, OpenAI publicly released the full version of GPT-2. Notably, OpenAI decided to publish even though "humans find GPT-2 outputs convincing," "GPT-2 can be fine-tuned for misuse," and "detection [of generated text] is challenging." At the time, OpenAI said it hoped the release would "be useful to developers of future powerful models." It was accompanied by a detailed paper, co-authored by OpenAI scientists and independent social and technical scientists, laying out the many benefits of releasing models publicly rather than keeping them closed.
The publication proved genuinely useful to developers of future powerful models. Entire communities emerged to enhance and extend the models OpenAI released, spanning open-source grassroots efforts and commercial entities.
In 2020, OpenAI announced the third version of its model, GPT-3, which used "175 billion parameters, 10x more than any previous non-sparse language model." Once again, OpenAI opened up the model's development by publishing a research paper describing its full implementation for others to build upon.
In 2022, researchers at Google took these results further and showed that a small modification known as chain-of-thought prompting could enable "large language models to perform complex reasoning." Researchers at the University of Tokyo and Google quickly extended that result, showing that GPT-3 could reason through an entirely new problem step by step, much like a human, simply by adding "Let's think step by step" before each answer.
The path to AGI was becoming visible, and the timeline for getting there was compressing dramatically.
On March 14, 2023, OpenAI released its next-generation model, GPT-4. This generation could not only reason; it reasoned better than the average human. GPT-4 scored in the 90th percentile on the Uniform Bar Exam, in the 99th percentile on the GRE Verbal Assessment, and even in the 77th percentile on the Advanced Sommelier examination. By OpenAI's own objective measures, GPT-4 could demonstrate better-than-human intelligence across a wide range of economically valuable tasks.
This development did not go unnoticed by the research community. In a detailed analysis titled "Sparks of Artificial General Intelligence: Early Experiments with GPT-4," Microsoft researchers observed that GPT-4 "can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT [based on GPT-3.5]."
They compared GPT-4's performance to that of a GPT-3-based system and found "no comparison with the outputs of GPT-4." On mathematical problems, they showed that "GPT-4 gives a correct solution and a sound argument, while ChatGPT (based on GPT-3) produces a wrong solution which (in a human) would reflect a lack of understanding of the concept of function inversion." In another example, "GPT-4 gives the correct solution, while ChatGPT starts by rearranging the terms aimlessly and ends up with a wrong solution."
Microsoft's own scientists conceded that GPT-4 "attains a form of general intelligence" and that, "given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."
Violations of OpenAI's Founding Agreement
Having reached the threshold of AGI, which under the founding agreement they were to develop for the benefit of humanity rather than for any for-profit company or individual, the defendants instead fundamentally deviated from their mission, in violation of that agreement. GPT-4 is an entirely closed model. Its internal design remains secret; no code has been released; and OpenAI has not published a paper describing any aspect of that design, only press releases touting its performance. GPT-4's internal details are known only to OpenAI and, it is believed, Microsoft. GPT-4 is therefore the exact opposite of "open AI," and it is closed for proprietary commercial reasons: Microsoft makes a fortune selling GPT-4 to the public, which would not be possible if OpenAI made the technology freely available, as it is required to. Contrary to the founding agreement, the defendants chose to use GPT-4 not for the benefit of humanity but as proprietary technology to maximize profits for the largest company in the world. Moreover, OpenAI's entire development effort is now shrouded in secrecy, leaving the public only rumors and scattered fragments of communication about what might come next.
Reuters has reported that OpenAI is developing a secret algorithm called Q*. While the specifics of Q* remain unclear, Reuters reported that several OpenAI employees wrote a letter warning of its potential power. It appears that Q* may now, or in the future, form part of an even clearer and more striking example of AGI developed by OpenAI. As AGI, it would be expressly outside the scope of OpenAI's license to Microsoft and would have to be made available for the broad benefit of the public.
Under OpenAI's licensing agreement with Microsoft, it is the board of OpenAI, Inc. that determines whether OpenAI has achieved AGI. Following a shocking series of events described in more detail below, a majority of OpenAI, Inc.'s board members were forced to resign on November 22, 2023, and their replacements are believed to have been hand-picked by Altman and Microsoft.
On November 17, 2023, the board of OpenAI, Inc. fired Altman. OpenAI announced the firing in a blog post: "Mr. Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI."
Brockman was also removed from the board but was told he would retain his role at OpenAI.
At that time, the board of directors consisted of Helen Toner, Adam D’Angelo, Tasha McCauley, Dr. Sutskever, Brockman and Altman. In addition to serving on the Board of Directors, Ms. Toner is a researcher and consultant at the Center for Governance of AI (GovAI) and Director of Strategy at Georgetown University’s Center for Security and Emerging Technologies. Ms. McCauley is a senior management scientist at the RAND Corporation, a nonprofit organization specializing in public policy decisions. She is also an advisor to GovAI. D’Angelo — the only remaining board member after Altman’s return — is a technology CEO and entrepreneur.
The board of OpenAI, Inc. had been deliberately composed of academics and public policy experts with deep experience in AI policy, most of whom held no financial interest in the company. This structure of financially disinterested board members with strong records of public service was meant to ensure that the board would put the nonprofit's primary beneficiary, humanity, before financial success. The safeguard served OpenAI, Inc.'s non-profit mission and founding agreement: to safely create AGI that benefits humanity, not the monetary gain of a for-profit company.
It is believed that Altman was fired in part because of OpenAI's breakthrough toward AGI. Indeed, news reports indicate that OpenAI's board members and executives disagreed over safety concerns and the potential threat posed by OpenAI's next-generation Q*.
News of Altman's firing spread quickly. After the board announced Altman's firing, Brockman announced that he would be leaving OpenAI along with Altman.
When Microsoft CEO Satya Nadella learned of Altman's firing, he was reportedly furious. Nadella believed that Microsoft, a 49% shareholder in OpenAI's for-profit arm, should have been consulted before the firing. At the time, however, apart from Altman, OpenAI, Inc.'s board had no ties to Microsoft and no fiduciary duty to the for-profit arm's investors. Altman is believed to have been the primary liaison between Microsoft and OpenAI, Inc.
Nadella invited Altman and Brockman to lead a new Microsoft AI research lab, unconstrained by OpenAI, Inc.'s mission of benefiting humanity, and made clear that employees leaving OpenAI would be welcome at the new lab at their existing salaries.
Microsoft was confident that, through its enormous stake in OpenAI's for-profit arm, it could wholly absorb OpenAI's research even if OpenAI, Inc. ceased to exist. Indeed, in an interview shortly after Altman's firing, Nadella said: "We are very confident in our own ability. We have all the IP rights and all the capability. If OpenAI disappeared tomorrow, I don't want any customer of ours to be worried about it, quite honestly, because we have all of the rights to continue the innovation. Not just to serve the product, but to continue what we were doing in partnership. We have the people, we have the compute, we have the data, we have everything."
Despite Microsoft's claims that it could carry on without OpenAI, Microsoft never abandoned its plan to restore Altman as OpenAI, Inc.'s CEO. In the days after the firing, OpenAI, Inc.'s board came under pressure from lawyers and major investors, including Microsoft, to reinstate him.
Ms. Toner was specifically targeted during the push to reinstate Altman. Amid those efforts, a lawyer representing OpenAI told Ms. Toner that if OpenAI failed as a result of Altman's firing, she and the board could face claims of breaching fiduciary duties to investors.
But OpenAI, Inc.'s board never owed a fiduciary duty to investors. Indeed, all investors in the for-profit arm were told that the company's duty to its mission takes precedence over its duty to investors, and OpenAI, Inc.'s website states plainly that its only fiduciary duty is to humanity.
Ms. Toner, who described the attorney's move as an intimidation tactic, argued that keeping Altman out would in fact advance the company's mission by putting human safety ahead of profit. None of this, however, stopped the shareholders and Altman from pushing for his reinstatement.
In addition, Microsoft held considerable coercive power over OpenAI, Inc. and its board. Around the time of Altman's firing, Nadella said of Microsoft's relationship with OpenAI: "We are below them, above them, around them. We do the kernel optimizations, we build the tools, we build the infrastructure. That's why I think a lot of the industry analysts are saying, 'Oh wow, it's really a joint project between Microsoft and OpenAI.' In reality, as I said, we are very self-sufficient in all of this."
Furthermore, at the time of the firing, Microsoft had paid out only a fraction of its committed $10 billion investment in OpenAI, giving it serious leverage over the supposedly "independent" nonprofit board. And if Microsoft were to withdraw its cloud computing systems, on which OpenAI depends, the company could not operate.
Upon his return, Mr. Altman is believed to have hand-picked a new board that lacked the technical expertise and the substantive background in AI governance that the previous board had been designed to ensure. Mr. D'Angelo, a technology CEO and entrepreneur, was the only former member remaining. The new board consists of members with more experience in profit-driven enterprise or politics than in AI ethics and governance, and they are reported to be "huge Altman fans." Among the new members are Bret Taylor and Larry Summers. Mr. Taylor is no stranger to Silicon Valley and has been deeply involved in a variety of profit-driven Bay Area ventures; on February 14, 2024, he and former Google executive Clay Bavor launched a startup building AI chatbots for businesses. Dr. Summers is an economist who is believed to have had no experience with AI-based businesses before November 2023. Microsoft also obtained an observer seat on the board, allowing it to keep a close eye on its ostensibly nonprofit cash cow.
With Mr. Altman's reinstatement and the reorganization of the board, OpenAI's corporate structure, originally designed as a system of checks and balances among a non-profit arm, a for-profit arm, the board of directors, and the CEO to ensure the non-profit mission could be carried out, collapsed overnight. OpenAI, Inc.'s once carefully crafted non-profit structure was replaced by a purely profit-driven CEO and a board with less technical expertise in AGI and AI public policy, which now reserves an observer seat for Microsoft. With this reorganization, OpenAI, Inc. abandoned its non-profit mission of developing AGI for the broad benefit of humanity, transforming itself into a giant for-profit company in which enormous power is unduly concentrated.
The board's AI technical expertise, neutrality, and commitment to OpenAI, Inc.'s non-profit mission are especially important because it is the board that determines whether OpenAI has achieved AGI for the purposes of the Microsoft licensing agreement. That means the board is tasked with deciding whether OpenAI's most powerful and advanced technology is in fact excluded from Microsoft's exclusive license. Given Microsoft's enormous financial interest in keeping that door closed to the public, OpenAI, Inc.'s new board, captured, conflicted, and compliant as it is, has every reason never to make a finding that OpenAI has reached AGI. Instead, OpenAI's attainment of AGI, like "Tomorrow" in "Annie," will always be a day away, ensuring that Microsoft keeps its license to OpenAI's latest technology while the public is shut out, the exact opposite of the founding agreement.
OpenAI's actions could have a significant impact on Silicon Valley and, if allowed to stand, could represent a paradigm shift for technology startups. It is important to reflect on what happened here: a nonprofit startup collected tens of millions of dollars in donations for the express purpose of developing AGI technology for the public good, and then, shortly before achieving the very milestone for which it was created, became a closed, for-profit partner of the world's largest corporation, thereby personally enriching the defendants. If this business model works, it could completely redefine how venture capital is practiced in California and elsewhere. Rather than starting out as a for-profit entity, savvy investors would set up a non-profit, use pre-tax donations to fund research and development, and then, once the technology is developed and proven, transfer the resulting intellectual property into a new for-profit venture to enrich themselves and their profit-maximizing business partners. This is not how the law should work in California or in this country, and this should not be the first court to hold otherwise.
To further understand why this matters: if OpenAI's new business model works, then for every dollar an investor "invests" in the nonprofit, that investor receives roughly 50 cents back from state and federal governments in the form of reduced income taxes, so the net cost of each dollar invested is only 50 cents. Yet under OpenAI's new business model, these investors get the same for-profit upside as those who invest in for-profit companies the conventional way and receive no upfront tax break, with the difference funded by the government and, ultimately, the public. From an investment perspective, competing against an entity adopting the new OpenAI business model is like playing basketball against a team whose baskets count double. If this court blesses OpenAI's conduct here, any startup hoping to stay competitive in Silicon Valley will effectively be required to follow the OpenAI playbook, and it will become standard operating procedure for startups, to the detriment of legitimate non-profits, government tax revenue, and ultimately the people of California and beyond. Notably, OpenAI's for-profit arm was recently valued at nearly $80 billion.
November to present: Altman’s OpenAI
The public still knows nothing about what the board's "deliberative review process" revealed that led to Altman's initial firing. However, one thing is clear to Mr. Musk and the public at large: OpenAI has abandoned its "irrevocable" non-profit mission in favor of profit. Many leaders and intellectuals have publicly commented on the irony and tragedy of OpenAI becoming "closed, for-profit AI."
For example, on November 29, 2023, MIT economists published an opinion piece in the Los Angeles Times expressing their concerns about OpenAI's new profit-driven direction. In their words, "Disruption and uncontrolled growth have become the religion of the tech industry, and Altman has been one of its most devoted high priests." The economists stressed that the new board is more likely to let Altman scale OpenAI as fast as possible, no matter how severe the social cost.
The president of Public Citizen, a consumer advocacy organization, wrote an open letter to California Attorney General Rob Bonta earlier this year, raising concerns that OpenAI's for-profit subsidiary was exerting inappropriate control over the nonprofit, or that the nonprofit's purpose had shifted toward profit under Altman and Microsoft. The letter suggested that if a nonprofit abandons its original mission, it should be dissolved and its assets transferred to another charitable enterprise.
A January 2024 investigation by WIRED found that OpenAI had also recently closed off public access to "key documents" that had previously been available. Consistent with OpenAI's original commitment to transparency, its IRS filings since inception have stated that any member of the public may view copies of its governing documents, financial statements, and conflict of interest rules. When WIRED requested those documents, however, OpenAI said it had changed its policy. So while OpenAI has long touted its commitment to transparency, information that might shed light on the events of November 2023 has remained unavailable to the public.
Access to OpenAI's filings could give the public a sense of whether it has changed its governance structure to please Microsoft and other shareholders. At a minimum, changes would have had to be made to accommodate Microsoft's board seat, and Altman is now in discussions with Middle Eastern investors to raise up to $7 trillion aimed at developing a global network of AI chip manufacturing plants. If Microsoft's $10 billion was enough to earn it a board seat, imagine how much leverage these new potential investments could give investors. That is especially troubling when one potential backer is the United Arab Emirates' national security adviser and U.S. officials are concerned about the UAE's ties to China. Additionally, Altman has been quoted discussing the possibility of making the UAE a "regulatory sandbox" where AI technologies could be tested.
Additionally, access to OpenAI's conflict of interest policy would be critical to revealing the board's ability to check Altman's use of OpenAI to further his personal financial interests, which so far appears to have gone unchecked. For example, in 2019, while Altman was CEO, OpenAI signed a letter of intent to purchase $51 million worth of chips from a startup in which Altman had invested heavily.
While OpenAI, Inc. was founded as a pioneer of safe, responsible AGI development grounded in open communication with the public, it has now closed its doors, brought the largest investor in its for-profit subsidiary onto the board whose sole fiduciary duty is to humanity, and continues to move in secret toward a profit-centered future with potentially catastrophic consequences for humanity.
Musk co-founded and funded OpenAI, Inc. alongside Altman and Brockman in reliance on the founding agreement, which was meant to ensure that AGI would benefit humanity rather than a for-profit company. As events unfolded in 2023, his contributions were twisted to benefit the defendants and the world's largest corporation. This is a clear betrayal of the founding agreement, turning that agreement on its head and perverting OpenAI, Inc.'s mission. Imagine donating to a nonprofit whose stated mission is to protect the Amazon rainforest, only to watch that nonprofit create a for-profit Amazonian logging company that uses the fruits of the donations to clear the rainforest. That is the story of OpenAI, Inc.