Author: Edward Zitron. Translator: Block unicorn. Source: JinseFinance.
Whether you follow AI in the crypto industry or AI in the traditional internet world, you need to think seriously about the future of this industry. This article is fairly long; if you don't have the patience for it, feel free to leave now.
What I have written here is not meant to sow doubt or to "criticize," but to offer a sober assessment of where we are today and where the current path is likely to lead. I believe the artificial intelligence boom (more precisely, the generative AI boom) is, as I've said before, unsustainable and will eventually collapse. I also worry that this collapse could be devastating to big technology companies, severely damage the startup ecosystem, and further weaken public support for the technology industry.
I'm writing this post today because the landscape feels like it's changing rapidly, with multiple AI "apocalypse signs" already emerging: OpenAI's hastily launched o1 model (codename "Strawberry") being called "a big, stupid magic trick"; rumors of price hikes for future models at OpenAI (and elsewhere); layoffs at Scale AI; and leadership departures from OpenAI. These are all signs that things are starting to fall apart.
So I think it's important to explain why the current situation amounts to a crisis, and why we've reached the stage of disillusionment. I want to voice my concerns about the fragility of this movement, and about the blind obsession and lack of direction that got us to this point, in the hope that some people can still do better.
Also — and perhaps this is a point I haven’t paid enough attention to before — I want to emphasize the human costs that could come from a bursting of the AI bubble. Whether Microsoft and Google (and other big generative AI backers) gradually slow their investments or sap corporate resources to sustain OpenAI and Anthropic (and their own generative AI projects), I believe the end result will be the same. Thousands of people will lose their jobs, I fear, and much of the tech industry will realize that the only thing that can grow forever is cancer.
There won’t be much lightheartedness in this post. I’m going to paint you a bleak picture—not just of the big AI players, but of the tech industry as a whole and its employees—and tell you why I think the messy, destructive end is coming sooner than you think.
Read on, and put your thinking cap on.
How Can Generative AI Survive?
Right now, OpenAI (nominally a nonprofit, and one that may soon turn for-profit) is raising at least $6.5 billion, and possibly as much as $7 billion, in a new round of funding at a valuation of at least $150 billion. The round is led by Josh Kushner's Thrive Capital, with rumors that NVIDIA and Apple may also participate. As I've previously detailed, OpenAI will have to keep raising unprecedented amounts of money to survive.
To make matters worse, according to Bloomberg, OpenAI is also trying to raise $5 billion in debt from banks in the form of a “revolving credit line,” which typically comes with higher interest rates.
The Information also reported that OpenAI is in talks with MGX, a $100 billion investment fund backed by the UAE, seeking to invest in AI and semiconductor companies, and may also raise funds from the Abu Dhabi Investment Authority (ADIA). This is an extremely serious warning sign, because no one voluntarily seeks money from the UAE or Saudi Arabia. You would only choose to ask them for help if you need a lot of money and are not sure you can get it from elsewhere.
Side note: As CNBC points out, one of MGX’s founding partners, Mubadala, holds about $500 million in Anthropic equity, which was acquired from FTX’s bankruptcy assets. You can imagine how “happy” Amazon and Google must be about this conflict of interest!
As I discussed in late July, OpenAI needs to raise at least $3 billion, and more likely $10 billion, to stay afloat. It expects to lose $5 billion in 2024, a number that could continue to increase as more complex models require more computing resources and training data. Anthropic CEO Dario Amodei predicts that future models could require up to $100 billion in training costs.
The "$150 billion valuation" here, by the way, refers to the way OpenAI prices its shares for investors, although the word "shares" is a bit vague here too. In a normal company, investing $1.5 billion at a $150 billion valuation would typically get you 1% of the company; in OpenAI's case, however, things are much more complicated.
OpenAI attempted to raise money earlier this year at a $100 billion valuation, but some investors balked at the high price, in part due to (quote The Information’s Kate Clark and Natasha Mascarenhas) growing concerns about overvaluation of generative AI companies.
To complete this round, OpenAI may be transitioning from a nonprofit to a for-profit entity, but the most confusing part is what investors are actually getting. The Information’s Kate Clark reports that investors participating in this round were told (quote) that “they would not receive traditional equity for their investment… Instead, they were given units that promised a share of the company’s profits — once the company becomes profitable, they would get a share of the profits.”
It's unclear whether switching to a for-profit entity would solve this problem, since OpenAI's odd "nonprofit with a for-profit arm" corporate structure means Microsoft is entitled to 75% of OpenAI's profits as part of its 2023 investment, though in theory a switch to a for-profit structure could include equity. However, what you get when you invest in OpenAI are profit participation units (PPUs), not equity. As Jack Raines writes in Sherwood: "If you own OpenAI's PPUs but the company never turns a profit and you can't sell them to someone who thinks OpenAI will eventually turn a profit, then your PPUs are worthless."
Last weekend, Reuters published a report saying that any $150 billion valuation would “depend” on whether OpenAI can restructure its entire corporate structure and, in the process, lift a cap on investor profits that is currently limited to 100 times the original investment. The profit cap was set in 2019, when OpenAI said any profits above it would be “returned to nonprofits for the benefit of humanity.” The company has amended that rule in recent years to allow for 20% annual increases starting in 2025.
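To put that cap in perspective, here is a quick back-of-the-envelope sketch. The arithmetic is my own, and it assumes the 20% increase compounds annually, which the reporting does not specify:

```python
# Back-of-the-envelope sketch of OpenAI's investor profit cap.
# Assumptions (mine, not OpenAI's disclosed math): the cap starts at
# 100x the original investment and rises 20% per year, compounding,
# with the first increase in 2025.
BASE_CAP_MULTIPLE = 100
ANNUAL_INCREASE = 0.20

def cap_multiple(year: int) -> float:
    """Return the profit cap as a multiple of the original investment."""
    years_of_growth = max(0, year - 2024)
    return BASE_CAP_MULTIPLE * (1 + ANNUAL_INCREASE) ** years_of_growth

for year in range(2024, 2030):
    print(f"{year}: returns capped at {cap_multiple(year):,.0f}x")
```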
Given OpenAI’s existing profit-sharing agreement with Microsoft — not to mention the massive losses it’s mired in — any return would be theoretical at best. At the risk of sounding flippant, even a 500% increase in zero is still zero.
Reuters also added that any move to a for-profit structure (and thus a valuation higher than its recent $80 billion valuation) would force OpenAI to renegotiate with existing investors, as their stakes would be diluted.
In addition, the Financial Times reported that investors had to "sign an operating agreement that states: 'Any investment in [OpenAI's for-profit subsidiary] should be considered in the spirit of a donation' and that OpenAI 'may never become profitable.'" Such terms are frankly insane, and anyone who invests in OpenAI under them does so entirely at their own peril, because it is an absurd investment.
In reality, investors are not getting a stake in OpenAI, or any control over it, but rather a share in the future profits of a company that is losing more than $5 billion a year and will likely lose more by 2025 (if it makes it that far).
OpenAI's models and products (we'll discuss their usefulness later) are wildly unprofitable to operate. The Information reports that OpenAI will pay Microsoft about $4 billion in 2024 to support ChatGPT and its underlying models, and that's at the discounted rate of $1.30 per GPU per hour Microsoft offers it, compared with the $3.40 to $4.00 per hour other customers pay. This means that without its deep partnership with Microsoft, OpenAI could be spending as much as $6 billion a year on servers, and that's before other expenses like employee costs ($1.5 billion per year). And, as I've discussed before, training costs currently run $3 billion a year and will almost certainly keep rising.
While The Information reported in July that OpenAI’s annual revenue was $3.5 billion to $4.5 billion, the New York Times reported last week that OpenAI’s annual revenue “is now over $2 billion,” meaning the year-end figure is likely to be toward the lower end of that estimated range.
In short, OpenAI is “burning money” and will only burn more money in the future, and in order to continue to burn money, it will have to raise funds from investors who have signed a statement that “we may never be profitable.”
As I’ve written before, another problem for OpenAI is that generative AI (which extends to the GPT model and the ChatGPT product) does not solve the complex problems that justify its huge costs. The models are based on probabilities, which leads to huge, intractable problems—in other words, they know nothing and are just generating answers (or images, translations, or summaries) based on training data, which model developers are exhausting at an alarming rate.
The phenomenon of "hallucination," where the model confidently generates information that is not real (or, in images and video, produces output that looks obviously wrong), cannot be completely solved with existing mathematical tools. Hallucinations may be reduced or mitigated, but their existence makes generative AI hard to truly rely on for critical business applications.
Even if generative AI's technical problems were solved, it's unclear whether it actually brings value to businesses. The Information reported last week that customers of Microsoft's 365 suite (which includes Word, Excel, PowerPoint, and Outlook, among others, and especially the many enterprise-focused packages closely tied to Microsoft's consulting services) have barely adopted its AI-driven Copilot products: only somewhere between 0.1% and 1% of its 4.4 million users, paying $30 to $50 a head, have adopted the features. One company testing the AI features said, "Most people don't see much value in it right now." Others said, "Many businesses haven't seen breakthrough gains in productivity and other areas yet" and that they're "not sure when they will."
So how much is Microsoft charging for these unimportant features? An eye-popping $30 per user per month, or up to $50 per user per month for the “sales assistant” feature. This effectively requires customers to double their existing fees — on an annual contract, by the way! — for products that don’t seem all that useful.
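For a sense of scale, here is a rough, illustrative calculation based on the figures above. The ranges are The Information's; the arithmetic is mine:

```python
# Rough, illustrative arithmetic using the adoption figures reported
# by The Information (these are ranges, not Microsoft's actual numbers).
total_users = 4_400_000
adoption_low, adoption_high = 0.001, 0.01   # 0.1% to 1% adoption
price_low, price_high = 30, 50              # dollars per user per month

monthly_low = total_users * adoption_low * price_low
monthly_high = total_users * adoption_high * price_high

print(f"Implied monthly revenue: ${monthly_low:,.0f} to ${monthly_high:,.0f}")
print(f"Implied annual revenue: ${12 * monthly_low:,.0f} to ${12 * monthly_high:,.0f}")
```

That works out to somewhere between roughly $132,000 and $2.2 million a month, a rounding error for a company of Microsoft's size.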
One thing to add: Microsoft’s problems are so complex that they may require a dedicated news story in the future.
This is the state of generative AI — the leader in productivity and business software can’t find a product that customers are willing to pay for, partly because the results are too mediocre and partly because the costs are too high to justify. If Microsoft needs to charge so much, it’s either because Satya Nadella wants to achieve $500 billion in revenue by 2030 (a goal revealed in a memo released during the public hearing on Microsoft’s acquisition of Activision Blizzard), or because the costs are too high and it can’t lower the price, or both.
Yet almost everyone is emphasizing that the future of AI is going to blow us away — the next generation of large language models is just around the corner, and they’re going to be amazing.
Last week, we got our first real glimpse into that so-called ‘future.’ And it turned out to be a big disappointment.
A silly magic trick
OpenAI released o1 — codenamed “Strawberry” — on Thursday evening with the kind of excitement that comes with a visit to the dentist. In a series of tweets, Sam Altman described o1 as OpenAI’s “most powerful and most aligned model yet.” While he admitted that o1 “still has flaws, is still limited, and after using it for a while, it’s not as impressive as it first seemed,” he promised that o1 will provide more accurate results when tackling tasks that have a clear correct answer, like programming, math problems, or scientific questions.
That in itself is pretty revealing — but we’ll get to that in a minute. First, let’s talk about how it actually works. I’ll introduce some new concepts, but I promise not to get into too much detail. If you really want to read OpenAI’s explanation, you can find it in an article on their official website - Learning to Reason with LLMs.
When faced with a problem, o1 breaks it down into individual steps - steps that hopefully will eventually lead to the correct answer, a process called the "Chain of Thought." It's easier to understand if you think of o1 as two parts of the same model.
At each step, one part of the model applies reinforcement learning to the other part (the part that outputs results), which is "rewarded" or "punished" based on the correctness of its progress (its "reasoning" steps) and adjusts its strategy when punished. This differs from how other large language models work: rather than simply generating an answer and handing it over, the model discards or pursues "good" steps on its way to the final answer.
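As an illustration only, here is a toy sketch of that "propose steps, reward the good ones, discard the rest" loop. To be clear, this is the general shape of the idea, not OpenAI's actual architecture, which is not public; the generator and verifier here are random stand-ins:

```python
import random

# Toy sketch of a "chain of thought" guided by a reward signal.
# NOT OpenAI's implementation; it only illustrates the shape of
# "propose candidate steps, keep the rewarded one, repeat."

def propose_steps(state: str, n: int = 4) -> list[str]:
    """Stand-in for the generator: sample n candidate next steps."""
    return [f"{state} -> step{random.randint(0, 99)}" for _ in range(n)]

def score_step(step: str) -> float:
    """Stand-in for the verifier / reward model. In a real system this is
    itself a learned model, which means the 'checker' can be wrong
    (i.e., it can hallucinate) just like the generator."""
    return random.random()

def solve(problem: str, max_steps: int = 5) -> str:
    """Greedily extend the chain with whichever step scores highest."""
    state = problem
    for _ in range(max_steps):
        candidates = propose_steps(state)
        state = max(candidates, key=score_step)  # keep the "rewarded" step
    return state

print(solve("problem"))
```

Note the structural weakness even this sketch makes obvious: the "checker" is just another model, and if it rewards a bad step, the whole chain happily follows it.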
While this sounds like a major breakthrough, or even another step toward the much-praised artificial general intelligence (AGI) — it isn’t — and that can be seen in the fact that OpenAI chose to release o1 as a standalone product, rather than an updated version of GPT. The examples OpenAI showed — like math and science problems — were tasks where the answers were known in advance, where the answers were either correct or incorrect, allowing the model to guide the “chain of thought” at each step.
You’ll notice that OpenAI didn’t show how the o1 model would solve complex problems, math or otherwise, where the answers weren’t known. OpenAI itself acknowledged that it had received feedback that o1 was more prone to “hallucinations” than GPT-4o, and was more reluctant to admit that it didn’t have an answer than previous models. This is because, although there is a part of the model that checks its output, this “checking” part can also hallucinate (sometimes the AI will make up answers that seem plausible, creating hallucinations).
According to OpenAI, o1 is also more persuasive to human users because of its chain-of-thought mechanism. Because o1 provides more detailed answers, people are more inclined to trust its outputs, even when those answers are completely wrong.
If you think I’m being too harsh in my criticism of OpenAI, consider how the company promotes o1. It describes the reinforcement training process as “thinking” and “reasoning,” but in reality it’s just guessing, and at every step it’s guessing whether it’s right, and the final result is often known in advance.
This is an insult to humans—real thinkers. Human thinking is based on a complex range of factors: from personal experience to a lifetime of accumulated knowledge to brain chemistry. Although we also “guess” whether certain steps are correct when tackling complex problems, our guesses are based on concrete facts, not clumsy math like o1.
And, my God, it’s expensive.
o1-preview is priced at $15 per million input tokens and $60 per million output tokens. That means o1 costs three times as much as GPT-4o for input and four times as much for output. And there's a hidden cost: data scientist Max Woolf points out that OpenAI's "reasoning tokens" (the output used to arrive at the final answer) are not visible in the API, yet are still billed as output. So not only is o1 more expensive per token; the nature of the product means users pay for far more tokens per request. Everything the model generates to "consider" the answer (to be clear, the model is not "thinking") is charged for, making complex problems like programming potentially extremely expensive to answer.
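To make that concrete, here is a rough cost calculation using o1-preview's published per-token rates. The token counts are invented for illustration, since the API does not reveal the reasoning tokens:

```python
# o1-preview list prices at launch, per OpenAI's published API pricing.
INPUT_PER_MTOK = 15.00    # dollars per million input tokens
OUTPUT_PER_MTOK = 60.00   # dollars per million output tokens

def request_cost(input_tokens: int, visible_output: int, hidden_reasoning: int) -> float:
    """Hidden 'reasoning' tokens are billed at the output rate,
    even though the API never shows them to the user."""
    billed_output = visible_output + hidden_reasoning
    return (input_tokens * INPUT_PER_MTOK + billed_output * OUTPUT_PER_MTOK) / 1_000_000

# Hypothetical coding request: a 1,000-token visible answer that the
# model "considered" with 20,000 unseen reasoning tokens.
print(f"${request_cost(2_000, 1_000, 20_000):.2f}")  # -> $1.29
```

In this hypothetical request, roughly 95% of the bill pays for "reasoning" tokens the user never sees.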
Now let’s talk about accuracy. On Hacker News, a Reddit-like site owned by Sam Altman’s former company Y Combinator, some complained that o1 “made up” libraries and functions that didn’t exist when working on a programming task, and made mistakes when answering questions that couldn’t be easily answered online.
On Twitter, startup founder and former game developer Henrik Kniberg asked o1 to write a Python program to calculate the product of two numbers and predict the program’s output. While o1 wrote the code correctly (although it could have been more concise, with only one line), the actual output was completely wrong. AI company founder Karthik Kannan also took the programming task, and o1 “made up” a command that didn’t exist in the API.
Another user, Sasha Yanshin, tried to play chess with o1, and o1 “created” a chess piece on the board out of thin air and then lost the game.
Because I was feeling playful, I also tried asking o1 to list the states with an "A" in their name. It thought for eighteen seconds and came up with 37 states, including Mississippi, which contains no "A." The correct answer is 36.
When I asked it to list the states with a "W" in their name, it pondered for eleven seconds and included North Carolina and North Dakota, neither of which contains a "W."
I also asked o1 how many times the letter "R" appears in its codename "Strawberry." It answered two; the correct answer is three.
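These particular claims are easy to verify; a few lines of Python (my own check, not part of the original piece) confirm the correct answers:

```python
# Verify the three trivia answers o1 fumbled.
STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

with_a = [s for s in STATES if "a" in s.lower()]
with_w = [s for s in STATES if "w" in s.lower()]
print(len(with_a))              # 36 -- not 37, and Mississippi isn't one
print(len(with_w))              # 11 -- no North Carolina or North Dakota
print("strawberry".count("r"))  # 3 -- not 2
```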
OpenAI claims that o1 performs at the same level as a PhD student on complex benchmarks such as physics, chemistry, and biology. But it apparently performs poorly in geography, basic English language tests, mathematics, and programming.
It is worth noting that this is exactly the "big, stupid magic trick" I predicted in my previous newsletter. OpenAI launched Strawberry simply to prove to investors and the public that the AI revolution is still going on, and what it actually launched is a clunky, boring, and expensive model.
To make matters worse, it’s really hard to explain why anyone should care about o1. While Sam Altman may tout its “reasoning power,” those with the money to continue funding him see 10-20 second wait times, issues with ground truth accuracy, and a lack of any exciting new features.
No one cares about “better” answers anymore—they want something completely new, and I don’t think OpenAI knows how to get there. Altman’s attempt to anthropomorphize o1 by having it “think” and “reason” is clearly meant to imply that it’s some kind of step toward artificial general intelligence (AGI), but it’s hard to get even the staunchest AI advocates excited.
In fact, I think o1 shows that OpenAI is both desperate and uninspired.
Prices haven’t dropped, the software hasn’t gotten more useful, and the “next generation” models we’ve been hearing about since November have turned out to be a dud. These models are also desperate for training data, to the point where nearly every large language model has ingested some kind of copyrighted content. This urgency led Runway, one of the largest generative video companies, to launch a “company-wide effort” to collect thousands of YouTube videos and pirated content to train its models, while a federal lawsuit in August accused NVIDIA of doing similar things to many creators to train its “Cosmos” AI software.
The industry's current legal strategy is essentially a war of attrition: hope these lawsuits never get far enough to set a legal precedent defining the training of these models as copyright infringement, which is exactly what a recent interdisciplinary study sponsored by the Copyright Initiative concluded it is.
These lawsuits are moving forward, and in August a judge granted the plaintiffs further copyright infringement claims against Stability AI and DeviantArt (which used these models), as well as copyright and trademark infringement claims against Midjourney. If any of the lawsuits succeed, it would be catastrophic for OpenAI and Anthropic, and even more so for Google and Meta, which use datasets of millions of artists’ works, because it would be nearly impossible for AI models to “forget” their training data, meaning they would need to be retrained from scratch, costing billions of dollars and greatly reducing their effectiveness at tasks they are not particularly good at.
I am deeply concerned that this industry is a fortress built on sand. Large language models on the scale of ChatGPT, Claude, Gemini, and Llama are unsustainable, with no apparent path to profitability: the computationally intensive nature of generative AI means training them costs hundreds of millions or even billions of dollars, and requires so much training data that these companies are effectively stealing from millions of artists and writers and hoping to get away with it.
Even if we set these issues aside, generative AI and its related architectures do not seem revolutionary, and the hype cycle around generative AI hardly fits the meaning of the term "artificial intelligence" at all. At its best, generative AI can only occasionally generate content correctly, summarize documents, or conduct research at some indeterminate "faster" speed. Microsoft's Copilot for Microsoft 365 claims "thousands of skills" and "endless possibilities" for enterprises, but the examples it shows amount to generating or summarizing emails, "starting presentations with prompts," and querying Excel tables: functions that may be useful, but are by no means revolutionary.
We are not in the "early stages." Since November 2022, large tech companies have spent more than $150 billion on capital expenditures and investments in infrastructure, emerging AI startups, and their own models. OpenAI has raised $13 billion and can hire anyone it wants, and so can Anthropic. Yet the industry's version of a Marshall Plan to get generative AI off the ground has produced four or five nearly identical large language models, the world's least profitable startup, and thousands of expensive yet mediocre integrations.
Generative AI is being sold on multiple lies:
1. It is AI.
2. It will get better.
3. It will be real AI.
4. It is unstoppable.
Leaving aside terms like “performance” — which are often used to describe the “accuracy” or “speed” of generated content, rather than the skill level — large language models have actually plateaued. “More powerful” often doesn’t mean “does more”, it means “more expensive”, which means you just created something that costs more but doesn’t increase its functionality.
If the combined forces of every venture capitalist and big tech giant still haven’t found a truly meaningful use case that a lot of people are willing to pay for, then there won’t be new use cases. Large language models — yes, that’s where these billions of dollars are going — are not going to suddenly become more capable just because the tech giants and OpenAI throw another $150 billion at it. No one is trying to make these things more efficient, or at least no one has succeeded in doing so. If someone succeeded, they would hype it up.
We are dealing with a collective delusion: a dead-end technology built on copyright theft (as, inevitably, every generation of these models has been), one that requires constant infusions of capital to keep running, and one that provides a service that is at best optional, dressed up as an automation it never actually delivers, costing billions of dollars and set to keep costing them. Generative AI does not run on money (or cloud computing credits) so much as on confidence. The problem is that confidence, like investor capital, is a finite resource.
My concern is that we may be in an AI crisis similar to the subprime mortgage crisis - thousands of companies integrating generative AI into their businesses, but prices are far from stable and profitability is even further away.
Almost every startup that claims to be "AI-powered" is built on some combination of GPT or Claude. These models were developed by two deeply unprofitable companies (Anthropic expects to lose $2.7 billion this year) whose pricing strategies are designed to attract customers rather than turn a profit. As mentioned before, OpenAI relies on Microsoft's funding, both the "cloud computing credits" it receives and the favorable pricing Microsoft provides, and its pricing depends entirely on Microsoft's continued support as an investor and service provider; Anthropic faces similar problems in its deals with Amazon and Google. Based on their losses, I would speculate that if OpenAI's or Anthropic's pricing were closer to actual costs, the price of API calls would probably increase tenfold to a hundredfold, although it is difficult to say exactly without real data. But consider the numbers reported by The Information: OpenAI expects to spend $4 billion on Microsoft servers in 2024, at a rate roughly a third of what Microsoft charges other customers, and it is still losing more than $5 billion a year.
OpenAI is most likely charging customers only a fraction of what it actually costs to run its models, and it can only maintain the status quo if it keeps raising more venture capital than has ever been raised before and continues to receive favorable pricing from Microsoft, which recently said it views OpenAI as a competitor. While it's impossible to be sure, it's reasonable to assume Anthropic gets similarly favorable pricing from Amazon Web Services and Google Cloud.
Assuming Microsoft gave OpenAI $10 billion in cloud computing credits and OpenAI spent $4 billion on server costs, plus the assumed $2 billion in training costs—costs that will surely increase after the new o1 and “Orion” models are launched—OpenAI may need more credits by 2025, or start paying Microsoft with actual cash.
While Microsoft, Amazon, and Google may continue to offer favorable pricing, the question is whether these deals are profitable for them. As we saw after Microsoft’s latest quarterly earnings, investors are increasingly concerned about the capital expenditures (CapEx) required to build generative AI infrastructure, and many are skeptical about the potential profitability of this technology.
What we don't really know is how profitable, or more likely how unprofitable, generative AI is for these massive tech companies, as they fold these costs into other parts of their earnings. While we can't be sure, I imagine that if these businesses were at all profitable, the companies would be talking about the revenue; they aren't.
The market's extreme skepticism about the generative AI boom, and Nvidia CEO Jensen Huang's lack of substantive answers about the return on investment in AI, caused Nvidia's market value to plummet by $279 billion in a single day. It was the largest single-day loss of market value by any company in US market history, a sum equivalent to nearly five Lehman Brothers at its peak. The comparison stops there (Nvidia isn't at risk of failing, and even if it were, the systemic impact wouldn't be as severe), but it's still a staggering sum, and it shows AI's distorting power over the market.
In early August, Microsoft, Amazon, and Google all got hammered by the market for their massive AI-related capital expenditures. If they can’t show significant revenue growth from $150 billion (or more) in new data centers and NVIDIA GPUs in the next quarter, they will face more pressure.
It's important to remember that, beyond AI, big tech has no other big idea to sell. When companies like Microsoft and Amazon start to show signs of slowing growth, they also start scrambling to show the market they can still compete. Google, a monopoly at risk on multiple fronts that relies almost entirely on search and advertising, likewise needs something new and eye-catching to hold investors' attention. Yet these products have not delivered enough utility, and it seems much of the revenue comes from companies that "tried" AI and found it wasn't worth it.
Currently, there are two possibilities:
1. Big Tech realizes they are in deep trouble and chooses to reduce AI-related capital expenditures out of fear of Wall Street's displeasure.
2. In search of new growth, Big Tech decides to cut costs to sustain the burn, laying off employees and diverting funds from other businesses to feed the generative AI "death race."
It is not clear which scenario will happen. If Big Tech accepts that generative AI is not a future reality, they don't really have anything else to show Wall Street, but may adopt a "year of efficiency" strategy similar to Meta, reducing capital expenditures (and laying off employees) while promising to "lower investment" to a certain extent. This is the most likely path for Amazon and Google to take, because although they are eager to please Wall Street, at least for now they still have their profitable monopoly businesses to fall back on.
However, actual revenue growth from AI needs to show up in the coming quarters, and it needs to be substantial, not vague statements about AI being a "mature market" or an "annualized growth rate." And if capital expenditures keep increasing, that real contribution will need to be significantly higher still.
I don't think that growth will happen. Whether in Q3, Q4, or Q1 of 2025, Wall Street will start punishing big tech companies for their AI greed, and the punishment will be much harsher than what Nvidia received, Nvidia being the only company that can actually show how AI increases revenue, despite Jensen Huang's empty words and useless slogans.
I'm somewhat concerned that the second scenario is more likely: these companies are so convinced that "AI is the future," and their cultures are so disconnected from building software that solves real problems, that they may burn their companies down trying. I'm deeply concerned that mass layoffs will be used to fund this movement, and nothing in the past few years makes me think they'll make the right choice and walk away from AI.
Big tech has been thoroughly poisoned by management consultants — Amazon, Microsoft, and Google are all run by MBAs — and has similar monsters around them, like Google’s Prabhakar Raghavan, who drove out the people who actually built Google Search so he could take control.
These people don’t really face human problems, they create cultures focused on solving imaginary problems that software can fix. Generative AI may seem a little magical to people whose entire lives are spent in meetings or reading emails. I guess Satya Nadella’s (Microsoft CEO) success mentality is largely “let the technologists solve the problem.” Sundar Pichai could have ended the whole generative AI craze by simply laughing at Microsoft’s investment in OpenAI — but he didn’t, because these people don’t have any real ideas, and these companies are not run by people who have experienced the problems, let alone people who actually know how to solve them.
They are also desperate; outside of Meta burning billions of dollars on the Metaverse, the situation has never been this serious for them. And it is all the more serious and ugly because they have invested so much money and tied AI so tightly into their companies that pulling it out would be both embarrassing and damaging to the stock price, effectively a tacit admission that it was all a waste.
If the media were actually responsible, this could have stopped sooner. This narrative is sold through the same scam as previous hype cycles, with the media assuming that these companies will "solve the problem" even though it's obvious that they won't. Do you think I'm being pessimistic? So what's next for generative AI? What will it do next? If your answer is that they will "solve the problem" or that they "have amazing stuff behind the scenes", then you are an unwitting participant in a marketing operation (think about that for a minute).
Author's aside: we really need to stop being fooled by this stuff. When Mark Zuckerberg claimed we were about to enter the Metaverse, a ton of media outlets, like The New York Times, The Verge, CBS News, and CNN, joined in promoting an obviously flawed concept that looked terrible and was sold on outright lies about the future. It was plainly nothing more than a bad VR world, yet the Wall Street Journal was still calling it a "vision of the future of the internet" six months after the hype cycle had clearly expired. The same thing happened with cryptocurrencies, Web3, and NFTs! The Verge, The New York Times, CNN, CBS News: these outlets once again promoted technology that was plainly useless. I should single out The Verge's Casey Newton in particular, who, despite his good reputation, has fallen for three consecutive hype cycles, claiming in July that "having the single most powerful large language model could provide the company with the basis for all kinds of money-making products," when in reality the technology only loses money and has yet to deliver any truly useful, lasting product.
I believe that at least Microsoft will start reducing costs in other areas of the business to help sustain the AI hype. In emails shared with me earlier this year by a source, Microsoft’s senior leadership team had requested (but ultimately shelved) that power demand be reduced in multiple areas of the company to free up power for GPUs, including moving compute for other services to other countries to free up computing power for AI.
On the Microsoft section of the anonymous social network Blind (company email verification required), a Microsoft employee complained in mid-December 2023 that “AI is taking their money,” saying that “the cost of AI is too high, it’s eating up salary increases, and it’s not going to get better.” Another employee shared their anxieties in mid-July, saying they clearly felt Microsoft had a “marginal addiction” to “operating cash flow from cutting costs to fund Nvidia’s stock price” and that this practice “deeply hurt Microsoft’s culture.”
Another employee added that they believe "Copilot will ruin Microsoft in FY2025" and that "Copilot focus will drop significantly in FY2025," also revealing that they know of "large Copilot deals in their country that are less than 20% utilized after nearly a year of PoCs, layoffs, and adjustments," and said that "the company took too many risks" and Microsoft's "huge AI investment will not pay off."
While Blind is anonymous, it's hard to ignore the fact that a large number of online posts tell of Microsoft Redmond's cultural problems, especially that senior leaders are out of touch with actual work and will only fund projects that have the AI label attached. Many posts express disappointment with Microsoft CEO Satya Nadella's "rhetorical nonsense" and complain about the lack of bonuses and promotion opportunities in an organization focused on chasing an AI craze that may not exist.
At the very least, a deep cultural malaise is visible inside the company, with many posts saying things like "I don't like working here," and that everyone is confused about why so much must be invested in AI yet feels they can only accept it, because Satya Nadella doesn't care at all.
The Information article also pointed to a worrying problem hidden in the actual adoption of Microsoft's AI feature, Office Copilot: Microsoft has reserved enough server capacity in its data centers for 365 Copilot to handle millions of daily users, but how much of that capacity is actually being used is unclear.
According to estimates, Microsoft's current Office Copilot feature users may be between 400,000 and 4 million, which means that Microsoft may have built a lot of idle infrastructure that is not fully utilized.
However, these companies have spent a lot of time and money embedding generative AI capabilities into their products, and I think they may face a few scenarios:
1. These companies develop and launch AI capabilities, only to find that customers are not willing to pay for them, as Microsoft found with its 365 Copilot. If they can’t find a way to get customers to pay now — in the middle of the AI craze — it will only be worse when the craze passes and bosses stop asking their employees to “jump on the AI bandwagon.”
2. These companies developed and launched AI features, but could not find a way to get users to pay extra for them, which means they can only embed AI features into existing products without increasing profit margins. In the end, AI features may become a "parasite" that erodes the company's revenue.
Jim Covello of Goldman Sachs also mentioned in his report on generative AI that if the benefit of AI is just to improve efficiency (such as being able to analyze documents faster), then competitors can do that. Almost all generative AI integrations are similar: some form of collaborative assistant to answer customer or internal questions (such as Salesforce, Microsoft, Box), content creation (Box, IBM), code generation (Cognizant, Github Copilot), and the upcoming "intelligent agent", which is actually a "customizable chatbot that can connect to other parts of the website."
This reveals one of generative AI's biggest challenges: while it is "powerful" to a degree, that power lies in "generating content based on existing data" rather than in true "intelligence." It is also why so many companies' AI landing pages are full of empty words; their biggest selling point is, in effect, "uh... figure it out yourself!"
What I am worried about is a chain reaction. I believe that many companies are “trialing” AI now, and once these trials are over (according to Gartner’s forecast, 30% of generative AI projects will be abandoned after the proof-of-concept stage by the end of 2025), they are likely to stop paying for these additional features or stop integrating generative AI into the company’s products.
If this happens, the already depressed revenues of the super-scale companies that provide cloud computing for generative AI applications and large language model suppliers such as OpenAI and Anthropic will be further reduced. This will likely put further pressure on prices at these companies, as their already loss-making margins will deteriorate further. At that point, OpenAI and Anthropic will almost certainly have to raise prices, if they haven’t already done so.
While the big tech companies can continue to finance the craze — after all, they are almost entirely responsible for driving it — this won’t help the smaller startups that have become accustomed to discounted prices, as they won’t be able to afford to continue operating. While there are cheaper alternatives, such as independent vendors running Meta’s LLaMA model, it’s hard to believe that they won’t face the same profitability issues as the hyperscalers.
Also note that the hyperscalers are also very afraid of pissing off Wall Street. While they could theoretically (as I fear they will) improve margins through layoffs and other cost-cutting measures, these are short-term solutions that are only likely to work if they can somehow shake some money out of this barren generative AI tree.
In any case, it's time to accept that the money isn't here. We need to stop and face the fact that we are living through the tech industry's third consecutive delusion. Unlike cryptocurrencies and the Metaverse, however, this time everyone is in on the money-burning binge, pursuing an unsustainable, unreliable, unprofitable, and environmentally harmful project packaged as "artificial intelligence" and promoted as something that will "automate everything," without any actual path to achieving that goal.
Why does this happen over and over again? Why have we gone through cryptocurrencies, the Metaverse, and now generative AI, technologies that don’t seem to be truly designed for ordinary people?
This is actually the natural evolution of a tech industry that is completely focused on increasing the value extracted from each customer, rather than providing more value to customers. Or, rather, they don’t even really understand who their customers are and what they need.
Today, the products you’re being marketed to will almost certainly try to tie you into an ecosystem — at least as a consumer, controlled by Microsoft, Apple, Amazon, Google. This makes it increasingly expensive to leave that ecosystem. Even cryptocurrencies — ostensibly a “decentralized” technology — quickly abandoned their laissez-faire ethos in favor of aggregating users through a handful of big platforms (like Coinbase, OpenSea, Blur, or Uniswap), which are often backed by the same venture capital firms (like Andreessen Horowitz). Rather than becoming the standard-bearer for a new, entirely independent online economy, cryptocurrencies have been able to scale only through the connections and money that have funded other waves of the internet.
As for the Metaverse, while it’s a scam, it’s also Mark Zuckerberg’s attempt to control the next generation of the internet, with Horizon as the main platform. We’ll talk about generative AI later.
All of this is about further monetization: increasing the average value of each customer, whether by getting them to spend more time on the platform to see more ads, pushing "semi-useful" new features, or creating a new monopoly or oligopoly in which only tech giants with huge war chests can participate, all while providing customers little actual value or utility.
Generative AI is exciting (at least to a certain kind of person) because the tech giants see it as the next big money-maker, a way to add a charge to everything from consumer tech to enterprise services. Most generative computing flows through OpenAI or Anthropic and back to Microsoft, Amazon, or Google, generating the cloud computing revenue that sustains their growth story. The biggest innovation here is not what generative AI can do, but the creation of an ecosystem hopelessly dependent on a handful of hyperscale companies.
Generative AI may not be terribly practical, but it is incredibly easy to integrate into a wide variety of products, allowing companies to charge for these “new features.” Whether it’s a consumer app or a service for an enterprise software company, these products can make millions or even billions of dollars in revenue by upselling to as many customers as possible.
Sam Altman was smart enough to realize that the tech industry needed a "new thing": a new technology everyone could take a piece of and sell. He may not deeply understand the technology, but he does understand an economic system hungry for growth, and he productized Transformer-based generative AI as a "magic tool" that could be easily plugged into most products to deliver some unique feature.
However, the rush to integrate generative AI everywhere reveals a huge disconnect between these companies and actual consumer needs or effectively operating businesses. For the past 20 years, simply “doing something new” seemed to work — launching new features and having sales teams hard sell them was enough to sustain growth. This has trapped tech leaders in a toxic and unprofitable business model.
The executives running these companies—almost all MBAs and management consultants who have never built a product or tech company from scratch—either don’t understand or don’t care that there is no path to profitability for generative AI, and probably assume it will naturally become profitable like Amazon Web Services (AWS) did (which took nine years to become profitable), even though the two are very different things. Things “naturally worked out” in the past, so why not now?
Of course, in addition to the fact that rising interest rates have dramatically changed the venture capital market, reducing VCs’ reserves and shrinking fund sizes, the attitude toward tech has never been more negative. Add to that a host of other reasons why 2024 is so different from 2014 that are too numerous to discuss in this 8,000-word article.
What’s really worrying is that many of these companies don’t seem to have any new products other than AI. What do they have? What else can they do to keep growing? What other options do they have?
No, they have nothing. And that’s the problem, because if AI fails, the impact will inevitably be felt by other companies across the tech industry.
Every major tech player — both in the consumer and enterprise space — sells some kind of AI product that integrates large language models or their own models, often running in the cloud on Big Tech’s systems. To some extent, these companies are dependent on Big Tech’s willingness to subsidize the entire industry.
I speculate that a subprime-style AI crisis is brewing, in which nearly the entire tech industry is involved in a technology that is sold at extremely low prices, is highly concentrated, and is subsidized by Big Tech. At some point, the staggering and pernicious rate at which generative AI burns money will catch up to them, leading to price hikes or companies releasing new products and features with fees so steep — like Salesforce’s $2 per conversation for its “Agentforce” product — that even enterprise customers with deep budgets can’t justify the expense.
What happens when the entire tech industry depends on a piece of software that loses money and has little real value of its own? What happens when the pressure becomes too great, these AI products become untenable, and these companies have nothing else to sell?
I really don’t know, but the tech industry is headed for a terrible reckoning, where the lack of creativity is enabled by an economic environment that rewards growth over innovation, monopoly over loyalty, and management over actual creation.