Author: Haotian
Some friends have been saying that the continuous decline of web3 AI Agent tokens such as #ai16z and $arc was caused by the recently popular MCP protocol. At first glance I was puzzled: what does one have to do with the other? But after thinking it through, I found there is a certain logic to it: the valuation and pricing logic of existing web3 AI Agents has changed, and their narrative direction and product roadmap urgently need adjustment. Below are my personal views:
1) MCP (Model Context Protocol) is an open-source standardized protocol designed to let various AI LLMs/Agents connect seamlessly to all kinds of data sources and tools. It is like a plug-and-play USB "universal" interface, replacing the end-to-end "bespoke" integrations of the past.
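To make the "universal interface" idea concrete, here is a minimal sketch of an MCP server in TypeScript. It assumes the official @modelcontextprotocol/sdk package (API details may differ across SDK versions), and the get_token_price tool with its placeholder quote is purely hypothetical. Any MCP-compatible client can discover this tool through the protocol itself and call it, with no bespoke per-client API integration:

```typescript
// Minimal MCP server sketch, assuming the official TypeScript SDK
// (@modelcontextprotocol/sdk). The tool below is hypothetical: a token
// price lookup exposed through the standard protocol.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "price-feed", version: "0.1.0" });

// Register one tool. Clients learn its name and input schema via the
// protocol's discovery mechanism, so no per-client integration is needed.
server.tool(
  "get_token_price",
  { symbol: z.string().describe("Token ticker, e.g. ETH") },
  async ({ symbol }) => {
    // Placeholder quote; a real server would query an exchange or oracle.
    const price = symbol === "ETH" ? 3000 : 1;
    return { content: [{ type: "text", text: `${symbol}: $${price}` }] };
  }
);

// Serve over stdio; MCP also defines other transports.
await server.connect(new StdioServerTransport());
```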
Put simply, obvious data silos used to separate AI applications: for Agents/LLMs to interoperate, each pair had to build its own API integration. The process was complicated, two-way interaction was missing, and model access and permissions were usually quite limited.
MCP provides a unified framework that lets AI applications escape this silo state and gain "dynamic" access to external data and tools, which significantly reduces development complexity and improves integration efficiency, especially for automated task execution, real-time data queries, and cross-platform collaboration. At this point many people immediately think: if Manus, with its multi-agent collaboration innovations, were combined with MCP, an open-source framework that promotes multi-agent collaboration, wouldn't that be unbeatable?
Exactly. Manus + MCP is the key to the hit web3 AI Agents are taking this time.
2) And yet, remarkably, both Manus and MCP are frameworks and protocol standards aimed at web2 LLMs/Agents. They solve data interaction and collaboration between centralized servers, and their permission and access control depend on each server node "voluntarily" opening up. In other words, they are just open-source tooling.
In theory, this runs completely contrary to the core ideas pursued by web3 AI Agents: "distributed servers, distributed collaboration, distributed incentives". How can a centralized Italian cannon blow up a decentralized bunker?
The reason is that the first phase of web3 AI Agents was too "web2-ized". Many teams came from web2 backgrounds and lacked a full understanding of web3-native needs. Take the ElizaOS framework: it is essentially a packaging framework that helps developers deploy AI Agent applications quickly. It integrates platforms such as Twitter and Discord, wires up API interfaces such as OpenAI, Claude, and DeepSeek, and wraps some general Memory and Character abstractions so developers can build and ship AI Agent applications fast. But honestly, what is the difference between this kind of service framework and web2 open-source tools? What are its differentiated advantages?
Well, is the advantage a set of Tokenomics incentives? Using a framework that web2 could replace entirely to incentivize a group of AI Agents that exist mainly to issue new tokens? Frightening. Following this logic, you can roughly see why Manus + MCP can hit web3 AI Agents so hard: a batch of web3 AI Agent frameworks and services only solved the same quick-development needs as web2 AI Agents, yet could not keep up with web2's speed of innovation in technical services, standards, and differentiated advantages, so the market/capital has revalued and repriced that previous batch of web3 AI Agents.
3) At this point, you have probably spotted the crux of the problem. But how do we break the deadlock? There is only one way: focus on web3-native solutions, because the operation and incentive architecture of distributed systems is web3's absolute differentiated advantage.
Take distributed cloud compute, data, and algorithm service platforms as an example. On the surface, compute and data aggregated from idle resources cannot meet the needs of engineering innovation in the short term; and while large numbers of AI LLMs are in an arms race, grabbing centralized compute for performance breakthroughs, a service model whose selling point is "idle resources, low cost" naturally earns disdain from web2 developers and VCs.
But once web2 AI Agents pass the stage of competing on raw performance, they will inevitably pursue vertical application scenarios and fine-tuned models for niche segments. Only then will the advantages of web3 AI resource services truly show. In fact, once the web2 AI players that climbed to giant status by monopolizing resources reach a certain stage, it will be hard for them to double back and take segmented scenarios one by one with a "countryside surrounds the city" strategy. That will be the moment for surplus web2 AI developers and web3 AI resources to join forces.
So the opportunity space for web3 AI Agents is now quite clear: before web3 AI resource platforms see overflow demand from web2 developers, explore and validate feasible solutions and paths that cannot work without a web3 distributed architecture. Beyond the web2-style narrative of quick deployment + multi-agent collaboration frameworks + Tokenomics coin issuance, there are many web3-native directions worth exploring:
For example, an Agent equipped with a distributed consensus collaboration framework must account for the combination of off-chain LLM computation and on-chain state storage, which requires a number of adaptive components (illustrative sketches follow the list):
1. A decentralized DID identity system, giving the Agent a verifiable on-chain identity, much like the unique address a virtual machine generates for a smart contract, mainly so that its subsequent state can be continuously tracked and recorded;
2. A decentralized Oracle system, responsible for the trusted acquisition and verification of off-chain data. Unlike previous oracles, an oracle adapted to AI Agents may need a combined multi-agent architecture, including a data collection layer, a decision consensus layer, and an execution feedback layer, so that the Agent's on-chain data and off-chain computation and decisions stay bridged in real time;
3. A decentralized storage (DA) system. Because an Agent's knowledge-base state is uncertain at runtime and its reasoning process is ephemeral, the key state stores and reasoning paths behind the LLM need to be recorded in a distributed storage system, with a cost-controlled data-proof mechanism that ensures data availability for public-chain verification;
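To make component 1 concrete, here is a minimal TypeScript sketch of an agent identity record and registry surface. All names here (AgentDID, AgentRegistry, the field layout) are my own assumptions for illustration, not an existing standard:

```typescript
// Hypothetical on-chain identity record for an AI Agent (component 1).
// None of these names come from an existing standard; this is a sketch.
interface AgentDID {
  did: string;              // e.g. "did:agent:0xabc..." derived from a keypair
  controller: string;       // on-chain address that owns/updates this identity
  publicKey: string;        // hex-encoded verification key for signed actions
  serviceEndpoint?: string; // where the off-chain agent can be reached
}

// Minimal registry surface: register once, then append signed state roots
// so the Agent's lifecycle can be continuously tracked and audited on-chain.
interface AgentRegistry {
  register(record: AgentDID): Promise<string>; // returns tx hash
  recordStateRoot(did: string, stateRoot: string, signature: string): Promise<string>;
  resolve(did: string): Promise<AgentDID | null>;
}
```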
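For component 2, a sketch of the three-layer oracle pipeline described above: independent collector agents report an off-chain fact, a consensus rule aggregates the reports, and a feedback layer writes the result on-chain. The layer interfaces and the median-based consensus rule are illustrative assumptions:

```typescript
// Hypothetical three-layer oracle pipeline for AI Agents (component 2):
// collection -> decision consensus -> execution feedback.
interface CollectedDatum {
  source: string;    // which collector agent produced this report
  value: number;
  signature: string; // collector signs its report for later verification
}

// Data collection layer: independent agents fetch the same off-chain fact.
type Collector = () => Promise<CollectedDatum>;

// Decision consensus layer: aggregate reports into one agreed value.
function decide(reports: CollectedDatum[]): number {
  const sorted = reports.map(r => r.value).sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)]; // median resists outliers
}

// Execution feedback layer: push the agreed value (plus evidence) on-chain
// so the Agent's off-chain decision is auditable in real time.
interface FeedbackLayer {
  submit(value: number, evidence: CollectedDatum[]): Promise<string>; // tx hash
}

async function runRound(collectors: Collector[], feedback: FeedbackLayer) {
  const reports = await Promise.all(collectors.map(c => c()));
  return feedback.submit(decide(reports), reports);
}
```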
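For component 3, a commit-then-store sketch: the bulky, ephemeral reasoning trace goes to a DA/storage layer, and only a 32-byte hash needs to live on-chain as the cost-controlled proof. The DAClient interface and the SHA-256 choice are assumptions:

```typescript
import { createHash } from "node:crypto";

// One step of an Agent's reasoning trace (component 3). The trace itself is
// too bulky and ephemeral for a chain, so it goes to a DA/storage layer.
interface ReasoningStep {
  prompt: string;
  output: string;
  timestamp: number;
}

// Hypothetical DA-layer client: store a blob, get back a retrieval pointer.
interface DAClient {
  put(blob: Uint8Array): Promise<string>; // e.g. a content identifier
}

// Commit-then-store: the chain only needs the hash, which serves as the
// cost-controlled "data proof"; anyone can fetch the blob from the DA layer
// and re-hash it to verify availability and integrity.
async function persistTrace(steps: ReasoningStep[], da: DAClient) {
  const blob = new TextEncoder().encode(JSON.stringify(steps));
  const commitment = createHash("sha256").update(blob).digest("hex");
  const pointer = await da.put(blob);
  return { commitment, pointer }; // record both on-chain, e.g. via the registry
}
```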
This is the direction web3 AI Agents should strive to build toward, and it matches the fundamentals of the innovation ecology under the AI + Crypto macro narrative. Without such innovation and differentiated competitive moats, every tremor in the web2 AI track could turn web3 AI upside down.