"USB-C Moment" in the History of AI Evolution, In November 2024, the MCP protocol released by Anthropic is causing an earthquake in Silicon Valley. This open standard, known as the "USB-C in the AI world", not only reconstructs the connection between large models and the physical world, but also hides the code to crack the AI monopoly dilemma and reconstruct the production relations of digital civilization. While we are still arguing about the parameter scale of GPT-5, MCP has quietly paved the way for decentralization in the AGI era...
Bruce: I have been studying the Model Context Protocol (MCP) recently. It is the second thing in the AI field to truly excite me since ChatGPT, because it holds promise for solving three questions I have been thinking about for years:
How can ordinary people, who are neither scientists nor geniuses, participate in the AI industry and earn income?
What is the win-win combination of AI and Ethereum?
How can we achieve d/acc for AI, avoiding both monopoly and censorship by centralized big companies and the risk of AGI destroying humanity?
01 What is MCP?
MCP is an open standard framework that simplifies integrating LLMs with external data sources and tools. If we compare the LLM to the Windows operating system, and applications such as Cursor to the keyboard and other hardware, then MCP is the USB interface that lets external data and tools be plugged in flexibly, so that users can read and use them.
MCP provides three capabilities for extending an LLM:
Resources (knowledge expansion)
Tools (execute functions, call external systems)
Prompts (pre-written prompt templates)
An MCP Server can be developed and hosted by anyone, offered as a service, and taken offline or stopped at any time.
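To make this concrete, here is a minimal MCP Server sketch using the official TypeScript SDK (@modelcontextprotocol/sdk), exposing one of each capability. All names and contents are illustrative, and the exact API surface can vary between SDK versions:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "demo-server", version: "0.1.0" });

// Resource: knowledge expansion (a hypothetical static note).
server.resource("field-notes", "notes://demo", async (uri) => ({
  contents: [{ uri: uri.href, text: "Example knowledge served to the LLM." }],
}));

// Tool: an executable function the LLM can call.
server.tool("lookup", { query: z.string() }, async ({ query }) => ({
  content: [{ type: "text", text: `Demo result for: ${query}` }],
}));

// Prompt: a pre-written prompt template.
server.prompt("summarize", { topic: z.string() }, ({ topic }) => ({
  messages: [{
    role: "user",
    content: { type: "text", text: `Summarize what we know about ${topic}.` },
  }],
}));

// Serve over stdio; the host application (e.g. a desktop LLM client) connects here.
await server.connect(new StdioServerTransport());
```

Because the server is just a small process the creator runs themselves, "taking it offline at any time" is literally killing the process; nothing about it lives inside the model.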
02 Why do we need MCP?
Today's LLMs cram as much data as possible into training, performing massive computation to bake knowledge into billions of parameters so the model can reproduce it in conversation. This approach has several major problems:
Huge volumes of data and computation demand enormous time and hardware, and the knowledge baked in is usually already outdated.
Models with huge parameter counts are hard to deploy and run on local devices, even though in most scenarios users do not need all of that knowledge.
Some models crawl external information at inference time for freshness, but the limitations of crawlers and the uneven quality of external data can make the output even more misleading.
Because AI has not shared its benefits with creators, many websites and creators have begun deploying anti-AI measures and generating large amounts of junk content, so LLM quality will gradually degrade.
It is hard for an LLM to extend to every external function and operation. For example, to call the GitHub API precisely it can only generate code from documentation that may be outdated, and it cannot guarantee the operations execute correctly.
03 Architectural evolution: from fat LLM to thin LLM + MCP
We can regard today's ultra-large-scale models as fat LLMs, whose architecture can be represented by the following simple diagram:
[Diagram: fat LLM architecture]
After the user enters information, the input is decomposed and reasoned over by the Perception & Reasoning layer, which then invokes the model's massive parameters to generate a result.
Based on MCP, the LLM can refocus on language parsing itself, stripping away knowledge and capabilities to become a thin LLM:
[Diagram: thin LLM + MCP Coordinator + MCP Servers]
Under the thin LLM architecture, the Perception & Reasoning layer focuses on parsing every aspect of the human physical environment into tokens, including but not limited to voice, tone, smell, images, text, gravity, and temperature, and then orchestrates and coordinates up to hundreds of MCP Servers through an MCP Coordinator to complete the task. A thin LLM becomes dramatically cheaper and faster to train, and the hardware requirements for deployment drop very low.
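The MCP Coordinator is not part of the current MCP specification; the following sketch is purely a hypothetical illustration of the orchestration idea, with all interfaces and names invented for the example:

```typescript
// Hypothetical MCP Coordinator: routes a parsed intent from the thin LLM
// to whichever registered MCP Servers claim the matching capability.
interface ParsedIntent {
  capability: string; // e.g. "weather.lookup", produced by the thin LLM
  args: Record<string, unknown>;
}

interface ServerHandle {
  name: string;
  capabilities: string[];
  call(capability: string, args: Record<string, unknown>): Promise<string>;
}

class McpCoordinator {
  private servers: ServerHandle[] = [];

  register(server: ServerHandle): void {
    this.servers.push(server);
  }

  // Fan out to every server advertising the capability. In a real system,
  // this is where scoring, weighting, and incentive accounting would plug in.
  async dispatch(intent: ParsedIntent): Promise<string[]> {
    const matches = this.servers.filter((s) =>
      s.capabilities.includes(intent.capability),
    );
    if (matches.length === 0) {
      throw new Error(`No MCP Server offers ${intent.capability}`);
    }
    return Promise.all(matches.map((s) => s.call(intent.capability, intent.args)));
  }
}
```

The design choice worth noting is that the coordinator, not the model weights, owns the map of capabilities, which is exactly what keeps the LLM itself thin.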
04 How does MCP solve the three major problems?
How can ordinary people participate in the AI industry?
Anyone with unique talents can create their own MCP Server to provide services to LLMs. For example, a bird lover can expose his years of bird notes through MCP. When someone uses an LLM to search for bird-related information, the bird-notes MCP service is called, and the creator receives a share of the income.
This is a more precise and automated creator-economy cycle: service content is more standardized, and the number of calls and output tokens can be counted exactly. LLM providers can even call multiple bird-notes MCP Servers at once and let users choose and score them to determine which has better quality and deserves a higher matching weight.
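As a hedged sketch of how such metering could work, the wrapper below counts calls and estimates output tokens per tool. The UsageRecord structure and the rough 4-characters-per-token estimate are assumptions for illustration, not part of the MCP spec:

```typescript
// Hypothetical per-call metering wrapper for an MCP tool handler.
// Counting calls and output tokens is what makes revenue sharing auditable.
type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

interface UsageRecord {
  tool: string;
  calls: number;
  outputTokens: number;
}

const usage = new Map<string, UsageRecord>();

// Crude token estimate (~4 characters per token); a real system would use
// the tokenizer of the calling LLM.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

function metered(tool: string, handler: ToolHandler): ToolHandler {
  return async (args) => {
    const result = await handler(args);
    const record = usage.get(tool) ?? { tool, calls: 0, outputTokens: 0 };
    record.calls += 1;
    record.outputTokens += estimateTokens(result);
    usage.set(tool, record); // later settled through the incentive network
    return result;
  };
}
```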
Win-win combination of AI and Ethereum
a. We can build an OpenMCP.Network creator-incentive network on Ethereum. MCP Servers need to be hosted and provide stable service; users pay the LLM provider, and the provider distributes the actual incentives through the network to the MCP Servers it called, maintaining the sustainability and stability of the whole network and encouraging MCP creators to keep producing high-quality content. Such a network needs smart contracts to make the incentives automated, transparent, trustworthy, and censorship-resistant. Signing, permission verification, and privacy protection at runtime can all be implemented with technologies such as Ethereum wallets and ZK proofs.
b. Develop MCP Servers for Ethereum on-chain operations, such as an AA (account abstraction) wallet call service, letting users make wallet payments from natural language inside the LLM without exposing private keys or permissions to the LLM.
c. Build developer tools that further simplify Ethereum smart contract development and code generation.
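To make the incentive flow in point a concrete, here is a hedged sketch of how an LLM provider might settle a billing period through a hypothetical OpenMCP.Network settlement contract, using ethers.js. The contract address, ABI, and settle function are assumptions, not a real deployment:

```typescript
import { ethers } from "ethers";

// Hypothetical settlement contract ABI: one payable function that pays out to
// the MCP Servers called during a billing period, weighted by metered usage.
const SETTLEMENT_ABI = [
  "function settle(address[] servers, uint256[] tokenCounts) payable",
];

async function settlePeriod(
  rpcUrl: string,
  privateKey: string,
  contractAddress: string,
  servers: string[],     // creator payout addresses
  tokenCounts: bigint[], // metered output tokens per server
  totalWei: bigint,      // user payments collected by the LLM provider
): Promise<void> {
  const provider = new ethers.JsonRpcProvider(rpcUrl);
  const signer = new ethers.Wallet(privateKey, provider);
  const contract = new ethers.Contract(contractAddress, SETTLEMENT_ABI, signer);

  // The contract splits totalWei among servers in proportion to tokenCounts;
  // doing the split on-chain keeps it transparent and censorship-resistant.
  const tx = await contract.settle(servers, tokenCounts, { value: totalWei });
  await tx.wait();
}
```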
Decentralize AI
a. MCP Servers decentralize AI's knowledge and capabilities. Anyone can create and host an MCP Server, register it on a platform such as OpenMCP.Network, and earn incentives based on calls. No single company can control all MCP Servers. If an LLM provider hands out unfair incentives, creators will collectively block that company, and users, no longer getting high-quality results, will switch to other LLM providers, which keeps competition fair.
b. Creators can enforce fine-grained permission control on their own MCP Servers to protect privacy and copyright (see the sketch after this list), and thin-LLM providers should offer reasonable incentives to attract creators of high-quality MCP Servers.
c. The capability gap between LLMs will gradually close, because human language has an upper limit and evolves slowly. LLM providers will have to shift their attention and budgets to high-quality MCP Servers instead of stacking ever more GPUs to "refine elixirs", i.e. brute-force model training.
d. AGI's capabilities will be decentralized and defused: the LLM handles only language processing and user interaction, while concrete capabilities live in the distributed MCP Servers. AGI will not be able to threaten humanity, because once its MCP Servers are shut down, only basic language conversation remains.
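As a sketch of the fine-grained permission control in point b, the gate below keys access off the caller's wallet address, matching the Ethereum-based identity described earlier. MCP does not yet standardize caller identity, so the allowlist mechanism here is purely illustrative:

```typescript
// Hypothetical per-caller permission gate for an MCP Server.
// The creator decides which callers see full content vs. a public preview.
type AccessTier = "full" | "preview";

const allowlist = new Map<string, AccessTier>([
  ["0x1234...abcd", "full"], // e.g. a paying LLM provider (placeholder address)
]);

function readNotes(callerAddress: string, fullText: string): string {
  const tier = allowlist.get(callerAddress) ?? "preview";
  if (tier === "full") return fullText;
  // Copyright protection: unknown callers only get a short excerpt.
  return fullText.slice(0, 200) + " ... [full notes require a licensed caller]";
}
```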
05 Summary
The architectural evolution toward LLM + MCP Servers is, in essence, the decentralization of AI capability, and it reduces the risk of AGI destroying humanity.
The way LLMs use MCP lets calls to MCP Servers, and their inputs and outputs, be counted at the token level and settled automatically, laying the foundation for an AI creator economy.
A good economic system drives creators to actively contribute high-quality MCP Servers, which in turn drives progress for all of humanity: a positive flywheel. Creators stop resisting AI, AI creates more jobs and income, and the profits of monopolistic commercial companies like OpenAI get distributed more reasonably.
This economic system, given its characteristics and creators' needs, is very well suited to an implementation on Ethereum.
06 Future Outlook: How the Script Evolves Next
MCP and MCP-like protocols will keep emerging, and several large companies will begin competing to define the standard.
MCP-based LLMs will emerge: small models focused on parsing and processing human language, with an attached MCP Coordinator for accessing the MCP network. These LLMs will support automatic discovery and scheduling of MCP Servers, with no complex manual configuration.
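A hedged sketch of what automatic discovery might look like from the host side, combining the official TypeScript SDK's client with a hypothetical registry endpoint (the URL, RegistryEntry shape, and query parameter are all assumptions):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Hypothetical registry entry: how an MCP Network might describe a server.
interface RegistryEntry {
  name: string;
  command: string; // how to launch/connect to the server
  args: string[];
}

// Ask a (hypothetical) registry which servers match the user's intent,
// then connect to the first match and enumerate its tools.
async function discoverAndConnect(intent: string): Promise<void> {
  const res = await fetch(
    `https://openmcp.network/registry?intent=${encodeURIComponent(intent)}`,
  );
  const entries: RegistryEntry[] = await res.json();
  if (entries.length === 0) throw new Error("No matching MCP Servers found");

  const client = new Client({ name: "thin-llm-host", version: "0.1.0" });
  await client.connect(
    new StdioClientTransport({ command: entries[0].command, args: entries[0].args }),
  );

  const { tools } = await client.listTools();
  console.log(`Discovered ${tools.length} tools from ${entries[0].name}`);
}
```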
MCP Network service providers will emerge, each with its own economic incentive system, and MCP creators will earn income by registering and hosting their own servers.
If an MCP Network's economic incentive system is built on Ethereum smart contracts, then Ethereum's transaction count will conservatively grow by roughly 150 times (assuming a very conservative 100 million MCP Server calls per day, against today's one block every 12 s containing about 100 transactions).
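Working through that estimate with the article's own figures:

$$
\frac{86{,}400\ \text{s/day}}{12\ \text{s/block}} \times 100\ \frac{\text{tx}}{\text{block}} = 720{,}000\ \text{tx/day},
\qquad
\frac{10^{8}\ \text{calls/day}}{7.2\times 10^{5}\ \text{tx/day}} \approx 139 \approx 150\times
$$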