Author: Frank Fu @IOSG
MCP is rapidly claiming a central position in the Web3 AI Agent ecosystem. Through a plug-in-like architecture, it introduces MCP Servers that give AI Agents new tools and capabilities.
Like other emerging narratives in Web3 AI (such as vibe coding), MCP, short for Model Context Protocol, originated in Web2 AI and is now being reimagined in a Web3 context.

What is MCP?
MCP is an open protocol proposed by Anthropic to standardize how applications pass contextual information to Large Language Models (LLMs). This enables more seamless collaboration between tools, data, and AI Agents.
Why is it important?
The core limitations of current large language models include:
Inability to browse the Internet in real time
Inability to directly access local or private files
Inability to interact with external software autonomously
MCP fills these capability gaps by acting as a universal interface layer that lets AI Agents use a wide variety of tools. You can think of MCP as the USB-C of AI applications: a unified interface standard that makes it easy for AI to connect to diverse data sources and functional modules.
Imagine that each LLM is a different mobile phone: Claude uses USB-A, ChatGPT uses USB-C, and Gemini uses a Lightning port. As a hardware manufacturer, you would have to develop a separate set of accessories for each interface, and the maintenance cost would be extremely high.
This is exactly the problem AI tool developers face: customizing plug-ins for each LLM platform greatly increases complexity and limits scalability. MCP solves this by establishing a unified standard, as if every LLM and tool vendor had agreed to use USB-C.
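The contrast can be sketched in a few lines of Python. This is an illustrative sketch, not the official MCP SDK: the per-platform adapter functions and the `get_weather` tool are hypothetical, but they show how one uniform tool schema and entry point replaces N bespoke integrations.

```python
# Without a shared standard, a tool vendor writes one adapter per LLM
# platform (signatures and names here are hypothetical):
def weather_for_claude(params: dict) -> dict: ...
def weather_for_chatgpt(args: str) -> str: ...

# With an MCP-style standard, the tool is described once in a uniform
# schema, and any compliant client can discover and call it.
WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Return current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def call_tool(name: str, arguments: dict) -> dict:
    # A single, uniform entry point replaces N bespoke adapters.
    if name == "get_weather":
        return {"city": arguments["city"], "temp_c": 21}  # stub data
    raise ValueError(f"unknown tool: {name}")
```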

This standardized protocol is beneficial to both parties:
For AI Agents (clients): secure access to external tools and real-time data sources
For tool developers (servers): integrate once, available across all platforms

The end result is a more open, interoperable, and low-friction AI ecosystem.

How is MCP different from traditional APIs?
APIs are designed to serve human developers, not AI. Each API has its own structure and documentation, and developers must manually specify parameters and read the interface docs. An AI Agent cannot read documentation on its own, so it must be hard-coded to adapt to each API style (REST, GraphQL, RPC, and so on).
MCP abstracts away these unstructured parts by standardizing the function-call format behind the API, giving the Agent a single, uniform calling method. You can think of MCP as an API adaptation layer built for autonomous Agents.

When Anthropic first launched MCP in November 2024, developers had to deploy MCP servers on local devices. In May of this year, Cloudflare announced at its Developer Week that developers can deploy remote MCP servers directly on the Cloudflare Workers platform with minimal device configuration. This greatly simplifies MCP server deployment and management, including authentication and data transmission, and can fairly be called "one-click deployment."
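The standardization MCP performs can be sketched as a single message shape. MCP runs over JSON-RPC 2.0, and tool invocations use the spec's `tools/call` method; the toy server below is a simplified illustration (the `get_token_price` tool, its stub data, and the omission of transport and error handling are all assumptions for brevity).

```python
import json

# A simplified MCP-style exchange: every tool invocation has the same
# JSON-RPC 2.0 shape regardless of which model or vendor is on either end.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_token_price",        # hypothetical tool name
        "arguments": {"symbol": "ETH"},
    },
}

def handle_request(raw: str) -> str:
    """Toy server: dispatch a tools/call request to a local function."""
    req = json.loads(raw)
    assert req["method"] == "tools/call"
    args = req["params"]["arguments"]
    result = {"symbol": args["symbol"], "price_usd": 3000.0}  # stub data
    response = {
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": json.dumps(result)}]},
    }
    return json.dumps(response)

print(handle_request(json.dumps(request)))
```

Because every compliant server answers this same shape, the Agent needs one client implementation instead of one adapter per API.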
Although MCP itself may not look "attractive" on the surface, it is by no means insignificant. As pure infrastructure, MCP cannot be used directly by consumers; its value only becomes visible when upper-layer AI Agents call MCP tools and produce actual results.

Web3 AI x MCP Ecological Landscape
AI in Web3 faces the same problems of "missing contextual data" and "data silos": AI cannot access real-time on-chain data or natively execute smart-contract logic. In the past, projects such as ai16Z, ARC, Swarms, and Myshell tried to build multi-agent collaboration networks, but ultimately fell into the trap of "reinventing the wheel" due to their reliance on centralized APIs and custom integrations.
Each new data source required rewriting the adaptation layer, causing development costs to surge. To break this bottleneck, the next generation of AI Agents needs a more modular, Lego-style architecture that makes third-party plug-ins and tools easy to integrate. As a result, a new generation of AI Agent infrastructure and applications based on the MCP and A2A protocols is emerging, designed specifically for Web3 scenarios, allowing Agents to access multi-chain data and interact natively with DeFi protocols.

▲ Source: IOSG Ventures
(This picture does not fully cover all MCP-related Web3 projects)
Project cases: DeMCP and DeepCore
DeMCP is a decentralized MCP Server marketplace (https://github.com/modelcontextprotocol/servers), focusing on native encryption tools and ensuring the sovereignty of MCP tools.
Its advantages include:
Use TEEs (Trusted Execution Environments) to verify that MCP tools have not been tampered with
Use token incentives to encourage developers to contribute MCP servers
Provide an MCP aggregator and micropayments to lower the barrier to entry

Another project, DeepCore (deepcore.top), also provides an MCP Server registration system focused on the crypto space, and extends further to another open standard proposed by Google: the A2A (Agent-to-Agent) protocol (https://x.com/i/trending/1910001585008058782).

A2A is an open protocol announced by Google on April 9, 2025, designed to enable secure communication, collaboration, and task coordination between different AI agents. A2A supports enterprise-grade AI collaboration, for example letting AI agents from different companies work together on a task (such as Salesforce's CRM agent collaborating with Atlassian's Jira agent).
If MCP focuses on the interaction between an Agent (client) and a tool (server), A2A is more like a collaborative middle layer between Agents: it lets multiple Agents complete tasks together without sharing internal state, collaborating instead through context, instructions, status updates, and data transfer.
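That division of labor can be sketched as a shared task record passed between two agents. This is an illustrative simplification, not the official A2A schema: the field names, status values, and the example task are assumptions chosen to show that agents coordinate through the task object rather than by exposing internal state.

```python
import uuid

def create_task(description: str) -> dict:
    # The initiating agent publishes only a task record, nothing about
    # its own internal reasoning or memory.
    return {
        "id": str(uuid.uuid4()),
        "description": description,
        "status": "submitted",
        "artifacts": [],
    }

def execute_task(task: dict) -> dict:
    # The executing agent sees only the shared record; it reports
    # progress via status updates and attaches results as artifacts.
    task["status"] = "working"
    task["artifacts"].append(
        {"type": "text", "data": f"done: {task['description']}"}
    )
    task["status"] = "completed"
    return task

task = create_task("summarize open Jira tickets")  # hypothetical task
task = execute_task(task)
print(task["status"])
```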
A2A is considered to be the "universal language" for AI agent collaboration, promoting cross-platform and cross-cloud AI interoperability, and may change the way enterprise AI works. Therefore, A2A can be regarded as the Slack of the agent world - one agent initiates a task and another agent executes it.
In short: MCP connects Agents to tools, while A2A connects Agents to each other.
Why does MCP server need blockchain?
Integrating blockchain technology into MCP Servers brings several benefits:
1. Obtain long-tail data through crypto-native incentives, encouraging the community to contribute scarce data sets
2. Defend against "tool poisoning" attacks, in which malicious tools disguise themselves as legitimate plug-ins to mislead Agents
For this, blockchain provides cryptographic verification mechanisms such as TEE remote attestation, ZK-SNARKs, and FHE.
For details, please refer to this article.
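The anti-poisoning idea can be sketched as a hash check against an on-chain registry. Everything here is a hypothetical illustration: the registry is a plain dict standing in for a smart-contract (or TEE attestation) lookup, and the tool name and code bytes are made up.

```python
import hashlib

# Hypothetical on-chain registry: tool name -> expected SHA-256 of its code.
# In a real system this lookup would hit a smart contract or an
# attestation report rather than a local dict.
ONCHAIN_REGISTRY = {
    "get_token_price": hashlib.sha256(b"legitimate tool code").hexdigest(),
}

def verify_tool(name: str, code: bytes) -> bool:
    """Refuse to load any MCP tool whose code hash is unregistered or changed."""
    expected = ONCHAIN_REGISTRY.get(name)
    actual = hashlib.sha256(code).hexdigest()
    return expected is not None and expected == actual

print(verify_tool("get_token_price", b"legitimate tool code"))  # untampered
print(verify_tool("get_token_price", b"poisoned tool code"))    # tampered
```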

Future Trends and Industry Impact
At present, more and more people in the crypto industry are recognizing MCP's potential for connecting AI and blockchain. For example, Binance founder CZ recently publicly called on AI developers to build high-quality MCP Servers to give AI Agents on BNB Chain a richer tool set. A list of BNB Chain MCP Server projects has been published for users exploring the ecosystem.
As the infrastructure matures, the competitive advantage of "developer-first" companies will shift from API design to who can provide the richest, most diverse, and most easily composable tool set. In the future, every application may become an MCP client, and every API may become an MCP server.
This may give rise to a new pricing mechanism: Agents could dynamically select tools based on execution speed, cost efficiency, relevance, and other factors, forming a more efficient Agent service economy mediated by crypto and blockchain.
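Such dynamic selection might look like a simple scoring loop over candidate MCP servers. The candidate servers, metrics, and weights below are illustrative assumptions, not real services or a prescribed formula.

```python
# Hypothetical candidate MCP servers advertising latency, per-call cost,
# and relevance to the current task.
candidates = [
    {"name": "fast-oracle",  "latency_ms": 50,  "cost": 0.002,  "relevance": 0.80},
    {"name": "cheap-oracle", "latency_ms": 400, "cost": 0.0001, "relevance": 0.80},
    {"name": "deep-oracle",  "latency_ms": 900, "cost": 0.01,   "relevance": 0.95},
]

def score(tool: dict, w_speed=0.4, w_cost=0.3, w_rel=0.3) -> float:
    # Normalize each metric so that higher is always better, then take
    # a weighted sum. Weights are arbitrary example values.
    speed = 1.0 / (1.0 + tool["latency_ms"] / 100)
    cheap = 1.0 / (1.0 + tool["cost"] * 1000)
    return w_speed * speed + w_cost * cheap + w_rel * tool["relevance"]

# The Agent routes the call to the highest-scoring server.
best = max(candidates, key=score)
print(best["name"])
```

In a crypto-enabled version, the cost term could be settled per call via micropayments, letting servers compete on price in real time.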
Of course, MCP itself is not aimed at end users; it is an underlying protocol layer. Its true value and potential will only become visible once AI Agents integrate it and turn it into practical applications.