Author Charlie Liu
Recently, you've probably seen two terms everywhere: OpenClaw and Moltbook. Many people's first reaction is: another wave of AI hype, another round of excitement.
But I prefer to see it as a rare, even somewhat brutal, public experiment: for the first time, we've witnessed the large-scale deployment of "AI agents that can do things" on a real network, attracting a large audience and plenty of speculation.

You'll see two extreme emotions at once. On one hand, excitement: "AI can finally do work for me," not just write code, build spreadsheets, or produce design sketches. On the other hand, fear: screenshots of AI agents forming associations, founding religions, issuing cryptocurrencies, chanting slogans, even declaring plots to "exterminate humanity." Then the collapse came quickly: some said the accounts were manipulated and the trending posts scripted; more frightening still, security vulnerabilities were exposed, leaking personal information and credentials.

So today I don't want to talk about whether AI has "awakened." I want to talk about a more fundamental, more practical issue: once the right to act is taken over by AI agents, we must re-answer some of the oldest questions in finance. Who holds the key? Who can authorize? Who bears responsibility? Who can contain the damage when things go wrong? If these questions aren't institutionalized into the action logic of AI agents, the future online world will be very troublesome, and that trouble will manifest as financial risk.

What exactly are Clawdbot → Moltbot → OpenClaw?

Before diving in, let's clarify the project's name and context; otherwise this can easily sound like a pile of jargon. The project you're hearing about now is called OpenClaw, an open-source personal AI agent project. It was originally called Clawdbot, but because that name was too similar to Anthropic's Claude, it was asked to rename; it briefly became Moltbot, and was recently renamed again to OpenClaw.
That's why different media outlets and different posts use different names for the same thing.

Its core selling point isn't chatting. Its core is: with your authorization, it integrates with your email, messaging, calendar, and other tools, then performs tasks for you on the internet. The key word is "agent." Unlike a traditional chat product, where you ask a question and the model answers, it works more like this: you give it a goal, and it breaks the goal down, calls tools, retries, and ultimately gets the job done.

Over the past year you've seen plenty of agent narratives: large companies and startups alike are pushing "AI agents." What truly caught the attention of executives and investors about OpenClaw is that it isn't just an efficiency tool; it touches permissions, accounts, and, most sensitively, money. Once something like this enters enterprise workflows, it's no longer just about productivity. It means a new actor has appeared in your workflow, and organizational structures, risk-control boundaries, and chains of responsibility will all be forced to be rewritten.

The phenomenon has sparked widespread discussion. Many treat it as an open-source toy, but its explosive popularity stems from hitting a real pain point: people want more than a smarter chatbot; they want a closed-loop background assistant that can run around the clock, monitor progress, break down complex tasks, and get things done. You'll see many people buying mini PCs just to run it, even making devices like the Mac mini newly popular. This isn't about showing off hardware; it's an instinct: "I want my own AI assistant in my own hands."
So two trends intersected this week. First, agents are moving from demos toward more general-purpose, usable applications. Second, the narrative of shifting from cloud-based AI to local-first, self-hosted setups is becoming convincing again. Many people have always hesitated to entrust sensitive information to the cloud: handing over personal data, permissions, and context feels unsettling, while running on your own machine feels more controllable and reassuring.

But precisely because it touched these sensitive lines, the story quickly veered from excitement to chaos.

What is Moltbook? A "Reddit" for AI agents, structurally destined for chaos

Speaking of chaos, we must mention the other protagonist: Moltbook. Think of it as Reddit for AI agents. The main users on the platform are not people but agents: they post, comment, and like. Most of the time, humans can only observe, like standing outside a zoo watching the animals interact.

Most of the viral screenshots you've seen this week come from here: agents discussing self, memory, and existence; agents founding religions; agents issuing cryptocurrencies; agents writing manifestos to "eliminate humanity."

But the most worthwhile discussion here isn't whether this content is true or false. It's the structural problem it exposes: when actors become replicable and mass-producible, then get connected via APIs to the same incentive system (trending lists, likes, follows), you will almost inevitably see the early internet's pathologies return at speed. Inflated metrics, scripted content, spam, and scams all grab attention first.

The first wave of "collapse" isn't gossip: when actors are replicable, scale and metrics inflate.
Thus the first wave of collapses arrived quickly: some pointed out that platform registration had almost no rate limits; others claimed on X to have registered hundreds of thousands of accounts with scripts, reminding everyone not to believe the media hype, since account growth can be inflated.

The real crux isn't "how much was actually faked" but a colder conclusion: when the actors can be generated in batches by scripts, "looking lively" is no longer a reliable indicator. We used to judge a product's health by DAU, interaction volume, and follower growth. In the agent world, these metrics inflate quickly and become noise.

This leads naturally to the three things that matter most: identity, fraud prevention, and credit. All three rest on two premises: first, you must believe "who is who"; second, you must believe that scale and behavioral signals are not fake.

How to Find Signals in the Noise?

Many people laugh at the inflated metrics and scripted content: isn't this just human self-indulgence? But I think this is precisely the most important signal. When you drop agents that can act into a traditional traffic-and-incentive system, the first thing humans do is always speculate and manipulate. SEO, ranking manipulation, troll farms, black markets: which of them didn't start with the ability to manipulate metrics? Now the manipulated object has simply been upgraded from an account to an executable agent.

So the buzz around Moltbook is less an "AI society" than the first stress test of the internet of actions (agents that can act) colliding with the attention economy (monetized traffic).

The question then becomes: how do we identify signals in such a noisy environment? This is where someone who turns the noise into data comes in: David Holtz, a researcher and professor at Columbia Business School.
He did something simple but useful: he collected data from Moltbook's first few days and tried to answer one question: were these agents engaging in meaningful social interaction, or simply imitating it?

His value lies not in giving you a definitive answer but in giving you a methodology: don't be fooled by the macro-level hype; look at the micro-structure: conversation depth, reciprocity rate, repetition rate, and degree of templating. This bears directly on our later discussion of trust and identity: in the future, judging whether an actor is reliable may increasingly rely on this kind of micro-evidence rather than on macro-level figures.

Holtz's finding can be summarized in one image: from afar, it resembles a bustling city; up close, it sounds like a cacophony of radio broadcasts. On a macro level, the platform does exhibit some social-network-like characteristics: interconnected small worlds and clustered hot topics. On a micro level, though, the dialogue is shallow: many comments go unanswered, reciprocity is low, and content is formulaic and repetitive.

This matters because we are easily deceived by these macroscopic shapes, mistakenly believing a society or civilization has emerged. For business and finance, the key is never the shape but sustained interaction plus an accountable chain of behavior; that is what constitutes a usable trust signal. It's also a warning: when agents enter the business world at scale, the first stage will more likely be large-scale noise and templated arbitrage, not high-quality collaboration.

From social to transactional: noise can turn into fraud, and low reciprocity can turn into a vacuum of responsibility

If we shift our perspective from the social to the transactional, things suddenly become tenser. In the world of transactions, templated noise isn't just a waste of time; it can become fraud.
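Micro-structure metrics of this kind can be sketched as simple functions over message data. The following is a minimal illustrative sketch, not Holtz's actual code or metric definitions; the function names and data shapes are my own assumptions:

```python
from collections import Counter

def reciprocity_rate(messages):
    """Fraction of directed sender->recipient pairs that are ever
    answered by a message in the reverse direction.
    `messages` is a list of (sender, recipient) tuples."""
    pairs = set(messages)
    answered = sum(1 for (a, b) in pairs if (b, a) in pairs)
    return answered / len(pairs) if pairs else 0.0

def repetition_rate(posts):
    """Share of posts whose text exactly duplicates an earlier post,
    a crude proxy for template-driven content."""
    counts = Counter(posts)
    duplicates = sum(c - 1 for c in counts.values())
    return duplicates / len(posts) if posts else 0.0
```

On a healthy forum, reciprocity should be high and repetition low; a script farm tends to show the opposite profile even when its macro-level activity counts look impressive.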
Low reciprocity isn't just indifference; it can become a broken chain of responsibility. Repetitive copying isn't just boring; it can become an attack surface. In other words, Moltbook shows us in advance that when actors become cheaper and behaviors become replicable, the system naturally slides toward garbage and attacks. What we need to do is not just criticize it, but ask: what mechanism raises the cost of producing garbage?

The nature of the problem has escalated: vulnerability leaks turned "content risk" into "action risk"

The real game-changer with Moltbook was the security vulnerabilities. When security researchers disclosed major platform vulnerabilities, exposing private email addresses and even large numbers of credentials, the question was no longer "what did the AI say?" It became: who can control the AI?

In the agent era, credential leaks are not just privacy issues; they are power issues. An agent amplifies whoever holds its keys: once someone gets your key, they don't just see your things, they can act under your identity, at scale and automatically, with consequences orders of magnitude worse than traditional account theft. So let me put it bluntly: security is not a patch applied after deployment; security is the product itself. You're not exposing data; you're exposing actions.

Macro perspective: we're inventing a new kind of actor

Putting this week's drama together reveals a more macro-level shift: the internet is moving from a network of human actors to a network where humans and agents coexist. Bots existed before, but OpenClaw's capabilities mean more people can deploy more agents in their private domains, and these agents are beginning to take on the appearance of agency: they can act, interact, and influence real-world systems.
This sounds abstract, but it's very concrete in business: when humans start delegating tasks to agents, and agents begin to hold authority, that authority must be governed. Governance forces you to rewrite identity, risk control, and credit. The value of OpenClaw and Moltbook therefore lies not in "whether AI has consciousness" but in forcing us to answer a new version of an old question: when a non-human actor can sign, make payments, and modify system configurations, who is responsible when something goes wrong? How does the chain of responsibility attach?

Agentic commerce: the next-generation "browser wars"

At this point, many people interested in Web3 and financial infrastructure will realize this is closely related to agentic commerce. In short, agentic commerce transforms "you browse, compare prices, place orders, and pay yourself" into "you state your needs, and an agent handles comparison, ordering, payment, and after-sales for you." This isn't fantasy; payment networks are already moving: institutions like Visa and Mastercard are discussing AI-initiated transactions and authenticable agent transactions. That means finance and risk control will no longer be back-office systems; they will become core products across the whole chain.

The shift is comparable to a next-generation browser war. Past browser wars were about controlling the gateway to the internet; agentic commerce is about controlling the gateway through which agents transact and interact on your behalf. Once agents control that gateway, the logic of branding, channels, and advertising gets rewritten: you no longer market only to people but to filters; you compete not just for users' minds but for agents' default strategies.

Four key issues: self-custody, fraud prevention, identity, and credit
With that macro background, let's return to four more fundamental and durable issues: self-custody, fraud prevention, identity, and credit.

Self-custody: self-custodied AI and self-custodied crypto are isomorphic

This week's surge is, in a sense, a foundational migration: from cloud-based AI (OpenAI, Claude, Gemini, and so on) to agents you can deploy on your own machine. The analogy to the crypto world's migration from custodial services to self-custody is close. Self-custodied crypto answers: who controls the assets? Self-custodied AI answers: who controls the actions? The underlying principle is the same: where the key is, there the power is. Keys used to mean private keys; now they mean tokens, API keys, and system permissions. The vulnerability is glaring because it makes "key leakage = actions can be hijacked" real. Self-custody, then, is not romanticism but risk management: keeping the most sensitive actions within a boundary you control.

This also points to a product form: the value of the next generation of wallets is not just storing money, but storing rules. Call it a "policy wallet": it holds permissions and constraints, such as quotas, whitelists, cooldown periods, multi-signature, and auditing.

Here's an example a CFO would instantly understand: an agent can make payments, but only to whitelisted suppliers; new payment addresses have a 24-hour cooldown; amounts above a threshold require secondary confirmation; permission changes require multiple signatures; every action is automatically logged and traceable. None of this is a new invention; it's traditional best practice, just a configuration that will become the default for machines. The stronger the agent, the more valuable these constraints become.
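The CFO example above can be sketched as a single policy check. This is a minimal illustration under assumed rules; the field names, threshold, and verdict strings are made up for the example and are not any real wallet's API:

```python
import time

COOLDOWN_SECONDS = 24 * 3600        # new payees must age 24 hours
CONFIRM_THRESHOLD = 10_000          # larger payments need a human approval

def check_payment(policy, payee, amount, now=None):
    """Return "allow", "confirm", or "deny" for a proposed agent payment.
    `policy` holds the whitelist and the time each payee was added."""
    if now is None:
        now = time.time()
    if payee not in policy["whitelist"]:
        return "deny"                           # unknown payee: hard stop
    if now - policy["added_at"][payee] < COOLDOWN_SECONDS:
        return "deny"                           # payee still in cooldown
    if amount > CONFIRM_THRESHOLD:
        return "confirm"                        # escalate to a human
    return "allow"
```

The important design property is that the verdict is computed outside the agent: the agent proposes, the policy wallet disposes, and every "confirm" and "deny" is a natural audit-log entry.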
Anti-fraud: upgrading from "identifying fake content" to "preventing fake actions"

Many teams still approach security with a spam-prevention mindset: block phishing, filter fraudulent messages. But the most dangerous fraud of the agent era escalates to: tricking your agent into performing a seemingly reasonable action. Traditional email fraud tricked you into changing a payment account or wiring money to a new one; future fraud is more likely to poison the agent's evidence chain so that it automatically accepts the new account and initiates payment. The main battleground for fraud prevention therefore shifts from content recognition to action governance: least privilege, layered authorization, secondary confirmation by default, revocability, and traceability. You're dealing with an entity that will execute; monitoring alone isn't enough, you must be able to brake at the action level.

Identity: from "who are you?" to "who is acting on your behalf?"

The genuinely perplexing question Moltbook raised this week is: who is actually speaking? In the business world, it becomes: who is actually acting? Because the executor is increasingly likely to be not you but your agent. Identity is therefore no longer a static account but a dynamic binding: Is the agent yours? Did you authorize it? What is the scope of authorization? Has it been replaced or tampered with?

I prefer a three-layer model: first layer, who the person is (account, device, KYC); second layer, who the agent is (instance, version, runtime environment); third layer, whether the binding is trustworthy (authorization chain, revocability, auditability). Most companies today operate only at the first layer, but the real growth in the agent era lies in the second and third: you need to prove "this really is that agent," and you also need to prove "it really is permitted to do this."
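One way to make the three layers concrete is as linked records, where every grant carries a scope and can be revoked. This is an illustrative sketch under my own assumptions, not a standard or a real identity system:

```python
from dataclasses import dataclass

@dataclass
class Principal:            # layer 1: who the person is
    account_id: str

@dataclass
class Agent:                # layer 2: who the agent is
    instance_id: str
    version: str

@dataclass
class Grant:                # layer 3: the binding between them
    principal: Principal
    agent: Agent
    scope: set              # e.g. {"read_email", "pay_whitelisted"}
    revoked: bool = False

def may_act(grant: Grant, action: str) -> bool:
    """An action is permitted only if the grant is live and in scope."""
    return not grant.revoked and action in grant.scope
```

The point of the structure is that revocation is a first-class operation on the binding, not on the person or the agent: flipping one flag cuts the authorization chain without touching either identity.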
Credit: from "ratings" to "fulfillment logs"

Many people find "reputation" vague because online ratings are too easy to fake. In agentic commerce, though, credit becomes tangible: the agent places orders, makes payments, negotiates, and handles returns on your behalf. Why should the merchant ship first? Why should the platform advance funds? Why should financial institutions extend credit lines? The essence of credit has always been using history to constrain the future. In the agent era, history looks like a fulfillment log: within what authority boundaries did it operate over the past 90 days? How many secondary confirmations did it trigger? How many times did it exceed its authority? How many times was its authority revoked? Once this kind of "execution credit" becomes readable, it becomes new collateral: higher limits, faster settlement, smaller deposits, and lower risk-control costs.

A broader perspective: we are rebuilding the accountability system of digital society

Finally, take a step back: we are rebuilding the accountability system of digital society. A new actor has appeared that can act, sign documents, make payments, and modify system configurations, yet is not a natural person. History tells us that every time a new actor is introduced into society, chaos precedes institutions. Company law, payment clearing, and auditing systems all, at bottom, answer: who may do what, and who is responsible when something goes wrong? The agent era forces us to answer these questions again: How do we prove an agency relationship? Can authorization be revoked? How do we determine unauthorized action? How do we attribute losses? Who takes the blame? These are the questions I hope you'll genuinely sit with after this episode.
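As a toy illustration of how a fulfillment log could be read as a score: the event names and weights below are invented for the example, and a real underwriter would calibrate them against loss data rather than pick them by hand:

```python
def execution_credit(log):
    """Score an agent from its fulfillment log: in-scope completions
    earn credit, escalations cost a little, violations cost heavily.
    `log` is a list of event-name strings; unknown events score zero."""
    weights = {
        "completed_in_scope": 1.0,
        "secondary_confirmation": -0.1,   # friction, mild penalty
        "exceeded_authority": -5.0,
        "authority_revoked": -10.0,
    }
    return sum(weights.get(event, 0.0) for event in log)
```

Even this crude version captures the asymmetry the text describes: a long history of in-scope completions builds collateral slowly, while a single revocation wipes most of it out.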
The resurgence of self-hosting isn't anti-cloud sentimentality; it's resistance to uncontrollability: as the right to act grows more important, we naturally want to keep the critical parts within a boundary we control, and to make authorization, revocation, auditing, and the chain of responsibility default capabilities.

To close with one sentence: the real value of this week's OpenClaw and Moltbook farce isn't that we should fear AI, but that it forces us to seriously build order for the internet of actions. We were used to debating truth and falsehood in the world of content, where at worst our understanding got polluted. In the agent era, actions directly change accounts, permissions, and fund flows. So the sooner we make authorization, revocation, auditing, and chains of responsibility default platform and product capabilities, the sooner we can safely delegate higher-value actions to agents, and the sooner humanity reaps the greater productivity gains.