SlowMist and Bitget have jointly released an AI agent security report. It notes that as AI agents take on tasks such as market analysis, strategy generation, and automated trading in the Web3 ecosystem, their attack surface is expanding. The report identifies security threats across seven layers: prompt injection attacks can manipulate an agent's decision-making logic; the Skills/plugin ecosystem carries supply-chain poisoning risks (SlowMist found more than 400 malicious Skill samples in ClawHub, the OpenClaw plugin center, showing signs of organized, large-scale attacks); key parameters at the task orchestration layer can be tampered with, causing abnormal execution; sensitive information in IDE/CLI environments may be leaked by malicious plugins; model hallucinations can cause irreversible financial losses during on-chain operations; the irreversibility of high-value Web3 operations amplifies automation risk; and high-privilege execution can lead to system-level risks.

Bitget's security team offers practical protection recommendations: enable passwordless login and two-factor authentication via Passkey; configure API keys under the principle of least privilege and bind them to an IP whitelist; limit potential losses through sub-account isolation; establish continuous transaction monitoring and anomaly detection; and install only officially vetted Skills. SlowMist additionally proposes a five-layer security governance framework, L1 through L5, covering a complete protection system from development baselines and permission convergence through threat awareness and on-chain risk analysis to continuous inspection.
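Several of the recommended defenses (least-privilege API keys, IP whitelisting, sub-account loss caps) amount to validating an agent's proposed action before it executes. The sketch below illustrates that pattern; it is a minimal, hypothetical guardrail, and every constant and function name here is an assumption for illustration, not Bitget's actual API.

```python
# Hypothetical pre-execution guardrail for an AI trading agent.
# All names and limits are illustrative assumptions, not a real exchange API.

ALLOWED_ACTIONS = {"spot_buy", "spot_sell"}  # least privilege: no withdrawals
IP_WHITELIST = {"203.0.113.10"}              # key bound to known agent hosts
MAX_ORDER_USD = 500.0                        # sub-account style loss cap


def validate_order(action: str, source_ip: str, notional_usd: float):
    """Return (ok, reason); reject anything outside the agent's narrow scope."""
    if source_ip not in IP_WHITELIST:
        return False, "ip_not_whitelisted"
    if action not in ALLOWED_ACTIONS:
        return False, "action_not_permitted"
    if notional_usd > MAX_ORDER_USD:
        return False, "exceeds_loss_cap"
    return True, "ok"
```

The point of the design is that even if prompt injection or a hallucination makes the agent request a withdrawal or an oversized order, the request fails the check instead of reaching the chain, where the operation would be irreversible.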