Researchers at the University of California have identified security vulnerabilities in 26 third-party large language model (LLM) routers that could allow attackers to inject malicious code into, or steal credentials from, AI agent traffic. According to NS3.AI, the study demonstrated that one of these routers could be exploited to drain Ether from a decoy wallet, though the reported financial loss was under $50. The paper cautioned developers who use AI coding agents for smart contracts or wallets that private keys or seed phrases could be exposed when requests are routed through unvetted routers.