Security incidents at Anthropic and OpenAI have raised market concerns about the security of AI models themselves. Anthropic is investigating whether its Claude Mythos model may have been accessed without authorization. Around the same time, OpenAI was found to have inadvertently exposed several unreleased models through its Codex application. Analysts say these incidents show that even AI providers that tout their cybersecurity capabilities still face significant security challenges. As AI is increasingly deployed to defend against cyberattacks, platform security and access control have become critical points of risk. Industry experts note that these lapses have intensified scrutiny of AI companies' security governance and suggest that, despite the rapid pace of AI development, the security systems surrounding it still need strengthening. (The Information)