In an AMA on X, OpenAI co-founder Sam Altman addressed the company's recent collaboration with the U.S. Department of War. He disclosed that Anthropic and the Department had once been "very close to reaching an agreement," and that for most of the negotiations both sides genuinely wanted to cooperate; however, the highly tense negotiating environment could deteriorate quickly, which may be one of the main reasons the deal ultimately fell through. On security philosophy, OpenAI takes a "layered approach": building a security technology stack, deploying Forward Deployed Engineers (FDEs), involving security researchers in the project, delivering through cloud deployment, and working directly with the U.S. Department of War. Where Anthropic appears to favor explicit restrictions written into the contract itself, OpenAI tends to rely on the applicable legal framework, with technical security measures as the core safeguard; other companies may take different positions on this. Anthropic may also hope to retain more operational control in such collaborations, which could be another reason the two companies' approaches diverge.