The Pentagon is considering ending its partnership with Anthropic, the artificial intelligence company, over the firm's insistence on restricting certain military uses of its models. According to Jin10, a senior government official disclosed that the Pentagon is urging four leading AI labs to allow military use of their tools for 'all legitimate purposes,' including sensitive areas such as weapons development, intelligence gathering, and battlefield operations. Anthropic has not agreed to these terms, and after months of difficult negotiations, the Pentagon has grown frustrated.
Anthropic maintains that two uses must remain off-limits: mass surveillance of U.S. citizens and fully autonomous weapons systems. The senior official said there is considerable ambiguity about what falls into those categories, and that negotiating each specific use case with Anthropic, or contending with Claude unexpectedly blocking certain applications, is not workable for the Pentagon.
The official said 'anything is possible,' from temporarily scaling back collaboration with Anthropic to terminating the partnership entirely. 'But if we believe this is the right course of action, we must find a suitable replacement for them,' the official added.