OpenAI Faces Backlash After Signing AI Deal With US Military
OpenAI has come under heavy criticism after agreeing to let its AI models be used by the U.S. Department of War, sparking a wave of ChatGPT subscription cancellations and renewed debate over AI ethics.
The move comes after Anthropic, developer of Claude, walked away from a similar deal over safety and governance concerns.
Are ChatGPT Users Turning Away Over Ethics Concerns?
Reports from Windows Central and social media platforms indicate that a growing number of users are abandoning ChatGPT in favour of rival AI chatbots, including Claude.
On Reddit, users are sharing step-by-step guides on how to delete accounts and remove personal data from the platform, while accusing OpenAI of having “no ethics at all” and “selling their soul” by partnering with the military.
Tech investor Aidan Gold highlighted the perceived contradiction on X, noting that OpenAI had previously supported Anthropic’s cautious stance on defence-related AI use before signing its own agreement.
The U.S. government has since announced plans to reduce or remove Claude from certain departments following Anthropic’s withdrawal.
Can AI Be Safely Used For Military Purposes?
The deal between OpenAI and the Department of War reportedly includes specific safeguards around mass surveillance and fully autonomous weapons.
OpenAI maintains that the contract defines “red lines” it intends to enforce, but critics remain sceptical, arguing that the phrase “all lawful purposes” is vague and open to interpretation.
This controversy highlights broader concerns around AI governance.
Experts have long warned about the environmental impact of large AI models, potential job displacement, and the ethical issues around training systems on copyrighted material.
Anthropic has explicitly refused to deploy AI for mass surveillance or fully autonomous military applications without enforceable protections, positioning itself as more cautious in the debate.
Is Public Trust Shifting Toward Safety-Focused AI?
While ChatGPT faces subscriber losses, Claude has surged in popularity, reaching the top of the Apple App Store rankings.
The surge suggests users are gravitating toward platforms seen to prioritise safety and ethical safeguards.
Amid ongoing debates over AI ethics, military use, and user trust, OpenAI now finds itself navigating the delicate balance between government partnerships and public perception.
The unfolding situation is likely to shape user behaviour and AI governance discussions well into 2026.