
Can AI firms set limits on how and where the military uses their models? Anthropic is in heated negotiations with the Pentagon after refusing to comply with new military contract terms that would require it to loosen the guardrails on its AI models, allowing for “any lawful use,” even mass surveillance of Americans and fully autonomous lethal weapons.
Pentagon CTO Emil Michael is pushing for Anthropic to be designated a “supply chain risk” if it doesn’t comply, a label usually reserved for national security threats. Anthropic’s rivals OpenAI and xAI have reportedly agreed to the new terms, but even after a White House meeting with Defense Secretary Pete Hegseth, Anthropic CEO Dario Amodei is still refusing to cross his company’s red line, stating that “threats do not change our position: we cannot in good conscience accede to their request.”
Follow along here for the latest updates on the clash between AI companies and the Pentagon…
- We don’t have to have unsupervised killer robots
- Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance
- Pete Hegseth’s Pentagon AI bro squad includes a former Uber executive and a private equity billionaire
- Inside Anthropic’s existential negotiations with the Pentagon