Hegseth Wants Pentagon to Drop Anthropic's Claude — But Military Users Are Resisting

The standoff between Anthropic and the Pentagon is proving harder to unwind than Defense Secretary Pete Hegseth anticipated. After the DoD designated Anthropic a "supply-chain risk" in early March following a dispute over whether Claude could be deployed without safety guardrails for military operations, Hegseth ordered a six-month phase-out. The problem: the people actually using Claude inside the military don't want to stop. Pentagon staff, IT contractors, and former DoD officials interviewed by Reuters describe Claude as simply superior to the alternatives currently on offer. And the practical reality of ripping out an embedded AI system on a political timetable is proving more complicated than a directive makes it sound.

The situation is further complicated by what's now on the public record. Court filings from Anthropic's Head of Policy Sarah Heck and Head of Public Sector Thiyagu Ramasamy reveal that the Pentagon privately told Anthropic the two sides were "nearly aligned" just one week before the public break was announced — and that the DoD's central allegation, that Anthropic demanded an approval role over military operations, was flatly false and never raised during months of negotiations. OpenAI and Google are reportedly in talks to take Claude's place on non-classified DoD systems, with a hearing before Judge Rita Lin scheduled for March 24.

Whatever the outcome in court, the real stakes are broader than any one contract. This dispute is setting precedent for whether AI companies can impose safety limits on government customers — and how much leverage the Pentagon has to force compliance by labeling vendors security risks. Every major AI lab is watching how this plays out, because the terms that emerge will likely define how defense contracts get structured across the industry going forward.

Read the full article at Reuters →