Pentagon Weighs Axing $200M Anthropic Deal in Moral Standoff Over AI Safeguards
Here’s a sentence you don’t hear every day: the US military is threatening to punish an AI company for being too ethical.
Axios reported that Defense Secretary Pete Hegseth is “close” to cutting ties with Anthropic and designating it a “supply chain risk,” a label normally reserved for foreign adversaries like Chinese tech firms. The reason? Anthropic won’t give the Pentagon blanket permission to use Claude for “all lawful purposes.”
Anthropic’s two red lines: no mass surveillance of Americans and no weapons that fire without a human involved.
The Pentagon’s response, per a senior official: “We are going to make sure they pay a price for forcing our hand like this.”
Here’s what makes this wild
- Claude is the only AI model currently running on the military’s classified systems.
- It was used during the Maduro raid in January.
- Pentagon officials openly praise its capabilities.
- The contract in question is worth up to $200 million, a fraction of Anthropic’s $14 billion annual revenue.
- But the real threat is the “supply chain risk” label, which would force every company doing business with the Pentagon to certify that it doesn’t use Claude (potentially knocking Claude out of Microsoft Copilot, for example).
- That’s a big deal, given that 8 of the 10 largest US companies already use it.
Meanwhile, OpenAI, Google, and xAI have all agreed to remove their safeguards for military use on unclassified systems. Anthropic is the only one holding out.
The Reddit reaction was immediate and almost unanimously pro-Anthropic:
- Top comment on r/ClaudeAI: “This is a selling point. Make it an ad.”
- Multiple users said they were upgrading their subscriptions specifically to support the company.
- One defense contractor pointed out that many companies would rather drop their government contracts than rip Claude out of their workflows. The compliance costs alone aren’t worth it.
The irony is hard to miss
If any other model were as good, the Pentagon would just switch. The fact that they’re threatening punishment instead of walking away is maybe the strongest endorsement Claude has ever received.
As one Redditor put it: “This article is basically a billboard for Anthropic. ‘We’re so good the Pentagon can’t replace us even when they want to.’”
Elsewhere, the timeline is turning against Anthropic: developers have been prioritizing Codex over Opus after Anthropic made a series of unforced errors:
- First threatening to sue (rather than wooing) the creator of OpenClaw over its previous name, Clawdbot, then losing out on hiring him.
- The Super Bowl ads that general audiences ranked in the bottom 3% for likeability, but that drove an 11% user bump among those who got the joke.
- Losing developer trust through a series of developer-unfriendly moves around using Claude Code in other tools (and just generally being the “premium-priced” AI model).
What can we say? I guess Anthropic is the we-don’t-need-you-after-all AI company. We’ll see if that strategy works out for them…
Editor’s note: This content originally ran in the newsletter of our sister publication, The Neuron. To read more from The Neuron, sign up for its newsletter here.