Anthropic’s Statement To The ‘Department Of War’ Reads Like A Hostage Note Written In Business Casual
We’ve been covering the ongoing saga of the Trump administration’s attempt to destroy Anthropic for the sin of having modest ethical guidelines around its AI technology.
The short version: Anthropic said it didn’t want its AI making autonomous kill decisions without human oversight. Defense Secretary Pete Hegseth responded by declaring the company a supply chain risk—a designation designed for foreign adversaries, not San Francisco companies with ethics policies—and ordering every federal agency to purge Anthropic’s technology. Now Anthropic is back at the negotiating table with the same people who just tried to kill it.
On Thursday, Anthropic CEO Dario Amodei published a new statement about “where things stand” with the Defense Department. And it is… something. It reads like what happens when a serious person at a serious company has to write a serious document in an environment that has gone completely insane—and the result is a press release that, under any previous administration, would have been recognized as deeply alarming corporate groveling, but which now just kind of… slides into the news cycle as another Thursday.
The statement is titled “Where things stand with the Department of War.” Not the Department of Defense. The Department of War. Yes, Trump and Hegseth have spent hundreds of millions of dollars trying to rename the Defense Department, but renaming a cabinet department isn’t up to them. It’s up to Congress. Under the law, it remains the Department of Defense, and anyone calling it the Department of War is simply sucking up to the administration. It’s all theater.
Amodei uses the fictitious name throughout his statement. Every single reference. “Department of War.” This is a company that six days ago was being praised for standing on principle, and its CEO can’t even bring himself to use the department’s legal name because the administration insists that everyone use the cosplay version. Before you even get to the substance, the document has already bent the knee. He’s negotiating with people who branded him a national security threat, and he opens by adopting their preferred terminology like a hostage reading a prepared script.
From there, the statement proceeds through a series of passages that are individually rational and collectively dystopian. Take this section:
I would like to reiterate that we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible. As we wrote on Thursday, we are very proud of the work we have done together with the Department, supporting frontline warfighters with applications such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
“We are very proud of the work we have done together with the Department”—the department that is currently trying to destroy the company over a contractual dispute. The department whose secretary called Anthropic’s stance “a master class in arrogance and betrayal” and “a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.” The department that declared Anthropic a supply chain risk to national security—again, a designation designed for hostile foreign infiltration of military systems, not for a San Francisco company that said “maybe a human should be in the loop before the robot decides to kill someone.”
And here’s Dario, proudly listing all the ways Anthropic has served these same people. “Supporting frontline warfighters.” This is the language of a Pentagon press release. Six days. It took six days to go from “we have principles about autonomous weapons” to “we are very proud of supporting frontline warfighters with cyber operations.”
This may be a rational decision for a company trying to stave off a ridiculous fight, but the real story is that it feels the need to act this way at all.
Then there’s the apology. Earlier this week, an internal Amodei memo leaked in which he described OpenAI’s rushed Pentagon deal as “safety theater” and “straight up lies,” and noted that the key difference between the two companies’ positions was that OpenAI “cared about placating employees” while Anthropic “actually cared about preventing abuses.” It was blunt. It was competitive. It also appeared to be accurate—OpenAI subsequently rewrote its contract to address many of the concerns Amodei identified.
But accuracy is apparently a liability now:
I also want to apologize directly for a post internal to the company that was leaked to the press yesterday. Anthropic did not leak this post nor direct anyone else to do so—it is not in our interest to escalate this situation. That particular post was written within a few hours of the President’s Truth Social post announcing Anthropic would be removed from all federal systems, the Secretary of War’s X post announcing the supply chain risk designation, and the announcement of a deal between the Pentagon and OpenAI, which even OpenAI later characterized as confusing. It was a difficult day for the company, and I apologize for the tone of the post. It does not reflect my careful or considered views. It was also written six days ago, and is an out-of-date assessment of the current situation.
He is apologizing for the tone of an accurate description of events because the accurate description made the people trying to destroy his company unhappy. He notes it was “a difficult day for the company”—the day the President of the United States directed every federal agency to cease using your technology and the Defense Secretary branded you a threat to national security. Yeah, I’d call that a difficult day. And on that difficult day, Amodei accurately described what was happening, and now he has to say sorry for it because the accurate description “does not reflect my careful or considered views.”
Translation: the careful and considered view is that you don’t say true things out loud when the administration is watching and is deeply focused on punishing you.
And then we arrive at the closing:
Our most important priority right now is making sure that our warfighters and national security experts are not deprived of important tools in the middle of major combat operations. Anthropic will provide our models to the Department of War and national security community, at nominal cost and with continuing support from our engineers, for as long as is necessary to make that transition, and for as long as we are permitted to do so.
Anthropic is offering to provide its AI models to the military at nominal cost—essentially a discount—while simultaneously preparing to challenge the supply chain risk designation in court. The company is saying: “We believe your action against us is illegal, we will fight it in court, and also here’s our technology at a steep discount, please don’t hurt us anymore.”
And the framing: “Our most important priority right now is making sure that our warfighters… are not deprived of important tools in the middle of major combat operations.” This is Anthropic fully adopting Hegseth’s rhetoric—the exact framing that was used to justify the attack on the company in the first place. Hegseth’s entire argument was that Anthropic’s ethical guidelines were depriving “warfighters” of critical tools. And now Anthropic is echoing that language as though it were its own concern all along. The “warfighters” language is especially rich given that this administration keeps tap-dancing around the question of whether we’re actually “at war” with Iran—apparently we have warfighters who aren’t fighting a war.
The statement closes with what might be its single most remarkable passage:
Anthropic has much more in common with the Department of War than we have differences. We both are committed to advancing US national security and defending the American people, and agree on the urgency of applying AI across the government. All our future decisions will flow from that shared premise.
Remember, this company was founded by people who left OpenAI specifically because they thought AI safety was being treated as an afterthought. Their entire brand, their entire reason for existing, was the proposition that there are some things AI should not be used for without significant guardrails. “Anthropic has much more in common with the Department of War than we have differences” is the kind of sentence you write when survival has replaced principle as the operating framework.
Every individual decision in this statement is probably the rational play. Using the administration’s preferred name costs nothing. Apologizing for the memo reduces friction. “Warfighter” language signals alignment. These are survival tactics, and they’re being deployed by someone who appears to have no good options.
That’s the actual horror. This is what the “good” decisions look like in an authoritarian world.
Under any previous administration—Democrat or Republican—a company telling the Defense Department “we’d prefer our AI not make autonomous kill decisions without human oversight” would have been a mostly unremarkable negotiating position. It might have been a deal breaker for that particular contract. The two sides might have parted ways. What would not have happened is the Secretary of Defense going on social media to accuse the company of “betrayal” and “duplicity,” the President directing all federal agencies to stop using the company’s products, and the company’s CEO subsequently having to write a public groveling statement apologizing for having accurately described the situation while pledging free labor to the government that attacked him.
And every AI company watching this—every tech company of any kind—is absorbing the lesson. Tell the administration “no” on even the most modest ethical point, and this is what follows: a week of chaos, a supply chain risk designation, your CEO apologizing for telling the truth, and a press release pledging your technology to the military at cost while you simultaneously sue to stay alive.
As I wrote last year, authoritarian systems are fundamentally incompatible with innovation. They produce exactly this kind of environment—one where the rational move for a company is to grovel in public while fighting in court, to adopt the language of the people attacking you, and to apologize for having been right. The AI bros who supported Trump because Biden’s AI plan involved some annoying paperwork should take a long look at this statement and ask themselves whether this is the “pro-innovation” environment they were promised.
Because right now, the most “pro-innovation” thing happening in American AI is a hostage note written in business casual—and everyone pretending it’s just a press release.