From the Community | Anthropic v. DoW is a wake-up call: Stanford must lead on AI and democracy
Avi Parrack is a Ph.D. student in physics and the president of Stanford Effective Altruism.
The Department of War’s (DoW) actions toward Anthropic reveal fault lines in our democratic guardrails and portend greater stresses to come as we grapple with how to preserve a balance of power in the face of unprecedented AI capabilities. If any community has the standing and the expertise to lead that effort, it is Stanford.
On Feb. 27, Secretary of War Pete Hegseth posted on X that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” and announced his intention to designate the company a “supply chain risk to national security,” a classification historically reserved for foreign adversaries like China’s Huawei and never before applied to an American company. That designation has since been made official, prompting Anthropic to plan a challenge in court.
The episode is an abuse of power by the Department of War, and the questions it raises about AI in military operations and domestic surveillance are too consequential to be settled by executive action alone. They demand congressional oversight and public deliberation.
Anthropic, the AI company founded by Stanford alumnus Dario Amodei ’06, was a key AI provider on the Pentagon’s classified networks. The company’s contract included two restrictions: first, that Claude would not be used for mass domestic surveillance of American citizens; second, that Claude could not be used in fully autonomous weapons without human oversight, meaning Claude cannot authorize strikes without a human making the final call.
The DoW wanted those restrictions gone, arguing that it was not the place of a private company to restrict government use of AI and that Anthropic’s redlines should be eliminated to allow for “all lawful use.” Anthropic replied that “we cannot in good conscience accede.” The company argued that, while technically legal, domestic surveillance powered by AI would be antithetical to American values and that AI systems could not be trusted to make wartime decisions without a human involved.
No agreement was reached. The DoW then moved to attack Anthropic's business with a supply chain risk designation that even Dean Ball, a former AI policy advisor to the Trump administration, called "obviously a psychotic power grab" that is "almost surely illegal." The order has drawn condemnation from the political right and left alike.
When the government disagrees with a contractor, it should cancel the contract. It should not weaponize national security designations against a homegrown company. This sets a deeply un-American precedent of top-down control: if a private company negotiates terms the government dislikes, will it be punished? If we had read this story with the proper nouns redacted, we’d be forgiven for assuming it described a classic maneuver from Beijing.
The substance of the Pentagon's demands is not merely academic, and it should be spurring serious public discussion. Claude was reportedly integrated into U.S. targeting workflows in Iran, with the military striking over a thousand targets in the first twenty-four hours of operations. The same system the government labeled a national security risk is in the kill chain. For its part, Anthropic stood to lose a $200 million contract, and potentially its relationships with every defense contractor in the country.
Yet the company held to its redlines anyway because, as Amodei explained, today’s AI systems are simply too unreliable to be trusted with life-and-death decisions. When a company forfeits financial gain and absorbs political punishment to warn you that its own product is not ready, we ought to listen.
Regarding the mass surveillance of American citizens, we know that AI has made it possible to collect social media posts, geolocation data and financial records from data brokers and assemble them into comprehensive surveillance profiles at a scale never before feasible. None of this is currently illegal. The Fourth Amendment requires a warrant to search people, but federal agencies can already buy data from commercial brokers without one. The law did not anticipate what today’s and tomorrow’s AI systems would make possible. The Pentagon should not ask us to trust that it will choose restraint when no statute requires it.
Military contracts and Twitter ultimatums are not the proper channels to determine the scope of government power over its citizens or corporations. The radical possibilities of AI will require thoughtful consideration and legislation in order to preserve our checks and balances.
The DoW’s actions represent a clear and dangerous overreach, incompatible with the democratic principles this country was built upon. The tech community is responding: over 200 founders, engineers, investors and executives have signed an open letter urging the DoW to withdraw the designation and Congress to examine whether such “extraordinary authorities” should be used against an American technology company.
We can disagree about where the lines on autonomous weapons and domestic surveillance should be drawn. We can disagree about whether Anthropic's restrictions were justified or inappropriate. But we cannot accept a government that retaliates against a private enterprise for adhering to its contracts. And we cannot accept those decisions being made in boardrooms, without public input, when the technology is already being used in combat.
It is time for the public to weigh in and for Congress to legislate to restrict the use of AI in military operations and domestic surveillance. There is no institution better equipped to lead that effort than Stanford, which sits at the intersection of the technical frontier and some of the most serious legal scholarship in the country. Shoring up our democracy amid rapid developments in AI will require the urgency of a startup and the rigor of a courtroom. Stanford has both.
We will have to look to the future with the foresight of our Founding Fathers, updating our legal and legislative frameworks to preserve liberty and the separation of powers in an era of autonomous weapons, mass surveillance and automation. Students and faculty alike should hear Anthropic v. DoW as a starting gun, calling us to get involved, or to redouble our speed and diligence, in building the democratic architecture that this century will demand.
I trust I speak for many of us when I say it’s a call we intend to answer.