Anthropic Sues Trump Administration Over Pentagon ‘Supply Chain Risk’ Label
Anthropic has sued the Trump administration after being labelled a “supply chain risk” by the Department of War and effectively banned from supplying products to the government or its contractors for defense purposes.
The ban was confirmed last week by Defense Secretary Pete Hegseth, the culmination of several weeks of growing tensions between the AI company and the Trump administration. At the heart of the disagreement is Anthropic's demand for stronger assurances that its AI model, Claude, would not be used for autonomous weapons or mass domestic surveillance.
Anthropic has sued the government, arguing that the designation as a supply chain risk to national security goes well beyond a typical contract dispute and that the administration has ignored normal procedures. It has also accused President Donald Trump of overstepping presidential authority by telling all federal agencies to stop working with the company.
In the lawsuit, Anthropic claims that the supply chain risk label and comments from President Trump and others in the administration are already “jeopardizing hundreds of millions of dollars”, despite Anthropic CEO Dario Amodei telling CBS News that the impact of the designation would be relatively small and the company would be fine.
Claude had reportedly been deployed across several areas of the Pentagon, including as a support tool for intelligence assessments and targeting recommendations, as part of its partnership with Palantir. It has also been adopted by other agencies.
After Anthropic's designation as a supply chain risk, OpenAI quickly swooped in to sign a new Pentagon contract. CEO Sam Altman was quick to defend the deal, claiming it carries the same red lines as Anthropic's, plus an additional one ensuring no social credit systems use its GPT model. Altman has faced some internal backlash over the decision, with the company's robotics leader resigning over "ethical lines" being crossed.
The dividing lines between the two rivals
The dividing lines between the two AI leaders are becoming more apparent, even though both companies stand on shaky ethical ground. Anthropic recently softened its responsible scaling policy to speed up the development and production of its AI models.
It is also clear that, while Anthropic has marketed itself as a safer alternative to OpenAI, it is still researching and developing tools that could be exploited by bad actors. The company's safety leader recently resigned, warning in his resignation letter that the "world is in peril".
OpenAI is not the only one focused on Anthropic. Elon Musk, the co-founder of xAI, recently took a swipe at Anthropic, accusing its models of being “misanthropic and evil”. The comment came shortly after Anthropic closed a funding round valuing the company at $380 billion.
Anthropic at the peak of its powers
The designation may have hit the brakes on what has been a strong few months for Anthropic. Its growth has accelerated, especially in the enterprise market where its Opus 4.5 model is considered best-in-class for coding and other work tasks.
This surge in popularity has seen Anthropic take over from OpenAI as the AI company whose product launches shift market sentiment. The launch of a tool to help legal professionals read and analyse documents more quickly led to a sharp drop in the share prices of several leading legal firms.
The same happened a few weeks later when Anthropic announced a similar tool for the finance industry.
The post Anthropic Sues Trump Administration Over Pentagon ‘Supply Chain Risk’ Label appeared first on eWEEK.