Pentagon Seeks Anthropic Alternatives Amid $200M Contract Dispute
It looks like the Pentagon and Anthropic will not be patching things up anytime soon, as the Department of Defense is already pursuing multiple language models to replace the AI company's technology.
The Department of Defense (DoD) is exploring the use of OpenAI’s GPT and xAI’s Grok; both companies have signed contracts with the Pentagon following the fallout from the Anthropic split.
Anthropic had its $200 million contract with the Pentagon terminated after the two sides were unable to agree on terms. Anthropic wanted the DoD, which US President Donald Trump has referred to as the Department of War, to commit to not using its AI models for mass surveillance or autonomous weaponry, while the DoD sought unrestricted access.
“The Department is actively pursuing multiple LLMs into the appropriate government-owned environments,” said Cameron Stanley, the Pentagon’s chief digital and AI officer, in an interview with Bloomberg. “Engineering work has begun on these LLMs and we expect to have them available for operational use very soon.”
A supply chain risk
Following the termination of the contract, Defense Secretary Pete Hegseth said the government had designated Anthropic a supply chain risk. This label is usually reserved for foreign adversaries, as it bars companies from working with the Pentagon or any of its contractors.
A few days later, OpenAI signed an agreement with the Pentagon. The deal was poorly received both by users and within the company, with at least one employee quitting over the decision. OpenAI CEO Sam Altman has argued that the agreement keeps both of Anthropic’s red lines intact.
Anthropic has been back and forth on the severity of the ban. At first, CEO Dario Amodei claimed it would not be harmful, as it had a limited scope. However, the company has since challenged the designation in court, arguing in filings that hundreds of millions of dollars in contracts are potentially affected.
Principles vs competition
The growth of the AI ecosystem has been a balance between principles and competition.
OpenAI was originally a non-profit research company set up to develop AI safely and for the benefit of humanity. As it grew, particularly after the success of ChatGPT, the company expanded its for-profit arm and in 2025 shifted to a more conventional public benefit corporation structure, designed to enable greater capital raising and potentially open the door to a future IPO.
Anthropic was born out of frustration among former OpenAI employees over AI safety. It has focused heavily on the effects of AI on the job market, launching a job-tracking system to monitor where AI is having the greatest impact, and building out its research efforts.
At the same time, Anthropic has tried to maintain a more principled approach to releasing models and products, but recently dropped a pledge not to train or release systems unless it could guarantee safety in advance, citing competitive pressure.
What makes this difficult for OpenAI and Anthropic is that rivals do not share the same constraints around corporate structure or AI safety. Elon Musk’s xAI has repeatedly shown a willingness to tune its models around certain political viewpoints.
Google, meanwhile, has worked with the DoD and other defense contractors for decades, though it has faced internal backlash of its own in the past, including around the time it removed the “Don’t be evil” line from its code of conduct.
Also read: User backlash over OpenAI’s Pentagon deal is showing up in a surge of ChatGPT uninstalls.
The post Pentagon Seeks Anthropic Alternatives Amid $200M Contract Dispute appeared first on eWEEK.