In the case, Anthropic sued to challenge the Defense Department’s designation of the company as a supply chain risk. The judge paused the government’s plan while also staying her order for a week so that the government could appeal, according to the report.
It was reported Feb. 27 that the White House told federal agencies that the government will no longer work with Anthropic and that agencies using the company’s AI models will get a six-month phaseout period.
The move marked an escalation in a dispute that started inside the Defense Department but expanded to touch the broader government.
The decision came just ahead of a Pentagon deadline for Anthropic to agree that the military can use its models in “all lawful use cases.” The company refused that concession and sought contract language that would prohibit use of its models for autonomous weapons and mass domestic surveillance.
After the Pentagon designated the company a security risk, Anthropic said Feb. 27 that it planned to challenge the government’s decision in court.
“Legally, a supply chain risk designation … can only extend to the use of [Anthropic’s AI model] Claude as part of Department of War contracts — it cannot affect how contractors use Claude to serve other customers,” Anthropic wrote on its blog.
Anthropic then sued the Department of Defense on March 9 to block it from placing a supply chain risk designation on the company. The company argued that the designation violates its rights to free speech and due process and asked the court to reverse the designation and block the federal government from enforcing it.
It was reported March 10 that during a court hearing, Anthropic said it could lose billions of dollars due to the government’s ban on its services. An Anthropic attorney told the court that the government’s actions had prompted more than 100 customers to express concerns about continuing to engage with the company.