Sam Altman Caught in Fallout From Dario Amodei’s Pentagon Standoff
Sure, Sam Altman managed to secure an agreement between OpenAI and the U.S. Department of War amid Anthropic’s public battles with the agency. But in doing so, he may have forfeited something more valuable: public goodwill. The OpenAI CEO acknowledged as much in a social media post, conceding that the deal was rushed. “We shouldn’t have rushed to get this out. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy,” he wrote on X yesterday (March 2).
Altman and Dario Amodei, Anthropic’s CEO, were once colleagues at OpenAI. In 2021, Amodei and a group of former staffers left to launch Anthropic, positioning the startup as a safety-first alternative to its more commercially aggressive rival. Those philosophical differences between two of Silicon Valley’s most influential A.I. executives have been on full display in recent weeks during negotiations with the Pentagon.
Amodei’s emphasis on safety was put to the test when Anthropic declared it would not allow its A.I. systems to be used to surveil U.S. citizens or to carry out fully autonomous strikes without human oversight. After Amodei refused the Pentagon’s demand for unrestricted use of Claude, President Donald Trump ordered federal agencies to wind down their use of the chatbot within six months, and Defense Secretary Pete Hegseth designated Anthropic as a “supply chain risk.”
The same day the order against Anthropic came down, Altman unveiled OpenAI’s new Pentagon agreement. The deal adopted a less rigid posture, permitting A.I. deployment for all lawful purposes while incorporating technical safeguards on OpenAI’s models.
Despite landing the contract, Altman has struggled to control the narrative. Silicon Valley rallied around Anthropic following its standoff with Washington. Worker groups representing 700,000 employees across Amazon, Google and Microsoft last week issued a joint statement urging their employers “to also refuse to comply should they or the frontier labs they invest in enter into further contracts with the Pentagon.” A separate open letter signed by roughly 950 Google and OpenAI employees called on their employers to “put aside their differences and stand together” in resisting the agency’s demands.
Consumer backlash has also rippled through OpenAI’s business. Over the weekend, large numbers of users switched from ChatGPT to Claude, pushing Anthropic’s app to the top of the U.S. App Store’s free app rankings ahead of ChatGPT. Although Anthropic’s user base remains a fraction of OpenAI’s 900 million weekly active users, the company says its free Claude usage has climbed more than 60 percent since January. The surge in demand even triggered a temporary Claude outage on March 2.
Facing mounting criticism, Altman has moved to contain the fallout. In addition to acknowledging that the Pentagon deal appeared opportunistic, he announced amendments that explicitly prohibit the use of OpenAI’s systems for domestic surveillance. He also clarified that the company’s services would not be used by defense intelligence agencies such as the National Security Agency. Altman, who said he hopes Anthropic receives similar terms, characterized the episode as a “good learning experience” as OpenAI faces “higher-stakes decisions in the future.”
The companies’ divergent approaches to commercial opportunities have sparked public friction before. Earlier this year, Altman drew backlash for testing advertisements in ChatGPT, a move that contrasted with Amodei’s decision to keep Claude ad-free and inspired a tongue-in-cheek Super Bowl ad from Anthropic.
Whether Altman’s amendments will sway public opinion remains uncertain. Anthropic, meanwhile, has capitalized on the moment. As ChatGPT users migrate to Claude, the company has introduced a memory-import tool designed to simplify transferring data from rival chatbots—an unmistakable bid to convert controversy into market share.