Anthropic drops hallmark safety pledge in race with AI peers
By Newley Purnell, Bloomberg
Anthropic PBC, which for years billed itself as a safer alternative to its artificial intelligence rivals, has loosened its commitment to maintaining its guardrails — one of the most dramatic policy shifts yet in an industry where startups once focused on helping humanity are turning their attention to profit and market position.
In 2023, the company said in its Responsible Scaling Policy that it would delay AI development it deemed potentially dangerous. In a Tuesday blog post, Anthropic said it was updating those rules: it will no longer commit to such delays if it believes it lacks a significant lead over a competitor.
“The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level,” Anthropic said in its post.
The company’s move underscores how the high-minded intentions that guided AI startups in their early years have increasingly collided with the pressures to make money and beat out the competition. Anthropic is racing for dominance in the revolutionary technology against a host of formidable rivals, including OpenAI, Alphabet Inc.’s Google and Elon Musk’s xAI Corp.
Dario Amodei, chief executive officer of Anthropic, used to work at OpenAI and left in 2020 in part because of his concerns that the startup was prioritizing commercialization and speed over safety.
OpenAI began as a nonprofit and converted to a more traditional for-profit enterprise last year. It also updated its mission statement in 2024 by dropping the word “safely” from its goal of ensuring that artificial general intelligence benefits humanity.
Both companies are now pushing for initial public offerings as soon as this year, seeking to tap investor interest in AI. Anthropic was recently valued at $380 billion, while OpenAI is raising money at a valuation of more than $850 billion.
“From the beginning, we’ve said the pace of AI and uncertainties in the field would require us to rapidly iterate and improve the policy,” an Anthropic spokeswoman said.
The updated policy coincides with a growing dispute with the US Defense Department over Anthropic’s insistence on guardrails for use of its Claude AI tool. The Pentagon on Tuesday threatened to invoke a Cold War-era law to compel Anthropic to allow the US military to use the startup’s technology if the company failed to comply with the government’s terms by Friday.
At a Tuesday meeting between Amodei and Defense Secretary Pete Hegseth, US officials laid out a series of consequences, including threats to declare Anthropic a supply-chain risk and to invoke the Defense Production Act to use the AI software even without the company's agreement, Bloomberg reported.
Earlier this month Mrinank Sharma, a senior safety researcher, said he was leaving Anthropic. “I continuously find myself reckoning with our situation,” he wrote in a letter to colleagues that he posted to X.
“The world is in peril,” he wrote. “And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”
He didn’t immediately respond to a request for comment on the company’s change to its safety policy.
It's not just Anthropic and OpenAI that have wrestled with the proper safety measures for AI. Earlier this month, Bloomberg reported that Elon Musk's SpaceX and its subsidiary xAI are competing in a secretive new Pentagon contest to produce voice-controlled, autonomous drone-swarming technology — a potentially controversial departure for Musk, who had been among those arguing against making "new tools for killing people." Musk had gone as far as suing OpenAI, which he had backed with tens of millions of dollars in the expectation that it would remain a nonprofit committed to creating safe AI for the public benefit.
OpenAI is also supporting a submission from Applied Intuition in the drone contest, Bloomberg previously reported, though it will limit its contribution solely to the "mission control" element, which converts voice and other instructions from battlefield commanders into digital commands.
The clash between Amodei and OpenAI CEO Sam Altman has at times broken out into public view. At an AI summit in New Delhi last week, the two men ended up standing next to each other beside Prime Minister Narendra Modi, and declined to hold hands even as everyone else lined up on stage did so.
–With assistance from Vlad Savov and Lynn Doan.
©2026 Bloomberg L.P.