Anthropic Safety Leader Resigns, Warns ‘the World Is in Peril’
A senior AI safety leader at one of Silicon Valley’s most prominent artificial intelligence companies has resigned, citing deep concerns about global crises and internal pressures that clash with core values.
Mrinank Sharma, a senior AI safety leader at Anthropic, has resigned from the company, saying he is stepping away at a time when he believes the “world is in peril.”
Sharma, who led the company’s Safeguards Research team, announced his departure in a letter to colleagues published publicly on Feb. 9. In it, he reflected on his work, his concerns about global risks, and his decision to pursue a different path outside the fast-moving AI industry.
“I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” Sharma wrote in his letter.
During his two-year tenure at Anthropic, Sharma worked on several critical safety initiatives. He cited accomplishments including understanding AI sycophancy, developing defenses against AI-assisted bioterrorism, and producing one of the first AI safety cases.
His final project focused on “understanding how AI assistants could make us less human or distort our humanity,” according to the letter.
Values, pressure, and a turning point
A recurring theme in Sharma’s letter is the tension between values and real-world pressures inside organizations and society.
“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions. I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too,” he wrote.
He added a broader warning about the pace of technological power: “We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”
While the letter does not accuse Anthropic of specific wrongdoing, its tone has fueled discussion across the tech community about the ethical strain of working at the frontier of AI development.
Beyond internal debates, Sharma’s departure taps into broader public anxiety about artificial intelligence. Polls and public discourse increasingly reflect fears that advanced AI could trigger catastrophic outcomes, ranging from mass job displacement and social destabilization to loss of human agency and even existential risk.
Prominent technologists and researchers have warned that unchecked AI development could outpace society’s ability to govern it, a concern amplified by the rapid release of ever more powerful models. Sharma’s language about a world “in peril” mirrors these broader worries, reinforcing the sense that AI’s promise and its potential for harm are accelerating in tandem.
A trend of departures?
Rather than moving to another tech company, Sharma is taking an unconventional path. He plans to explore a poetry degree and practice what he calls “courageous speech.” Sharma wrote, “My intention is to create space to set aside the structures that have held me these past years, and see what might emerge in their absence.”
Sharma is not the only researcher to leave the company recently.
As Business Insider noted, other researchers, including Harsh Mehta and Behnam Neyshabur, have recently left Anthropic to “start something new.”
This comes as the company is reportedly seeking a new funding round that could value it at $350 billion and is aggressively rolling out new, more powerful models like Claude Opus 4.6.
Sharma closed his letter with a poem by William Stafford, “The Way It Is,” about holding onto a personal thread through life’s changes. He appears to be following his own thread, leaving a booming AI industry behind to grapple with its largest questions from the outside.
Also read: Anthropic CEO Dario Amodei warned at Davos that AI chip sales to China are “like nukes for North Korea.”
The post Anthropic Safety Leader Resigns, Warns ‘the World Is in Peril’ appeared first on eWEEK.