Dario Amodei Warns of A.I.’s Direst Risks and Explains How Anthropic Is Trying to Stop Them
Anthropic is known for its stringent safety standards, which it has used to differentiate itself from rivals like OpenAI and xAI. Those hard-line policies include guardrails that prevent users from turning to Claude to produce bioweapons—a threat that CEO Dario Amodei described as one of A.I.’s most pressing risks in a new 20,000-word essay.
“Humanity needs to wake up, and this essay is an attempt—a possibly futile one, but it’s worth trying—to jolt people awake,” wrote Amodei in the post, which he positioned as a more cynical follow-up to a 2024 essay outlining the benefits A.I. will bring.
One of Amodei’s biggest fears is that A.I. could give large groups of people access to instructions for making and using dangerous tools—knowledge that has traditionally been confined to a small group of highly trained experts. “I am concerned that a genius in everyone’s pocket could remove that barrier, essentially making everyone a Ph.D. virologist who can be walked through the process of designing, synthesizing, and releasing a biological weapon step-by-step,” wrote Amodei.
To address that risk, Anthropic has focused on strategies such as its Claude Constitution, a set of principles and values guiding its model training. Preventing assistance with biological, chemical, nuclear or radiological weapons is listed among the constitution’s “hard constraints,” or actions Claude should never take regardless of user instructions.
Still, the possibility of jailbreaking A.I. models means Anthropic needed a “second line of defense,” said Amodei. That’s why, in mid-2025, the company began deploying classifiers, an additional safeguard designed to detect and block outputs related to bioweapons. “These classifiers increase the costs to serve our models measurably (in some models, they are close to 5 percent of total inference costs) and thus cut into our margins, but we feel that using them is the right thing to do,” he noted.
Beyond urging other A.I. companies to take similar steps, Amodei also called on governments to introduce legislation to curb A.I.-fueled bioweapon risks. He suggested countries invest in defenses such as rapid vaccine development and improved personal protective equipment, adding that Anthropic is “excited” to work on those efforts with biotech and pharmaceutical companies.
Anthropic’s reputation, however, extends beyond safety. The startup, co-founded by Amodei in 2021 and now nearing a $350 billion valuation, has seen its Claude products—particularly its coding agent—gain wide adoption. Its 2025 revenue is projected to reach $4.5 billion, a nearly 12-fold increase from 2024, as reported by The Information, though its 40 percent gross margin came in lower than expected because of high inference costs, which include the expense of running safeguards such as the bioweapons classifiers.
Amodei argues that the rapid pace of A.I. training and improvement is what’s driving these fast-emerging risks. He predicts that models with capabilities on par with Nobel Prize winners will arrive within the next one to two years. Other dangers include the potential for A.I. models to go rogue, be weaponized by governments, or disrupt labor markets and concentrate economic power in the hands of a few, he said.
There are ways development could be slowed, Amodei added. Restricting chip sales to China, for example, would give democratic countries a “buffer” to build the technology more carefully, particularly alongside stronger regulation. But the vast sums of money at stake make restraint difficult. “This is the trap: A.I. is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all,” he said.