Financial services companies and other organizations have six to 12 months to fix vulnerabilities in their software before Chinese artificial intelligence models develop capabilities equal to those of Anthropic’s Mythos, Anthropic CEO Dario Amodei said Tuesday (May 5).
Speaking during Anthropic’s livestreamed event The Briefing: Financial Services, Amodei said the Chinese models are six to 12 months behind Anthropic’s models, “so I think we have roughly that amount of time to fix all these vulnerabilities.”
Mythos has identified tens of thousands of vulnerabilities that Anthropic has not disclosed, because only a small fraction of them have been fixed and the company doesn’t want malicious actors to exploit them, Amodei said.
These vulnerabilities could lead to breaches and ransomware attacks.
“I actually think if we handle this right, in six to 12 months, which is the time that we have to handle it because the Chinese models will catch up by that time. … If we handle this right, we could be in a better position than we started in because we fixed all these bugs, there’s only so many bugs to find, you know, the depth doesn’t go forever, and if we’re rewriting all of our code with models like Mythos, we can use Mythos to write code that’s inherently more secure by design,” Amodei said.
It was reported April 7 that Anthropic was allowing select partners early access to Claude Mythos Preview, a model positioned for defensive cybersecurity work, so that they could identify vulnerabilities and strengthen systems before new AI models could exploit those weaknesses.
Amodei’s comments on Tuesday came on the same day that the Center for AI Standards and Innovation (CAISI), which is part of the Department of Commerce’s National Institute of Standards and Technology, announced that Google DeepMind, Microsoft and xAI have each agreed to share their frontier AI models with CAISI for national security testing before releasing those models to the public.
The companies joined Anthropic and OpenAI, which signed agreements with CAISI’s predecessor, the U.S. Artificial Intelligence Safety Institute, in August 2024 and later renegotiated those agreements to match CAISI’s directives.