Anthropic’s AI Attack Threatens the Strategy at the Foundation of Cybersecurity
To defend against AI-enabled cyberattacks, such as those by Chinese hackers using Anthropic’s models, companies must adopt AI-resistant cybersecurity strategies.
Chinese hackers successfully used Anthropic’s artificial intelligence (AI) models to run a hacking campaign almost entirely without human interaction, breaching “major technology companies and [foreign] government agencies,” the company reported in November. Nothing about any of the individual hacks was especially noteworthy or sophisticated. Still, the campaign marks a step change in cybersecurity.
AI-Assisted Hacking
The Chinese attackers did not just use Anthropic’s AI model to find a vulnerability in the digital armor. They built a hacking system that combined many distinct AI sessions that each worked on different aspects of the intrusion. Those separate sessions were coordinated by still other AI systems that were only occasionally directed by humans. Not only was the AI hacking system complex, but the attacks themselves were conducted in six phases, with many steps in each phase. In total, the AI system likely took dozens or more separate steps against each successful target.
None of the individual steps were groundbreaking, nor was stringing them together. The offensive tools and techniques were the same old ones that are already popular among hackers. And even a mediocre hacker could automate each of those steps without any AI at all. In fact, without having to run AI models, the attack would be faster, cheaper, and easier to scale across the internet. But each of those steps is a point where the attack could fail. The more steps in an attack, the less likely it is to succeed. What AI provides is a way to adapt on its own after failing, so that no single failure would end the attack.
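The arithmetic behind that tradeoff can be sketched in a few lines. If each step of an attack succeeds with some probability, a scripted chain succeeds only when every step does, while an attacker that can retry each failed step recovers most of that lost probability. The per-step success rate, step count, and retry budget below are illustrative assumptions, not figures from the campaign.

```python
# Illustrative sketch: probability that a multi-step attack succeeds,
# with and without the ability to adapt (retry) after a failed step.
# All numbers are assumptions chosen for illustration only.

def fixed_chain(p_step: float, n_steps: int) -> float:
    """A scripted attack succeeds only if every step succeeds on the first try."""
    return p_step ** n_steps

def adaptive_chain(p_step: float, n_steps: int, retries: int) -> float:
    """An adaptive attacker retries each failed step up to `retries` extra times."""
    p_layer = 1 - (1 - p_step) ** (retries + 1)  # chance of eventually passing one step
    return p_layer ** n_steps

p, n = 0.7, 20  # assumed per-step success rate and number of steps
print(f"scripted attack succeeds: {fixed_chain(p, n):.4f}")
print(f"adaptive attack succeeds: {adaptive_chain(p, n, retries=4):.4f}")
```

Under these assumed numbers, a 20-step scripted attack almost never survives the whole chain, while the same chain with a handful of retries per step succeeds most of the time, which is the gap that AI-driven adaptation closes.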
How Adaptive Hacking Attacks Cyber Defense Strategies
By adapting at each step, this campaign took aim at the foundational strategy of cyber defense rather than at any of its components. In an approach called defense in depth, cybersecurity builds layers of defenses rather than relying on any individual defense to hold. Our study, titled “Defending Against Intelligent Attackers at Large Scales,” shows that defense in depth gives defenders an exponential advantage as long as there are relatively few attackers who can adapt intelligently at each step. Against large scales of intelligent attackers, the strategy crumbles.
There are two ways to implement defense in depth. One is to have enough layers of defenses that at least one will hold. The other is for each layer to slow the attacker enough to detect and expel them. Both have worked well in the past, but neither was designed to handle adaptable attackers at scale. Succeeding in the future will require using both strategies together.
In the combined strategy, no individual defense needs to hold for long. It just needs to force the attacker to make several attempts. If the defender has some chance of detecting the attacker in each attempt, then even a modest number of defenses stands a chance against large numbers of adaptable AI attackers.
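That combined block-and-detect logic can be sketched the same way: if every attempt against a layer carries some chance of detection, then forcing retries turns each defensive layer into extra detection opportunities. The detection probability, attempts per layer, and layer count below are illustrative assumptions.

```python
# Illustrative sketch of the combined block-and-detect strategy.
# Each attempt against a defensive layer carries an independent chance
# of detection, so forcing the attacker to retry multiplies the
# defender's opportunities to catch them. Numbers are assumptions.

def p_caught(p_detect: float, attempts_per_layer: int, n_layers: int) -> float:
    """Probability the attacker is detected at least once across all attempts."""
    total_attempts = attempts_per_layer * n_layers
    return 1 - (1 - p_detect) ** total_attempts

# An attacker forced into 5 attempts at each of 6 layers, with a 10%
# detection chance per attempt, is caught with high probability:
print(f"caught: {p_caught(0.10, attempts_per_layer=5, n_layers=6):.3f}")
```

The point of the sketch is that no single layer needs a high detection rate; a modest per-attempt chance, compounded across the retries that adaptive attackers must make, is what gives the combined strategy its strength.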
Rethinking Cybersecurity Strategies for the AI Era
Some defenses were probably already built according to that combined block-and-detect strategy. Many others were not. They did not need to be, because blocking or detection alone was sufficient. Now that attackers are targeting cybersecurity strategy itself, and not just its components, security teams need to revisit their entire approach to security, not just their toolkit.
CEOs will see reports of attacks like this one and ask their security teams whether their defenses are built to withstand them. Those teams might look at the relatively rudimentary offensive techniques and say that their defenses would have held. And they would be right. But would those defenses also have held against whichever technique the AI attacker tried next, after the first attempts failed? The math says maybe not, but it also says that successful defensive strategies exist. We will see whether defenders implement AI-resistant strategies in practice.
About the Author: Andrew Lohn
Andrew J. Lohn is a senior fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. He previously served as the director for emerging technology on the National Security Council staff within the Executive Office of the President during the Biden administration, under an Interdepartmental Personnel Act agreement with CSET. He has a PhD in electrical engineering from UC Santa Cruz and a bachelor’s in engineering from McMaster University.
Image: Photo For Everything/shutterstock
The post Anthropic’s AI Attack Threatens the Strategy at the Foundation of Cybersecurity appeared first on The National Interest.