The growing adoption of artificial intelligence (AI) models has led to a new form of intellectual property theft called “model extraction attacks” or “distillation attacks,” Google Threat Intelligence Group (GTIG) said in a Thursday (Feb. 12) blog post.
In these attacks, threat actors use their legitimate access to a large language model (LLM) to query it at scale and collect responses that can be used as training data for a new LLM, according to the post. By doing so, the attackers can develop their own AI models at greater speed and lower cost.
There are legitimate uses for distillation, but using this technique without permission is a form of intellectual property theft, per the post.
“As organizations increasingly integrate LLMs into their core operations, the proprietary logic and specialized training of these models have emerged as high-value targets,” GTIG said in the post. “Historically, adversaries seeking to steal high-tech capabilities used conventional computer-enabled intrusion operations to compromise organizations and steal data containing trade secrets. For many AI technologies where LLMs are offered as services, this approach is no longer required; actors can use legitimate API access to attempt to ‘clone’ select AI model capabilities.”
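The mechanics are straightforward in outline. A minimal sketch of what such a distillation-style collection loop might look like is below; the endpoint, field names and prompts are illustrative assumptions, not any specific provider's API or GTIG's description of a particular incident.

```python
# Hypothetical sketch of distillation-style data collection: an actor with
# legitimate API access queries a target model and saves prompt/response pairs
# as supervised training data for a smaller "student" model.
# The endpoint, payload shape, and prompts below are illustrative assumptions.
import json
import requests

API_URL = "https://api.example-llm-provider.com/v1/generate"  # hypothetical endpoint
API_KEY = "sk-..."  # the actor's own, legitimately issued API key

prompts = [
    "Summarize the key risks in this quarterly filing: ...",
    "Write a Python function that parses a trade confirmation: ...",
    # ... in practice, tens of thousands to millions of prompts
]

training_pairs = []
for prompt in prompts:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": 512},
        timeout=30,
    )
    completion = resp.json().get("text", "")
    # Each prompt/response pair becomes one training example for the clone.
    training_pairs.append({"prompt": prompt, "completion": completion})

with open("distillation_dataset.jsonl", "w") as f:
    for pair in training_pairs:
        f.write(json.dumps(pair) + "\n")
```

Nothing in the loop above is an intrusion in the traditional sense, which is why GTIG frames the technique as abuse of legitimate access rather than a compromise of the provider's systems.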
During 2025, GTIG and Google DeepMind identified and disrupted model extraction attacks originating from researchers and private-sector companies around the world, according to the post.
These attacks primarily put model developers and service providers at risk. Average users are generally not affected because the attacks don't threaten the confidentiality, availability or integrity of AI services, per the post.
“Organizations that provide AI models as a service should monitor API access for extraction or distillation patterns,” GTIG said in the post. “For example, a custom model tuned for financial data analysis could be targeted by a commercial competitor seeking to create a derivative product, or a coding model could be targeted by an adversary wishing to replicate capabilities in an environment without guardrails.”
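What that monitoring could look like is sketched below. This is an illustrative example, not GTIG's methodology: it flags accounts whose usage resembles bulk extraction, such as very high request volumes, heavy reuse of a single prompt template, and consistently long completions. The log field names and thresholds are assumptions that a provider would tune to its own traffic.

```python
# Illustrative sketch (not GTIG's method): flag API accounts whose usage
# pattern resembles extraction/distillation. Field names are hypothetical
# stand-ins for whatever the provider's API logs actually contain.
from collections import defaultdict

# Assumed input: one record per API call, pulled from provider logs.
calls = [
    {"account": "acct_1", "prompt_template_hash": "a1f3", "completion_tokens": 900},
    # ...
]

DAILY_CALL_THRESHOLD = 50_000    # well above the service's normal per-account volume
TEMPLATE_REUSE_THRESHOLD = 0.8   # share of calls reusing one prompt template
AVG_COMPLETION_THRESHOLD = 700   # tokens; extraction tends to maximize output length

stats = defaultdict(lambda: {"calls": 0, "templates": defaultdict(int), "tokens": 0})
for c in calls:
    s = stats[c["account"]]
    s["calls"] += 1
    s["templates"][c["prompt_template_hash"]] += 1
    s["tokens"] += c["completion_tokens"]

for account, s in stats.items():
    if s["calls"] == 0:
        continue
    top_template_share = max(s["templates"].values()) / s["calls"]
    avg_tokens = s["tokens"] / s["calls"]
    if (
        s["calls"] > DAILY_CALL_THRESHOLD
        and top_template_share > TEMPLATE_REUSE_THRESHOLD
        and avg_tokens > AVG_COMPLETION_THRESHOLD
    ):
        print(f"Review {account}: possible extraction/distillation pattern")
```

Simple volume- and pattern-based heuristics like these would only be a starting point; a determined actor can spread queries across accounts and vary prompts, so providers would likely combine them with other signals.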
GTIG shared these findings in its quarterly GTIG AI Threat Tracker. In the report, GTIG also found that during the fourth quarter of 2025, threat actors continued to integrate AI to enhance their attacks, using the technology to support reconnaissance and target development, augment phishing campaigns, and aid coding and tooling development, per the post.
In its previous report, issued in November, GTIG said threat actors had begun using AI not only for productivity gains but also for “novel AI-enabled operations.”
The World Economic Forum said in a report released Jan. 12 that 94% of surveyed executives expect AI to be a force multiplier for both defense and offense in the cybersecurity space.
The PYMNTS Intelligence report “COOs Leverage GenAI to Reduce Data Security Losses” found that 55% of chief operating officers said their companies had implemented AI-powered automated cybersecurity management systems.