With that accusation comes the warning that the government will crack down on this practice, the Financial Times (FT) reported Thursday (April 23), citing a memo seen by the news outlet.
“The US government has information indicating that foreign entities, principally based in China, are engaged in deliberate, industrial-scale campaigns to distill US frontier AI systems,” wrote Michael Kratsios, director of the White House Office of Science and Technology Policy.
As the FT noted, the accusation marks an escalation in tensions over Chinese companies allegedly using American AI research in the race to dominate the technology, and comes ahead of a meeting between the two countries’ presidents.
The issue first drew attention when China’s DeepSeek was accused of using distillation, a technique for training smaller AI models on the output of larger ones, to develop a powerful product at a lower cost.
Kratsios’ memo said the Trump administration would share information with American AI companies about “attempts by foreign actors to conduct unauthorized, industrial-scale distillation” and help them prevent attacks.
He added that Chinese campaigns were “leveraging tens of thousands of proxy accounts to evade detection and using jailbreaking techniques to expose proprietary information.”
The White House’s warning follows reports last week about proposed legislation that would crack down on Chinese companies that copy American-made AI models.
Rep. Bill Huizenga’s (R-MI) bill would sanction entities in China and Russia that use improper “query-and-copy” techniques on American AI models.
“Model extraction attacks are the latest frontier of Chinese economic coercion and theft of US intellectual property,” Huizenga, who sits on the House Foreign Affairs Committee, said in a statement included in a Bloomberg News report on the issue.
Three Chinese AI firms, DeepSeek among them, have been accused by American AI startup Anthropic of illicitly using the company’s output to train their models.
The company said in February that it had found a total of 24,000 fraudulent accounts that the three labs used to generate 16 million exchanges with its Claude model, violating Anthropic’s terms of service and regional access restrictions.
“These campaigns are growing in intensity and sophistication,” Anthropic said in a blog post. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers and the global AI community.”