House Republicans are reportedly seeking sanctions on Chinese companies that copy American-made artificial intelligence (AI) models.
According to a report Thursday (April 16) from Bloomberg News, Rep. Bill Huizenga, R-Mich., has proposed a bill that would direct the government to identify entities in China and Russia employing improper “query-and-copy” techniques on American AI models.
A draft of the legislation seen by Bloomberg calls for the consideration of sanctions on offenders via the Commerce Department’s blacklist and through presidential emergency economic powers created by a 1977 law.
The report added that possible targets of the bill could include Chinese companies that have already been accused of borrowing from U.S. AI models, such as DeepSeek, Moonshot and MiniMax, along with larger companies.
Bloomberg frames the proposal as Congress’ first major step in addressing concerns from OpenAI, Google and Anthropic that users in China are building less-safe copies of their products that could siphon off customers and undercut them on cost.
The legislation, dubbed the Deterring American AI Model Theft Act, is set to be considered by the House Foreign Affairs Committee next week, the report added.
“Model extraction attacks are the latest frontier of Chinese economic coercion and theft of US intellectual property,” Huizenga, who sits on the House Foreign Affairs Committee, said in a statement.
Referring to Anthropic’s high-profile Claude system, he added that “American AI models are demonstrating transformative cyber capabilities and it is critical we prevent China from stealing these technological advancements.”
Anthropic in February accused Chinese AI startups of creating 24,000 fraudulent accounts to use its tech to improperly train their own models.
“These campaigns are growing in intensity and sophistication,” Anthropic said in a blog post. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers and the global AI community.”
Weeks earlier, Google Threat Intelligence Group (GTIG) said in a blog post that it had witnessed a growing incidence of distillation attacks, also known as “model extraction attacks.”
Last year, GTIG and Google DeepMind identified and disrupted model extraction attacks originating with researchers and private-sector companies around the world.
“Organizations that provide AI models as a service should monitor API access for extraction or distillation patterns,” GTIG wrote. “For example, a custom model tuned for financial data analysis could be targeted by a commercial competitor seeking to create a derivative product, or a coding model could be targeted by an adversary wishing to replicate capabilities in an environment without guardrails.”
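To make the GTIG recommendation concrete, the sketch below shows one simple way a model provider might screen API logs for usage patterns consistent with bulk extraction. The log format, function name and thresholds are illustrative assumptions for this example, not anything GTIG or Google has published; real monitoring would be far more sophisticated.

```python
from collections import Counter

def flag_extraction_suspects(api_log, volume_threshold=1000, uniformity_threshold=0.8):
    """Flag accounts whose usage looks like systematic extraction.

    api_log: list of (account_id, prompt) tuples (assumed format).
    Heuristic: very high query volume combined with highly uniform,
    templated prompts -- a crude proxy for bulk query-and-copy activity.
    """
    volume = Counter(acct for acct, _ in api_log)
    prompts_by_acct = {}
    for acct, prompt in api_log:
        prompts_by_acct.setdefault(acct, []).append(prompt)

    suspects = []
    for acct, count in volume.items():
        if count < volume_threshold:
            continue  # low-volume accounts are not screened here
        prompts = prompts_by_acct[acct]
        # Crude uniformity measure: share of prompts reusing the single
        # most common leading token, suggesting templated bulk queries.
        leads = Counter(p.split()[0] for p in prompts if p.split())
        top_share = leads.most_common(1)[0][1] / len(prompts)
        if top_share >= uniformity_threshold:
            suspects.append(acct)
    return suspects
```

In practice, providers pair volume heuristics like this with account-creation signals (Anthropic’s report centered on thousands of fraudulent accounts) and semantic analysis of what the queries are probing for.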