China went crazy for OpenClaw. Now it’s working to ban it
Earlier this week, social media was wowed by images from the streets of Chinese cities showing senior citizens lining up to have OpenClaw, the always-on AI assistant, installed on their laptops, desktops, and other devices. Areas like Shenzhen and Wuxi offered subsidies to try to scale up adoption of the tool and capitalize on its capabilities. An enormous proportion of all OpenClaw instances installed worldwide, as tracked by public dashboards, emanate from China.
But just as quickly as China adopted OpenClaw, it now appears to be shunning it. The country’s internet emergency response center has issued an official warning about the risks the technology poses. The central government has sent out diktats to government agencies and state-owned enterprises, warning them against installing OpenClaw on their systems. The private sector has also responded. The same pop-up providers of installation services are now offering to uninstall unwanted OpenClaw instances for a fee.
“It’s almost a notice from the Department of Stating the Bleeding Obvious,” says Alan Woodward, a cybersecurity professor at the University of Surrey in England. “Everyone has been saying ‘don’t be so silly as to give agentic AI access to any valuable data.’” Yet Woodward points out that China’s response is more than that—they appear to recognize that AI adoption has been so rapid that it presents a prime target for supply chain attacks. “Attackers were bound to produce malicious add-ons and plug-ins,” he says.
China can’t seem to make up its mind about what to make of OpenClaw, says Ryan Fedasiuk, a fellow at the American Enterprise Institute covering China and its tech development. “Beijing is simultaneously banning OpenClaw on government networks while local governments in Shenzhen and Wuxi are subsidizing companies that build on top of it,” he says. That points to a dual focus, Fedasiuk reckons.
“The Chinese government aims to capture the economic upside of agentic AI while keeping it out of the party-state’s own bloodstream,” Fedasiuk says. However, how long that balance can hold is debatable, not least because of the way every private-sector actor is trying to adopt agentic AI, he adds.
“Banning agents in 2026 is like trying to ban spreadsheets in 1985, or Google Sheets in 2013,” he says. “The productivity gains are enormous, and the opportunity cost of abstaining from the use of agents will eventually become untenable.”
Still, Fedasiuk points out that China’s OpenClaw ban seems eminently sensible. “Governments should be alarmed by the cybersecurity implications of AI agents,” he says. “Social norms around the technology are progressing such that many hackers will soon no longer need to crack the encryption that guards valuable files or digital services, but merely gaslight a piece of software that has already been given access to them.” The problem is that it’s out of step with current thinking about AI.
Nevertheless, it appears that China has decided that widespread use of OpenClaw could cause safety headaches in the months to come. “Prompt injections and plug-in poisoning are still the thorn in a chatbot’s side, and it isn’t surprising China is flagging it, when you consider that every layer of the AI stack has a commercial incentive to push the tools far and wide,” says Jake Moore, a cybersecurity expert at ESET. “There are also the same structural risks with agentic AI tools that are granted high-level system permissions before anyone has properly stress-tested what an attacker can do with them.”
Moore says the on-and-off relationship with OpenClaw reflects how different the pace of development is between the bleeding edge of artificial intelligence and those trying to roll it out responsibly. “AI is clearly built to be fast and invasive, but it is outpacing security standards and reviews,” he explains.
For Fedasiuk, that dysfunction between the speed of development and the speed of security patching is evident in how China’s Central Cyberspace Affairs Commission announced its change in policy. “[It] has watched agents proliferate across government networks and moved to restrict their use within days or weeks,” he says. Usually the commission would study the issue as a policy problem, issue a white paper or road map, and then come to a conclusion on which it acted.
The fact that it didn’t “suggests preexisting anxiety within the CCP [Chinese Communist Party] about what autonomous AI means for information security—and possibly a more sophisticated understanding of where the technology is headed than many Western observers give them credit for,” Fedasiuk says.