The attacks could be triggered by a developer simply browsing the web and accidentally landing on a malicious website, according to the post.
Oasis discovered the vulnerability and reported it to the OpenClaw security team, which classified the vulnerability as high severity and pushed a fix within 24 hours, the post said.
While OpenClaw’s community marketplace, ClawHub, hosts over 1,000 malicious fake plugins, this incident involved a vulnerability in the AI agent’s core system itself, per the post.
“For many organizations, OpenClaw installations represent a growing category of shadow AI: developer-adopted tools that operate outside IT’s visibility, often with broad access to local systems and credentials, and no centralized governance,” Oasis said in the post.
To mitigate the risk from attacks like this one, Oasis recommended in the post that organizations inventory the AI agents and assistants their developers use, update OpenClaw immediately, audit the credentials and capabilities granted to AI agents, revoke those not actively needed, and establish governance for non-human identities.
“As AI agents become standard tools in every developer’s workflow, the question isn’t whether to adopt them, it’s whether you can govern them,” Oasis said in the post.
PYMNTS reported in February that OpenClaw demonstrated how an AI agent operating through APIs can browse the web, read email, access files, run software and initiate transactions without a human driving each step. The report also highlighted the need for enterprises to prepare for machine-native execution.
The World Economic Forum released a report in January that said AI is expected to be the most consequential factor shaping cybersecurity strategies this year. According to 94% of surveyed executives, the technology will be a force multiplier for both defense and offense.
PYMNTS reported in September that as agentic AI transitions to practical use, companies and institutions are compelled to assess its economic potential and its associated risks.