A ‘Zombie’ AI Attack Shows How ChatGPT Can Leak Your Secrets
ChatGPT, the world’s most popular chatbot with over 800 million weekly users, just got a stark reminder that convenience can come at a price.
Security firm Radware has revealed a new attack technique, dubbed “ZombieAgent,” that could turn the AI assistant into a stealthy data thief and even a persistent spy inside users’ accounts.
ZombieAgent exploits ChatGPT’s Connectors — the feature that links the chatbot to Gmail, Outlook, Google Drive, GitHub, Teams, Jira, and other external systems. While designed to make ChatGPT more useful, this connection gives attackers a foothold to secretly extract sensitive information.
Radware’s demonstration shows multiple attack types. In a zero-click attack, an attacker sends a malicious email. If the user asks ChatGPT to perform a Gmail-related task, the AI reads the hidden instructions in the email and exfiltrates data via OpenAI’s servers “before the user ever sees the content,” according to Radware.
A one-click variant works similarly but relies on the user sharing a malicious file with ChatGPT. Once read, the chatbot executes instructions to leak data or even modify its memory for persistent exfiltration. This persistence means attackers could continuously harvest sensitive information from every conversation, indefinitely.
The ‘Zombie’ that won’t leave
What makes this attack truly unique is its persistence. Normally, if you close a chat window, the “poisoned” instructions should disappear. But the chatbot now has a “Memory” feature that remembers things about you to be more helpful.
Radware found they could trick the AI into saving malicious rules into its long-term memory. Once those rules are in place, the attacker doesn’t need to send another email. Every time the user starts a new conversation, the AI checks its memory, sees the attacker’s “spy” rules, and continues stealing data.
“ZombieAgent illustrates a critical structural weakness in today’s agentic AI platforms.” Pascal Geenens, VP of Threat Intelligence at Radware, warned, “Enterprises rely on these agents to make decisions and access sensitive systems, but they lack visibility into how agents interpret untrusted content or what actions they execute in the cloud. This creates a dangerous blind spot that attackers are already exploiting.”
Technical bypass
ZombieAgent bypasses OpenAI’s existing protections. ChatGPT now refuses to append parameters to URLs, but attackers can use pre-constructed character-specific URLs to leak data one letter or digit at a time. This clever workaround lets attackers exfiltrate information without ever having the AI “modify” the URLs.
Radware explains, “This combination of broad connector access and invisible or near-invisible prompt injection significantly amplifies the real-world impact and practicality of the attacks we describe.”
The vulnerability was reported to OpenAI via Bugcrowd in September 2025 and patched in mid-December. Radware’s researcher Zvika Babo highlighted that, even with fixes, the core risks of indirect prompt injection remain. The attacks demonstrate how easily AI agents can be manipulated through the content they are designed to read and process.
What this means for you
While the specific “ZombieAgent” loophole has been closed, the underlying problem remains: AI models struggle to distinguish between a user’s helpful command and a hidden malicious instruction buried in a document or email.
As Geenens put it when speaking to Dark Reading: “I sometimes refer to AI as a baby with a massive brain. It’s very naive, and it has the knowledge of the world and has access to all your secrets. So there’s no need to do a buffer overflow, special coding, no. You just talk with it and you convince it to do something it was not supposed to do in the first place.”
For now, the best defense is to be cautious about which apps you connect to your AI assistants and to regularly review what your AI has “remembered” about you in its settings.
Also read: OpenAI’s latest ChatGPT Atlas security update underscores why prompt injection remains a moving target for AI agents.
The post A ‘Zombie’ AI Attack Shows How ChatGPT Can Leak Your Secrets appeared first on eWEEK.