Microsoft Alert: AI Chrome Extensions With 900K Installs Are Spying on ChatGPT Conversations
A new investigation has revealed that some browser extensions posing as helpful AI assistants are quietly collecting users’ conversations with large language models.
Researchers from the Microsoft Defender Security Research Team say malicious browser extensions designed to appear as legitimate AI assistant tools have collected significant amounts of user data from interactions with popular AI platforms.
According to the company’s security blog, the campaign has already reached around 900,000 installs across Chromium-based browsers.
Telemetry from Microsoft’s security systems also shows the activity touching enterprise environments. The company said it detected related activity “across more than 20,000 enterprise tenants, where users frequently interact with AI tools using sensitive inputs.”
What was actually being stolen
The malicious add-ons targeted conversations with AI services such as ChatGPT and DeepSeek.
Once installed, the extensions could record browsing activity and parts of users’ AI conversations. Microsoft noted that the captured information included prompts and responses from chats, as well as full URLs of visited pages.
According to Microsoft, the data collection exposed organizations to the risk of leaking sensitive information. The company warned that the extensions gathered content that could include “proprietary code, internal workflows, strategic discussions, and other confidential data.”
How the attack works
The attack exploits a growing trend: browser extensions that embed an AI chat assistant in a sidebar alongside the page being viewed.
Threat actors studied legitimate tools to mimic their branding and behavior. The malicious versions were then published on the Chrome Web Store with names and descriptions that closely resembled real productivity tools. Because Google Chrome and Microsoft Edge are both built on Chromium, a single extension listing could spread across both browsers.
Once installed, the extension stealthily began monitoring user activity. The code logged visited websites and segments of chat messages generated while interacting with AI platforms. Microsoft explained that the extension essentially ran as a background data collector within the browser.
The collected data was stored locally and periodically sent to attacker-controlled servers. Microsoft’s analysis found that the extension transmitted the information through HTTPS requests to domains such as deepaichats[.]com and chatsaigpt[.]com.
Because the communication used common web protocols, the traffic could appear similar to normal browser activity. The report says the extension’s goal was long-term monitoring rather than immediate disruption.
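Microsoft did not publish the extension’s source, but the store-locally-then-upload pattern it describes can be sketched as follows. All names here (ExfilQueue, record, flush) are hypothetical, and the transport is stubbed out so the batching logic stands alone:

```javascript
// Hypothetical sketch of the buffer-and-flush pattern described in the
// report. This is an illustration of the technique, not recovered code.
class ExfilQueue {
  // `send` is injected so the transport can be swapped; the report says
  // the real extension used ordinary HTTPS POSTs to attacker-controlled
  // domains such as deepaichats[.]com.
  constructor(send, batchSize = 10) {
    this.send = send;
    this.batchSize = batchSize;
    this.buffer = []; // stands in for the extension's local storage
  }

  // Called whenever the collector observes a visited URL or a chat fragment.
  record(entry) {
    this.buffer.push({ ts: Date.now(), ...entry });
    if (this.buffer.length >= this.batchSize) this.flush();
  }

  // Drain the local queue and hand one JSON batch to the transport.
  flush() {
    if (this.buffer.length === 0) return;
    const batch = this.buffer.splice(0, this.buffer.length);
    this.send(JSON.stringify(batch));
  }
}
```

Because each upload is just a periodic HTTPS request carrying JSON, it blends into normal browser traffic, which is why domain-level indicators matter for detection.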
Persistence through normal browser behavior
Unlike many forms of malware, the extension did not need advanced persistence techniques. Once installed, it behaved like a standard browser extension and automatically reloaded every time the browser started.
Local storage kept session identifiers and queued data uploads, allowing the extension to continue collecting information across sessions.

The findings highlight a growing security challenge tied to the rapid adoption of AI tools.
Employees often paste code, internal notes, or business information into AI chat services, making those conversations valuable targets for attackers. When a malicious extension can read the webpage content, it can also capture that text directly from the browser’s interface.
This means sensitive information shared with AI tools may be exposed without users realizing it.
Steps organizations should take
Security teams are being urged to review browser extensions installed across corporate devices and remove any that are unverified or unnecessary.
The Microsoft researchers recommend several defensive measures, including:
- Auditing all browser extensions across the organization.
- Restricting extension installations through enterprise policies.
- Monitoring network traffic for connections to known malicious domains.
- Enabling browser protections such as SmartScreen and network filtering.
- Educating employees about the risks of installing unverified AI tools.
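The second recommendation can be enforced with Chromium’s built-in enterprise policies. A minimal sketch, assuming a Linux managed-policy file (on Windows the same policies are set via Group Policy or the registry); the allowlisted extension ID is a placeholder, not a real extension:

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "aaaabbbbccccddddeeeeffffgggghhhh"
  ]
}
```

With `"*"` in the blocklist, users can install only the extensions explicitly named in the allowlist, closing off look-alike listings from the Chrome Web Store.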
Organizations should also implement policies governing how sensitive data is shared with AI platforms.

Also read: Another browser-based AI security threat surfaced in fake AI Chrome extensions targeting Gmail.
The post Microsoft Alert: AI Chrome Extensions With 900K Installs Are Spying on ChatGPT Conversations appeared first on eWEEK.