AI Fake News Found Its Perfect Home in Google Discover
The way people find information online has shifted. Typed search queries are giving way to algorithmically curated feeds. Google Discover, the personalized content feed used by hundreds of millions, has become a primary entry point for news and purchasing decisions worldwide.
That shift has opened a new attack surface. Threat actors have moved quickly to exploit it.
HUMAN Security researchers disclosed a fraud operation called Pushpaganda, in which attackers use AI-generated content to infiltrate Google’s Discover feed and trick users into enabling persistent browser notifications, then use those notifications to deliver scareware, fake legal threats and financial scams. At its peak, researchers observed 240 million bid requests tied to Pushpaganda domains in a single seven-day period. The operation began in India and has since spread to the U.S., Australia, Canada, South Africa and the U.K.
From the Search Bar to the Feed
Traditional search results require intent: a user types a query, scans a list of links and clicks deliberately. Discover works differently. The feed surfaces content by predicted relevance, not expressed intent. A user does not have to be looking for something to encounter it.
That gap is where Pushpaganda operates. The feed rewards clicks and dwell time. Sensationalist headlines about tax deposits and government alerts score well on those signals. Generative AI produces them cheaply and fast. According to PYMNTS, attackers now mass-produce synthetic content at a velocity that outpaces platform enforcement.
Google’s own rules draw a clear line. The company’s scaled content abuse policy prohibits generating large volumes of content to manipulate rankings. Its AI content guidelines prohibit automation that produces content with no genuine user value. Pushpaganda violates both.
Google confirmed it deployed a fix after HUMAN Security shared its full list of 113 associated domains. The operation had already reached scale before enforcement caught it.
Where the Scam Takes Hold
Before a user reaches a payment screen, the attack is already complete. Threat actors built 113 domains. They used AI to generate articles optimized for Discover feed placement. Headlines ranged from fake IRS deposit confirmations to implausible hardware claims.
Once a user clicked through, a notification permission prompt appeared immediately. Users who tapped “Allow” handed the attacker a persistent channel to their operating system, one that bypasses ad blockers and survives browser sessions. Those notifications then delivered fake arrest warrants, alerts impersonating calls from family members and false bank deposit alerts.
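The mechanism relies on the standard web Notification API rather than any exploit. A minimal, hypothetical sketch (not the actors’ actual scripts) of how a single “Allow” tap opens a persistent push channel:

```javascript
// Hypothetical sketch of the notification-permission mechanism the
// article describes. In a browser, Notification.requestPermission()
// shows the "Allow" prompt; once granted, the permission persists
// across sessions until the user manually revokes it.

// Pure helper: is the push channel open for a given stored permission
// state ("default", "granted", or "denied")?
function channelOpen(permission) {
  return permission === "granted";
}

// Browser-only wiring (guarded so the sketch also loads under Node).
if (typeof Notification !== "undefined") {
  Notification.requestPermission().then((p) => {
    if (channelOpen(p)) {
      // A granted permission outlives the tab: notifications can be
      // delivered even after the original page is closed.
      new Notification("Example alert");
    }
  });
}
```

The key property is that the grant is durable: it is stored per-origin by the browser, which is why a single tap is enough to sustain the scam campaigns described above.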
Each click generated ad revenue. Deepfake video ads impersonating celebrities and medical professionals ran on the same domains. A JavaScript rotation algorithm forced inactive tabs to cycle through actor-owned pages. It inflated ad loads and made the sites appear credible to advertising networks. PYMNTS reported that AI-generated content now passes human review. Fraud analysts can no longer reliably distinguish it from legitimate material.
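The tab-rotation trick needs only a timer and the Page Visibility API. A hedged sketch, with illustrative URLs and an assumed interval (none taken from the actual operation):

```javascript
// Hypothetical reconstruction of the rotation mechanism: cycle an
// inactive tab through actor-owned pages so each one registers fresh
// ad impressions. URLs and the 30-second interval are illustrative.
const PAGES = [
  "https://example-a.test/story1",
  "https://example-b.test/story2",
  "https://example-c.test/story3",
];

// Pure rotation step: advance and wrap around the page list.
function nextIndex(i) {
  return (i + 1) % PAGES.length;
}

// Browser-only wiring (guarded so the sketch also loads under Node).
if (typeof document !== "undefined") {
  let i = 0;
  setInterval(() => {
    // Rotate only while the tab is hidden, so the user never sees it.
    if (document.visibilityState === "hidden") {
      i = nextIndex(i);
      window.location.href = PAGES[i];
    }
  }, 30000);
}
```

Because each navigation triggers a full page load, the ad network sees what looks like organic traffic spread across many pages of a seemingly active site.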
The Limits of Platform Enforcement
Google deployed its fix. The infrastructure question remains open.
AI-generated content formatted to look like legitimate news is harder to detect using automated signals. Velocity and domain age are useful heuristics. But enforcement lags. By the time a cluster of domains is identified, the infrastructure may have already rotated.
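A defender-side illustration of the heuristics named above, publishing velocity and domain age, combined into a simple score. The thresholds and field names are assumptions for the sketch, not values from any real detection system:

```javascript
// Illustrative heuristic: flag domains that publish at abnormal
// velocity while being very young. Thresholds are hypothetical.
function suspicionScore({ articlesPerDay, domainAgeDays }) {
  let score = 0;
  if (articlesPerDay > 50) score += 1; // mass-produced content
  if (domainAgeDays < 30) score += 1;  // freshly registered domain
  // Both signals together are stronger than either alone.
  if (articlesPerDay > 50 && domainAgeDays < 30) score += 1;
  return score; // 0 = benign-looking, 3 = strong signal
}
```

The weakness the article points to is visible even in this toy version: an actor who rotates to a new domain resets the age signal, and one who throttles output stays under the velocity threshold.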
The FBI’s 2025 Internet Crime Report logged 22,364 complaints referencing AI, with reported losses of $893 million. It was the first year in the agency’s 25-year history that it tracked AI-related fraud as a distinct category. The bureau found that AI enables criminals to generate synthetic content convincingly and at scale.
Visa has cautioned that detection alone is insufficient. Modern scams frequently manipulate victims into authorizing transactions themselves rather than exploiting technical vulnerabilities. Pushpaganda moves that manipulation earlier. It operates in the content discovery layer, where no transaction signal exists to trigger a fraud alert.
The post AI Fake News Found Its Perfect Home in Google Discover appeared first on PYMNTS.com.