It used to be that when you wanted to find something online, you’d type a question into a search bar. Think of it like walking into a massive library and asking the librarian for a specific book. But things have changed a lot. Now, it’s more like walking into that library, and before you can even open your mouth, the librarian is already handing you books they think you’ll like based on your past interests. This is what’s happening with personalized content feeds, like Google Discover. For hundreds of millions of people around the world, this feed has become the go-to place for catching up on news and even deciding what to buy.
This shift, while convenient for many, has unfortunately opened up a new playground for bad actors – essentially, a new “attack surface” – and they are quick to jump on any opportunity to trick people. Recently, researchers at the cybersecurity firm HUMAN Security uncovered a massive fraud operation they’ve named “Pushpaganda.” Picture a sophisticated group of digital con artists using artificial intelligence (AI) to churn out fake articles and headlines, then strategically placing these fabricated stories directly into people’s Google Discover feeds. The goal? To trick you into allowing persistent browser notifications, and once they have that permission, to bombard you with scary pop-ups (scareware), phony legal threats, and all sorts of financial scams. At its peak, the researchers observed an astonishing 240 million attempts in a single week to display ads linked to Pushpaganda’s fake websites. The operation started in India but has since spread globally, reaching countries like the U.S., Australia, Canada, South Africa, and the U.K.
Think about the difference between searching and discovering. When you type something into a search bar, you have a clear intention: you’re looking for something specific, and you deliberately click on links that seem relevant. Discovery feeds work differently. They’re built to surprise you with content they predict you’ll find interesting, even if you weren’t actively looking for it. This is precisely where Pushpaganda thrives. These feeds reward content that gets a lot of clicks and keeps you reading, and sensational headlines – like fake alerts about tax deposits or urgent government warnings – are perfect for this because they instantly grab attention. The really concerning part is that generative AI, the same technology behind tools that can write stories or create images, can produce these captivating but utterly false headlines cheaply and incredibly fast. It’s like having an infinite army of content creators churning out convincing-looking fake news faster than enforcement teams can keep up. Google has clear rules against this kind of manipulation, specifically prohibiting the mass creation of content to boost rankings and any AI-generated content that offers no real value to users. Pushpaganda breaks both rules. And while Google did eventually deploy a fix after HUMAN Security shared a list of 113 related websites, the operation had already grown to an enormous scale before it was effectively stopped.
The moment a user is tricked isn’t when they hand over money; it’s much earlier in the process. The operators built more than a hundred fake websites and used AI to craft articles perfectly tuned to show up in Google Discover. The headlines were a mix of believable-sounding financial alerts, like fake IRS deposit confirmations, and outlandish claims about technology. Once a user clicked on one of these links, they were immediately hit with a prompt asking for permission to send notifications. If the user, perhaps out of curiosity or habit, clicked “Allow,” they unwittingly gave the attackers a direct and persistent channel to their computer or phone. This channel is sneaky; it bypasses ad blockers and remains active across browsing sessions. Through these notifications, the attackers sent alarming messages – fake arrest warrants, messages impersonating family members in distress, or false bank deposit alerts – all designed to scare or trick the victim into acting without thinking. Every click on these links generated ad revenue for the criminals. To make things even more convincing, the fake sites also ran deepfake video ads featuring AI-generated images of celebrities and medical professionals endorsing products. They even used scripts to force inactive browser tabs to cycle through other fake pages they owned, artificially inflating ad views and making their sites appear more legitimate to advertising networks. The concerning reality is that AI-generated content has become so sophisticated that it can often pass for genuine material, making it difficult for even trained fraud analysts to tell the difference.
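To make that permission step concrete, here is a minimal sketch of the standard browser notification flow that pages like these rely on. It uses the ordinary Web Notifications API available in modern browsers; it is an illustration of the mechanism, not Pushpaganda’s actual code, and a real push campaign would typically pair it with a service worker and the Push API so messages can arrive even when the site isn’t open.

```typescript
// Illustrative sketch of the browser notification permission flow.
// Once the user clicks "Allow", the permission is stored for the site's origin
// and persists across browsing sessions until the user revokes it in settings.
async function requestNotificationChannel(): Promise<void> {
  if (!("Notification" in window)) {
    return; // this browser doesn't support the Notifications API
  }

  // This is the prompt victims saw immediately after clicking a Discover link.
  const permission = await Notification.requestPermission();

  if (permission === "granted") {
    // The origin can now display notifications at will. Real campaigns add a
    // service worker and registration.pushManager.subscribe() so alerts can be
    // pushed from a server even when no tab for the site is open.
    new Notification("Example alert", {
      body: "Notifications from this origin keep arriving until the permission is revoked.",
    });
  }
}
```

The design point worth noticing is that the browser only asks once: after a single hasty “Allow,” nothing else has to happen on the page for the channel to stay open.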
The challenge for platform operators like Google is immense. Google’s fix addressed this particular campaign, but the core issue of dealing with AI-generated fake content remains wide open. Content specifically designed to look like real news is incredibly hard for automated systems to detect. Signals like how quickly new domains are being created, and how old they are, can offer clues, but enforcement often lags: by the time a cluster of fake websites is identified and shut down, the attackers can simply move to a new set of domains, making it a constant game of whack-a-mole. The scale of the problem is becoming clearer. The FBI’s 2025 Internet Crime Report, for the first time in its 25-year history, tracked AI-related fraud as a distinct category, logging over 22,000 complaints with reported losses nearing $900 million and confirming that AI empowers criminals to produce convincing synthetic content at an unprecedented scale. Financial institutions like Visa have also warned that simply detecting these scams isn’t enough, because modern scams often manipulate victims into authorizing transactions themselves rather than exploiting technical vulnerabilities. Pushpaganda is a prime example of that manipulation happening even earlier: it operates in the content discovery phase, a layer where no financial transaction is happening yet, so there’s no immediate signal for banks or payment processors to trigger a fraud alert.
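Domain age, one of the clues mentioned above, is something anyone can check programmatically. The sketch below looks up a domain’s registration date over the public RDAP protocol via the rdap.org redirector and flags very young domains; the 90-day threshold and the endpoint are illustrative assumptions for this example, not a description of how HUMAN Security or Google actually score domains.

```typescript
// Illustrative sketch: flag newly registered domains using RDAP (RFC 9083).
// The rdap.org redirector forwards the query to the authoritative registry.

interface RdapEvent {
  eventAction: string; // e.g. "registration", "expiration", "last changed"
  eventDate: string;   // ISO 8601 timestamp
}

interface RdapResponse {
  events?: RdapEvent[];
}

async function domainAgeInDays(domain: string): Promise<number | null> {
  const res = await fetch(`https://rdap.org/domain/${domain}`);
  if (!res.ok) return null;

  const data = (await res.json()) as RdapResponse;
  const registration = data.events?.find((e) => e.eventAction === "registration");
  if (!registration) return null;

  const ageMs = Date.now() - Date.parse(registration.eventDate);
  return Math.floor(ageMs / (1000 * 60 * 60 * 24));
}

// Example heuristic: treat domains younger than ~90 days as higher risk.
async function isSuspiciouslyYoung(domain: string): Promise<boolean> {
  const age = await domainAgeInDays(domain);
  return age !== null && age < 90;
}
```

A signal like this is cheap, but as the cat-and-mouse pattern above suggests, it only narrows the field; attackers can simply rotate to freshly registered domains and wait out any age-based filter.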
This new reality requires everyone to be more vigilant. The convenience of a personalized feed comes with the risk of encountering highly sophisticated scams designed to fool even the most internet-savvy individuals. It’s a stark reminder that while technology offers incredible benefits, it also creates new avenues for those with malicious intent.

