In a digital age brimming with information, an intriguing experiment recently pulled back the curtain on how easily misinformation can spread, particularly when amplified by artificial intelligence. Jon Goodey, an SEO professional, stumbled upon an AI hallucination within his newsletter workflow – a fabricated “Google March 2026 Core Update.” Instead of correcting it, Goodey, with a mischievous glint in his eye, decided to publish this piece of fake news as a live experiment. His goal: to observe the ripple effect of unverified information in the often-turbulent sea of search marketing. What he discovered was a worrying testament to the internet’s capacity to amplify falsehoods, even within specialized communities, and a stark reminder of the critical need for human vigilance in the age of AI-generated content.
Goodey’s method was simple yet profound. He purposely included the AI-generated hallucination about the non-existent Google update in a LinkedIn article. His usual workflow, he explained later in a follow-up post, includes a quality control step designed to catch such AI blunders. However, upon identifying this particular fabrication, he seized the opportunity to turn it into an impromptu study. He wanted to see if anyone would challenge the false information, if the community, normally quick to dissect every nuance of Google’s algorithms, would question a seemingly authoritative announcement. This decision set in motion a chain of events that offered a fascinating, albeit concerning, glimpse into the mechanics of online information dissemination. The experiment wasn’t just about testing the waters; it was about exposing the currents.
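Goodey doesn’t spell out what his quality-control step looks like, but for anyone wiring AI into a publishing pipeline the idea is worth making concrete. The Python sketch below is a minimal illustration of one such pre-publication check, not a reconstruction of his workflow: the allowlist, regex, and function name are all hypothetical, and a production version would verify claims against Google’s Search Status Dashboard rather than a hard-coded set.

```python
import re
from datetime import date

# Hypothetical allowlist: a human editor would populate this from
# Google's Search Status Dashboard before each send.
CONFIRMED_UPDATES = {
    "march 2024 core update",
    "august 2024 core update",
}

# Matches phrases like "March 2026 core update" in a draft.
UPDATE_PATTERN = re.compile(
    r"\b(?:january|february|march|april|may|june|july|august|"
    r"september|october|november|december)\s+(\d{4})\s+core\s+update\b",
    re.IGNORECASE,
)

def flag_unverified_updates(draft: str) -> list[str]:
    """Return review flags for any core-update claim not on the allowlist."""
    flags = []
    for match in UPDATE_PATTERN.finditer(draft):
        claim = match.group(0)
        if claim.lower() not in CONFIRMED_UPDATES:
            flags.append(f"Unconfirmed update referenced: '{claim}'")
        if int(match.group(1)) > date.today().year:
            flags.append(f"Future-dated claim: '{claim}'")
    return flags

# The hallucination that started the experiment trips both checks
# (the second only while the claimed year is still in the future).
draft = "Google's March 2026 Core Update is cracking down on agentic slop."
for flag in flag_unverified_updates(draft):
    print(flag)
```

The point of the design is that the script only flags; a human still makes the call. That is precisely the oversight step the rest of this story shows readers and publishers skipping.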
Remarkably, it wasn’t just human actors who propagated the misinformation; Google itself, the very engine of information, played a role. Goodey’s fabricated LinkedIn article began to rank prominently for the search query “Google March update 2026” – not buried deep in search results, but right on the first page. More alarmingly, Google’s AI Overviews feature picked up the concocted details and presented them as fact. This underscored a significant vulnerability in Google’s ecosystem: its struggle with fact-checking, particularly on SEO-related queries. As Goodey aptly pointed out, searching for SEO insights on Google can feel like a gamble, with no guarantee of accuracy. This long-standing “black spot” in Google’s search results, where even dubious black-hat tactics can sometimes appear validated, makes the amplification of fake updates less surprising. It exposes a fundamental flaw in the promise of objective search results.
The initial ripple from Goodey’s experiment quickly grew into a wave of echoed misinformation. In the SEO world, Google core updates are a powerful magnet for traffic and a proven way for agencies to attract potential clients, and the community has historically shown a proclivity for generating buzz around both real and imagined updates. This established pattern meant Goodey’s fake update was, almost inevitably, picked up and elaborated upon by various SEO websites. These weren’t fleeting blog posts; Goodey observed “detailed, authoritative-sounding articles” that meticulously outlined specifics like “Gemini 4.0 Semantic Filters,” “Information Gain metrics,” and “recovery strategies,” all presented as confirmed facts. This rapid echoing and embellishment served as a stark demonstration of how quickly an initial seed of untruth can blossom into a seemingly credible, widespread narrative online.
While many reputable search marketing publications, Search Engine Journal (SEJ) and its competitors among them, wisely ignored the fabricated March update, one technology site, TechBytes, took the bait. Goodey specifically highlighted an article from this site, titled “Google March 2026 Core Update: Cracking Down on ‘Agentic Slop’,” which not only repeated the core fake news but also inventively added its own layer of specific, technical-sounding details. This piece, credited to Dillip Chowdary, conjured up concepts like a “Gemini 4.0 Semantic Filter,” a “Zero Information Gain” classification system, and a “Discover 2.0 Engine” prioritizing long-form technical narratives. This embellishment went beyond mere repetition; it showed a willingness to invent additional layers of fabrication, further cementing the illusion of a legitimate update and underscoring how readily misinformation evolves and ensnares new readers as it spreads.
This whole episode shines a harsh light on the broader issue of fact-checking in the digital realm, especially in light of Google’s public stance. While Google’s Danny Sullivan has reportedly indicated that Google doesn’t engage in direct fact-checking, a recent Axios report offered more concrete insight. The report detailed Google’s firm refusal to comply with a proposed EU law requiring fact-checking to be integrated into its search results and YouTube videos. Kent Walker, Google’s global affairs president, argued that such integration “simply isn’t appropriate or effective for our services.” Google instead points to its existing content moderation efforts, like the contextual notes feature on YouTube and similar initiatives by Meta and X, as sufficient. For users, however, the notion that Google, the world’s primary gateway to information, actively chooses not to build fact-checking into its core algorithms sits uneasily with its perceived role as a purveyor of truth, and this experiment underscores the very real consequences of that policy.
Goodey’s experiment yielded several critical takeaways that should resonate with anyone navigating the digital landscape. Firstly, and perhaps most importantly, it served as a powerful reminder for individuals to proactively fact-check information they encounter online; without active scrutiny, even professionals in a specialized field can fall prey to misinformation. Secondly, for those employing AI in their workflows, robust validation mechanisms are paramount: AI, while powerful, is prone to “hallucinations,” and human oversight remains indispensable. Thirdly, the experiment sadly confirmed that most online readers do not fact-check; only a handful of commenters challenged Goodey’s false claims, indicating that passive consumption of information is pervasive. Finally, it demonstrated undeniably how AI Overviews and traditional search results amplify misinformation, showing how a single piece of fabricated content can be echoed, embellished, and spread across the internet until it is perceived as truth. In an era where AI-generated content is becoming ubiquitous, Goodey’s small experiment serves as a profound warning: critical thinking and human vigilance are not just recommended, they are essential.

