‘GEO’ Services Are Flooding the Chinese Internet With Misinformation

By News Room | March 24, 2026 | 5 Mins Read

It’s a brave new world, and with it come brave new ways to twist signals and manipulate perceptions. Imagine a bustling digital marketplace, where everyone is vying for attention, trying to make their product shine. Now, imagine a new, incredibly powerful tool enters the scene – Artificial Intelligence. These AI systems, capable of sifting through mountains of data and providing instant answers, quickly become the new gatekeepers of information. But what happens when some clever, and perhaps less ethical, entrepreneurs figure out a way to whisper sweet nothings directly into the AI’s “ear,” making it favor their products over others? This isn’t a sci-fi plot; it’s the reality of “Generative Engine Optimization,” or GEO, currently causing a stir in China.

GEO is essentially a sophisticated form of digital marketing, but instead of targeting human eyes, it targets the algorithms of AI models. Companies, eager to boost their visibility in AI-powered search results and recommendations, are paying for this service. We’re talking prices ranging from a few hundred dollars to nearly five thousand dollars for a three-month subscription, available on major e-commerce platforms like Taobao and JD.com. The core idea is simple: flood AI models like DeepSeek, Doubao, and Kimi with so much content about a specific product that the AI can’t help but notice and prioritize it. It’s like stuffing the ballot box, but with digital articles and optimized keywords. What starts as a seemingly innocent way to optimize content distribution and enhance promotional reach can quickly devolve into a systematic campaign of misinformation.
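The "ballot-box stuffing" dynamic described above can be illustrated with a toy model. The sketch below is purely hypothetical: the product names, the articles, and the mention-counting "engine" are all invented for illustration, and none of it reflects how DeepSeek, Doubao, Kimi, or any real AI system actually ranks sources. It only shows why an answer engine that leans on corpus frequency is vulnerable to flooding.

```python
from collections import Counter

PRODUCTS = {"alphawatch", "betaband", "apollo-9"}  # invented names

def recommend(corpus, query_terms, top_n=3):
    """Toy 'generative engine': ranks products purely by how many
    articles mention them alongside any of the query terms."""
    scores = Counter()
    for article in corpus:
        text = article.lower()
        if any(term in text for term in query_terms):
            for product in PRODUCTS:
                if product in text:
                    scores[product] += 1
    return [product for product, _ in scores.most_common(top_n)]

# Two 'organic' articles about real (invented) products.
organic = [
    "AlphaWatch review: a solid smartwatch with good battery life.",
    "BetaBand is a budget smart wristband with heart-rate tracking.",
]

# A GEO-style flood: a dozen near-duplicate planted articles about a
# fictional product crowd out the organic signal.
flood = ["Apollo-9 smartwatch: best-in-class health monitoring."] * 12

print(recommend(organic, ["smartwatch", "wristband"]))
print(recommend(organic + flood, ["smartwatch", "wristband"]))
```

Before the flood, the toy engine surfaces the organic products; after it, the fictional "Apollo-9" tops the ranking simply because it is mentioned most often, which is the essence of the scheme the CCTV report describes.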

The problem, as highlighted by China’s state broadcaster CCTV, is that some businesses aren’t using GEO simply to inform, but to deliberately mislead. They’re feeding AI models skewed or even fabricated information about their products, tricking the AI into giving users biased answers. Picture a cunning salesperson who has not only mastered the art of persuasion but has also found a way to program a universally respected expert to endorse their product, even if it’s utterly undeserving. One GEO service provider, led by a man named Wang, boasted to CCTV about serving over 200 clients in just a year. Their pitch? Guaranteed top-three placement on any AI platform for their clients’ desired content. Wang admitted that because AI algorithms are constantly changing, maintaining that prime visibility requires a relentless, continuous barrage of client-related content – a digital feeding frenzy, if you will.

To truly grasp the insidious nature of GEO, consider this chilling experiment. An industry insider, using software called the “Liqing GEO Optimization System,” created a completely fictional smartwatch, the “Apollo-9.” They then fed fabricated details about this imaginary product into the software. What happened next was astonishing: the system automatically generated over a dozen promotional articles, complete with made-up authors, and published them across the insider’s social media accounts. Within a mere two hours, asking a major AI model, “How is the Apollo-9 smartwatch?” yielded a response citing these fake articles. The AI, completely fooled, went on to describe the product’s (non-existent) health monitoring features and enthusiastically recommended the fictional device. The insider then escalated the experiment, publishing eleven more articles over the next three days, including fake expert reviews and industry rankings. The result? When asked for “smart health wristband recommendations,” at least two different major AI models proudly listed the non-existent Apollo-9 among their top suggestions. This isn’t just a glitch; it’s a demonstration of how easily AI can be co-opted to spread persuasive falsehoods.

Li, the founder of the Liqing GEO system, openly acknowledged the ethical quagmire his service presented but, perhaps with a touch of cynical pragmatism, justified it by saying, “Every business loves it… They all hope others won’t engage in ‘AI poisoning,’ even as they themselves do it.” He revealed the core of the GEO operation: publishing advertisements or optimized news releases on specific publishing websites. These websites, once struggling to make a profit, are now inundated with publishing orders, each costing dozens of yuan. Li painted a vivid picture of this unprecedented demand, rhetorically asking, “Do you know how many articles some sites publish per day? Hundreds, literally every minute.” This paints a picture of a digital content farm churning out AI-fodder at an alarming rate, all in the service of manipulating algorithms.

The Chinese authorities are aware of this burgeoning problem. Earlier this year, the State Administration for Market Regulation recognized “AI-generated advertising” as a significant challenge in online ad oversight, calling for targeted enforcement. However, concrete regulations specifically addressing GEO have yet to be issued. The public outcry following the Consumer Rights Gala did trigger a wave of disavowals from several GEO companies, who, perhaps feeling the heat, publicly condemned “brainwashing” AI and pledged to curb the misinformation flowing from their services. Yet the genie is out of the bottle. This phenomenon serves as a stark reminder of the ethical tightrope we walk as AI technologies become increasingly integrated into our lives. While the potential for AI to enhance and inform is immense, so too is its vulnerability to manipulation and the spread of deliberate falsehoods. As consumers, we are left to wonder: how much of the “information” we receive from AI is genuinely impartial, and how much is merely a cleverly disguised commercial pitch?
