It’s a strange and unsettling world we live in, where even our news can’t always be trusted. Imagine scrolling through your Facebook feed expecting updates from your favorite Olympic swimmer, and instead being hit with outlandish claims about them refusing to compete alongside a transgender athlete. This isn’t some far-fetched dystopian novel; it’s happening right now, and it’s being orchestrated from halfway across the world. In Vietnam, a silent, almost invisible industry of digital manipulation has taken root, turning Australia’s already heated political landscape into a playground for profit and propaganda.
These aren’t just rogue individuals; this is a sophisticated operation. Picture a network of Facebook pages that initially masqueraded as innocent sports fan accounts with names like “Swimming Secrets” and “Tennis Triumph,” mixing genuine athlete news with outright fabrications. One particularly brazen claim suggested Australian swimming star Mollie O’Callaghan would boycott the next Olympics if a trans athlete were allowed to compete – a completely baseless rumor designed to ignite controversy. Soon, these pages, boasting thousands of followers and managed by users in Vietnam, pivoted. The facade dropped, and they zeroed in on Australian national politics, becoming conduits for AI-generated articles. These articles, often a perplexing blend of real news and total fiction, funneled readers to websites brimming with advertisements, all designed to generate revenue. It’s an “industrial level” of misinformation, as open-source intelligence analyst Giano Libot aptly described it, engineered to exploit search engine algorithms and capitalize on the current lack of regulation in places like Southeast Asia.
The good news is that organizations like Agence France-Presse (AFP) are fighting back. After being alerted to these dubious activities, Meta, the parent company of Facebook, took down 13 of the pages in March, citing policy violations. But this isn’t an isolated incident. Vietnam, with its low labor costs and cheap electricity, has unfortunately become fertile ground for “click farming” – the practice of generating fabricated online engagement. Last year, AFP uncovered more than 30 baseball-themed pages, mostly operated from Vietnam, publishing false political claims right before the World Series, prompting similar removals by Meta. And it’s not just Australia or the US: Dutch politicians have also been targeted by similar disinformation campaigns debunked by AFP fact-checkers, who work in 26 languages across the globe. This surge of AI-generated political clickbait is a relatively new and worrying phenomenon for Australia, a nation grappling with increasing political polarization that makes it a prime target for such destabilization efforts.
These manipulators are shrewd, tapping into Australia’s internal political strife. The recent infighting within the opposition coalition and the rise of Pauline Hanson’s far-right One Nation party have provided a wealth of material for these pages to exploit. One of the most widespread hoaxes involved a fabricated claim that Hanson had filed a US$12 million lawsuit against Prime Minister Anthony Albanese’s Labor Party. These identical posts would appear on both the sports-themed and political Facebook accounts, all linking to ad-laden websites, some even containing Vietnamese titles, a clear giveaway of their origin despite contact details deceptively pointing to American hotels and casinos. Another alarming post falsely claimed Hanson had publicly exposed Foreign Minister Penny Wong’s record on CNN, prompting online commenters to cheer for the senator and demand the diplomat “go home.” These claims were, of course, entirely false, and analysis using AI detection tools, including one co-developed by AFP, confirmed they were “likely machine-generated.” As a One Nation spokesperson rightly put it, this is a “clear case of foreign interference in domestic Australian politics.”
While Australia’s next federal election isn’t until 2028, local politics remain incredibly vulnerable. State-level elections, such as those coming up in Victoria and New South Wales, can be significantly swayed by polarizing online narratives. As Ika Trijsburg of the Australian National University notes, electoral behavior at the local level is “much less entrenched,” making it easier to manipulate. Efforts to stem this digital tide are under way. Vietnam, for instance, became the first Southeast Asian country to enact a law regulating AI in March, requiring companies to clearly label AI-generated content. The legislation applies to both local and foreign entities operating within Vietnam – a step in the right direction. The sheer volume of “AI slop,” however, suggests this will be an ongoing battle.
Even as Meta was removing the identified pages, new ones had already appeared: “AU News Today,” launched in mid-February, published Australian political news that mirrored the same disinformation tactics. The Australian Associated Press uncovered a similar Vietnam-based network, disguised as legitimate news outlets, that continued to operate well into March. Cybersecurity expert Shaanan Cohney captures the essence of this struggle, calling it a “cat-and-mouse game”: the skills of those creating disinformation are constantly evolving, making these networks ever harder to detect and dismantle. It’s a sobering reminder that in our increasingly digital world, vigilance and critical thinking are more crucial than ever if we are to discern fact from fabrication.