It feels like the internet, once hailed as a beacon of connection and opportunity, has increasingly become a battleground, especially when it comes to money. Australia’s financial regulator, ASIC, has been working tirelessly to swat down fraudulent schemes, but the fight is getting tougher. Picture an average of roughly 33 scam websites being pulled offline every single day, around 230 each week. In 2025 alone, ASIC shut down nearly 12,000 such sites, a staggering 90% increase on the previous year. This effort is a direct response to a frightening trend: Australians collectively lost $2 billion to scammers in 2025, a sum that paints a grim picture of the emotional and financial wreckage left behind by these online predators. There is a slight silver lining, with a reported 11% dip in losses from investment scams, but ASIC Commissioner Alan Kirkland emphasizes that this is no time for complacency. The regulator is ramping up its game, employing third-party experts who constantly scour the web for suspicious financial schemes. Once a site is identified and verified as fraudulent, it is swiftly instructed to be taken down. It is a relentless, ongoing process that also relies heavily on reports from ordinary people and financial institutions, a testament to the idea that fighting this beast requires a collective effort.
The game, however, has fundamentally changed with the rise of artificial intelligence. Mr. Kirkland points out that AI is now playing a dual role in this alarming landscape. On one hand, it’s making it incredibly easy for fraudsters to churn out convincing-looking websites at an unprecedented pace. Gone are the days when scamming required significant technical know-how or manual effort. Now, as Professor Paul Haskell-Dowland, a cyber security expert, explains, you can “spin up a website, 10 websites, 100 websites, almost unlimited numbers, pretty much at the flick of a switch.” This accessibility to powerful tools has transformed scamming into what he calls a “service industry,” where sophisticated deception is readily available. Scammers no longer need to be coding wizards; they can simply pick and choose from a “supermarket aisle” of AI-powered tools, allowing them to assemble highly effective fraudulent campaigns with alarming ease.
The second, more insidious way AI is being weaponized is in crafting the very narratives of these scams. Scammers are now leveraging the “gloss of AI” to sell their deceptive propositions. Imagine encountering a website that promises incredible, rapid returns on investments, all thanks to some supposedly revolutionary AI trading bot. The promises are empty, but the packaging is not: AI is used to generate the persuasive content itself, often featuring fabricated reviews and testimonials that mimic legitimate financial advice. This isn’t just about creating a convincing front; it’s about tailoring the deception to specific individuals. With large language models (like ChatGPT), scammers can craft highly personalized attacks. They can gather information from public sources like LinkedIn and social media, then use AI to generate stories or products designed to appeal specifically to a high-net-worth individual, making the scam feel incredibly relevant and believable. This level of customization makes it far harder for people to spot the red flags, blurring the line between genuine opportunities and elaborate traps.
The human cost of these scams is truly staggering. Beyond the $2 billion lost, each of the nearly half a million scams reported in 2025 represents a person or family whose trust has been betrayed, whose financial security has been threatened, and whose sense of safety has been eroded. The increasing sophistication of these scams, fueled by AI, means the days of easily identifiable, poorly written phishing emails are largely behind us. Even job scams, which often target vulnerable young people, are becoming more convincing. The traditional advice of looking for grammatical errors or awkward phrasing as indicators of fraud is now often insufficient. Scammers are using AI to generate flawless content, making their fake job offers and investment schemes appear legitimate and leading unsuspecting individuals to share sensitive information like bank account details.
Recognizing the escalating nature of this threat, there’s a glimmer of hope on the horizon in the form of new legislation. In February, laws were passed that aim to shift some of the burden onto major tech players, banks, and other institutions, making them liable to repay scam victims. This Scams Prevention Framework has the potential to fundamentally alter the landscape. Social media companies will be compelled to verify advertisers, banks will need to confirm the identity of payees, and telcos will be tasked with detecting and blocking fraudulent texts and calls. These measures are crucial, especially considering that many scams originate from advertisements on social media. While the exact timeline for implementing these mandatory codes is still unclear, Mr. Kirkland believes the framework will be a pivotal element in the ongoing fight. It acknowledges that the responsibility isn’t solely on the individual to be vigilant, but also on the platforms and institutions that inadvertently facilitate these crimes.
Despite these efforts, Professor Haskell-Dowland warns that we are in a perpetual “cat-and-mouse game.” The technology won’t slow down, and neither will our adoption of it. The rapid advancement of AI, from niche academic concept to mainstream accessibility seemingly overnight, means that the tools for both good and ill are evolving at an unprecedented pace. He points out that just a few years ago, the criminal use of AI was limited, but now it’s deeply embedded in various illicit activities. This constant push-and-pull between cybercriminals and cyber-defenders means that there will always be a degree of one-upmanship. While a permanent solution might seem impossible, Professor Haskell-Dowland encourages a “watch this space” mentality. Just as AI emerged in ways no one predicted, other transformative technologies could arise that either solve the problem or, conversely, exacerbate it. In the meantime, individuals are urged to adopt basic but critical protective measures: STOP before sharing personal information or acting on unsolicited advice; CHECK for warnings and verify information independently; and PROTECT themselves by immediately contacting their bank or reporting suspicious activity to Scamwatch if anything feels wrong. In this constantly evolving digital battlefield, vigilance and a healthy dose of skepticism remain our most potent defenses.