AI supercharging online scams as regulator ASIC takes down almost 12,000 sites in a year

By News Room · April 7, 2026 (Updated: April 8, 2026) · 6 Mins Read

It feels like the internet, once hailed as a beacon of connection and opportunity, has increasingly become a battleground, especially when it comes to money. Australia’s financial regulator, ASIC, has been working tirelessly to swat down fraudulent schemes, but the fight is getting tougher. Just imagine an average of 32 scam websites being pulled offline every single day – roughly 230 a week. In 2025 alone, ASIC shut down nearly 12,000 such sites, a staggering 90% increase on the previous year. This Herculean effort is a direct response to a frightening trend: Australians collectively lost a heart-wrenching $2 billion to scammers in 2025, a sum that paints a grim picture of the emotional and financial wreckage left behind by these online predators. There is a slight silver lining – a reported 11% dip in losses from investment scams – but ASIC Commissioner Alan Kirkland emphasizes that this is no time for complacency. The regulator is ramping up its game, employing third-party experts who constantly scour the web for suspicious financial schemes. Once a site is identified and verified, its takedown is swiftly ordered. It is a relentless, ongoing process that also relies heavily on reports from ordinary people and financial institutions – a testament to the idea that fighting this beast requires a collective effort.
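The headline figures hang together arithmetically; a quick sketch (using only the rounded numbers quoted in this article – the annual total and the 90% year-on-year rise) shows how the daily and weekly averages follow from the 2025 total:

```python
# Sanity check of the takedown figures quoted above.
# Assumed inputs (from the article): ~12,000 sites removed in 2025,
# described as a 90% increase on the previous year.
sites_2025 = 12_000

per_day = sites_2025 / 365   # average takedowns per day
per_week = sites_2025 / 52   # average takedowns per week

# Implied 2024 baseline if 2025 represents a 90% increase:
sites_2024 = sites_2025 / 1.9

print(f"~{per_day:.0f} sites per day")     # ≈ 33, matching the "32 a day" quoted
print(f"~{per_week:.0f} sites per week")   # ≈ 231, matching the "230 a week" quoted
print(f"~{sites_2024:.0f} sites in 2024")  # ≈ 6,316 implied for the prior year
```

The small gap between "32 a day" and "230 a week" (32 × 7 = 224) is just rounding: both averages derive from the same ~12,000 annual total.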

The game, however, has fundamentally changed with the rise of artificial intelligence. Mr. Kirkland points out that AI is now playing a dual role in this alarming landscape. On one hand, it is making it incredibly easy for fraudsters to churn out convincing-looking websites at an unprecedented pace. Gone are the days when scamming required significant technical know-how or manual effort. Now, as Professor Paul Haskell-Dowland, a cyber security expert, explains, you can “spin up a website, 10 websites, 100 websites, almost unlimited numbers, pretty much at the flick of a switch.” This easy access to powerful tools has transformed scamming into what he calls a “service industry,” where sophisticated deception is readily available. Scammers no longer need to be coding wizards; they can simply pick and choose from a “supermarket aisle” of AI-powered tools, allowing them to assemble highly effective fraudulent campaigns with alarming ease.

The second, more insidious way AI is being weaponized is in crafting the very narratives of these scams. Scammers are now leveraging the “gloss of AI” to sell their deceptive propositions. Imagine encountering a website that promises incredible, rapid returns on investments, all thanks to some supposedly revolutionary AI trading bot. These aren’t just empty promises; the AI itself is used to generate persuasive content, often featuring fabricated reviews and testimonials that mimic legitimate financial advice. This isn’t just about creating a convincing front; it’s about tailoring the deception to specific individuals. With large language models (like ChatGPT), scammers can craft highly personalized attacks. They can gather information from public sources like LinkedIn and social media, then use AI to generate stories or products designed to specifically appeal to a high-net-worth individual, making the scam feel incredibly relevant and believable. This level of customization makes it exponentially harder for people to spot the red flags, blurring the lines between genuine opportunities and elaborate traps.

The human cost of these scams is truly staggering. Beyond the $2 billion lost, each of the nearly half a million scams reported in 2025 represents a person or family whose trust has been betrayed, whose financial security has been threatened, and whose sense of safety has been eroded. The increasing “sophistication” of these scams, fueled by AI, means the days of easily identifiable, poorly written phishing emails are largely behind us. Even job scams, which often target vulnerable young people, are becoming more convincing. The traditional advice of looking for grammatical errors or awkward phrasing as indicators of fraud is now often insufficient: scammers are using AI to generate flawless content, making their fake job offers and investment schemes appear legitimate and leading unsuspecting individuals to share sensitive information like bank account details.

Recognizing the escalating nature of this threat, there’s a glimmer of hope on the horizon in the form of new legislation. In February, laws were passed that aim to shift some of the burden onto major tech players, banks, and other institutions, making them liable to repay scam victims. This Scam Protection Framework has the potential to fundamentally alter the landscape. Social media companies will be compelled to verify advertisers, banks will need to confirm the identity of payees, and telcos will be tasked with detecting and blocking fraudulent texts and calls. These measures are crucial, especially considering that many scams originate from advertisements on social media. While the exact timeline for implementing these mandatory codes is still unclear, Mr. Kirkland believes this framework will be a pivotal element in the ongoing fight. It acknowledges that the responsibility isn’t solely on the individual to be vigilant, but also on the platforms and institutions that inadvertently facilitate these crimes.

Despite these efforts, Professor Haskell-Dowland warns that we are in a perpetual “cat-and-mouse game.” The technology won’t slow down, and neither will our adoption of it. The rapid advancement of AI, from niche academic concept to mainstream accessibility seemingly overnight, means that the tools for both good and ill are evolving at an unprecedented pace. He points out that just a few years ago, the criminal use of AI was limited, but now it’s deeply embedded in various illicit activities. This constant push-and-pull between cybercriminals and cyber-defenders means that there will always be a degree of one-upmanship. While a permanent solution might seem impossible, Professor Haskell-Dowland encourages a “watch this space” mentality. Just as AI emerged in ways no one predicted, other transformative technologies could arise that either solve the problem or, conversely, exacerbate it. In the meantime, individuals are urged to adopt basic but critical protective measures: STOP before sharing personal information or acting on unsolicited advice; CHECK for warnings and verify information independently; and PROTECT themselves by immediately contacting their bank or reporting suspicious activity to Scamwatch if anything feels wrong. In this constantly evolving digital battlefield, vigilance and a healthy dose of skepticism remain our most potent defenses.
