It’s understandable to feel a shiver of unease when new technologies emerge, especially when they touch on our personal safety and privacy. This is precisely the sentiment rippling through Australia, where a recent study reveals that more than half of adults are wrestling with worries about artificial intelligence being wielded for ill intent. Imagine logging onto your favorite app, only to wonder if some unseen AI is tracking your every move, or if a perfectly crafted deepfake of a loved one might be used to trick you out of money. These aren’t far-fetched science fiction plots anymore; they are the very real anxieties that everyday Australians are grappling with. From the fear of location tracking to the chilling thought of someone impersonating them online, the Australian Institute of Criminology’s report paints a vivid picture of a nation cautiously embracing a powerful new tool while simultaneously bracing for its potential dark side.
Interestingly, despite these pervasive fears, Australians are far from shying away from AI. In fact, nearly three-quarters of adults have dabbled with at least one AI-powered application in the past year, with many using three or more in their daily routines. Think about it: Google Maps guiding you through unfamiliar streets, an AI translation service helping you understand a foreign language, or even ChatGPT assisting with your online queries – these are all examples of AI seamlessly woven into our lives. We’re spending a significant chunk of our day online, around three to four hours, suggesting a comfortable familiarity with digital tools. What’s curious is that the very people who use AI the most also tend to be less concerned about its potential for harm. It’s almost as if the more we engage with these technologies, the more we understand their benefits and perhaps, the less we fear their misuse. As Griffith University criminology lecturer Andrew Childs points out, AI is quickly becoming a normal part of life, helping with everything from work productivity to creative endeavors. It’s often integrated into platforms we already use, sometimes with our explicit knowledge, and other times as a subtle, behind-the-scenes upgrade. Monash University’s Abhinav Dhall attributes this widespread adoption to aggressive marketing and the ease of access to many free or affordable AI tools, tempting users to try them out in hopes of simplifying their tasks.
However, beneath this veneer of integration and growing familiarity lies a deep-seated apprehension about specific types of AI-driven harm. The report delves into the “common” AI crimes Australians fear most, revealing a disturbing landscape of potential misuse. Topping the list for over half of adults is the fear of AI location tracking tools monitoring their whereabouts and behavior. Imagine the chilling thought of an invisible digital stalker, always knowing where you are and what you’re doing. Equally concerning to nearly half of Australians are deepfakes – those incredibly realistic but fabricated videos of public figures, or even altered images and voices used for catfishing. The idea that AI can convincingly mimic someone to manipulate public opinion or deceive individuals for personal gain is a profound worry. Many also believe AI is being used to create fake online identities for criminals, making it easier for them to impersonate others to swindle money or information. Even more shockingly, a significant portion of Australians fear AI being used for revenge pornography or for online grooming, where individuals could use AI to pretend to be a child. This catalogue of fears paints a picture of a society acutely aware of AI’s potential to amplify existing harms and create entirely new ones, undermining trust and safety in the digital realm.
It’s natural to wonder how often these AI-related crimes actually occur in Australia, and this is where the picture becomes somewhat nuanced. While the fears are widespread, actual reported incidents specifically involving AI are, for now, less frequent. For instance, less than one percent of respondents reported experiencing AI-generated fake videos or photos of them in the past year. However, it’s crucial to acknowledge that the lines between traditional cybercrime and AI-enhanced cybercrime are blurring. For example, a significant number of Australians have experienced technology-enabled stalking or harassment, and many have had their location tracked without consent. While these incidents may not always be explicitly attributed to AI in reports, the technology undoubtedly strengthens the tools available to perpetrators. Dr. Childs warns of a stark reality: the rise of “dark AI tools” specifically designed for malicious purposes, openly traded in illicit online marketplaces. These are not merely accidental misuses but deliberately engineered weapons, often lacking any safety mechanisms. Meanwhile, Professor Dhall emphasizes that this trend isn’t unique to Australia; negative uses of AI are on the rise globally due to a lack of user training. The dark web, he notes, is teeming with services that allow users to generate synthetic identities, images, and voices, making impersonation and deception easier than ever before.
Beyond the general concerns, the report highlights how demographics shape Australians’ anxieties about AI. Interestingly, men are generally more likely than women to believe that AI-driven impersonation, spam, harassment, revenge pornography, or account hacking are common occurrences, and they are also more likely to believe they or someone they know will be targeted. Age presents an intriguing divide. Adults aged 35 to 49, along with those over 50, are the most apprehensive, expressing both a belief that AI misuse is common and a personal fear of falling victim. This suggests a worry about their own vulnerability that persists even where they perceive AI-enabled crimes as less pervasive than younger generations do. Conversely, younger adults, aged 18 to 34, are less likely to fear AI misuse targeting them personally but are more inclined to believe that smart home devices and personal assistants could be used to confuse or terrorize their users. The presence of children in the household also magnifies these concerns, with parents aged 25 to 49 more likely to believe AI misuse is widespread and to fear becoming a victim. Older individuals have traditionally been seen as the most susceptible to online scams, but as AI expert Toby Walsh warns, AI could fundamentally change the landscape of victimisation.
Ultimately, the Australian Institute of Criminology’s report serves as a crucial wake-up call. It’s a reminder that while AI promises immense benefits and is rapidly integrating into our lives, it also carries a significant shadow of potential harm. The fears expressed by Australians are not irrational; they reflect a growing awareness of the technology’s dual nature and the urgent need for proactive measures. As we navigate this increasingly AI-driven world, the challenge lies in balancing innovation with robust safeguards, ensuring that the incredible power of artificial intelligence is harnessed for good, and that individuals are protected from its darker applications. The report underscores the imperative for ongoing education, clear societal guidelines, and robust regulatory frameworks to help Australians – and indeed, people worldwide – embrace AI with confidence rather than fear.