Imagine scrolling through your social media feed, and there he is: Jarren, a rugged Aussie bloke with a mop of dark curls and kind brown eyes. He’s standing in the Australian outback, sunburnt red dirt under his feet, a serpent coiling gracefully in front of him. In other videos, he’s trekking through dense, ancient forests or cruising down sun-drenched, deserted roads, his gaze scanning the skies for majestic wedge-tailed eagles. The soundtrack is often a pulse-pounding rhythm of percussion and the deep, resonant hum of a yidaki, or didgeridoo. His voice? It’s a comforting blend of Costa Georgiadis’s earthiness and Steve Irwin’s infectious passion, peppered with “mate” and “crikey” as he shares astonishing facts about Australia’s incredible wildlife – from venomous snakes and snapping crocs to tiny, potent redback spiders and even the elusive night parrot, a bird once thought lost to time. His tens of thousands of followers are utterly captivated, leaving comments full of adulation, marveling at his bravery, and half-jokingly suggesting he deserves his own TV show. He’s the Australian wildlife hero we never knew we needed.
But then comes the gut punch: none of it is real. The captivating wildlife encounters, the earnest, charismatic presenter, the stunning backdrops – all of it is a meticulously crafted illusion, generated by artificial intelligence. The revelation transforms admiration into a thorny ethical dilemma. The “Bush Legend” account, created in October 2025 according to Meta, traces its roots to New Zealand: it began as an AI-generated satirical news account, “Nek Minute News,” before shifting its focus to wildlife. Early iterations of the AI character even sported white body paint, reminiscent of traditional Indigenous ochre, and a string of beads around his neck, deepening the problematic imitation. Despite its artificiality, the character has rapidly amassed a large following – 90,000 on Instagram and 96,000 on Facebook – presenting itself as a platform for Australian wildlife education and awareness. Its meteoric rise, however, is shadowed by silence: the creator, a South African living in New Zealand, has not responded to inquiries from Guardian Australia.
The core of the issue lies in the deliberate choice to create an avatar that strongly resembles an Indigenous Australian person, a decision that has sparked significant ethical alarms and accusations of “cultural flattening.” Dr. Terri Janke, an Indigenous lawyer and expert in cultural and intellectual property, admits that even she was initially fooled. “You think it’s real,” she recounts, “I was just scrolling through and I was like, ‘How come I’ve never heard of this guy?’ He’s deadly, he should have his own show.” She playfully likened him to a “Black Steve Irwin,” a figure combining the adventurous spirit of Irwin with the gravitas of David Attenborough. While acknowledging the videos’ potential as an “incredible” educational tool, the Wuthathi, Yadhaigana, and Meriam woman expresses deep concern about the creation of such a seemingly Indigenous avatar. “Whose personal image did they use to make this person? Did they bring together people?” she asks, feeling genuinely “misled by it all.” She argues that this isn’t just a harmless creative endeavor; it’s an insidious form of “theft” that inflicts significant “cultural harm.”
Dr. Janke highlights that AI-generated content poses a particularly acute risk to marginalized communities. It’s not merely about intellectual property; it’s about the potential for cultural appropriation and the erasure of authentic voices. This AI creation, she argues, could inadvertently steal opportunities from real Indigenous creators, such as the vast network of Aboriginal rangers who authentically share their knowledge and connection to the land. She emphasizes that while AI can be used ethically to create content about First Nations people, it unequivocally requires their explicit consent and active involvement in the process. Tamika Worrell, a senior lecturer in critical Indigenous studies, goes a step further, labeling the AI avatar as “digital blackface.” She explains that in the absence of robust legislative safeguards for AI tools, there’s an alarming possibility that Indigenous images, cultural knowledge, and stories can be appropriated without any consent.
Worrell paints a stark picture of a future in which AI becomes a wild west where “we have no control or no say in it.” She worries that not only can stories and language be co-opted, but the visual likenesses of Indigenous people – even those who have passed away – can be blended into AI avatars with “no kind of accountability to the communities that these people are from.” This digital blackface allows non-Indigenous creators to generate “artworks, generate people” without any genuine engagement with Indigenous communities, creating a false veneer of representation. The harm, she asserts, is twofold: such AI accounts often default to showcasing “palatable” or “comfortable” aspects of Indigenous culture, sidestepping complex realities, and, disturbingly, they can amplify existing racism. Even on the Bush Legend page, she notes, the comments section features the “same racist comments that we know mob online get. We see it again applied to an AI person as well” – proof that AI’s artificiality doesn’t inoculate it against real-world prejudice.
Toby Walsh, a leading AI expert, explains that AI models are trained on vast datasets, and if those datasets contain biases, the AI will reproduce and perpetuate them. “They are going to carry the biases of that training data,” he warns, meaning that if online images or videos of certain groups are stereotypical, the AI will project those stereotypes forward. The Bush Legend account has recently attempted to address some of the criticism through its AI avatar, claiming it doesn’t “seek to represent any culture or group” and is “simply about animal stories.” It further stated that it’s not “asking for money, donations or support,” that content is “free to watch,” and that critics should simply “scroll on” – a stance at odds with its earlier calls for followers to subscribe for a monthly fee. Walsh concludes with a sobering thought: while digital literacy can help us identify AI-generated content for now, the “tells” are rapidly disappearing. Soon, he predicts, it will be “next to impossible” to distinguish reality from fabrication. We’re entering an era where our very perception of truth, once rooted in what we could see, is being fundamentally challenged, as faking convincing content becomes alarmingly easy.

