Imagine a world where telling truth from fiction feels like trying to catch smoke. That’s the unsettling reality artificial intelligence is creating in our information landscape, according to Ioanna Kostarella, a media scholar from the sun-drenched city of Thessaloniki. She’s not talking about simple photo trickery; she’s describing a sophisticated new era where AI churns out “synthetic information” so flawlessly that it blurs the lines between what’s real and what’s a masterful fabrication. This isn’t just an academic concern; it’s about the very foundation of how we understand our world, how we make decisions, and how vulnerable we become to expertly crafted deceptions.
Kostarella, an Associate Professor of Journalism at Aristotle University, shared her insights on the “Homo sAIence” vidcast, painting a vivid picture of how generative AI extends the shelf life and deepens the impact of misinformation. Only a few years ago, a false claim could typically be picked apart and debunked within minutes, or a few hours at most: a quick cleanup crew sweeping away the lies. Now, she warns, the game has changed entirely. Debunking a truly convincing piece of AI-generated content can stretch to as long as three days. Three days for experts, with all their training and tools, to fully dismantle a single synthetic fabrication. For the rest of us, without that specialized knowledge, the implication is clear: we are more exposed, more susceptible, and more easily swayed by narratives that are simply not real. It is like being caught in a fog where you cannot quite trust your own eyes.
This isn’t about AI being a little mischievous; it marks a fundamental shift in the nature of deception. Kostarella calls it “synthetic information”: material forged by AI that mimics genuine reporting with uncanny precision. It is not just a doctored image; it could be an entire news article, a compelling video, or an audio clip that sounds perfectly authentic yet is completely fabricated. This level of sophistication makes it extraordinarily difficult, even for seasoned professionals, to discern authenticity. Imagine a crime scene where every piece of evidence looks perfectly real, but all of it was cleverly planted. That is the challenge we face. And if experts struggle for days to unravel these digital illusions, what hope do ordinary people have? As she starkly put it, “When experts need three days to debunk a false claim, we understand very well what this means for people who do not have relevant knowledge and may be more vulnerable to misinformation.” Technology advances, but it often leaves segments of the population more exposed.
While the landscape appears daunting, Kostarella isn’t without hope. Resources and verification tools do exist, she emphasizes, pointing to initiatives like the European Digital Media Observatory and its regional affiliates as vital lines of defense. These organizations work to identify and combat misinformation, acting as digital watchdogs in a constantly evolving environment. However, and this is where she drives her point home, the most crucial shield against this onslaught is not a clever AI algorithm or a complex piece of software. It is something far more fundamental, something deeply human: our own critical thinking. “We need to return to the basics,” she urges, to the bedrock practice of using our intelligence and judgment to evaluate the information we encounter. It is about pausing, questioning, and applying that internal filter before accepting something as truth. In a world awash with persuasive fakes, the ability to think critically becomes our superpower.
The conversation naturally pivoted to the future of journalism itself, a profession that often seems perpetually on the ropes. The rise of AI casts a long shadow over newsrooms. Recent research by the BBC and the European Broadcasting Union found that 7% of news consumers already get their information from AI assistants; among those under 25, the figure jumps to 15%. On the surface, this growing reliance on automated sources might seem efficient. But a separate study delivered a sobering counterpoint: of 3,000 chatbot responses examined, 45% contained at least one error. That is not a minor slip-up; nearly half the information being served up by AI was potentially incorrect. This tension between convenience and accuracy presents a significant dilemma for consumers and professional journalists alike.
Despite these challenges, Kostarella remains steadfast in her belief in the enduring power of human journalism. She acknowledges the pressure from automated systems and “content farms”, operations designed to mass-produce cheap, often unreliable content. Yet she firmly believes that journalism will always be fundamentally human, bringing a blend of critical judgment, emotional intelligence, and ethical responsibility that machines, however advanced, cannot fully replicate. “Journalism has at its centre the human being,” she declares; “it continues to carry a social, humanitarian and emotional dimension.” This is not just about reporting facts; it is about understanding context, empathizing with stories, and holding power accountable in a way that resonates with human experience. She is confident that for many years to come, the human element in news production will be not just vital but irreplaceable.

Journalists, she argues, should embrace AI for its practical benefits: summarizing lengthy reports, converting speech to text, restoring old audio, tracking audience engagement, and improving audiovisual quality. But that adoption must be tempered by an unwavering commitment to ethical responsibility. In an age where technology leaps forward while regulation trails far behind, journalists must act as the moral compass, ensuring these powerful tools are always used in service of truth, accuracy, and rigorous source verification. It is a call to arms for the profession: the human touch, combined with responsible use of technology, is the ultimate safeguard against the rising tide of digital deception.