It feels like we’re all drowning in information these days, doesn’t it? Endless streams of updates, opinions, and “facts” hit us from every angle, and it’s often hard to tell what’s real, what’s biased, and what’s just plain made up. That sense of being overwhelmed, or even a little lost when it comes to trusting what we find online, is something many Australians are grappling with. We’re seeing a real shift away from traditional news sources – those reliable, often local voices we once turned to – towards social media, influencers and, increasingly, AI chatbots and their instant summaries. We’ve traded the trusted, if sometimes imperfect, gatekeepers of information for a wild digital frontier where opaque algorithms are the new sheriffs in town. These algorithms, essentially complex sets of rules that decide what we see, are built with little regard for accuracy, journalistic quality, or the kind of evidence-based reporting that underpins a healthy, functioning community.
This digital deluge is happening at a time when, ironically, the very foundations of good journalism are crumbling. Local newspapers are disappearing, and distrust in mainstream news outlets is simmering, fueled by a feeling that they often miss the mark or have their own agendas. The whole situation has been supercharged by the rise of “zero-click” AI search results. Instead of simply pointing us to a news article, AI now offers direct, instant answers right at the top of our search pages. It might seem convenient, but it’s a huge problem for news organizations: fewer people click through to their websites, fewer eyes land on their content, fewer subscriptions follow, and ultimately revenue falls. It’s a vicious cycle pushing an already fragile news ecosystem – one that genuinely struggles for funding and relevance – even closer to collapse. Imagine if your local baker suddenly had a machine outside that gave away free bread recipes, so no one bought their loaves anymore. It’s a similarly existential threat.
The urgency of this situation recently brought together a diverse group of more than 45 leaders at a “News Futures: Media Policy Roundtable.” This wasn’t just a chat; it was a gathering of minds from across sectors – industry giants, government officials, non-profits, powerful digital platforms, and academics – all wrestling with this very problem. The consensus was striking: the sheer opacity of algorithms on social media, search engines, and AI platforms is a major threat. These hidden mechanisms decide what information is elevated, what’s buried, and what’s simply omitted, all with very little public accountability or explanation. It’s like a secret council deciding what we’re allowed to know. The recently published report from the roundtable isn’t suggesting minor tweaks; it calls for a paradigm shift in how we understand, support, and define journalism in Australia, advocating fundamental changes to protect the very idea of a well-informed public.
One of the biggest concerns voiced was the explosion of misinformation. Like a weed, it thrives in fertile ground: when people desperately want information but reliable evidence is scarce, misinformation rushes in to fill the void. The good news is that a consistent supply of high-quality, trustworthy news and information can act as a strong herbicide. Our research shows a clear connection: the more people consume good news, the better they become at identifying and pushing back against misinformation. The legal and educational frameworks, however, haven’t caught up with the rapid pace of AI content. We’re seeing “deepfakes” – incredibly realistic but fabricated videos and audio – and there are no clear rules about where online content originates, or even basic guidelines for checking its authenticity. And because many AI systems operate as “black boxes” – we can see the input and output, but not how they arrive at their conclusions – it’s incredibly difficult to pinpoint responsibility when errors occur or biases creep in. This technological complexity only adds to a widespread feeling of uncertainty.
It’s no wonder, then, that Australians have strikingly little confidence in their ability to sort through the noise. Only about 40% feel confident they can tell whether a website or social media post is trustworthy, and just 43% believe they can reliably confirm whether information they find online is true. This already shaky confidence is being further eroded by the growing volume of “AI slop” and “hallucinations” – low-quality, often nonsensical, or downright false information churned out by AI. This isn’t a minor inconvenience; it’s a significant concern that cuts deep. In fact, Australians are among the most worried people globally when it comes to online misinformation. When everything starts to feel unreliable, and the lines between fact and fiction blur, it’s understandable that many people just want to switch off. A significant number are doing exactly that, with 69% saying they avoid news often, sometimes, or occasionally. It’s like giving up the search for a needle in a haystack because you suspect there’s no needle there at all.
The experts at the roundtable were deeply concerned about the low levels of media and AI literacy among ordinary citizens. If people struggle to verify information and don’t know where to turn for trusted sources, they become vulnerable. This isn’t just about personal choice; it’s a systemic issue. Digital platforms, through their secret algorithms, act as unreliable intermediaries for news, making invisible, unaccountable decisions that fundamentally reshape how we access information. These choices create “winners” and “losers” in the online information space, elevating some content and burying the rest, often with little regard for actual quality or accuracy. They’re like biased editors operating in the shadows, constantly changing the rules without telling anyone. Platforms currently have no real incentive to explain how their algorithms work, when they change, how news is prioritized (or deprioritized), or how AI-generated content is produced. This lack of transparency is a critical roadblock. We urgently need clear transparency in how algorithms curate content, and a mandatory requirement to label all AI-generated content so we, the users, know what we’re engaging with.
So, where do we go from here? The roundtable participants identified five key priorities that, if implemented, could dramatically improve our entire information ecosystem. Critically, three of these specifically target the challenges posed by AI. First, we need greater transparency from big tech platforms. Australians deserve to know how algorithms on search engines, social media, and AI chatbots are curating their news feeds. We need to know when AI is involved in creating the content we see. Clear labeling and disclosure rules are essential to rebuild trust and empower users to make informed choices. Second, fair rules for AI’s use of news are crucial. AI companies shouldn’t be allowed to freely take valuable journalistic content without compensation. Industry-wide licensing agreements, copyright reform, and stronger competition laws are needed to ensure news organizations are fairly paid when their work is used to train these powerful generative AI tools. It’s about valuing the labor and investment that goes into creating reliable information.
Third, prioritizing media and AI literacy education across the nation is a no-brainer. Teaching people how algorithms work, how to identify bias, and how to spot misinformation is one of the fastest and most cost-effective interventions available. This isn’t just for kids in school; adults need ongoing opportunities to develop these crucial skills too. We need to empower everyone to be more critical consumers of information. Fourth, journalism funding needs to reflect its vital role as a public good. One-off grants are simply not enough to sustain this essential service. Sustainable alternatives, such as a tax offset for journalists’ salaries, could directly support newsrooms – especially crucial small and regional outlets – while maintaining accountability, providing a more stable foundation for quality journalism to thrive. Finally, journalism training for news influencers, content creators, and digital-first outlets is essential. To ensure the quality of the entire news ecosystem, the industry needs to collaborate on a common code of ethics and best practices, then actually implement it. We can no longer afford an information environment where invisible AI dictates what we see and believe. Without decisive action, the public interest journalism that forms the bedrock of our democracy and social cohesion will continue to erode, leaving us vulnerable and uninformed.

