The Spectre of the Maxim Gorky: Disinformation in the Digital Age

In 1934, the Soviet Union unveiled the Maxim Gorky, a colossal aircraft designed to disseminate propaganda across the vast Russian landscape. Equipped with a printing press, a film projector, and a "Voice from the Heavens" loudspeaker system, this airborne behemoth embodied the state’s ambition to control information and shape public opinion. Nearly a century later, in a world dominated by digital platforms and algorithmic amplification, the spectre of the Maxim Gorky looms large, raising concerns about the spread of mis- and disinformation and about the power dynamics shaping the modern information ecosystem. The Australian government’s recent attempt to address these challenges through the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023 highlights both the urgency and the complexity of this issue.

The proposed legislation, while aiming to enhance media literacy and require greater accountability from social media companies, has sparked debate about its potential for overreach. At its core, the bill would establish codes of conduct for handling mis- and disinformation, backed by financial penalties for non-compliance. Critics argue that this approach risks capturing unintended content and exacerbating political divisions. The bill’s inconsistencies and exceptions, reflecting the contradictions of contemporary public life, further complicate the matter.

The challenge of regulating online information flows is not new. Historically, political leaders have sought to influence public discourse, employing various strategies to persuade or coerce. In democratic societies, this often involved a delicate dance with the free press, which served as the gatekeeper of information. The rise of the internet and social media has fundamentally disrupted these traditional power structures, creating a decentralized and often chaotic information landscape.

The current definitions of mis- and disinformation cover verifiably false or misleading content shared online with the potential to cause harm; the difference between the two lies chiefly in intent, with disinformation spread deliberately to deceive while misinformation may be shared in good faith. Even so, the line between misinformation and legitimate dissent can be blurry, and the debate over what constitutes harmful content often becomes a political battleground in itself.

The proliferation of online platforms has fostered a resurgence of independent journalism and citizen reporting. Yet the early utopian vision of a decentralized and democratized internet has not fully materialized. Instead of liberation from traditional power structures, we find ourselves grappling with new forms of digital tyranny, where algorithms and profit motives dictate the flow of information.

The shift in power from traditional media gatekeepers to tech companies has profound implications for the spread of mis- and disinformation. Unlike state-sponsored propaganda, false or misleading content online often spreads not by design but as a by-product of the business models driving social media platforms. These platforms are designed to maximize user engagement, which in turn attracts advertising revenue. Algorithmic systems amplify content that generates the most interaction, regardless of its veracity or potential for harm. This creates a perverse incentive for content creators to produce sensationalist and polarizing material, even if it is based on falsehoods.
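To make that incentive concrete, consider the deliberately simplified sketch below, which ranks a feed purely by an engagement score. The Post fields, the weights, and the rank_feed function are hypothetical illustrations rather than any platform’s actual code; the point is only that when the sort key is engagement alone, accuracy never enters the calculation.

```python
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    fact_checked: bool  # tracked, but never consulted by the ranker


def engagement_score(post: Post) -> float:
    # Weight reactions by how strongly they tend to drive further interaction:
    # shares push content into new feeds, so they count the most here.
    return 1.0 * post.likes + 3.0 * post.comments + 5.0 * post.shares


def rank_feed(posts: list[Post]) -> list[Post]:
    # Order purely by engagement; veracity is not part of the sort key.
    return sorted(posts, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = [
        Post("Measured, sourced report", likes=120, shares=10, comments=15, fact_checked=True),
        Post("Outrage-bait rumour", likes=300, shares=90, comments=200, fact_checked=False),
    ]
    for post in rank_feed(feed):
        print(f"{engagement_score(post):>7.1f}  {post.text}")
```

Run as written, the outrage-bait item tops the feed simply because it drew more shares and comments; the fact_checked flag is carried along but never consulted.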

The commercial pressures of the digital media landscape extend beyond social media platforms, influencing traditional media outlets as well. The competition for clicks and views incentivizes news organizations to prioritize engagement over accuracy, sometimes amplifying misleading or sensationalized content. This dynamic creates a symbiotic relationship between old and new media, where each reinforces the other’s tendency towards sensationalism and misinformation. The consequences of this dynamic can be devastating. Social media platforms have been implicated in facilitating real-world violence, including genocide in Myanmar. Authoritarian regimes and state-sponsored actors have exploited the vulnerabilities of these platforms to spread disinformation and manipulate public opinion.

The challenge of combating mis- and disinformation therefore requires a deeper understanding of the political economy of the web. Simply fact-checking or debunking misleading content is insufficient; the root of the problem lies in the commercial exploitation of personal data that fuels the engagement-driven business models of social media platforms. Meaningful reform requires addressing this underlying issue. Strengthening privacy laws and restricting data collection would undercut the targeted advertising that makes engagement maximization so profitable, forcing these companies to re-evaluate their strategies rather than simply optimizing for attention. Such reforms, while likely to face opposition from powerful media and tech interests, are essential for creating a more democratic and informed public sphere. Ultimately, the fight against mis- and disinformation is about reclaiming control over the information ecosystem and ensuring that it serves the public interest, not just the pursuit of profit.
