In the whirlwind of modern politics, when an election throws up a surprising result – especially when a figure notorious for spreading misinformation triumphs – it’s natural to point fingers and declare that falsehoods carried the day. Conversely, when such a politician faces defeat, there’s an equally strong urge to celebrate, believing truth has finally prevailed and fact-checking has won a decisive victory. However, both reactions miss the fundamental point. The true purpose of fact-checking and counter-disinformation efforts isn’t to dictate who wins or loses an election. Instead, it’s about empowering citizens with reliable information and equipping them with the critical thinking skills to dissect suspicious claims. What voters ultimately choose to do with that information is, and should remain, their own decision. The recent Hungarian parliamentary election in April 2026 offers a compelling illustration of this principle, showing that the landscape of political influence is far more nuanced than a simple battle between truth and lies.
The 2026 Hungarian election pitted the entrenched, state-backed propaganda machine of Viktor Orbán and his Fidesz party against the fresh energy of political newcomer Péter Magyar and his Tisza party. As impartial observers and fact-checkers, our commitment is to apply the same rigorous standards to all claims, regardless of their source. Yet it would be disingenuous to draw a false equivalence between the two sides’ disinformation tactics. Fidesz, leveraging a vast network of government-controlled media and loyal influencers, waged a relentless campaign of fear, falsely accusing its opponents of planning to reintroduce military conscription and send young Hungarians to fight in Ukraine. It even resorted to crudely edited videos presented as “evidence.” While Tisza’s platform also contained some arguably misleading presentations of commodity price increases, these paled in comparison to the scale and malicious intent of Fidesz’s fabrications. Despite Fidesz’s overwhelming propaganda advantage, Magyar and Tisza achieved a resounding victory, sending shockwaves through the political establishment.
This unexpected outcome might tempt those involved in counter-disinformation work to pat themselves on the back, believing their efforts were instrumental in combating the spread of falsehoods. However, such a conclusion would be overly simplistic and ultimately misleading. Judging the effectiveness of fact-checking by specific electoral results fundamentally misinterprets its mission. Our goal isn’t to sway voters towards a particular candidate or party; it’s to foster an informed electorate capable of discerning truth from fiction, not just for a single election, but for all future civic engagements. More profoundly, it is empirically a fool’s errand to attribute an election’s outcome solely to the success or failure of disinformation campaigns. Human voting behavior is a complex tapestry woven from myriad threads: personal economic circumstances, the national economic climate, the charisma and policies of individual candidates, and the pressing social issues of the day. A piece of disinformation, no matter how pervasive, might lose its potency if voters perceive other concerns, like healthcare or education, as more immediately relevant. The long-running war in Ukraine, for instance, might have desensitized some Fidesz voters to further messaging on the topic, shifting their focus to more salient domestic issues.
Perhaps the most crucial takeaway for policymakers from Orbán’s defeat is a shift in perspective: disinformation should not be viewed as an irresistible political force that, if unchecked, will inevitably manipulate voters. This necessitates toning down the alarmist, almost militaristic rhetoric often associated with “hybrid warfare” and strategic disinformation, which tends to overemphasize electoral periods. Instead, counter-disinformation efforts should be seen as a continuous, sustained endeavor, not merely an emergency response during election cycles. Their purpose is to consistently equip citizens with the tools to make informed choices, whatever those choices may be, fostering a more resilient information ecosystem year-round. Beyond this foundational principle, the 2026 Hungarian election offered three distinct insights into the evolving landscape of political disinformation: the surprising limitations of Russian interference, the double-edged sword of generative AI, and the unexpected consequences of platform-wide ad bans.
During the election campaign, reports of Russian involvement in Hungarian politics triggered considerable alarm. However, our analysis of specific disinformation operations linked to known Russian groups revealed a surprising reality: the notorious foreign interference seemed almost amateurish when juxtaposed with the sophisticated domestic disinformation network wielded by Fidesz. We examined campaigns attributed to groups like Storm-1516, which consistently followed a predictable, almost clumsy pattern: creating a fake news site, publishing outlandish articles targeting Tisza Party figures with poorly doctored evidence (like the comically forged email connecting Ágnes Forsthoffer to Jeffrey Epstein), and then attempting to push these narratives through limited social media advertising. These fabricated stories were often so far-fetched that even pro-government media outlets chose not to amplify them, and their reach on platforms like Facebook rarely exceeded 100,000 users. This raises a critical question: if this is the extent of Russia’s influence on its supposed key EU ally, are we inadvertently bolstering Russian propaganda by consistently exaggerating its power and pervasiveness? As internal communications from organizations like the Social Design Agency suggested as early as 2024, highlighting the limits and even the absurdity of Russia’s efforts might be a more effective counter-strategy than portraying them as an all-powerful, insidious force.
The 2026 campaign also saw the ubiquitous presence of AI-generated images and videos, utilized by both the ruling party and the opposition to amplify their messages. Fidesz, for instance, deployed an emotionally manipulative AI video depicting a Hungarian soldier being shot in Ukraine, his crying daughter awaiting his return, a blatant attempt to stir public sentiment. While such content was often easily identifiable as AI-generated and less likely to be perceived as factual reality, its primary function was to instill fear through emotionally charged imagery that would linger in viewers’ minds long after the initial encounter. Conversely, an AI video campaign from an opposition-aligned media outlet sought to create more naturalistic scenes of postal voters claiming to support Fidesz. Here, the intent was not necessarily to convince viewers of the factual accuracy of the interviews, but rather to incite animosity towards postal voters (often dual citizens living abroad), even among those who recognized the videos as synthetic. These examples highlight a crucial distinction: while AI-generated content can absolutely be a vehicle for disinformation, it’s vital not to conflate the two entirely. AI content often serves as an emotionally charged illustration of a political message, rather than a direct fabrication of reality. Policymakers grappling with the regulation of AI in political campaigns would do well to consider this nuanced role, focusing on its emotional impact as much as its potential for factual misrepresentation.
Finally, the election illuminated the unexpected consequences of Meta and Google’s decision in October 2025 to ban political ads on their European platforms, a response to new EU transparency regulations. Initially, some analysts worried that this “ad-free” environment would inadvertently favor extremist content and disinformation, as more nuanced messages and smaller parties would struggle to reach audiences without paid promotion. However, the Hungarian election presented a counter-narrative. In previous campaigns, Fidesz and its proxies had engaged in an overwhelming “carpet bombing” of online advertising, making it virtually impossible to consume content without encountering pro-government propaganda. This time, despite some attempts by Fidesz allies to circumvent the ban, the sheer volume of political ads dramatically decreased. Crucially, analyses revealed that the Tisza party, benefiting from organic reach and engagement, generated significantly more online traction than Fidesz. This outcome challenged the entrenched belief that extremist content and disinformation invariably dominate the organic struggle for online attention. The election demonstrated that a reduction in paid political advertising can, in certain contexts, level the playing field, allowing genuine engagement and grassroots support to flourish. As Hungary hopefully begins the arduous process of normalizing its information ecosystem after 16 years of Orbán’s rule, disinformation will undoubtedly persist. Our task, therefore, is to precisely define what it is and what it isn’t, to sensibly assess its potential effects and inherent limitations, and, most importantly, to empower citizens to navigate and outsmart it, regardless of the political tides or electoral cycles.