The challenges of studying visual misinformation during election campaigns

By News Room | March 19, 2026 | 9 Mins Read

The year 2024 was dubbed the “year of elections,” and it earned the title: some 3.7 billion people worldwide had the chance to cast a vote. This monumental democratic exercise unfolded against a backdrop of anxiety about the stability and sustainability of democratic institutions, questioned more openly now than at any point since the end of the Cold War. At the heart of that concern is the relentless rise of “fake news” and misinformation. The phenomenon itself is not new: “post-truth politics” has been in circulation for years, and “post-truth” was Oxford Dictionaries’ word of the year in 2016. What has shifted recently, and what has caught the attention of researchers such as LSE’s Nick Anstead and Bart Cammaerts, is the explosive growth of visual misinformation. The problem is no longer confined to misleading headlines and twisted narratives; it now includes images and videos engineered to fool the eye and stir powerful emotions. Powerful yet surprisingly easy-to-use AI tools can conjure photo-realistic images from nothing and produce “deepfakes,” videos convincing enough to make viewers question reality itself. The ability to manufacture a false visual reality in a few clicks has intensified the threat to democratic discourse and made the search for truth harder still.

For researchers like Anstead and Cammaerts, getting a handle on visual disinformation is extremely difficult, largely because of how we now consume information. Unlike broadcast television or print, where audiences largely saw the same content, social media presents each user with a unique, algorithmically tailored feed, consumed privately on a phone, tablet, or laptop. That personalization makes it hard to form a broad, representative view of what is circulating; you cannot flip through a newspaper and know what everyone else is seeing. The platforms themselves, perhaps ironically, have also made research harder in recent years. Twitter, now X, has severely restricted access to its Application Programming Interface (API) and sharply raised its cost, and Meta, the parent company of Facebook, shut down CrowdTangle, a widely used tool that gave researchers valuable insight into public content. Whether intended or not, these decisions create significant hurdles for anyone conducting systematic studies of online information flows, especially something as nuanced and pervasive as visual misinformation. The research landscape has become a complex and often frustrating terrain, demanding innovative approaches and a willingness to adapt to these restrictive conditions.

Faced with these formidable obstacles, Anstead, Cammaerts, and their team knew they could not follow the usual research playbook. They initially focused on three countries: Belgium, the United Kingdom, and the United States; when an unexpected snap parliamentary election was called in France in June 2024, they swiftly added it to the data collection, bringing the total to four. Rather than observe from afar, they immersed themselves in the social media ecosystem, setting up “dummy accounts,” essentially fake profiles, on four major platforms: Facebook, Instagram, TikTok, and X. In each country these accounts were configured to follow either high-profile left-wing or high-profile right-wing figures, creating two distinct informational ecologies that mirrored the polarized political landscape. A dedicated team of research assistants then monitored these feeds, actively identifying and logging examples of visual misinformation as they appeared. This hands-on, almost ethnographic approach to data collection was crucial: it sidestepped many of the data-access problems and gave the researchers first-hand insight into the visual narratives unfolding within each political sphere. It was painstaking work, but it reflected their commitment to studying the phenomenon despite the barriers.
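The two-ecology logging design described above can be sketched as a simple coding scheme: each logged item is tagged with the country, platform, and feed ecology it was found in, and tallies can then be computed along any of those dimensions. This is only an illustrative sketch; the field names and `VisualItem` record are hypothetical and are not taken from the study's actual codebook.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record for one logged item of visual misinformation.
# Field names are illustrative, not the study's actual coding scheme.
@dataclass
class VisualItem:
    country: str    # "Belgium", "France", "UK", or "US"
    platform: str   # "Facebook", "Instagram", "TikTok", or "X"
    ecology: str    # "left" or "right" dummy-account feed
    url: str        # link to the logged post

def count_by(items, attr):
    """Tally logged items by one attribute (country, platform, or ecology)."""
    return Counter(getattr(item, attr) for item in items)

# Example: two logged items from different feeds (URLs are placeholders).
log = [
    VisualItem("UK", "X", "right", "https://example.org/post/1"),
    VisualItem("UK", "TikTok", "left", "https://example.org/post/2"),
]
```

Tallying by `ecology` or `platform` then reduces the qualitative monitoring work to straightforward counts, which is how distributions like the ones reported below could be produced.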

After all that careful setup and diligent monitoring, the team gathered a total of 402 instances of visual misinformation across the four countries: 24 in Belgium, 76 in France, 161 in the United Kingdom, and 141 in the United States. Anstead and Cammaerts are the first to admit that the dataset is neither exhaustive nor strictly representative, and given the constantly shifting, personalized nature of social media feeds, it is unclear what a truly “representative” sample would even look like. What the dataset does offer is valuable: a snapshot of the kinds of visual misinformation actively circulating during these critical election campaigns, and a tangible glimpse of the creative and often insidious ways images and videos are manipulated to mislead. The raw country counts do not tell the whole story, but they begin to suggest where visual misinformation is more prevalent, or where certain styles of political engagement lend themselves to its spread. This careful acknowledgment of both the strengths and limits of the data reflects the inherent difficulty of studying such a dynamic and opaque environment.
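The reported counts can be checked and expressed as shares of the overall dataset with a few lines of arithmetic. The counts below are the ones stated in the article; the rounding and share calculation are just a minimal sketch.

```python
# Instance counts per country as reported in the study's initial dataset.
counts = {
    "Belgium": 24,
    "France": 76,
    "United Kingdom": 161,
    "United States": 141,
}

total = sum(counts.values())  # 402 instances overall
shares = {c: round(100 * n / total, 1) for c, n in counts.items()}
# The UK and US together account for roughly three quarters of the dataset.
```

Expressing the counts as percentages makes the cross-country comparison easier to read than the raw figures alone.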

One of the most striking findings from the initial analysis, though consistent with earlier research on fake news, is how disproportionately visual misinformation circulates on the political right. The dataset reinforces recent work by Petter Törnberg and Juliana Chueri (2025), who observe that “current political misinformation is not linked primarily to populism, but specifically to the populist radical right.” The dummy accounts configured to follow prominent right-wing figures consistently encountered significantly more visual misinformation than the equivalent left-facing accounts, and this held in all four countries. Even in the UK, where the gap was narrowest, right-facing accounts still saw a majority of the misinformation: 59% against 41% on the left. That closer margin led the researchers to an intriguing speculation: the British left may be more inclined to use AI to generate satirical content, which is not misinformation in intent but can be misconstrued as such in certain contexts. The observation highlights the nuance these phenomena demand; what looks like one thing may have a different underlying purpose. That the pattern of right-wing prevalence held so consistently across diverse political landscapes suggests a deeper structural or cultural predisposition within certain political factions toward creating and disseminating visually deceptive content.

This initial sweep of data, revealing as it is, is only the starting point. Anstead and Cammaerts view the dataset not as an end in itself but as a foundation for more intricate questions. First, how technologically sophisticated is the misinformation they found: what role do cutting-edge techniques like deepfakes actually play, compared with simpler manipulations in which content is slyly edited, subtly cropped, or deceptively mislabeled rather than fully AI-fabricated? The distinction matters because it speaks to the resources and intent behind the deception. Second, where does the material originate: from official party accounts at the heart of political organizations, or from the wider, more amorphous social media milieu of anonymous users, influencers, and fringe groups? Pinpointing the source is crucial for accountability and for designing effective countermeasures. Finally, how does the landscape of visual misinformation vary across the four case-study countries, and how do those variations connect to the distinct political contexts and cultural nuances in which the elections were fought? Only by systematically dissecting these questions can we hope to grasp the increasingly consequential role that visual disinformation plays in shaping contemporary politics and, by extension, the future of our democracies.
