United Kingdom

Automated Social Media Posts Reach 150 Million Views Prior to UK Elections

By News Room · December 8, 2024 · 4 Mins Read

Unmasking Potential Bots in the UK Election Discourse: A Deep Dive into Climate Change and Migration Debates

The digital landscape of the 2024 UK general election is awash with opinions, arguments, and hashtags related to key issues like climate change and migration. However, beneath the surface of seemingly organic online discussions lurks the possibility of manipulation by automated accounts, commonly known as bots. This investigation delves into the prevalence and potential impact of these bots on the electoral discourse. By analyzing tweets related to specific hashtags, we uncovered a network of accounts exhibiting suspicious behavior, raising concerns about the integrity of online political conversations.

Our investigation focused on hashtags spanning a wide range of perspectives on climate change and migration, from #welcomerefugees to #stoptheboats and from #climatecrisis to #endnetzero. We analyzed tweets posted since the election announcement, searching for indicators of bot activity. These red flags include an exceptionally high volume of tweets, a predominance of retweets over original content, generic usernames, a lack of personalized profile pictures, and low follower counts. No single indicator confirms bot activity on its own, but the presence of multiple red flags, especially combined with excessive tweeting, raises strong suspicion.
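To illustrate how such criteria might be combined in practice, the Python sketch below scores an account against the red flags described above. It is a minimal illustration, not the tooling used in this investigation: the thresholds (tweets per day, retweet share, follower cutoff) and the Account fields are assumptions chosen for the example.

```python
from dataclasses import dataclass
import re

@dataclass
class Account:
    username: str
    tweets_since_announcement: int
    days_since_announcement: int
    retweet_share: float        # fraction of posts that are retweets (0.0-1.0)
    has_default_avatar: bool
    followers: int

def red_flags(acct: Account) -> list[str]:
    """Return the list of red flags an account triggers.

    Thresholds are illustrative assumptions; no single flag proves automation,
    but several together (especially high volume) warrant closer inspection.
    """
    flags = []
    tweets_per_day = acct.tweets_since_announcement / max(acct.days_since_announcement, 1)
    if tweets_per_day > 60:                                # exceptionally high volume (assumed cutoff)
        flags.append("excessive tweeting")
    if acct.retweet_share > 0.8:                           # mostly retweets, little original content
        flags.append("predominance of retweets")
    if re.fullmatch(r"[A-Za-z]+\d{5,}", acct.username):    # generic, auto-generated-looking handle
        flags.append("generic username")
    if acct.has_default_avatar:
        flags.append("no personalized profile picture")
    if acct.followers < 50:                                # assumed low-follower cutoff
        flags.append("low follower count")
    return flags

# Example: an account posting ~90 times a day, almost all retweets
suspect = Account("user84231907", 5400, 60, 0.93, True, 12)
print(red_flags(suspect))  # multiple flags -> candidate for manual review
```

An account triggering several flags, above all the volume flag, would then be set aside for manual review of its content and behavior, mirroring the approach described above.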

Our analysis uncovered ten accounts exhibiting potential bot-like behavior within a sample of up to 500 tweets per hashtag. While this number might seem small, the potential impact of these accounts is significant. Collectively, these ten accounts have posted over 60,000 tweets since the election was called, generating an estimated 150 million impressions. This highlights how a small number of prolific accounts can disproportionately influence online narratives.
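For perspective, the averages implied by those totals are straightforward to compute; the short sketch below simply divides the reported figures and introduces no new data.

```python
accounts = 10
total_tweets = 60_000            # tweets posted since the election was called (reported total)
total_impressions = 150_000_000  # estimated impressions (reported total)

print(f"{total_tweets / accounts:,.0f} tweets per account on average")              # 6,000
print(f"{total_impressions / total_tweets:,.0f} impressions per tweet on average")  # 2,500
```

In other words, each account averaged roughly 6,000 posts and each post roughly 2,500 impressions, which is how a handful of prolific accounts can reach an audience of this size.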

The majority of the identified accounts (8 out of 10) displayed overt political affiliations, aligning themselves with or against specific political parties. Some used party logos as profile pictures, frequently retweeted party content, or employed hashtags promoting or opposing particular parties. For example, two accounts using #stoptheboats promoted Reform UK, while an account using #climatecrisis actively discouraged voting for the Conservative Party. All five accounts identified through #Labourlosing promoted Reform UK. Interestingly, our investigation found no evidence to suggest that any UK political party is directly involved in paying for, using, or promoting these potential bots.

Beyond political partisanship, some of these accounts spread alarming content, including extreme Islamophobia, homophobia, anti-Semitism, transphobia, and disinformation about climate change and vaccines. One account even expressed admiration for President Putin. The dissemination of such harmful content raises serious concerns about the potential for these accounts to exacerbate existing societal divisions and manipulate public opinion.

The question of who is behind these potential bots remains unanswered. While we cannot definitively identify the individuals or groups responsible, the nature of the content suggests a vested interest in disrupting the democratic process and promoting specific political agendas. The potential for malicious actors to exploit social media platforms for political manipulation underscores the urgent need for stricter regulations and greater platform accountability.

The proliferation of bots and the spread of disinformation represent a significant threat to the integrity of democratic elections. Social media platforms bear a responsibility to address this issue and to ensure they are not weaponized to manipulate public discourse. The EU's Digital Services Act sets a precedent for holding platforms accountable for mitigating risks to electoral processes, and similar measures are needed globally. We urge X (formerly Twitter) to investigate the accounts identified in this report thoroughly and to strengthen its efforts to protect democratic debate from manipulation. The future of free and fair elections depends on it. We contacted X for comment on these findings but received no response.

Our methodology used a set of criteria, or "red flags," to identify potentially automated accounts: excessive tweeting, a high proportion of retweets, generic usernames, a lack of personalized profile pictures, and low follower counts. Together, these indicators suggest a low investment in genuine user engagement and a high likelihood of automated activity. We acknowledge that individual red flags are not definitive proof of bot activity; however, the combination of several, particularly a very high volume of tweets, warrants further investigation. We also used Information Tracer, a tool designed to analyze online information and identify patterns of inauthentic behavior, to assist in our analysis. Finally, we investigated hashtags such as #migrantcrisis, #smallboatscrisis, #ltn, and #climatescam but did not find evidence of bot-like activity based on our defined criteria.
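The sketch below outlines how that sampling-and-screening process could be structured in code. It reuses the red_flags helper and Account type from the earlier sketch; the data-collection function is a placeholder, since the article does not specify which X API endpoints or Information Tracer queries were used, and the three-flag threshold is an assumption.

```python
from collections import defaultdict

# Hashtags examined in this investigation, spanning a range of perspectives.
HASHTAGS = [
    "#welcomerefugees", "#stoptheboats", "#climatecrisis", "#endnetzero",
    "#Labourlosing", "#migrantcrisis", "#smallboatscrisis", "#ltn", "#climatescam",
]

SAMPLE_SIZE = 500  # up to 500 tweets per hashtag, as in the article
MIN_FLAGS = 3      # assumed threshold: several flags together warrant manual review

def fetch_recent_tweets(hashtag: str, limit: int) -> list[dict]:
    """Placeholder for data collection.

    The article does not state which collection method was used (e.g. X API
    search or Information Tracer exports), so this stand-in only documents
    the expected shape: dicts with an 'account' field holding profile data
    compatible with the Account type from the earlier sketch.
    """
    raise NotImplementedError("plug in a real data source here")

def flag_candidates() -> dict[str, set[str]]:
    """Sample each hashtag and keep authors triggering several red flags."""
    suspects: dict[str, set[str]] = defaultdict(set)
    for tag in HASHTAGS:
        for tweet in fetch_recent_tweets(tag, SAMPLE_SIZE):
            account = tweet["account"]                     # an Account, as defined earlier
            if len(red_flags(account)) >= MIN_FLAGS:       # heuristic from the earlier sketch
                suspects[tag].add(account.username)
    return suspects
```

Accounts surfaced this way would still require human review of their content and posting patterns before being described as potential bots, as was done for the ten accounts discussed above.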
