Homemade political deepfakes can fool voters, but may not beat plain text misinformation

By News Room · April 30, 2025 · 3 Mins Read

The study in question, “An Average Joe, a Laptop, and a Dream: Assessing the Potency of Homemade Political Deepfakes,” explores how amateur-made synthetic media (deepfakes) can influence public perception and decision-making. The research was conducted in Ireland with undergraduate participants and used deepfakes produced with publicly available tools that insert a person’s likeness into videos or simulate the speech of political figures. The study aims to measure the impact of these deepfakes on voters’ voting intentions, false memories of events, and political opinions.

The methodology involved 443 participants, mainly university students in Ireland, who were divided into groups according to the type of deepfake content they saw and the details of the fabricated event. Participants were presented with simulated news scenarios about a politician, delivered as a text description, an audio clip, or a video. After viewing these scenarios, they were asked to rate their agreement with the politician and their voting intentions, to report what they remembered of the events, and to say whether they suspected the content was fake.

The study found that paid deepfake tools produced noticeably more convincing output than free tools, suggesting that while the software matters, human effort and judgment still play a role in producing and refining a believable fake. The findings also indicate that contextual details of a deepfake, such as where or when the fabricated event supposedly took place, affect how easily viewers recall and process it. False memories were more pronounced when the fabricated events closely resembled real events in time or place.

Another significant observation was the impact on political opinions. Participants’ voting intentions were only moderately affected, with a modest decrease in support for the targeted politicians. However, this reduction was more pronounced when the fabricated story was presented as plain text than when it was delivered as a video or audio deepfake. This underscores how much the content’s format and delivery shape a deepfake’s persuasive effect.

The research also examined how the format of fake content affects voter behavior and trust. Participants were more likely to suspect that information was fabricated when it was presented in text-only form, and they recalled details better from video and audio formats. This indicates that the medium (text versus video or audio) and its presentation can significantly influence how effective a deepfake is.

The authors frame the study as a cautionary exercise, urging researchers to remain measured and evidence-based in their claims about emerging technologies. They argue that while deepfakes are likely to be used more frequently in the near future, their persuasive power should not be overstated. The study concludes by stressing the importance of recognizing the current limitations of deepfakes and the need for balanced responses to misinformation, warning against projecting their future potential without accounting for the practical constraints on producing and deploying convincing fakes.

In summary, this study provides valuable insight into the potential of homemade deepfakes, made with readily accessible consumer tools, to influence public opinion, at least among university students. It also highlights the need to weigh such synthetic media against more mundane forms of misinformation, such as plain text, which remain highly relevant in political and social contexts.
