AI Fake News

‘They feel true’: political deepfakes are growing in influence – even if people know they aren’t real | AI (artificial intelligence)

By News Room | March 28, 2026 (updated May 1, 2026) | 6 min read

The digital world is becoming increasingly complex, challenging our ability to discern what’s real and what’s not. It’s not just about images of famous people anymore; sophisticated artificial intelligence (AI) is creating entirely new, fictional individuals and placing them in various, often politically charged, situations. These AI-generated personas can be used to make money, but more concerningly, they can also serve as incredibly effective propaganda, blurring the lines between playful satire and manipulative disinformation. This rapid evolution of AI-generated content is creating a new frontier in the battle for truth and informed public discourse, where the sheer volume and convincing nature of fake media threaten to reshape our understanding of reality.

The distinction between a political cartoon and what some perceive as factual reality is dissolving at an alarming rate. As Daniel Schiff, an assistant professor at Purdue University, puts it: “A lot of people feel like these images or videos or the stories they convey, feel true.” This sentiment is amplified by an explosion in political “deepfakes” – hyper-realistic fake images, videos, and audio. A database maintained by Schiff’s Governance and Responsible AI Lab (Grail) reveals a staggering acceleration: since the beginning of 2025 alone, it has catalogued more than 1,000 English-language social media posts containing fake content about political figures and major events, compared with 1,344 such incidents across the previous eight years combined. This escalation is largely due to the accessibility and sophistication of generative AI, which now allows almost anyone to create convincing fake scenarios with ease. Sam Gregory, executive director of Witness, an organization dedicated to human rights and combating deceptive AI, says it has become “trivially easy to generate a scene that looks pretty realistic and to place real individuals into scenes.” What was once difficult, specialized work is now possible with readily available tools, democratizing the creation of deepfakes and making their proliferation a significant societal challenge.

Beyond manipulating images of real public figures, the AI landscape has taken an even more intricate turn with the creation of entirely fabricated individuals. Consider the case of “Jessica Foster,” an AI-generated blonde woman often depicted in a US military uniform. Her Instagram account, which launched in December 2025, garnered over a million followers with posts showing her in barracks, in an office chair with her feet on the desk, or even walking a tarmac in high heels beside Donald Trump. The creators intentionally emphasized her feet, leading to a lucrative venture on OnlyFans, where users could purchase “foot photos” supposedly from Foster. This exemplifies how AI-generated personas can be used to generate clicks, money, and drive traffic to more profitable platforms. While Foster’s account has since been removed, her brief virality underscores the potential for financial exploitation through these digital fictions.

However, the implications extend far beyond mere monetization. These AI tools are also being wielded for powerful political objectives. During the conflict in Iran, social media was flooded with videos featuring fake female Iranian soldiers seductively inviting viewers with phrases like “Habibi, come to Iran.” The blatant inaccuracy – Iran prohibits women from combat roles – was a clear giveaway to discerning eyes, but the emotional appeal and propaganda value were undeniable. Similarly, an AI-generated female police officer with over 26,000 TikTok followers was featured in a video celebrating former president Trump’s deportation policies, garnering hundreds of likes and approving comments such as “absolutely yes.” The 2024 election also saw Trump utilize AI-generated images of Taylor Swift fans endorsing him. The Grail database indicates that since 2024, Trump and the White House have shared at least 18 deepfakes on social media. This trend isn’t confined to one side of the political spectrum; California governor Gavin Newsom has also engaged in deepfake sharing, including one depicting Trump smiling at a hologram of Jeffrey Epstein. These examples demonstrate how AI-generated content can be strategically deployed to shape public opinion and further political agendas, even if the content’s artificial nature is somewhat apparent.

The unsettling truth, as AI researchers point out, is that political deepfakes can remain persuasive even when viewers are aware they aren’t real. Sam Gregory highlights the absurdity of the Jessica Foster images: “Foster is walking in high heels, in a military uniform, her military badge is completely wrong. There is no reason she would be hanging out with President Trump and Nicolás Maduro. None of this, if you think about it, makes much sense or bears up to scrutiny.” Yet, the images resonated. This phenomenon is explained by Valerie Wirtschafter, a Brookings Institution fellow, who suggests that people aren’t necessarily seeking absolute truth; rather, they’re looking for content that validates their existing beliefs. Deepfakes then serve as “just another layer added on in terms of this process of reinforcing, rather than revisiting, what people believe is true.” This creates a dangerous feedback loop where biases are solidified and the critical examination of information is eroded, making it harder for individuals to consider alternative perspectives and potentially leading to a more polarized and misinformed society.

Looking ahead, researchers express significant concern that the situation will only worsen. The technology behind “Jessica Foster” could easily be scaled up to create what they call “AI swarms.” A recent study in Science describes these as AI entities capable of “coordinating autonomously, infiltrating communities, and fabricating consensus efficiently.” Wirtschafter likens this to “a troll farm without actually having to have people any more,” portending a future where coordinated disinformation campaigns could be launched with unprecedented speed and scale, further destabilizing societal trust and democratic processes.

While the challenge is daunting, the researchers emphasize that humans are not powerless. Initiatives like the Coalition for Content Provenance and Authenticity are developing technical standards to embed “cryptographically signed metadata” within digital content, essentially creating a digital fingerprint that can track its origin and any AI-driven edits. This would allow technology companies to label AI-generated content proactively.

While platforms like LinkedIn, Pinterest, TikTok, and YouTube have committed to such labeling, current implementation is inconsistent. An investigation found that even the most diligent platforms only labeled a fraction of AI-generated content, with Instagram lagging significantly. Meta’s oversight board has expressed concern over its inconsistent adherence to these standards. Gregory attributes this inconsistency to a “failure of political will at the senior levels” of major tech companies. He stresses the urgency: “We don’t need to give up on the ability to discern what is real from synthetic, but we do need to act fast.” The future of information integrity hinges on a swift and coordinated effort from tech giants, policymakers, and individuals to establish robust mechanisms for transparency and accountability in the age of AI.
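To make the idea of “cryptographically signed metadata” concrete: the approach binds a manifest (a hash of the content plus its edit history) to the asset with a cryptographic signature, so any subsequent alteration is detectable. The sketch below is a simplified illustration of that principle only – it is not the actual C2PA format, which uses public-key certificates rather than the shared demo key assumed here.

```python
import hashlib
import hmac
import json

# Hypothetical shared key for illustration; real provenance systems
# use public-key certificates issued to the signing tool or camera.
SECRET_KEY = b"demo-signing-key"

def sign_manifest(content: bytes, edits: list[str]) -> dict:
    """Build a manifest binding a content hash and edit history, then sign it."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "edits": edits,  # e.g. ["created", "ai_generated:true"]
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content still matches its hash."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"...image bytes..."
manifest = sign_manifest(image, ["created", "ai_generated:true"])
assert verify_manifest(image, manifest)           # untouched asset verifies
assert not verify_manifest(image + b"x", manifest)  # any edit breaks verification
```

The key property is the one the article describes: a platform receiving the asset can check the manifest and, if the `ai_generated` entry is present, apply a label automatically rather than relying on detection after the fact.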

Copyright © 2026 Web Stat. All Rights Reserved.