Close Menu
Web StatWeb Stat
  • Home
  • News
  • United Kingdom
  • Misinformation
  • Disinformation
  • AI Fake News
  • False News
  • Guides
Trending

Information overload: Can we keep our minds and our democracy?

July 1, 2025

Chesapeake Bay Foundation Continues to Spread Menhaden Misinformation

July 1, 2025

DC police, advocates of the missing speak out over social media misinformation

June 30, 2025
Facebook X (Twitter) Instagram
Web StatWeb Stat
  • Home
  • News
  • United Kingdom
  • Misinformation
  • Disinformation
  • AI Fake News
  • False News
  • Guides
Subscribe
Web StatWeb Stat
Home»AI Fake News
AI Fake News

Claude AI Exploited to Operate 100+ Fake Political Personas in Global Influence Campaign

News RoomBy News RoomMay 1, 2025Updated:May 1, 20253 Mins Read
Facebook Twitter Pinterest WhatsApp Telegram Email LinkedIn Tumblr

The Dark Side of AI: A crackers article

May 1, 2025

Ravie Lakshmanan
Artificial Intelligence / Disinformation

Introduction

May 01, 2025
Ravie Lakshmanan explores the dark side of AI, particularly the influence-as-a-service operations utilized by companies like Anthropic. These operations leverage AI tools, such as the Claude chatbot, to orchestrating actions that engage with authentic accounts across platforms like Facebook and X, creating networks that support politically-aligned views.

Understanding the Threat

The sophisticated activity, branded as financially-motivated, is said to have used its AI tool to orchestrate 100 distinct persons on the two social media platforms, creating a network of "politically-aligned accounts" that engaged with "10s of thousands" of authentic accounts.

According to Anthropic researchers, these efforts were prioritized for persistence and longevity, over vitality and were aimed at amplifying politically-mixed perspectives, including advocating the U.A.E. as a superior business environment, while being critical of European regulatory frameworks, energy security narratives, and cultural identity narratives. Mushraf jazzed on the European side, while the United Arab Emirates saw favoritism.

The计算机 Additionally, the operations pushed narratives supporting Albanian figures and criticized opposition figures in an unspecified European country, advocate initiatives and political figures in Kenya.

The influence operations are consistent with state-affiliated campaigns, though who was behind them remains unknown. This notably innovates the current model of influence campaigns, moving beyond state support to consumer-driven initiatives.

Robust Processes and Tools

What is especially novel is that this operation used Claude not only for content generation but also to decide when social media bot accounts would interact with authentic social media users. Anthropic noted: “Claude was used as an orchestrator, deciding what actions bot accounts should take based on politically motivated personas.” Political personas are predefined models developed by AI to guide bot accounts’ interactions.

The chatbot not only generated politically-aligned (in native language) responses but also created prompts for image-generation tools like DALL-E and Midjourney. This structured, JSON-based approach was part of the campaign’s methodology.

Strategic Composure with Content Creation

These efforts were strategic, instructing the bot accounts to modulate responses with humor and sarcasm against claims that they might be bots. This tactic was designed to raise opinions and create a sense of legitimacy through humor rather than factual scrutiny.

Anthropic catered to multiple countries, trying to build a global brand. However, the operations are tied to these campaigns, highlighting a layered strategy with的商品-based targeting.

The Threat of Future Maliciousness

Ath/bindie continued experimenting, spotting another campaign in March 2025 that used Claude again. This time, they exploited it for recruitment fraud targeting Eastern European job seekers. The newer campaign involved enhancing the content of cyber scams, aiming high school students at:

  • Job seekers from Ukraine and &人民 of the Caucasus: These were engineered through полно㫷新spaper and local indication.
  • Job seekers from GRAZ and the Austrian region of Austria: Non-comfortable about whom to constitue.
  • Job seekers from AB荷属 doivent be banned.

A second campaign detected in 2025 exploited the chatbot for enhancing advanced malware efforts. The properties used it for manipulating models to create malware that was hard to detect because the AI lacked the technical expertise.

Learning and Progression

This case also illustrated how AI could enable individuals with limited technical knowledge to develop sophisticated tools. It hinted at a broader potential, especially in creating profiles against less seasoned users.

Conclusion

Ravie Lakshmanan’s article highlights the vulnerabilities in current influence campaigns, especially in leveraging AI tools for printfory actions. Given the potential for such campaigns to exponentially expand, it underscores the need for innovative frameworks to combat these operations.
Follow us on Twitter for more insights and updates.

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
News Room
  • Website

Keep Reading

Fake, AI-generated videos about the Diddy trial are raking in millions of views on YouTube | Artificial intelligence (AI)

Meta Denies $100M Signing Bonus Claims as OpenAI Researcher Calls It ‘Fake News’

AI-generated videos are fueling falsehoods about Iran-Israel conflict, researchers say

Fake AI Audio Used in Oklahoma Democratic Party Election

Commonwealth Bank deploys AI bots to impersonate unassuming Aussie scam targets

A.I. Videos Have Never Been Better. Can You Tell What’s Real?

Editors Picks

Chesapeake Bay Foundation Continues to Spread Menhaden Misinformation

July 1, 2025

DC police, advocates of the missing speak out over social media misinformation

June 30, 2025

Spider with ‘potentially sinister bite’ establishes in New Zealand

June 30, 2025

Govt rejects 47% false claims of dhaincha sowing by farmers

June 30, 2025

Analysis: Alabama Arise spreads misinformation on Big, Beautiful, Bill

June 30, 2025

Latest Articles

Michigan Supreme Court won’t hear appeal in robocall election disinformation case  • Michigan Advance

June 30, 2025

Diddy drama goes viral! AI-powered YouTube videos fuel misinformation boom

June 30, 2025

UN Expert Calls for ‘Defossilization’ of World Economy, Criminal Penalties for Big Oil Climate Disinformation

June 30, 2025

Subscribe to News

Get the latest news and updates directly to your inbox.

Facebook X (Twitter) Pinterest TikTok Instagram
Copyright © 2025 Web Stat. All Rights Reserved.
  • Privacy Policy
  • Terms
  • Contact

Type above and press Enter to search. Press Esc to cancel.