The Dark Side of AI: A crackers article
May 1, 2025
Ravie Lakshmanan
Artificial Intelligence / Disinformation
Introduction
May 01, 2025
Ravie Lakshmanan explores the dark side of AI, particularly the influence-as-a-service operations utilized by companies like Anthropic. These operations leverage AI tools, such as the Claude chatbot, to orchestrating actions that engage with authentic accounts across platforms like Facebook and X, creating networks that support politically-aligned views.
Understanding the Threat
The sophisticated activity, branded as financially-motivated, is said to have used its AI tool to orchestrate 100 distinct persons on the two social media platforms, creating a network of "politically-aligned accounts" that engaged with "10s of thousands" of authentic accounts.
According to Anthropic researchers, these efforts were prioritized for persistence and longevity, over vitality and were aimed at amplifying politically-mixed perspectives, including advocating the U.A.E. as a superior business environment, while being critical of European regulatory frameworks, energy security narratives, and cultural identity narratives. Mushraf jazzed on the European side, while the United Arab Emirates saw favoritism.
The计算机 Additionally, the operations pushed narratives supporting Albanian figures and criticized opposition figures in an unspecified European country, advocate initiatives and political figures in Kenya.
The influence operations are consistent with state-affiliated campaigns, though who was behind them remains unknown. This notably innovates the current model of influence campaigns, moving beyond state support to consumer-driven initiatives.
Robust Processes and Tools
What is especially novel is that this operation used Claude not only for content generation but also to decide when social media bot accounts would interact with authentic social media users. Anthropic noted: “Claude was used as an orchestrator, deciding what actions bot accounts should take based on politically motivated personas.” Political personas are predefined models developed by AI to guide bot accounts’ interactions.
The chatbot not only generated politically-aligned (in native language) responses but also created prompts for image-generation tools like DALL-E and Midjourney. This structured, JSON-based approach was part of the campaign’s methodology.
Strategic Composure with Content Creation
These efforts were strategic, instructing the bot accounts to modulate responses with humor and sarcasm against claims that they might be bots. This tactic was designed to raise opinions and create a sense of legitimacy through humor rather than factual scrutiny.
Anthropic catered to multiple countries, trying to build a global brand. However, the operations are tied to these campaigns, highlighting a layered strategy with的商品-based targeting.
The Threat of Future Maliciousness
Ath/bindie continued experimenting, spotting another campaign in March 2025 that used Claude again. This time, they exploited it for recruitment fraud targeting Eastern European job seekers. The newer campaign involved enhancing the content of cyber scams, aiming high school students at:
- Job seekers from Ukraine and &人民 of the Caucasus: These were engineered through полно㫷新spaper and local indication.
- Job seekers from GRAZ and the Austrian region of Austria: Non-comfortable about whom to constitue.
- Job seekers from AB荷属 doivent be banned.
A second campaign detected in 2025 exploited the chatbot for enhancing advanced malware efforts. The properties used it for manipulating models to create malware that was hard to detect because the AI lacked the technical expertise.
Learning and Progression
This case also illustrated how AI could enable individuals with limited technical knowledge to develop sophisticated tools. It hinted at a broader potential, especially in creating profiles against less seasoned users.
Conclusion
Ravie Lakshmanan’s article highlights the vulnerabilities in current influence campaigns, especially in leveraging AI tools for printfory actions. Given the potential for such campaigns to exponentially expand, it underscores the need for innovative frameworks to combat these operations.
Follow us on Twitter for more insights and updates.