It’s like a spy thriller, but instead of secret agents, we have sneaky disinformation campaigns, and the battleground isn’t a shadowy alley, but our very own social media feeds. The European Union’s diplomatic service, the EEAS, has just released a report that pulls back the curtain on this digital drama, and what they found is quite striking.
Imagine this: you’re scrolling through your social media, catching up with friends, maybe a bit of news. But lurking beneath the surface, there’s a concerted effort to mislead you, to sow doubt, and to undermine trust in the very institutions that govern our lives. The EEAS report, aptly titled “Threats of foreign interference and information manipulation,” reveals that the social network X (formerly Twitter, now owned by the enigmatic Elon Musk) is the primary hotbed for these deceptive activities. Of the staggering 43,000 disinformation pieces analyzed in 2025, a whopping 88% were found on X. That’s a colossal figure, dwarfing the shares of other platforms like Telegram (3%) and Facebook (2%). It’s as if X has become the main stage for these digital puppeteers, pulling strings and orchestrating narratives that serve their own agendas.
So, why X? The EEAS offers some compelling reasons. Think about it: the ease of creating fake accounts, the prevalence of coordinated inauthentic behavior networks, and the comparatively easier access to data on X, all contribute to its dominance. It’s like a playground for those who want to manipulate information, where they can operate with a certain level of anonymity and reach. Interestingly, the report also highlights a frustrating reality: most major social media platforms are pretty tight-lipped about their data, making it incredibly difficult to truly grasp the full extent of information manipulation. It’s like trying to solve a puzzle when half the pieces are missing.
But don’t be fooled into thinking that these digital mischief-makers stick to just one platform. Oh no, they’re far more cunning than that. The report notes that in most disinformation campaigns, these actors are like digital chameleons, operating across multiple platforms simultaneously. They’ll sprinkle their deceptive posts on social media, then amplify them through messaging apps like WhatsApp or Telegram. Their goal? To infiltrate every corner of our information space, to boost the visibility and credibility of their fabricated stories, and to target specific audiences based on their demographics and locations. It’s a sophisticated, multi-pronged attack, designed to get under our skin and influence our perceptions.
Now, let’s talk about the new kid on the block, or rather, the powerful tool that’s supercharging these disinformation campaigns: Artificial Intelligence. The report reveals a truly alarming trend: a 259% increase in the use of AI in disinformation targeting the EU compared to 2024. This isn’t just about making things a little easier; it’s a game-changer. The report specifically points the finger at “Russian and Chinese actors,” stating that they have “fully deployed AI tools to accelerate content production and increase interference activities with fewer resources.” Imagine the implications: AI can churn out convincing fake news, deepfakes, and propaganda at an unprecedented scale, making these operations significantly cheaper and more efficient. It’s like having an army of tireless, creative, and highly effective disinformation agents at your fingertips, all powered by algorithms.
So, who are the unlucky recipients of all this digital mudslinging? The EEAS’s analysis shows that fully 66% of these attacks are aimed directly at politicians. It’s a targeted assault on leadership, and some prominent figures bear the brunt of it. Ukrainian President Volodymyr Zelensky, French President Emmanuel Macron, German Chancellor Friedrich Merz, and European Commission President Ursula von der Leyen are specifically mentioned. The campaigns against these individuals aren’t just personal attacks; they’re often designed to undermine what these leaders represent – their democratic values, their principles, and the platforms they use to communicate with the public. It’s an attempt to silence their voices, to chip away at their credibility, and to turn public opinion against them.
Beyond individual politicians, entire organizations are also in the crosshairs. Political entities lead here as well, absorbing 36% of these attacks, followed by media organizations (23%) and military or security organizations (22%). It’s a broad assault on the very pillars of democratic society. And when do these attacks tend to surge? During electoral periods, of course. Elections are fertile ground for disinformation, as they present a critical opportunity to influence public opinion and sway outcomes. But it’s not just elections; popular protests and disturbances are also exploited, as these moments of unrest provide the perfect backdrop to “feed perceptions of chaos, fear and disorder,” often aimed at local administrations. It’s a cynical manipulation of public sentiment, designed to generate instability and erode trust.
Finally, a crucial point from the EEAS: this report, while eye-opening, isn’t the complete picture. They openly admit that it “should not be interpreted as exhaustive” because it’s based on monitoring that doesn’t cover “all regions and languages” and “only represents a small portion of these actors’ activities.” This disclaimer is important because it tells us that what we’re seeing is likely just the tip of the iceberg. The scale of disinformation is probably far vaster and more intricate than even this comprehensive report can fully capture. It’s a constant, evolving threat, and one that requires our collective vigilance and critical thinking as we navigate the complex landscape of information in the digital age.