The Shifting Sands of Online Influence: When Ordinary People Become Digital Soldiers
Imagine a world where the information you consume every day isn’t just shaped by news organizations or official government statements, but by an invisible, orchestrated effort to sway your thoughts and feelings. This isn’t science fiction; it’s the reality of “influence operations” (IOs): large-scale, often covert campaigns run by governments and non-state groups alike to sculpt public opinion. These campaigns aren’t just about making up lies out of thin air. They often cleverly mix genuine facts with carefully selected, or “curated,” information to push a specific agenda, making them incredibly hard to distinguish from real news. Think of it like a chef meticulously choosing ingredients – some fresh, some a little past their prime – to create a dish meant to evoke a very particular taste, even if it’s not entirely wholesome. What’s even trickier is that these campaigns thrive on social media, exploiting the very structure of the platforms to spread their messages far and wide, often making them go “viral” before you even realize what you’re seeing.
In Southeast Asia, these operations have become particularly potent, acting both as a silent broker during elections – pulling strings behind the scenes to favor certain candidates – and as a tool for authoritarian regimes to keep a tight grip on power. Traditionally, when we talked about influence operations, we pictured shadowy figures, fake profiles, and automated bots churning out posts. Researchers called this “inauthentic behavior” – think of it as a puppet show where, if you look closely, you can spot the strings. But something new is brewing. Recent studies, particularly from the RAIDAR project at Thailand’s Chulalongkorn University, have uncovered a startling shift: influence operations are now being driven by ordinary people, folks just like you and me, often with transparent identities and seemingly genuine motivations. This makes the already messy world of “information disorder” – where facts get blurred with fiction – even more complicated. It’s like trying to navigate a dense fog after the streetlights have gone out. The good news is that experts believe a broad, multi-faceted approach, one that addresses the entire system rather than just individual fakes, is our best bet to fight back.
One of the most striking changes identified by the RAIDAR project is the rise of everyday people as key players in these influence games, often overshadowing official state-backed efforts. Take, for example, the worrying surge of anti-Rohingya sentiment in Indonesia in the lead-up to the country’s 2024 election. It wasn’t just governments or political parties spreading rumors. Ordinary individuals, often fueled by genuine, albeit misplaced, concern, helped spread stories alleging that Rohingya refugees in Aceh were exploiting local communities. These stories, depicting refugees as ungrateful and even criminal, intensified rapidly online. The tragic outcome? A mob attack on a refugee shelter, highlighting the real-world danger of online narratives. What’s truly unsettling is that interviews with these “buzzers” – the Indonesian term for online influence workers – revealed that many weren’t just in it for the money. These were homemakers, single mothers, and political volunteers who genuinely felt a “moral obligation” to “defend their homeland” from perceived foreign threats. Their sincere belief in what they were doing, even when they spread inconsistent information, challenges the typical ways social media platforms identify and shut down coordinated inauthentic behavior. It’s like trying to catch a whisper in a hurricane – the sheer volume and emotional charge make it incredibly difficult to discern the truth.
Adding another layer of complexity, the very algorithms that suggest what movies you watch or what products you buy are also indirectly fueling these “bottom-up” influence operations. These algorithms, designed to personalize our online experience, create echo chambers where political “fans” can rally around their chosen leaders, strengthening what’s called “political personalism” – where power revolves around an individual rather than a party or ideology. We saw this vividly in Thailand during the 2023 general election. Pita Limjaroenrat, the opposition candidate, became a sort of political pop star for the youth, channeling pro-democracy sentiments. His supporters on platforms like X and TikTok, much like dedicated fan clubs, used the platforms’ algorithms to boost his image and spread his messages, often comparing him to a K-pop idol. Meanwhile, his opponents leveraged fan pages on apps like LINE and Facebook to spread manipulated images and misleading information about him. It’s a digital battleground where political figures are treated like celebrities, and their supporters act as devoted fan armies battling it out for online supremacy.
The Philippines offers a similar, established pattern of this “political fandom.” During his presidency, Rodrigo Duterte cultivated a “strongman” image, justifying brutal policies like his “war on drugs.” While he employed political influencers, it was his “die-hard fans,” both at home and abroad, who fiercely defended his actions against international criticism. His successor, Ferdinand Marcos Jr., continues this trend, using algorithmically-driven networks to solidify support. This phenomenon of “political fandom” is making us question the very essence of modern political campaigns. Are we moving away from policy debates, towards personality-driven politics in which charismatic leaders are idolized and parties matter less as institutions than as vehicles for personal appeal? It’s a global trend, but it’s becoming increasingly normalized and powerful in Southeast Asia, turning elections into popularity contests for digital “idols.”
Finally, while older platforms like Facebook and X were ripe for traditional influence campaigns, newer platforms like TikTok present a unique challenge. TikTok’s visual focus, its emphasis on creative, self-produced content, and its algorithms that prioritize “authenticity” make it harder for traditional, often faceless, propaganda to gain traction. Malaysia provides a telling example: “cybertroops,” often linked to ruling parties, once relied on simply copy-pasting messages and have struggled to adapt to TikTok’s vibe. The app’s “For You” feed and livestream features favor genuine, amateur-style presentations, hindering anonymous attempts to engage audiences. This means that successful influence on TikTok requires personality, visibility, and brand-building – qualities that suit influencers rather than anonymous accounts. This shift ultimately reinforces the bigger picture: ordinary people, whether through fandom or perceived moral obligation, are becoming central to these influence operations, and charismatic “idols” are capturing the hearts and minds of digital audiences.
All of these shifting patterns paint a clear picture: combating online influence operations isn’t about simple fact-checking anymore. It’s about understanding a complex ecosystem where technology, human psychology, and economic incentives intertwine. The recent RAIDAR project survey, involving over a hundred experts, reinforced this. When asked about the impact of tactics like “astroturfing” (fake grassroots support), organized harassment, and microtargeted advertising, the experts agreed that all three significantly harm public discourse. They felt harassment was most damaging to individuals, while microtargeted ads posed the biggest threat to fair elections. A staggering 80% agreed that manipulated content deepened political and social divides, fueled prejudice, and eroded trust in elections and traditional media. Most also believed financial incentives for viral content were a major driver of extreme views. Interestingly, while the lack of oversight on social media platforms was seen as the most damaging factor overall, nearly 90% expressed strong wariness about government-led regulation, fearing that authoritarian regimes could weaponize such measures to suppress free speech and dissent. This highlights the delicate balance we must strike: how do we foster trust and accountability without stifling democratic freedoms?
The big takeaway from all this research is the need for a “system-based” approach. We can’t just play whack-a-mole with individual falsehoods; we need to address the entire interconnected web of actors, motivations, and platform designs that fuels these operations. The recommendations are clear: governments, international organizations, civil society, and the tech platforms themselves must work together. A core strategy is to “demonetize” and “disincentivize” harmful influence operations. This means scrutinizing political advertising during elections and continually researching new influence tactics. Regional organizations like ASEAN, leveraging the platforms’ reliance on access to vast Southeast Asian markets, should push for greater transparency in how algorithms work and press platforms to prioritize societal well-being over profit. Beyond platform governance, we must also tackle the underlying societal conditions that make people vulnerable to these campaigns – economic hardship, distrust in traditional institutions, and widening social divides. Any new regulations must be carefully crafted with democratic safeguards, ensuring accountability and citizen participation, to avoid inadvertently restricting freedoms. Ultimately, the goal is to make malign influence less appealing by strengthening a digital environment that truly supports accurate, trustworthy, and diverse information for everyone.

