The digital landscape, where information flows freely and opinions are shaped, has a darker, more intricate side than many of us imagine. What looks like organic conversation or a grassroots movement can be a carefully orchestrated campaign of misinformation, driven by motives as varied as genuine belief, profit, and political manipulation. We’ve seen this play out in the form of “2050 point-of-view videos” – content that purports to depict a particular vision of the future, often aligned with nationalist or anti-immigrant narratives. Our investigation into these videos, and into the people behind the accounts that promote or interact with them, has uncovered a complex web in which engagement and financial gain often intertwine with, or even overshadow, genuine political conviction.
One particularly revealing conversation was with an individual who candidly admitted their primary motivation: “I mostly post to get a reaction for the sake of engagement which boosts my followers and money.” It’s a stark reminder that, for many, the digital realm is a marketplace. This individual benefits from Instagram’s monetization scheme, which shares ad revenue based on video views. The more engagement a post receives – likes, comments, shares – the more visibility it gets, and the more money it earns. Sharing information, however divisive or controversial, thus becomes a transactional act: what matters is not the accuracy of the information or the validity of the viewpoint, but its ability to ignite a reaction, become a talking point, and, in doing so, generate profit. Like many others, this individual isn’t driven by fervent political ideology so much as by the financial opportunities of the attention economy.
Another person we spoke with articulated a similar ambition for reach, though framed differently. They described coordinating with other accounts that are “raising voice against similar issues,” yet insisted their online activity is “not politically motivated in any way.” The apparent contradiction highlights a crucial feature of this digital ecosystem. The goal, they explained, is for other accounts to promote their content “to get as much attention as possible.” It’s less about a specific political outcome and more about amplifying a message – any message – that resonates with a certain audience. The result resembles a digital echo chamber, where similar voices reinforce one another’s narratives regardless of political alignment. Attention itself becomes the driving force, a valuable commodity in a crowded digital space. And whatever they claim about their motivations, the content they promote invariably carries political implications, shaping public discourse and influencing perceptions even when a political agenda isn’t their personal primary driver.
Even more intriguing is the cross-border nature of this content creation and amplification. While some of the accounts engaging with these “fake” British patriot narratives are indeed based in the UK, a significant portion of the network extends far beyond it. One individual in the West Midlands, who runs a profile devoted to “the restoration of Britain’s former greatness,” openly discussed coordinating with other accounts to push a shared political goal. His method? An Instagram group chat where members decide “what to post and when.” The truly eye-opening revelation was the geographical spread of his collaborators: accounts based in India, Pakistan and Singapore, as well as Australia and New Zealand. Geopolitical boundaries blur in the digital space, allowing narratives that originate in one region to find amplification and support from distant, seemingly unconnected sources. This global coordination lends an artificial sense of widespread endorsement to particular viewpoints, making them appear more legitimate and influential than they actually are.
This phenomenon aligns with the observations of Professor van der Linden of the University of Cambridge, who points to a booming “disinformation-for-hire industry”: a world in which “paid actors and influencers [are] pretending to be ordinary citizens to manufacture support for an agenda,” often with the help of AI-generated content and bots designed to drive traffic and increase visibility. Imagine an army of seemingly authentic individuals, each playing a carefully crafted role, flooding social media with specific narratives. These aren’t real people with genuine beliefs but carefully curated digital personas, often powered by AI and designed to manipulate public opinion. For the average user, distinguishing genuine grassroots movements from sophisticated, commercially driven disinformation campaigns becomes extremely difficult, and the ease with which technology can fabricate believable content blurs the lines of authenticity further, making critical evaluation an ever more challenging task.
The implications for public trust are profound. Professor Yvonne McDermott Rees, a law professor at Swansea University who has extensively studied the impact of deepfakes, highlights a concerning reality: the public spots fakes with only about 55% accuracy, while vastly overestimating its own ability to discern real from fake. We are all prone to believing we’re more digitally savvy than we are, and that leaves us vulnerable to manipulation. This combination of low detection rates and overconfidence creates fertile ground for “disinformation-for-hire” schemes to thrive. When we can’t reliably tell what’s real from what’s manufactured, the very foundation of informed public discourse begins to erode: our ability to make decisions based on accurate information is undermined, and an environment of suspicion and division takes hold – a worrying prospect for the future of our digital interactions and, indeed, our societies.

