The air in the parliamentary committee room was thick with frustration. A group of British Members of Parliament, tired of what they perceived as empty promises, convened a heated session with representatives from some of the world’s most influential social media giants: X (formerly Twitter), TikTok, and Meta (the company behind Facebook and Instagram). The MPs didn’t mince words, painting a picture of “complacent” tech companies whose platforms remained, despite their assertions, hotbeds of harmful content, misinformation, and even threats to democratic processes. The core of their argument was stark: these companies claimed to be doing a lot, but on the ground, they insisted, it wasn’t making “a jot of difference.” It felt like a parent scolding a child who insists their room is clean while toys and clothes still cover the floor. The MPs’ message was clear: your efforts aren’t enough, and the consequences of your inaction are becoming increasingly dire for real people.
One particularly unsettling exchange highlighted the apparent disconnect between policy and reality. Alistair Law, TikTok’s director of public policy for northern Europe, confidently stated that their platform prohibited pornography, nudity, and harassment. However, Freddie van Mierlo MP swiftly countered, revealing that he had, just that morning, easily found “numerous examples” of TikTok videos instructing users how to use Elon Musk’s AI tool, Grok, to “nudify young girls.” This wasn’t some obscure corner of the internet; this was allegedly happening on a platform that claimed to be strictly regulating such content. It exposed a disturbing loophole opened by rapidly advancing AI technology, and chillingly suggested that the platforms were not only failing to prevent harm but were, in some cases, indirectly facilitating its creation. This wasn’t just about abstract policies; it was about the very real potential for the exploitation of vulnerable young people, a deeply concerning prospect for any parent or guardian.
Another area of contention centered on political bias and the spread of misinformation, particularly on Elon Musk’s X. Wifredo Fernández, X’s director of global government affairs, tried to maintain that the platform was “politically agnostic.” However, Emily Darlington MP immediately challenged this, citing research suggesting X actively promoted right-wing content. The exchange took a highly personal turn when she brought up Musk’s recent public endorsement of the far-right UK political party Restore as “the only way to save Britain.” Fernández, in a move that struck many as an attempt to distance the platform from its owner’s personal views, argued that “Mr Musk posts and participates in the public conversation individually.” Committee chair Dame Chi Onwurah was quick to retort with a dry and pointed observation: “I think many might dispute that.” This exchange underscored the inherent tension that arises when a platform’s owner uses their influence to endorse specific political viewpoints, blurring the line between personal opinion and platform neutrality and raising serious questions about the fairness and impartiality of the digital public square.
The issue of political deepfakes further amplified the MPs’ concerns about the upcoming elections. George Freeman MP recounted a profoundly disturbing personal experience: a faked video that circulated on X, Facebook, and YouTube last September falsely depicted him defecting from the Conservatives to the Reform party. “I’m thick-skinned,” the former minister explained, “but it was seriously disruptive.” When he pressed Fernández on whether X had taken any action, the response was a vague “I’d have to check with the teams,” to which Freeman, clearly anticipating the answer, simply said: “The answer’s no.” This wasn’t just about an individual’s reputation; it was about the integrity of the democratic process itself. Freeman articulated a profound fear: the “complacency of the platforms” could seriously disrupt the “forthcoming elections” in May. The thought of voters being swayed by deliberately manipulated content, created to sow confusion and division, was a chilling prospect that underlined the urgent need for robust safeguards.
Beyond political manipulation, the hearing delved into the alarming exposure of young people to harmful content. Dr. Lauren Sullivan MP presented Meta with the results of a chilling experiment conducted by the National Education Union. When the union set up accounts for 13-year-olds, those accounts were swiftly “populated with violent and misogynistic self-harm, extremist content.” Dr. Sullivan, visibly appalled, stated, “I’ve seen it, it’s appalling… We can’t show it today, but that is being fed to 13-year-olds.” Meta’s UK public policy director, Rebecca Stimson, offered a standard corporate response: “We will look at it very closely and take that very seriously.” But for the MPs and, indeed, any concerned citizen, this response felt inadequate in the face of such a clear and present danger to children. The experiment depicted a terrifying online world in which the innocence of youth is quickly eroded by algorithms pushing ever more extreme and damaging content, raising profound questions about parental control and platform responsibility.
At the heart of the committee’s frustration was the perception of a fundamental lack of accountability. Martin Wrigley MP encapsulated this sentiment perfectly, telling the tech executives, “You came in this morning really complacent… you started off by saying everything’s fine. We’ve gone through and demonstrated a number of different occasions when things are not fine and things are not fine on your platforms.” Committee chair Dame Chi Onwurah then rattled off a litany of recent online harms: misinformation about the Bondi Beach victim, political elections influenced by false narratives, fake photos of burning US aircraft carriers as part of Iranian propaganda, and fabricated evidence about a missile attack on a school in Iran. Her conclusion was damning: “The basic fact is that all the work that you tell us that you are doing on online harms and to make your platforms safe in this country is not working… I think that’s the consensus of most of the British people.” She delivered a stark ultimatum: show demonstrable progress within months in making their products safe for British citizens, or “we need further legislation to make it safe, because the first duty of any government is to protect its citizens.” It was a clear warning that the era of self-regulation and mere promises was drawing to a close, and a demand for concrete action to safeguard the well-being of the populace in an increasingly digital world.

