Governor Spencer Cox of Utah is stirring up a storm at the intersection of technology and politics, and it’s all about how we interact with social media. Picture a passionate dad, fed up with what he sees as a harmful influence on his children and society, deciding to take on giant corporations. That’s essentially what Cox is doing, drawing a bold parallel between today’s tech titans and the tobacco companies of yesteryear. He believes these companies have knowingly hooked us, especially our kids, on addictive algorithms, much as nicotine hooked earlier generations. And he’s not just talking; he’s pushing for concrete changes, including new taxes on online advertisers and lawsuits against social media giants like Meta and TikTok. His target isn’t free speech itself but the unseen forces (those finely tuned algorithms) that he argues manipulate our minds and divide us, sometimes with a push from foreign adversaries.
Cox recently took his message to a global stage at the Cambridge Disinformation Summit in England. Here, he walked a tightrope, defending the fundamental American ideal of free speech against calls for more internet censorship, while simultaneously launching an offensive against the very platforms that host so much of that speech. He’s essentially saying, “Sure, say what you want, but these companies shouldn’t be allowed to manipulate you into addiction with their sneaky algorithms, feeding you information designed to outrage and ensnare.” He believes current legal rulings, which are starting to acknowledge the addictive nature of social media, provide a strong foundation for his argument. This isn’t just a local Utah issue; it’s gained national attention, especially after the tragic assassination of Charlie Kirk, where Cox observed how foreign bots exploited the event to sow further division. He’s become a champion for those concerned about mental health and the corrosive effects of online rhetoric, even earning nods from prominent figures like psychologist Jonathan Haidt.
One of Cox’s most striking claims is that the government’s approach to Big Tech should mirror its historical battles with the tobacco and opioid industries. He declared, “We’re treating this the way we treated the tobacco companies in the 1950s and 1960s in the United States. The way we’ve looked at the opioid companies in the ’90s and the early 2010s.” The comparison isn’t a claim that social media is physically addictive in the same way; it’s about powerful corporations knowingly exploiting human psychology for profit, and the harm that follows. Recent court decisions against companies like Meta and YouTube, which found their platforms were designed to be addictive for adolescents and harmful to mental health, give Cox’s stance crucial legal backing. He’s focused not only on holding these companies accountable for the outcomes of their algorithms but also on giving individuals more control over their online data.
A significant concern for Cox revolves around the proliferation of automated accounts, or “bots,” especially those from foreign adversaries. He firmly believes that while free speech is paramount, allowing these bots to run rampant and spread divisive messages is a different matter. He pointed to the aftermath of Charlie Kirk’s assassination, where a “vast majority” of the online reactions, designed “to help divide us,” were traced back to bots employed by foreign adversaries. This highlights a crucial distinction in Cox’s philosophy: he’s not against individuals expressing themselves, but against manipulation and interference, particularly when it comes from external, hostile sources aiming to destabilize and polarize. He argues that government intervention is justified not to police individual speech, but to ensure a fair and authentic online environment, free from automated manipulation.
However, Cox’s strong stance has not been without its critics. At the Cambridge summit, he pushed back against the desire of some researchers and journalists for governments to define and remove “disinformation.” He drew a clear line: government should not be the arbiter of truth. Citing examples like the Biden administration’s pressure on companies to censor COVID-19 information, even when that information was true, he warned that such heavy-handed approaches can backfire, lending conspiracy theories an air of credibility. Figures like Nina Jankowicz, who led President Biden’s Disinformation Governance Board, argued their goal was only to share “good information,” but Cox remains steadfast: the government’s role should be to foster a competitive social media market where diverse viewpoints can be debated, not to dictate what is true or false.
The debate around Cox’s approach raises fundamental questions about free speech and the power of algorithms. While he and his supporters view algorithms as a dangerous drug, some, like David Inserra from the Cato Institute, argue that regulating algorithms is akin to regulating speech itself. Inserra, who has experience with Meta’s content policy, believes that the online “curation” of speech, which algorithms are a part of, is protected under the First Amendment, just like news articles or books. He finds the comparison between Big Tech and Big Tobacco “deeply, deeply flawed” because social media, unlike nicotine, doesn’t have the same physical effects and offers many positive benefits like community building. For Inserra, the solution isn’t to regulate algorithms into submission, but to educate individuals and families on how to use social media responsibly. Yet, even Cox, despite his push for public policy changes, acknowledges that individual responsibility plays a crucial role. He shared a personal anecdote about deleting a social media platform from his phone after realizing its negative impact on his own mental health, demonstrating that while he advocates for systemic change, he also believes in the power of individual choice to create a healthier digital life.