It was an honor to join a diverse group of thinkers and doers at a recent policy conference, where we tackled a topic that feels incredibly urgent: how artificial intelligence is reshaping our elections. As someone deeply involved with Morocco’s National Human Rights Council, I experienced this not as a merely academic debate but as a conversation about the very fabric of our democratic future, especially with Morocco’s next general elections just around the corner.
We used to think of elections as fairly straightforward, institutional affairs – organized, watched over, and certified by official bodies. But that picture is woefully out of date. Today, our elections are woven into a vast digital tapestry of algorithms, platforms, and AI systems. What citizens see, what they share, what they believe, and how they ultimately cast their votes are no longer side notes; they are the main act. We’re seeing a seismic shift, in which digital infrastructures powered by AI are increasingly capable of molding public opinion on a massive scale. This isn’t just interference; it’s a gradual, fundamental change in how we exercise our democratic choices. From automated bots spreading messages to sophisticated deepfakes blurring the lines of reality, the digital world has become a powerful, often unseen, actor in our politics.
We’ve already seen glimpses of this future. Remember the Brexit referendum? Turns out, a significant chunk of the online chatter during that period came from automated accounts, many of which conveniently disappeared after the vote. It raised serious questions about who was really participating in the digital debate. More recently, elections like Slovakia’s have been flagged by experts as a chilling preview of how AI can be weaponized to manipulate democratic processes with incredible speed and finesse. Deepfakes and highly targeted misinformation campaigns aren’t just theoretical threats anymore; they’re operational tools. They erode trust in everything – institutions, candidates, even the very idea of objective truth. This isn’t just about technical glitches; it’s about the soul of our elections.
What truly hit home for me, speaking from a human rights perspective, is that any attempt to tamper with elections, whether through fraud or AI-driven interference, isn’t just a minor irregularity; it’s a fundamental human rights violation. When disinformation muddies public debate, when AI systems manipulate what people see, or when personal data is exploited to sway voters, the damage goes far beyond the integrity of the election itself. It directly infringes upon our basic rights: the right to speak freely, the right to access accurate information, the right to privacy, and, most crucially, the right of every person to participate in public life. The Universal Declaration of Human Rights is clear: the will of the people, expressed through periodic and genuine elections, is the basis of the authority of government. Any attempt to manipulate those elections is therefore an assault on basic freedoms and on the very legitimacy of democracy. Disinformation, in this context, isn’t just misleading content; it’s a direct threat to our ability to form informed opinions and make meaningful choices at the ballot box.
This gets tricky, though. While we need to fight disinformation, our responses can’t become another form of censorship. Poorly designed regulations, such as overly broad content moderation rules or flawed automated detection systems, risk suppressing legitimate speech. Studies have shown that attempts to identify bots and harmful content can make significant errors, sometimes silencing dissenting voices that may be provocative but are nonetheless protected. This tension highlights a crucial point: any governance of AI in elections must be rooted in human rights. These obligations aren’t optional; they are binding under international law. Governments have a duty to protect individuals from violations, including those committed by powerful tech companies. These platforms, often global in their reach, can’t simply claim neutrality when their systems actively mold public discourse. A human rights-based approach to AI means building safeguards into every step, from design to deployment, and establishing accountability mechanisms that can cross borders. The infrastructure of AI is global, but its impacts are deeply local, meaning no single country can effectively manage this beast alone.
This challenge is particularly acute for Africa. The continent, while rich in human potential, lags behind in AI development and infrastructure, and the economic benefits of AI are projected to concentrate largely in the Global North, potentially leaving Africa even further behind. This isn’t just about resources; it’s about sovereignty. The digital infrastructures that shape political discourse are largely controlled by a handful of global tech giants, most of them based outside Africa. That raises a fundamental question: who truly controls the flow of information that shapes public opinion and political outcomes on the continent? While the African AI strategy leans towards national approaches, relying solely on 54 separate national strategies risks weakening bargaining power and reinforcing dependency. A more unified regional or continental strategy would not only strengthen Africa’s voice but also help ensure that AI systems are developed and deployed in ways that reflect local values and address local problems. Such a collaborative approach could empower African nations to set standards, negotiate with global platforms, and ensure that technological development genuinely serves the interests of their people, ultimately helping to bridge the global AI divide.

Another crucial, and increasingly important, blind spot is the growing influence of encrypted messaging platforms. These aren’t just for private chats anymore; they’ve become central to political communication. Disinformation there may not spread as fast as on open networks, but it often spreads with far greater credibility because it circulates within trusted networks, making it extremely difficult to monitor or challenge. Unlike public social media, these spaces are opaque, yet they play a central role in how information flows during elections. Detecting, tracing, and countering disinformation in these channels is a critical and emerging challenge for researchers and policymakers alike.
Ultimately, the issue isn’t just that technology is influencing elections; it’s that technology is fundamentally redefining the terms under which our democratic rights are exercised. AI dramatically accelerates the creation and dissemination of information, both true and false, at a pace institutions struggle to match. It magnifies existing vulnerabilities in political systems while introducing entirely new ones that we’re only just beginning to understand. For Morocco, with elections on the horizon, this isn’t a distant threat. Political actors, supporters, and external players will undoubtedly leverage AI tools for content creation, micro-targeting, and, potentially, manipulating public perception. The key question isn’t whether AI will be used; it’s how it’s used, how quickly harmful applications can be spotted, and how swiftly stakeholders can respond. Safeguarding fundamental rights in this new digital landscape requires constant vigilance, thorough preparation, and an unwavering commitment to a human rights-centered approach. During elections, anticipating and addressing these challenges will be critical to protecting both the integrity of the electoral process and the rights of the citizens it is meant to serve.
