The discussions at the UM6P Policy Conference, specifically the panel on “Integrity and Governance: AI in the Electoral Cycle,” proved deeply relevant, especially from my vantage point representing the National Human Rights Council (CNDH), Morocco’s national human rights institution. As Morocco stands on the cusp of its next general elections, the intersection of artificial intelligence and our democratic processes isn’t an abstract academic exercise; it’s a very real and immediate concern that demands our attention. We used to think of elections as formal, institutional affairs, neatly organized and certified by national authorities. That view no longer holds. Today, elections are inextricably woven into a complex digital tapestry, shaped by the unseen hand of algorithms, social media platforms, and advanced AI systems. What citizens see, what they choose to share, what they come to believe, and ultimately how they decide to cast their vote – these aren’t peripheral details; they are central to the functioning and integrity of our democracies.
Across the globe, mounting evidence shows that these algorithmic systems and their amplification effects are profoundly swaying political discourse. From coordinated networks of automated accounts, often called bots, to the AI-driven spread of divisive and polarizing content, our digital infrastructures have evolved into powerful tools capable of shaping public opinion on a massive scale. This is not simply external interference as we understood it in the past. What we are truly witnessing is a gradual yet profound transformation of the very environment in which democratic choices are made. Previous electoral cycles have already given us a stark preview of how automated accounts and coordinated networks can subtly, and sometimes not so subtly, distort online debates. Think back to the Brexit referendum, for instance, where a significant portion – over 5% – of the accounts tweeting about the referendum vanished after the vote. It later emerged that thousands of these automated accounts had played a critical role in amplifying specific political messages, raising serious questions about the authenticity and organic nature of digital participation.
More recently, scholars have pointed to elections like those in Slovakia as a critical turning point, a chilling preview of how AI can be weaponized to manipulate democratic processes with unprecedented speed, scale, and sophistication. The tools are no longer hypothetical risks; deepfakes, synthetic media, and highly targeted disinformation campaigns are now operational realities. They are capable of eroding trust not only in our institutions and candidates but also in the very notion of shared truth. It’s crucial to understand that any form of fraud or interference in elections isn’t just a technical glitch or an administrative irregularity; it is, at its core, a human rights issue. When disinformation clouds public debate, when AI-driven systems subtly manipulate what citizens see and consume, or when personal data is exploited to influence voter behavior, the consequences extend far beyond the integrity of the electoral process itself. These actions directly undermine fundamental rights: the right to freedom of expression, the right to access accurate information, the right to privacy, and ultimately, every individual’s fundamental right to participate meaningfully in public affairs. Each of these rights can be severely compromised by how digital technologies are deployed during election cycles.
As enshrined in Article 21 of the Universal Declaration of Human Rights, genuine and periodic elections are themselves a fundamental human right. Any attempt to manipulate or interfere with them is therefore not merely an electoral concern; it is a direct assault on the exercise of basic freedoms and on the very legitimacy of democratic governance. In this context, disinformation is not merely misleading content; it is a direct threat to freedom of expression and to the right to seek information, actively hindering citizens’ ability to form opinions freely – a right that international law recognizes as absolute. It poisons the information environment, making it extremely difficult for citizens to make well-reasoned choices at the ballot box. However, we must also tread carefully. Poorly conceived responses to disinformation, such as overly broad content moderation policies, blanket restrictions, or flawed automated detection systems, carry risks of their own: they can inadvertently infringe upon legitimate freedom of expression. Studies have shown that efforts to identify bots and harmful content can make significant errors, sometimes wrongly flagging legitimate political speech that is merely unpopular, controversial, or intentionally provocative but not inherently harmful.
This inherent tension highlights a crucial principle: the governance of AI in elections must be firmly anchored in human rights. These rights are not optional; they are not something we can negotiate away. They represent binding obligations under international law. States bear a direct responsibility not only to respect these rights themselves but also to actively protect individuals from violations perpetrated by private actors, including the powerful technology companies that dominate our digital landscape. These platforms cannot be seen as mere neutral conduits when their systems are actively shaping and influencing public discourse. A human rights-based approach to AI demands that safeguards be built into every stage, from the initial design and development of these systems to their deployment and ongoing oversight. It also necessitates accountability mechanisms that transcend national borders. The infrastructure supporting AI – including data centers, algorithms, and corporate headquarters – is globally distributed, while its impacts are intensely local. No single country can effectively or safely regulate this complex ecosystem in isolation.
This global reality is particularly pertinent for African countries. The continent currently lags structurally in terms of AI development resources and infrastructure, while the economic benefits of AI are overwhelmingly projected to accrue to the Global North. Addressing this imbalance necessitates confronting a deeper, underlying reality. The digital infrastructures that define and shape our political discourse are anything but neutral. A handful of global technology companies, largely headquartered outside the African continent, wield decisive influence over what information circulates, how it is amplified, and ultimately, who gets to see it. For African nations, this raises a fundamental question of “digital sovereignty.” While the recently adopted African AI strategy (2024) leans towards national approaches for governance and regulation, from a strategic perspective, fragmented national approaches risk weakening regulatory leverage and perpetuating dependency. Africa could end up with 54 disparate approaches to AI governance, undermining its collective strength.
Conversely, a more unified regional or continental strategy would not only bolster Africa’s collective bargaining power but also help ensure that AI systems are developed to reflect local values and provide genuine solutions to local problems. Such an approach could empower African states to set standards, negotiate more effectively with global platforms, and ensure that technological development truly aligns with the continent’s priorities. This would, in turn, contribute significantly to narrowing the global AI divide. At the same time, when we talk about democratic and electoral processes, there remains a critical “blind spot” that is still largely unexplored: the rapidly growing influence of encrypted messaging platforms. These platforms have become central channels for both political and general communication. In these more private digital spaces, disinformation might not always spread faster than on public social media, but it often spreads with far greater credibility. It circulates within trusted networks, making it significantly harder to monitor, challenge, or counter.
Unlike public social media, these encrypted environments are exceptionally difficult to monitor or study, yet they play a pivotal role in how information flows during elections. Disinformation within these channels is harder to detect and trace, and harder still to mitigate, making it a critical frontier for both research and policy development. Ultimately, the core challenge is not simply that technology is influencing elections; it is that technology is redefining the very conditions under which democratic rights are exercised. AI accelerates the production and dissemination of information, both true and false, at a pace our institutions struggle to match. It amplifies existing vulnerabilities within political systems while introducing new ones that we are only beginning to comprehend. In Morocco, this conversation is no longer distant or purely theoretical. With general elections on the horizon, the role of AI in shaping our information landscape is likely to be significant and pervasive. Political actors, their supporters, and even external stakeholders will have access to powerful tools capable of generating highly persuasive content, micro-targeting specific audiences, and potentially manipulating public perception. The use of AI during an election may not always be overtly visible, but its effects will be deeply felt. The key question, therefore, is not whether AI will be used during an election, but how it is being used, how quickly harmful uses can be detected, and how swiftly and effectively stakeholders can respond to mitigate those harms. Ensuring that AI deployment does not undermine fundamental rights demands unwavering vigilance, proactive preparedness, and an unshakeable commitment to a human rights-based approach.
During upcoming elections, anticipating and effectively addressing these complex challenges will be absolutely essential to preserving both the integrity of the electoral process and, more importantly, the fundamental rights of the very people it is designed to serve.