In a startling turn of events that has sent ripples through the international tech and policy communities, South Africa has been forced to retract the initial draft of its national Artificial Intelligence (AI) policy. The reason? A significant portion of its reference list was found to contain fictitious sources, strongly suggesting they were the product of AI hallucination. The revelation has cast a shadow over South Africa’s ambitious plans to become a continental leader in AI and served as a stark reminder of the need for human oversight in the age of increasingly sophisticated AI tools. The Minister of Communications and Digital Technologies, Solly Malatsi, did not mince words when addressing the blunder, stating, “The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened.” His candid admission underscored the gravity of the situation: the “failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy.” The incident is a powerful cautionary tale that while AI can be an invaluable tool for research and drafting, its output must be rigorously verified by humans before it informs official documents.
The now-withdrawn draft policy was a cornerstone of South Africa’s strategy to position itself at the forefront of AI development and implementation across Africa. It envisioned a comprehensive framework designed to foster innovation, attract investment, and ensure the responsible deployment of AI technologies. The policy outlined plans for new, dedicated institutions, including a National AI Commission tasked with strategic oversight, an AI Ethics Board to navigate the complex moral and societal implications of AI, and an AI Regulatory Authority to enforce standards and ensure compliance. The draft also aimed to incentivize private-sector collaboration through financial stimuli such as tax breaks, grants, and subsidies, in a bid to create a vibrant AI ecosystem. The ambition behind these proposals was commendable, reflecting a genuine desire to harness the transformative potential of AI for national development. The revelation of fabricated sources, however, undermined the very foundation of that vision, raising serious questions about the diligence and thoroughness of the policy’s development process.
Minister Malatsi, in his public statements, not only acknowledged the error but framed it as a crucial learning experience. “This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical. It’s a lesson we take with humility,” he wrote in a post on X. The statement is particularly poignant coming from a government spearheading an AI policy: the irony of an AI policy being undermined by AI itself is not lost on observers. The incident demonstrates that while AI can augment human capabilities, it cannot yet replace the critical thinking, verification, and ethical judgment that human experts bring to policymaking. The initial enthusiasm surrounding AI’s potential may have led to over-reliance on its generative capabilities without adequate checks and balances. The episode forces a re-evaluation of how AI tools are integrated into sensitive government processes, underscoring the indispensable role of human judgment in validating information and ensuring the integrity of official documents.
The extent of the error was significant, as reported by Daily Maverick: over a third of the policy document, which had been published for public comment, contained fake source material. The problem was concentrated within three of the six core pillars on which the entire policy was built: “Capacity and Talent Development,” “Economic Transformation,” and “Responsible Governance.” These are not peripheral aspects but fundamental sections that would guide how South Africa develops its AI workforce, integrates AI into its economy, and establishes ethical guidelines for its use. The presence of fabricated sources in such crucial areas is deeply concerning, as it suggests that the research and evidence underpinning these policy directives were flawed from the outset. Adding another layer of complexity, Daily Maverick also reported an earlier statement from Dumisani Sondlo, the Department of Communications and Digital Technologies’ AI policy lead, who in 2025 reportedly described the development of the National AI Policy as “an act of acknowledging that we don’t know enough.” If accurately reported, the remark is paradoxical: the very act of acknowledging limited knowledge produced a policy document riddled with unverified, potentially AI-generated, information.
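The failure described above comes down to a missing verification step: AI-generated citations went into the draft without anyone confirming they existed. As a minimal sketch, one safeguard is to cross-check every citation in a draft against a bibliography that a human editor has already verified, and block publication while any citation remains unconfirmed. The bibliography entries and function name below are hypothetical illustrations, not part of any actual departmental workflow.

```python
# Hypothetical safeguard: flag draft citations absent from a human-verified list.

# Entries a human editor has confirmed actually exist (illustrative DOIs only).
VERIFIED_BIBLIOGRAPHY = {
    "doi:10.1000/real-paper-1",
    "doi:10.1000/real-paper-2",
}

def flag_unverified_citations(draft_citations):
    """Return citations not found in the verified bibliography.

    Anything returned here must be manually checked (or removed)
    before the document is released for public comment.
    """
    return [c for c in draft_citations if c not in VERIFIED_BIBLIOGRAPHY]

draft = [
    "doi:10.1000/real-paper-1",
    "doi:10.1000/hallucinated-paper",  # plausible-looking but fabricated
]

suspect = flag_unverified_citations(draft)
print(suspect)  # ['doi:10.1000/hallucinated-paper']
```

In practice such a check would query a citation registry rather than a hard-coded set, but the principle is the same: a machine can narrow the list of suspects, while the final confirmation remains a human responsibility.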
Looking ahead, Minister Malatsi confirmed that there would be consequences for those responsible for drafting the flawed policy. While he specified neither the nature of those consequences nor a timeline for a rectified policy, the implication is clear: accountability is paramount. The incident is a stark warning to other nations and organizations embarking on AI policy development. In an era when AI tools are increasingly sophisticated and accessible, the temptation to use them for rapid content generation is high; South Africa’s experience vividly illustrates the perils of doing so without stringent human oversight. The wry observation that “An AI policy written by a hallucinating AI is the perfect ouroboros metaphor, and something even AI couldn’t come up with” captures the irony of the mishap, and underscores the ongoing need for human critical thinking, ethical judgment, and diligent verification, especially in crafting policies that will shape a nation’s future in the digital age.
Ultimately, the incident with South Africa’s AI policy is more than a bureaucratic stumble; it is a modern-day fable echoing through the digital halls of power. It vividly illustrates that while AI promises efficiency and innovation, it also demands rigorous human vetting, especially for foundational policy documents. That a policy meant to govern AI was itself sabotaged by AI is a profound irony, a circular narrative that highlights the early stages of our co-evolution with artificial intelligence. The lesson for South Africa, and indeed for the global community, is clear: as we embrace AI’s capabilities, we must simultaneously reinforce the irreplaceable value of human intellect, discernment, and ethical responsibility. The next iteration of South Africa’s AI policy will, one hopes, be double-checked by diligent human eyes, or at the very least by a more reliable and less “hallucinatory” AI assistant. The path forward requires not just technological advancement but a renewed commitment to human oversight and critical verification in an increasingly automated world.

