In a sun-drenched office in Gwacheon, South Korea, a dedicated team is on the front lines of a new, invisible war. They are the disinformation monitors of the National Election Commission (NEC), tasked with safeguarding the integrity of local elections against AI-generated content. South Korea has embraced artificial intelligence faster than almost anywhere – it boasts the most paid ChatGPT subscribers outside the US – and is now grappling with the technology’s darker side: effortless hyper-realistic deepfakes, plausible but fabricated narratives, and a flood of low-quality material often referred to as “AI slop.” The stakes are high. Reports of false AI-created content surged 27-fold between the 2024 general election and the following year’s presidential campaign. The government, which strengthened its election laws in 2023, has allocated hundreds of staff to track and counter manipulated content, but the feeling on the ground is that of an uphill battle, a constant chase against an ever-evolving adversary.
Choi Ji-hee, one of these monitors, speaks with a palpable sense of urgency. “We can literally see how fast this technology evolves,” she explains, her voice reflecting both awe and concern. Each new iteration of AI tools, she says, makes fake videos and audio more convincing, further blurring the line between real and fabricated. Her job, along with that of her 18 colleagues, is a tireless exercise in digital detective work. They pore over Instagram, YouTube, online chatrooms, and even “fan clubs” dedicated to local politicians, sifting through the digital noise for AI-concocted content. Their recent discoveries illustrate the new threat: a fabricated TV news report claiming a mayoral candidate had made Time magazine’s list of rising political leaders, and a slick, AI-produced K-pop song praising one politician while subtly mocking his rivals. When content is confirmed as AI-generated, the authorities demand its removal and, in extreme cases, impose severe punishments, including jail time. It is a continuous, arduous process, in which every click and every analysis is a step in protecting the democratic process from manipulation.
In another corner of the office, the layers of misinformation are meticulously unraveled. Colleagues huddle over a suspicious video, debating how best to dissect it: isolate the audio, extract key frames, analyze facial images, or scrutinize the background footage? Each element offers a clue, a potential tell-tale sign of AI manipulation. Nearby, data analyst Kim Ma-ru maps the distribution networks of fake materials – where, when, and by whom they were spread – strategic intelligence that helps Choi’s team pinpoint and address dubious content more quickly. The local elections on June 3 will be the third major ballot in South Korea since the amended law combating AI-fueled election falsehoods was enacted in 2023. Despite the gravity of the work, Kim concedes, “It’s an exhausting job that can feel like a (game of) whack-a-mole.” Yet a strong undercurrent of purpose and civic duty drives the team. Their efforts have already debunked significant AI-generated election disinformation, including a video purporting to show the current leader, Lee Jae Myung, faking a hunger strike.
The challenge extends beyond fake content about candidates; conspiracy theories have fed a persistent erosion of public trust. Vote-rigging claims in recent years have left their mark on the South Korean electorate. In one particularly jarring incident, former president Yoon Suk Yeol, during a short-lived attempt to impose martial law in late 2024, sent hundreds of armed troops to the NEC while repeating widely disproven far-right claims of vote hacking. The echoes of those claims are still visible: pro-Yoon protesters display banners outside the office demanding investigations into “rigged elections.” This atmosphere of mistrust and hostility has taken a personal toll on election workers. Both Choi and Kim declined to be photographed or filmed, citing growing threats and online bullying – a stark reminder of the personal risks their public service carries.
Jung Hui-hun, a digital forensic specialist in the NEC’s cyber investigations unit, articulates the core dilemma facing voters: “In such a short time, it has become so difficult for voters to tell what is real and what is not.” He demonstrates the challenge by running videos through state-developed software tools designed to detect AI imagery. The programs are about 92 per cent accurate, but the most sophisticated and nuanced material still requires review by human experts. Once content is confirmed as AI-generated, either the poster or the platform must remove it for violating the 2023 law. The law is strict: within three months of an election, it bans AI material that depicts candidates and appears realistic enough to confuse voters. The consequences for non-compliance are severe. Repeat offenders, or those creating particularly harmful content, face up to seven years in jail or a fine of 50 million won (approximately S$43,600).
Dr. Kim Myuhng-joo, director of the Korea AI Safety Institute, offers a broader perspective on why South Korea has embraced such stringent regulations. He acknowledges that the rules might seem excessive to outsiders, particularly in places like the US where freedom of expression is highly prioritized. But South Koreans, having rapidly embraced AI, quickly became keenly aware of its dangers – not only the election conspiracy theories, but also a public scandal involving deepfake pornography targeting women and girls, which profoundly shaped public opinion. “Public consensus has formed that we need tough regulations over the use of AI when it comes to election transparency,” Dr. Kim says. The public agrees: a survey last year found that 75 per cent of South Koreans believe AI-generated content can sway election results, and nearly 80 per cent support stronger efforts to detect and punish its use. Jung, the digital forensic specialist, admits that the country’s response has “many limits,” but he hopes these pioneering efforts will spark a global debate on how to tackle AI-fueled disinformation. “We’re still trying to figure out what is the best solution… but I think we are moving forward – slowly but surely,” he says, capturing the measured optimism and determination of those at the forefront of this fight for truth and trust in the digital age.

