In a world increasingly awash with information, both true and false, a national seminar for World Fact-Checking Day 2026 brought together key organizations to tackle the rising tide of misinformation. Led by the Thai Health Promotion Foundation (ThaiHealth), Cofact Thailand, Thai PBS Verify, and a network of 20 partner organizations, the forum was more than a meeting: it was a rallying cry, a collective effort to equip ordinary people with the tools to navigate a confusing digital landscape. It was also a moment to reflect on battles fought and challenges still looming, in an age when a single retweet can spread a lie faster than any correction can catch it. At stake are real people, real lives, and real consequences, making this a human story of resilience and adaptation in the face of an ever-evolving threat.
Thai PBS Verify, the digital fact-checking unit of Thai PBS, revealed statistics that paint a vivid picture of the misinformation problem. Since its inception in October 2024, the unit has sifted through 421 reports, uncovering patterns in the digital chaos. The data shows that “social issues” are the most fertile ground for misinformation, accounting for 33% of cases, followed by “politics” at 28% and “around the world” at 22%. What is strikingly human about these numbers is how misinformation exploits vulnerability and anxiety, surging in direct response to major events that touch people’s lives: border disputes, devastating floods, or conflicts raging far away. And then there is the insidious rise of “AI slop,” in which artificial intelligence, meant to help, is instead twisted to craft fake content, muddy truths, and blend unrelated events into a misleading narrative. Even gaming technology, a source of entertainment for so many, is now being weaponized to create deceptive content, turning fun into a potential minefield of lies.

The lessons learned by Thai PBS Verify resonate deeply. Existing tools are struggling to keep pace with the rapid evolution of AI, and a lie often travels much faster and farther than the truth, amplified by a deluge of shares before corrections have a chance to take root. More chilling still is the revelation that malicious actors can exploit reputable media outlets, using their credibility as a Trojan horse to legitimize and amplify fake news. And while misinformation is a constant background hum, it explodes into a deafening roar during times of crisis, preying on fear and uncertainty. The burden on fact-checking teams is immense: they must walk a tightrope, exercising extreme caution to avoid accidentally spreading the very misinformation they are fighting to extinguish.
The “Sure And Share” initiative offered a crucial framework for understanding the nature of these untruths, categorizing them into three groups that speak to both human perception and technological trends.

First, there is “truth without context,” a particularly treacherous form of misinformation because it is built on facts presented in a way that leads, deliberately or inadvertently, to a false conclusion. Imagine a snippet of conversation stripped of its surroundings, making an innocent statement sound like a confession – that is the essence of it. By omitting key details, whether by accident or design, such content distorts reality and misleads readers.

Then there is the complex issue of “trust in AI,” a major contemporary challenge. As AI collects and summarizes information from vast sources, it can both unintentionally mislead and be intentionally misused. When many platforms present similar but inaccurate AI-generated summaries or “top search results,” they create a false sense of consensus, lulling us into accepting misinformation as fact. This “unintentional misguidance” preys on our natural tendency to trust what seems widely accepted. On the flip side, “intentional negligence” occurs when creators, in their haste, rely on unverified AI outputs; this easy-path approach ignores AI’s inherent limitations and spreads misinformation with real, tangible consequences for people’s lives.

Finally, “media manipulation or source hacking” involves tricking news outlets themselves into amplifying false information. Once a reputable media source is deceived, the misinformation gains a powerful “seal of approval,” allowing it to spread like wildfire and gain unwarranted credibility before anyone recognizes the deception. These insights are a reminder that a healthy level of skepticism and a “Zero Trust” mentality are not just good habits but essential survival skills.
Despite these formidable challenges, a powerful counter-movement is gaining momentum, particularly within online media. The Society of Online News Producers (SONP) reported a successful year in elevating media standards. Jeerapong Prasertpolkrung, SONP Vice President, shared that 54 member outlets have completed rigorous fact-checking training and, more significantly, that these newsrooms have embedded internal verification systems to vet every daily report, a monumental step toward ensuring the public receives accurate information. Since January, SONP has spearheaded the “Stop Fake Spread Fact” initiative, an ambitious project aiming to verify 120 news reports over nine months; 40 reports have been fact-checked so far, garnering 1.5 million views and closing in on the target of 1.8 million. SONP’s strength lies in leveraging the diverse expertise of its member outlets to cross-verify news across sectors. It has also been forward-thinking, hosting an “AI Newsroom” workshop to give digital reporters comprehensive AI training. Beyond the technical, SONP fosters a culture of excellence and education, celebrating achievements through annual Fact-Checking Awards and training university students to become media literacy ambassadors, empowering the next generation to be critical consumers of information. These initiatives are more than programs; they are human-driven efforts to build a more informed and resilient society, demonstrating a proactive stance against the tide of misinformation.
The conversation then turned to the highly technical, yet deeply human, work of combating AI deception. Nattakorn Ploddee, Southeast Asia Digital Verification Editor at Agence France-Presse (AFP), described the past year as exceptionally challenging, highlighting the profound global influence of U.S. politics. He noted a decrease in AFP’s fact-checking output, a direct consequence of budget cuts to Facebook’s verification programs, a shift that intensified after President Donald Trump took office. Policy decisions made far away, in other words, have a tangible impact on the front lines of the information war. In Thailand, the past year saw a deluge of false information, particularly around major events such as earthquakes and military tensions at the Thai-Cambodian border, and AFP observed that doctored media played a pivotal role in amplifying these narratives, a finding that underscores the evolving cunning of misinformation. Globally, AFP published 6,800 fact-check reports, with a significant 11% (around 600-700 cases) involving AI-generated content, an upward trend driven by the increasing accessibility of AI tools to the general public.

Nattakorn, however, offered a glimmer of hope: “While AI has advanced by leaps and bounds over the past year, our experience at AFP demonstrates that AI-generated content remains manageable. By utilizing specialized tools alongside rigorous evidence-based verification, we can effectively identify misinformation.” He stressed that these two core principles remain AFP’s primary defense, expressing confidence in their continued effectiveness even as the technology grows more sophisticated. He also acknowledged the increasing complexity of fact-checking: truth and falsehood are often intricately intertwined, making verification rarely a black-and-white affair and forcing fact-checkers to navigate “gray areas” where facts mingle with misinformation. He pointed to the rise, globally and in Thailand, of an “information control industry” of coordinated networks that systematically manipulate narratives. To safeguard the credibility of fact-checking, he emphasized “transparency, a steadfast commitment to evidence, and adherence to rigorous verification principles,” the human bedrock of trust in an increasingly uncertain world.
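The “specialized tools” Nattakorn alludes to vary from newsroom to newsroom, but one widely used building block is perceptual hashing, which flags recycled or lightly doctored images by comparing a viral picture against a known original. The sketch below is illustrative only, not AFP’s actual toolchain; the file names are placeholders, and it assumes the open-source Pillow and ImageHash libraries:

```python
# Minimal sketch, not AFP's actual toolchain. Perceptual hashing flags
# recycled or lightly doctored images: visually similar pictures produce
# nearby hashes even after re-encoding, resizing, or small edits.
# Requires: pip install Pillow ImageHash
from PIL import Image
import imagehash

def phash_distance(path_a: str, path_b: str) -> int:
    """Hamming distance between the perceptual hashes of two images.
    For the default 64-bit pHash, a distance of roughly <= 8 suggests
    the images share the same underlying content."""
    return imagehash.phash(Image.open(path_a)) - imagehash.phash(Image.open(path_b))

# Placeholder file names, not real case material: compare a viral photo
# against a suspected archive original found via reverse image search.
distance = phash_distance("viral_post.jpg", "archive_original.jpg")
verdict = "likely the same image, recycled" if distance <= 8 else "likely different images"
print(f"pHash distance: {distance} ({verdict})")
```

In practice, verifiers pair a check like this with reverse image search and manual examination of shadows, metadata, and context, since a hash match alone only shows that two files depict the same picture, not when or where it was taken.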
Adding another layer of global perspective, Ethan Tu, an engineer at Taiwan AI Lab, revealed fascinating insights into how AI responses are tailored to a user’s location and language. His research on models like ChatGPT showed inconsistent answers with a striking linguistic bias: AI often modifies its responses to align with the cultural and regional perspectives associated with the language of the query (a discrepancy anyone can probe for themselves, as the sketch below illustrates). As Ethan explained, “Queries posed in Traditional Chinese (Taiwanese usage) versus Simplified Chinese (Mainland China usage) yield noticeably different results, as each AI model is trained on data curated by its respective native speakers.” This discrepancy is fertile ground for “foreign-led disinformation,” since AI outputs are subtly shaped by the linguistic and cultural biases inherent in their training data; a seemingly neutral AI can inadvertently reflect or amplify bias simply because of the language in which a question is asked.

Farther east, in Japan, Noa Horiguchi, CEO and co-founder of Classroom Adventure, highlighted the rising threat of online scammers and AI-generated misinformation on social media, especially within the influencer-driven landscape. He noted that while influencers increasingly use AI to generate content, much of it contains errors, leading to widespread public misunderstanding. A particularly insidious risk is AI’s ability to “fabricate links or create non-existent websites” to complete its responses, creating “AI-generated assets” that could be exploited for fraudulent or malicious purposes. The human toll of these advanced “deepfake scams” is significant, as they gain credibility and make it “increasingly difficult to distinguish between a genuine individual and a scammer impersonating them.”
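Returning to Ethan Tu’s experiment: the language dependence he describes is straightforward to probe. The sketch below is a minimal illustration, not Taiwan AI Lab’s methodology; it sends the same question, written once in Traditional and once in Simplified Chinese, to a chat model via the OpenAI Python SDK and prints the answers for comparison. The model name and the sample question are assumptions chosen for demonstration:

```python
# Minimal sketch, not Taiwan AI Lab's methodology. Sends the same factual
# question in Traditional and Simplified Chinese to a chat model and prints
# the two answers side by side for comparison.
# Requires: pip install openai  (and an OPENAI_API_KEY in the environment)
from openai import OpenAI

client = OpenAI()

prompts = {
    "zh-Hant (Taiwanese usage)": "珍珠奶茶是在哪裡發明的？",  # Traditional Chinese
    "zh-Hans (Mainland usage)": "珍珠奶茶是在哪里发明的？",   # Simplified Chinese
}

for label, question in prompts.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model can be substituted
        messages=[{"role": "user", "content": question}],
        temperature=0,        # reduce sampling noise so differences reflect the prompt
    )
    print(f"--- {label} ---")
    print(reply.choices[0].message.content)
```

Running such probes repeatedly, across many topics and both scripts, is how a systematic bias of the kind Tu describes becomes visible rather than anecdotal.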
Finally, Cofact Thailand, represented by Ms. Kulthida Samapuddhi, provided a sobering look at why misinformation stubbornly persists. Having completed 283 verifications reaching 100,000 views, Cofact focuses not only on outright falsehoods but also on “gray area” information that remains vague or unconfirmed, recognizing that clarifying these uncertainties is a vital public service. Its year-long analysis exposed three worrying trends. First, misinformation often persists even after verification: of the cases Cofact tracked, only 46 were removed after fact-checking, while 171 remained active, their engagement and share counts steadily climbing, which highlights how difficult it is to eradicate false narratives once they take root. Second, mainstream media, surprisingly, can still play a role in spreading misinformation; even after content is thoroughly debunked, some outlets continue to share fake images and reports that “further incite hatred and division,” revealing a disturbing level of intentional malice or carelessness. Third, a significant segment of the public still fails to grasp the value of fact-checking, with many individuals spreading misinformation simply for “personal gratification or emotional satisfaction,” underscoring the deep-seated psychological motivations behind sharing falsehoods.

Yet, in the face of these challenges, human ingenuity and engagement shine through. The World Fact-Checking Day 2026 event also featured interactive activity zones, such as Thai PBS Verify’s “5 questions to prove your fact-checking skills” quiz, offering practical guidance for navigating online misinformation. And the “CHECK DOO” resource hub, developed by students from Mahasarakham University, stands as a testament to grassroots effort: a centralized platform aggregating news and verified fact-checking reports from credible sources. These initiatives, driven by young minds and dedicated professionals, represent an unwavering commitment to truth, offering accessible tools and fostering media literacy to empower everyone in the ongoing fight against digital deception.
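For readers curious how a student-built hub like CHECK DOO might pull in verified reports, one publicly available source (offered purely as an illustration; the seminar did not describe CHECK DOO’s implementation) is Google’s Fact Check Tools API, which indexes the ClaimReview markup that fact-checking outlets attach to their published articles. A minimal sketch, assuming the requests library and a Google API key:

```python
# Minimal sketch, not CHECK DOO's actual implementation. Google's Fact Check
# Tools API indexes ClaimReview markup published by fact-checking outlets,
# giving aggregators a machine-readable feed of verified reports.
# Requires: pip install requests  ("YOUR_API_KEY" below is a placeholder)
import requests

API_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_fact_checks(query: str, api_key: str, language: str = "th") -> list:
    """Return claims matching `query`, restricted to one language code."""
    params = {"query": query, "languageCode": language, "key": api_key}
    response = requests.get(API_URL, params=params, timeout=10)
    response.raise_for_status()
    return response.json().get("claims", [])

# Example: list who has fact-checked recent earthquake claims, and how.
for claim in search_fact_checks("earthquake", api_key="YOUR_API_KEY"):
    for review in claim.get("claimReview", []):
        publisher = review.get("publisher", {}).get("name", "unknown outlet")
        print(f"{publisher}: {review.get('textualRating')} -> {claim.get('text', '')[:80]}")
```

An aggregator would typically store these results alongside reports gathered directly from local partners, so readers can search one place instead of a dozen outlet websites.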