In today’s fast-paced digital world, where information spreads at lightning speed, a concerning new trend has emerged: the proliferation of fake natural disaster videos on social media, often crafted with the help of artificial intelligence (AI). These videos, with their sensational titles and hyper-realistic imagery, are not just creating a buzz; they’re actively sowing panic, anxiety, and distrust among unsuspecting viewers. Imagine scrolling through your feed, minding your own business, when suddenly you see a video claiming a “super typhoon makes landfall” or “the whole city is submerged in a sea of water.” The visuals are convincing, perhaps showing streets deeply flooded, trees uprooted, houses swept away, and people stranded on rooftops. Your heart races; you immediately think of your loved ones. This is the emotional roller coaster many are experiencing, only to later discover that these dramatic scenes are often fabricated, stitched together from old footage, foreign events, or, increasingly, generated entirely by AI. It’s a cruel deception that leverages our natural human concern for safety and well-being during times of crisis.
These deceptive practices are not without their tell-tale signs, though they are becoming increasingly sophisticated. Many of these fake videos incorporate images and clips from past natural disasters, sometimes from other countries, and then misleadingly present them as current events in a local context. To further enhance their credibility, some go so far as to insert fake weather agency logos, giving the impression of official information. The creators of these videos also employ live-streaming tactics, using AI-generated voices to narrate apocalyptic scenarios and AI-constructed visuals to create a sense of urgency and interaction. The striking realism of these AI-generated scenes, as noted by experts, is what makes them so dangerous. They tap into our deepest fears, exploiting the very human need to stay informed and connected, especially during potential crises.
The human impact of these fake videos is profound and deeply unsettling. Take, for instance, Mr. Nguyen Van Tung, a worker in Hanoi. Late one night, after seeing a barrage of videos on social media depicting hailstorms and thunderstorms, he was consumed with worry for his family in his hometown. The internet connection was sporadic, making it difficult to reach them, which only amplified his anxiety. He saw clips of roofs flying off and car windows shattering, all seemingly happening in his hometown. It was only later that he discovered many of these clips were doctored or taken from other locations entirely. This is not an isolated incident. The comment sections of these videos lay bare the widespread panic they induce. Many viewers genuinely believe the images of hail, flash floods, or thunderstorms are occurring in their own localities. People frantically ask about the disaster's location, call relatives, and share these distressing videos with family groups, all driven by a genuine concern that is being ruthlessly exploited. Mr. Do Manh Cuong from Hung Yen experienced a similar deception when he was fooled by a clip of hail covering the streets, its realism so convincing that he immediately sent it to friends, asking if the disaster was near his area. The collective belief in these fabrications underscores the power of these AI-generated deceptions.
Experts are sounding the alarm about the escalating danger posed by AI in the realm of misinformation. Vu Thanh Thang, Director of AIZ Joint Stock Company and an AI expert, highlights that AI is making fake news far more perilous due to its ability to create increasingly realistic images and videos. He explains that within minutes, anyone can use AI tools to generate convincing videos of storms, floods, collapsed houses, or scenes of people fleeing for their lives. This emotionally charged content, designed to trigger fear, spreads like wildfire across social media platforms. The very algorithms of these platforms, unfortunately, often prioritize content that evokes strong emotions. This creates a perverse incentive for some accounts to operate under a "timely view-baiting" model, deliberately capitalizing on unfolding events like storms, earthquakes, and explosions to create shocking content. Their motivations are varied: to increase interactions, boost sales, or simply to earn money from the platform. The more sensational the video, the more likely it is to be promoted by the algorithm, further fueling this cycle of misinformation and exploitation.
Recognizing these AI-created videos is becoming a growing challenge for the average user. Mr. Thang acknowledges that simply scrolling through one’s feed makes it incredibly difficult to distinguish genuine content from AI-generated fabrications. While there are some subtle clues – unnatural movements, inconsistent audio, or unusual weather details – these are often missed by those who are not actively looking for them, or by individuals less familiar with technology, such as older generations. This difficulty in discernment makes the information landscape even more treacherous. The implications extend beyond individual distress to broader societal concerns. The ability of AI to generate such convincing fake content undermines trust in media, erodes public confidence during crises, and can even hinder genuine disaster response efforts by diverting attention and resources.
In light of these pressing concerns, there’s a growing need for collective action and a multi-faceted approach to combat this new wave of digital deception. The Party and State have repeatedly emphasized the importance of improving disaster prevention, response, and recovery strategies, and this includes addressing the spread of misinformation. Official plans highlight the crucial role of information and propaganda in raising public awareness and skills for preventing, responding to, and overcoming natural disasters. Crucially, these plans also underscore the need to combat false views and information that threaten security, order, and people’s lives during such events. Furthermore, there’s a strong push to leverage science and technology, including digital transformation, artificial intelligence, and big data, for more effective monitoring, forecasting, early warning systems, and disaster risk management. Ultimately, a combined effort of technological innovation, media literacy education, responsible platform governance, and strong leadership from all levels of government and society is essential to safeguard individuals and communities from the emotional and practical dangers posed by AI-generated fake news in the context of natural disasters.