Taiwanese Trust AI-Generated Data More Than Human-Created Information, Raising Concerns About Disinformation
A recent study conducted by National Taiwan University (NTU) has revealed a concerning trend: a significant number of Taiwanese citizens place more trust in information generated by artificial intelligence (AI) than in data created by humans. This finding highlights a worrying susceptibility to disinformation and fake news, particularly given the increasing prevalence of AI-generated content. The study, which has been ongoing since 2022, found that over 90% of Taiwanese respondents reported exposure to disinformation, primarily originating from scammers. This widespread exposure underscores the urgent need for effective strategies to combat the spread of false information.
The study also explored public attitudes towards fact-checking and anti-disinformation measures. Encouragingly, over 70% of respondents reported utilizing fact-checking platforms and expressed confidence in their credibility. Furthermore, a significant majority – between 80% and 90% – voiced support for laws targeting social media platforms to curb disinformation, indicating widespread public frustration with online fraud and manipulation. This year, in response to the growing popularity of generative AI, the survey incorporated questions specifically addressing AI-related concerns.
The results of these new inquiries painted a concerning picture of how the public perceives and uses AI-generated content. Seventy percent of respondents reported consuming AI-generated content, yet many appeared unaware of the technology’s capacity to fabricate or misrepresent information. Furthermore, a majority of frequent generative AI users expressed greater trust in the objectivity and accuracy of machine-generated information than in human-generated data. This faith in AI, despite its known limitations and potential for misuse, is particularly troubling.
This unwavering trust in AI persists even in the face of demonstrable errors. While 80% of respondents acknowledged encountering inaccuracies in AI-generated content, a majority of frequent AI users continued to place more trust in machine-generated data than in human-created information. This disconnect between acknowledging the fallibility of AI and continuing to trust it suggests a fundamental misunderstanding of the technology’s capabilities and limitations.
The study also probed respondents’ confidence in their ability to identify AI-generated disinformation. Only 10% believed they could always detect such falsehoods, 30% said they usually could, and 40% said they could only sometimes identify AI-created fake news. Approximately 15% said they rarely spotted AI-generated disinformation, and a small percentage said they never had. This combination of limited detection ability and the trust many respondents place in AI-generated content creates fertile ground for the spread of misinformation.
Experts warn that unchecked, uncritical consumption of AI-generated content could have serious consequences. Professor Hung Chen-ling, who led the NTU study, cautioned that careless use of generative AI is likely to facilitate the dissemination of fake news and amplify existing societal biases. Hu Yuan-hui, chairman of Public Television Service and former head of the Taiwan FactCheck Center, echoed these concerns, calling the study’s findings an alarming sign of media illiteracy. He stressed that people with unfounded confidence in their ability to distinguish truth from falsehood are especially vulnerable to manipulation, particularly because generative AI models can fabricate plausible-sounding information, a phenomenon commonly referred to as "hallucination." The findings underscore the urgent need for stronger media literacy and critical thinking, along with public awareness campaigns that teach people to recognize the pitfalls of AI-generated content and to verify information against multiple sources.