The Liar’s Dividend: How AI-Generated Content Threatens Political Accountability
The rapid advancement of artificial intelligence, particularly generative AI and deepfakes, has ignited significant concerns about its potential impact on society, especially in the context of elections and political discourse. Deepfakes, manipulated videos or audio recordings that convincingly depict individuals saying or doing things they never did, pose a unique threat to the integrity of information and public trust. This burgeoning technology, along with other forms of AI-generated content, is drawing increasing attention from policymakers, as evidenced by initiatives such as President Biden’s Executive Order on AI and the establishment of the National Institute of Standards and Technology’s AI Safety Institute Consortium. The Federal Communications Commission’s ban on AI-generated voices in robocalls and Big Tech’s agreement to label AI-generated content further underscore the growing awareness of the risks associated with this technology.
Beyond the immediate dangers of deepfakes, a more insidious threat looms: the erosion of trust and the potential for manipulation of public opinion. Researchers at Purdue University’s Governance and Responsible AI Lab (GRAIL) are particularly concerned about the indirect effects of AI-generated content on individuals’ faith in the informational and political landscape. Their research focuses on the “liar’s dividend,” a concept describing how politicians and public figures exploit the existence of deepfakes and misinformation to their advantage. By falsely claiming that legitimate news stories are fabricated or deepfakes, they can deflect criticism, evade accountability, and sow distrust in media institutions.
The liar’s dividend, a term coined by legal scholars Bobby Chesney and Danielle Citron, posits that the mere possibility of deepfakes lends credibility to denials of genuine wrongdoing. GRAIL researchers have expanded this concept to explore its implications for political accountability. Their work examines whether politicians can leverage false claims of misinformation to maintain support even after being embroiled in scandals. While the impact of deepfakes was initially met with skepticism when they gained attention in 2018, subsequent events have demonstrated their potential for harm, with instances of election interference, fraud, and even attempted coups linked to manipulated media. The difficulty of definitively verifying the authenticity of digital content further complicates the issue, creating an environment ripe for exploitation.
To investigate the liar’s dividend empirically, GRAIL researchers conducted five studies between 2020 and 2022, surveying over 15,000 American adults. Participants were presented with real news stories about political scandals, in either video or text format, followed by varying responses from the implicated politician: an apology, a denial, a denial invoking misinformation, or no response at all. The researchers then measured participants’ willingness to support the politician and their trust in the media. The findings consistently demonstrated the effectiveness of the liar’s dividend: politicians who falsely claimed that news stories were fake or deepfakes garnered more support than those who remained silent or apologized. Surprisingly, this effect transcended political affiliations, impacting individuals across the political spectrum.
The research also revealed an intriguing difference between text-based and video-based scandals. False claims of misinformation proved more effective when the scandal was presented in text format: in one study, exposure to the politician’s misinformation claim reduced opposition by 10 to 12 percentage points among participants who read about the scandal. When the scandal was presented via video, by contrast, the liar’s dividend was less pronounced. This is noteworthy because the initial concerns surrounding the liar’s dividend stemmed from deepfakes, which are inherently video-based. More recent studies suggest, however, that as public awareness of deepfakes increases, false claims about deepfakes may become more persuasive, potentially blurring the line between authentic and fabricated footage. Encouragingly, the research did not find evidence that crying wolf about misinformation decreased trust in the media itself.
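To make the scale of such an effect concrete, the sketch below simulates a simplified between-subjects comparison of the kind described above. The condition names, the assumed 60 percent baseline rate of opposition, and the assumed 11-point reduction are illustrative placeholders only, not GRAIL’s actual data or analysis code.

```python
# Minimal sketch: compare the share of respondents opposing a politician in a
# "no response" condition versus a "deny, claiming misinformation" condition.
# All numbers here are hypothetical assumptions for illustration.

import random

random.seed(0)

def simulate_respondent(condition: str) -> int:
    """Return 1 if the simulated respondent opposes the politician, else 0.

    Assumes a 60% baseline rate of opposition and an ~11-point reduction in
    the misinformation-claim condition, roughly matching the 10-12 point
    range reported for text-based scandals.
    """
    baseline = 0.60
    effect = -0.11 if condition == "deny_misinfo" else 0.0
    return 1 if random.random() < baseline + effect else 0

conditions = ["no_response", "deny_misinfo"]
sample = [(c, simulate_respondent(c)) for c in conditions for _ in range(5000)]

rates = {}
for c in conditions:
    outcomes = [y for cond, y in sample if cond == c]
    rates[c] = sum(outcomes) / len(outcomes)
    print(f"{c}: {100 * rates[c]:.1f}% oppose")

print(f"estimated effect: {100 * (rates['deny_misinfo'] - rates['no_response']):.1f} points")
```

In an actual study, of course, the difference would be estimated from survey responses rather than simulated, typically with regression models that adjust for respondent characteristics.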
These findings carry significant implications for citizens, particularly in a year marked by major elections around the globe. It is crucial to be vigilant against the liar’s dividend and to recognize that politicians may exploit claims of misinformation for their own gain. Scrutinizing claims of deepfakes and misinformation is essential, requiring citizens to seek confirmation from multiple sources and to evaluate the credibility of information. Furthermore, the research suggests that as advanced AI models produce increasingly compelling audio-visual content, politicians targeted by actual deepfakes may find them difficult to refute, raising the stakes even higher.
Addressing these challenges requires a multi-faceted approach involving technical solutions, policy interventions, and public education. Firstly, fact-checking should extend beyond news stories to encompass politicians’ claims about those stories. Research suggests that fact-checking can effectively counteract the liar’s dividend, although challenges remain in ensuring that individuals seek out and trust fact-checking information. Secondly, watermarking and labeling AI-generated content offer a potential solution. Visual indicators or metadata embedded within images and videos can signal their synthetic origin. Several tech companies are already adopting such measures, but their effectiveness hinges on public recognition, trust in providers, and the development of tamper-proof technology.
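As a rough illustration of what metadata-based labeling involves, the sketch below attaches a provenance flag to a PNG image using the Pillow library. The field names are hypothetical, and this is not a tamper-proof scheme or an implementation of an industry standard such as C2PA; re-encoding or stripping the file would remove the label, which is precisely why tamper-resistance and public recognition matter.

```python
# Minimal sketch of metadata-based labeling for a synthetic image, assuming
# Pillow (PIL) is installed. The "ai_generated" and "generator" fields are
# hypothetical illustrations, not a standardized provenance format.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(in_path: str, out_path: str, generator: str) -> None:
    """Copy a PNG and attach text metadata marking it as AI-generated."""
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # hypothetical field name
    metadata.add_text("generator", generator)   # e.g. the model that made it
    image.save(out_path, pnginfo=metadata)

def read_label(path: str) -> dict:
    """Read back the text metadata so a client could display a label."""
    return dict(Image.open(path).text)

# Usage sketch:
# label_as_synthetic("original.png", "labeled.png", "example-model-v1")
# print(read_label("labeled.png"))  # {'ai_generated': 'true', 'generator': ...}
```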
Thirdly, promoting media, digital, and AI literacy is paramount. Educating individuals about evaluating news sources, understanding social media algorithms, and recognizing AI-generated content is crucial. This requires integrating these topics into K-12 and higher education curricula, complemented by public awareness campaigns. Finally, continued research and policy attention focused on the political implications of generative AI are essential. Initiatives like GRAIL’s Political Deepfakes Incidents Database provide valuable resources for tracking deepfake usage and informing evidence-based policymaking. While raising awareness about deepfakes and misinformation may inadvertently heighten feelings of uncertainty, a comprehensive approach involving researchers, policymakers, journalists, educators, and AI developers can help mitigate the liar’s dividend and strengthen societal resilience against manipulation.