In an age when information swirls around us, it is harder than ever to tell what is real and what is not. The MSU Museum’s “Blurred Realities” exhibit steps into that space, inviting visitors not just to look at AI-generated content but to think critically about what they consume. The exhibit is not simply about impressive technology; it is about how AI is quietly, and sometimes not so quietly, shaping our perceptions, especially in critical arenas like elections. A recent panel, “AI, Elections, and the Fight for Facts,” a collaboration between the museum and MSU’s Department of Political Science, brought experts together to untangle these questions. The message was clear: this is not just theory; it is happening now, and we need to talk about it.
At the heart of the “Blurred Realities” series is “Generative Persuasion,” an exhibit created by Dr. Jennifer Gradecki and Dr. Derek Curry, associate professors at Northeastern University. The work holds up a mirror to how generative AI can subtly distort understanding, producing compelling but entirely false narratives that can push people toward extreme views. Their inspiration came straight from the headlines. Gradecki pointed to the Cambridge Analytica scandal of 2018, in which personal data from millions of Facebook profiles was harvested without consent to microtarget voters in pivotal contests like the Brexit referendum and the 2016 US presidential race. While the precise impact of that microtargeting remains debated, the campaigns it supported succeeded. She also highlighted OpenAI’s threat reports, which document state and non-state actors using ChatGPT and other AI tools to build disinformation campaigns. More concerning still, these actors are not limited to paid services; they are also running local, open-source AI models, making their influence operations even harder to trace.
Despite its fictional framework, “Generative Persuasion” portrays a very real threat. Gradecki emphasized that while the exhibit is an artwork, it is not speculative: influence campaigns are already combining microtargeting with generative AI to rapidly produce persuasive disinformation. She drew a crucial distinction: disinformation is not only outright lies but also distorted truths, half-truths and emotionally charged judgments. Its purpose, she explained, is to manipulate and influence, whether by igniting strong emotions like pride, hatred or outrage, or by diverting attention and stifling dissent. For Curry, the exhibit’s mission is clear and urgent: to equip viewers to navigate this digital landscape, fostering “media, data and AI literacies” and a healthy skepticism toward all online content, so that people can question manipulative narratives rather than absorb them.
Ashlee Smith, Senior Director of Content and Education for WKAR, moderated the discussion and brought a working journalist’s perspective to the panel. She underscored the timing of the conversation: in an election year, the proliferation of AI content blurs the line between fact and fiction, making it harder for ordinary people to tell what is real. Navigating AI daily in her work at WKAR, Smith has seen firsthand how quickly the technology changes. She stressed WKAR’s commitment to remaining a trusted source of information, describing its offerings as “human, factual, and editorially sound” in a vast “sea of synthetic media.” Her resolve to uphold journalistic integrity is a reminder of the enduring value of authentic reporting.
Smith’s message extends beyond spotting AI-generated content; she hopes to spark a deeper understanding among students of AI’s societal implications. AI can feel intimidating, she acknowledged, but that only makes literacy more important. “One of the most important things we can do as citizens and consumers is to be media literate,” she urged, emphasizing the need to grasp AI’s effects so that people actively seek truth rather than passively accept what is presented to them. She hopes students will engage with these conversations, be inspired to learn more and share what they learn with their peers. Claire Urban, an international relations and comparative cultures and politics sophomore, echoed that sentiment. Her studies at James Madison College, she said, have honed her critical thinking and taught her to scrutinize the news with a discerning eye, and she voiced growing concern about AI’s expanding presence in politics and its potential to shape future elections.
Urban’s concerns about AI’s impact on elections are not abstract. She cited a Washington Post article reporting that AI companies have already influenced or aligned themselves with dozens of candidates in primary elections. News of OpenAI’s partnership with the Department of Defense, followed soon after by a campaign against Anthropic labeling it a “supply chain risk,” further illustrates the power dynamics at play. As AI technologies advance, she observed, distinguishing AI content from the real thing becomes an increasingly daunting task. Even so, Urban believes firmly that “AI should not have a hand in our elections.” Politics, she argued, depends on human elements AI cannot replicate: diplomacy is our primary defense, preceding weapons and conflict. Relying on AI for electoral insight or world news, she warned, yields a “watered-down version” that lacks the critical thinking and empathy essential for understanding complex human situations. She urged people to engage directly with politicians’ words and to understand their legislative motivations, something AI simply cannot convey. Her call to action was blunt: fact-check everything in this era of constantly forged information, or risk misunderstanding the conflicts, cultures and ideas that shape our world.

