Penn State’s "Fake-a-thon" Challenges Community to Explore the Murky Waters of AI-Generated Fake News
UNIVERSITY PARK, Pa. –– In a unique initiative designed to delve into the increasingly complex world of AI-generated disinformation, Penn State University’s Center for Socially Responsible Artificial Intelligence (CSRAI) is launching "Fake-a-thon," a two-stage competition that challenges participants to both create and detect fake news articles generated by artificial intelligence. This five-day event, starting on April 1st (no joke!), aims to raise awareness about the growing prevalence of AI-powered fake news and explore strategies to combat its spread.
The first stage of Fake-a-thon, running from April 1st to April 4th, invites members of the Penn State community to unleash their creative (and perhaps mischievous) sides by crafting fabricated news stories using generative AI tools like ChatGPT and Microsoft Copilot. Participants are free to choose any topic, with the goal of creating stories so convincing they could pass as genuine news. Submissions will be collected via an online form, setting the stage for a showdown between human discernment and artificial ingenuity.
The second stage, taking place on April 5th in the Westgate Building, will put the fake news creations to the test. Participants who didn’t contribute stories in the first stage are invited to act as discerning detectives, scrutinizing a mix of AI-generated fake news and real news articles provided by the organizers. Their mission: to separate fact from fiction and identify the AI-generated imposters. This real-time evaluation of the fabricated stories promises to be a fascinating exploration of the evolving sophistication of AI-generated content and the challenges of discerning its authenticity.
The Fake-a-thon isn’t just about creating and identifying fake news; it’s also about recognizing the ethical implications of this rapidly advancing technology. Prizes will be awarded for the most convincing fake news stories, rewarding creators who fooled the detectors, as well as for the most astute detectors, those who correctly identified the fabricated articles. This dual reward system highlights the importance of both understanding how AI can be misused to create disinformation and honing the skills needed to critically evaluate information in an increasingly complex media landscape.
Beyond the competitive aspect, the Fake-a-thon serves as a valuable data collection exercise for ongoing research at CSRAI. All participants will be invited to complete a post-event survey, contributing to a broader study on the impact and implications of AI-generated fake news. This research focus underlines the event’s commitment to not just raising awareness, but also to developing practical strategies and ethical frameworks for navigating the challenges posed by this emerging technology, paving the way for more responsible AI development and deployment.
The rise of generative AI has dramatically lowered the barrier to creating highly believable fake news, posing a significant threat to informed public discourse and democratic processes. Fake news can manipulate public opinion, spread misinformation about critical issues like health and politics, and erode trust in legitimate news sources. By engaging the Penn State community in a hands-on exploration of this issue, the Fake-a-thon aims to foster critical thinking, promote media literacy, and contribute to the development of responsible AI practices.

Understanding how AI-generated fake news is created and disseminated can better equip the public with the tools and strategies needed to combat its harmful effects and to ensure that AI technologies are used ethically and for the benefit of society. The insights garnered from this event will contribute to CSRAI’s ongoing efforts to promote responsible AI development and deployment. The Center, founded in 2020, is dedicated to advancing AI research while carefully considering the ethical and societal implications of these powerful technologies.

The Fake-a-thon represents a novel approach to engaging the community in this important conversation, turning a potential threat into an opportunity for learning, critical analysis, and proactive engagement with the future of information.