Deepfakes in Schools: A Growing Threat Met with Inadequate Response
The rise of readily accessible artificial intelligence (AI) technology has ushered in a new era of digitally manipulated media, commonly known as "deepfakes." These fabricated videos, photos, and audio recordings, often created and shared without the subject's consent, are increasingly being weaponized in schools, contributing to a surge in AI-enabled sexual harassment. A recent study by the Center for Democracy & Technology (CDT) reveals a disturbing trend: 40% of students and 29% of teachers report awareness of deepfakes targeting individuals within their school community during the 2023-24 academic year. This marks a significant escalation of existing problems with non-consensual intimate imagery, fueled by the ease with which AI can generate deceptively realistic fake content. The report identifies students as both the primary perpetrators and the primary victims, underscoring the urgent need for proactive measures.
The pervasiveness of deepfakes significantly widens the pool of both perpetrators and victims. Kristin Woelfel, a policy counsel at CDT and a co-author of the report, emphasizes the democratized nature of the technology: "The surface area for who can become a victim and who can become a perpetrator is significantly increased when anybody has access to these tools. There’s really no limit as to who could be impacted by this." Recent reports bear out this concern: students have used AI to fabricate pornographic images of classmates and even to create fake videos depicting teachers and principals in compromising situations. The emotional and psychological toll on victims, whether targeted with real or deepfake imagery, is substantial, described as "scary" and "traumatic" by Anjali Verma, National Student Council president and a senior at a Pennsylvania charter school.
The CDT report highlights a concerning lack of awareness and support around deepfake-related harassment. Only 19% of students surveyed said their schools had explained what deepfakes are, and even fewer (13%) said their schools had addressed the impact on those depicted. Just 15% knew whom to report such incidents to within their school. The gap extends to educators and parents: 60% of teachers and 67% of parents reported no knowledge of school or district policies addressing real or deepfake non-consensual intimate imagery. And while a third of students believe their schools effectively apprehend perpetrators, a troubling 10% of those aware of such incidents report that the responsible individuals were never caught. The data paints a picture of schools struggling to adapt to a rapidly evolving digital landscape and the unique challenges posed by AI-generated content.
Current responses to deepfake incidents often prioritize "severe discipline" for perpetrators, including suspension, expulsion, and law enforcement involvement. Woelfel argues that this reactive approach neglects prevention and victim support. A proactive approach means educating students and staff about deepfakes: the harm they cause, the potential consequences of creating and distributing them, and the reporting mechanisms available within the school. This education should start early; elementary school is not too soon for age-appropriate discussions about responsible technology use and the dangers of digital manipulation, preparing students for the digital world they will inevitably navigate.
Supporting victims is equally critical and should encompass counseling services and resources to help get deepfakes removed from the internet. Just as essential is cultivating a school climate where students feel empowered to report incidents: Verma emphasizes that students need to trust their concerns will be taken seriously, without fear of judgment, reprisal, or dismissal. This requires a shift from a punitive, reactive posture to one that prioritizes prevention and support, fostering a culture of digital responsibility and respect within the school community.
The CDT study, based on a summer survey of more than 3,300 students, teachers, and parents, underscores the urgent need for schools to acknowledge and address the growing threat of deepfakes. While policymakers have a role in guiding schools toward effective prevention and response strategies, schools can take immediate steps to educate their communities, support victims, and create a safer digital environment. By treating this emerging issue with the severity it deserves, schools can empower students, staff, and parents to make sense of AI-generated media and blunt its harmful impact. The responsibility lies with educators, administrators, and policymakers to equip students with the knowledge and resources they need to stay safe.