In today’s fast-paced digital world, where information zips across screens at lightning speed, a concerning trend is emerging that blurs the lines between reality and fiction: the proliferation of AI-generated content, especially targeting public figures. Imagine being a devoted fan of a beloved sports icon like Dawn Staley, the celebrated head coach of the USC women’s basketball team. You log onto Facebook, eager for updates, only to be met with a barrage of unsettling and untrue posts. One claims she has cancer, another suggests she’s retiring, and yet another displays a picture of her in a hospital gown, complete with visible “stitches.” These aren’t just isolated incidents; they’re part of a growing wave of what’s being dubbed “AI slop” – a concoction of deepfakes, misinformation, and spam that is increasingly difficult to discern from genuine news at first glance. These posts don’t just appear in obscure corners of the internet; even a simple search for “Dawn Staley” can bring up an AI-generated image of her cradling a baby, often garnering hundreds of comments and thousands of likes, all feeding into a manufactured narrative. It’s a digital quagmire, where the truth gets muddled, and the emotional impact on fans and the individuals themselves can be significant.
Keshav Gupta, an assistant professor and researcher at USC specializing in sports technology, has seen the full spectrum of these fabricated narratives. From Staley supposedly having a baby to an absurd claim of her rejecting a $10 million offer from Elon Musk – which, as Gupta wryly notes, is likely a paltry sum for someone of her stature – these AI-generated stories are rife with sensationalism. They seep into every corner of the sports industry, fabricating athlete transfers, conjuring up feuds between coaches, and painting a distorted picture of reality. While AI isn’t the sole culprit behind misinformation, it has undeniably become a powerful enabler. It makes it easier than ever to churn out convincing-looking fake pictures, text, and even entire social media accounts. These deceptive posts often strategically mimic the visual styles of reputable media outlets like ESPN, preying on our inherent trust in professional news sources. More insidiously, they tap into strong emotions, making celebrities and public figures like Dawn Staley prime targets. Their fame, coupled with their often outspoken nature on social issues, makes them particularly vulnerable to AI-generated content that exploits political, racial, or gender-based polarization. The creators of this “AI slop” understand that controversy drives engagement; they thrive on the “battlefield” created when people react emotionally, leading to countless views and, ultimately, financial gain. It’s a cynical game where authentic dialogue is sacrificed for superficial engagement and profit.
One of the most insidious aspects of these AI-generated falsehoods is their ability to leverage confirmation bias – our innate tendency to interpret information in a way that confirms our existing beliefs. As Gupta explains, if someone already holds a negative view of a public figure, encountering a piece of bad news, even if fabricated, can reinforce their prejudice, making them think, “Aha, I knew that person wasn’t good.” AI, in this context, becomes a tool for validating pre-existing biases, further entrenching divisive narratives. The true danger, however, arises when these “obviously” fake posts lose their obviousness. A widely circulated example was an AI-generated cover of TIME magazine, falsely claiming Dawn Staley had won the “Most Influential People” award. To Preach Jacobs, a writer and columnist who found himself entangled in a fake narrative as the “father” of Staley’s fabricated child, this TIME cover initially gave him pause. It possessed that “tinge of something that’s possible,” he noted, a subtle plausibility that made it harder to dismiss outright. Staley’s immense respect and influence, coupled with the fact that one of her former players, A’ja Wilson, genuinely won Athlete of the Year from TIME (albeit in 2025), added a layer of deceptive credibility to the AI-generated hoax. Though a deeper dive revealed the cracks, the damage was already done; the fabricated image circulated widely within the Columbia community, racking up hundreds of comments and thousands of likes, demonstrating how easily such sophisticated fakes can be embraced as truth.
The evolution of AI has dramatically shifted the landscape of digital deception. As Preach Jacobs aptly points out, the days of easily identifying AI images by obvious flaws like extra fingers are gone. The technology has advanced to a point where these fabricated visuals are alarmingly realistic, making it incredibly difficult for the average user to distinguish between genuine and artificial content. This growing prevalence of convincing fakes breeds a pervasive skepticism, eroding trust in the information we encounter online. Jacobs voices a profound concern that the ultimate “outcome” isn’t just about individual hoaxes, but about a broader environment where entire communities lose faith in what they see and read. This erosion of trust poses a significant threat to informed public discourse and the fabric of our digital society. It creates a constant state of vigilance for individuals, forcing them to question every piece of information, even from seemingly credible sources. In this new reality, the burden of verification increasingly falls on the individual, making the simple act of scrolling through social media a more complex and potentially misleading experience.
In this relentless cat-and-mouse game between AI advancement and attempts to mitigate its misuse, the responsibility falls on many shoulders. Keshav Gupta suggests that a robust public relations team can act as a crucial first line of defense, swiftly identifying and addressing particularly harmful posts. However, individual and team social media accounts also play a vital role in disseminating accurate information and debunking falsehoods. Diana L. Koval, USC’s Associate Athletic Director, acknowledges the university’s efforts to report misleading posts to social media platforms and delete false comments, yet she concedes that “little recourse” is often available, highlighting the limitations of current safeguards. This underscores the critical need for fans to become savvier digital citizens. While organizations have a responsibility to provide accurate information, individuals must also cultivate their own critical thinking skills and take ownership of verifying what they consume online. A heartbreaking screenshot of an AI-generated post falsely announcing the death of a USC women’s basketball player serves as a stark reminder of the emotional toll and real-world impact these fabrications can have. It’s a chilling illustration of how easily AI can be weaponized to cause distress and sow confusion, making the need for personal discernment even more pressing.
To navigate this increasingly complex digital landscape and avoid being duped, Gupta offers some invaluable advice: cultivate a less reactive mindset. When a post, whether positive or negative, triggers a strong emotional response, pause. Take a moment to cross-check the information by conducting a quick Google search or consulting verified sources before sharing or commenting. As Gupta emphasizes, AI is often designed to exploit these emotional triggers, using algorithms to play on pre-existing feelings. His hope is to see more regulation implemented to ensure that AI can be harnessed for its immense beneficial potential – such as compiling data for athletes or offering suggestions for improvement and recovery – while simultaneously curbing its malicious applications. Ultimately, Gupta advocates for using AI as an assistant, a tool to augment our abilities, rather than a decision-maker. The moment we abdicate our critical thinking and blindly rely on AI, he warns, is precisely “where the problems start emerging.” In a world awash with AI-generated content, discernment, critical evaluation, and a healthy dose of skepticism remain our most potent weapons against misinformation.