Imagine a world where reliable health information is as accessible as your favorite cat video. A world where searching for answers about something as personal as birth control doesn’t lead you down a rabbit hole of fear-mongering and distorted facts. This, at its heart, is the world that Ritwik Banerjee, a brilliant mind from Stony Brook University’s computer science department, envisions and is actively building.
Following the seismic shift of the Supreme Court’s 2022 Dobbs decision, millions of Americans found their access to reproductive healthcare drastically altered. In their urgent search for information and guidance, many instinctively turned to the digital world. They swiped through endless TikToks, scoured Reddit threads, and binge-watched YouTube videos, seeking clarity on birth control. What they frequently encountered, however, was not the carefully vetted medical advice they desperately needed. Instead, they found a relentless barrage of viral “horror stories” and misleading claims, amplified by algorithms designed to keep them captivated and clicking rather than informed and safe. This digital landscape, often a double-edged sword, exposed a critical need for a new kind of intervention.
Enter Ritwik Banerjee, a research assistant professor whose name might not yet be a household one, but whose work is poised to make a profound difference. Funded by a two-year grant from the Society of Family Planning’s Contraceptive Misinformation and Disinformation initiative, Banerjee stands out in his cohort. He’s not a medical doctor or a public health expert, but a computer scientist – the only one in the group, bringing the power of algorithms and code to bear on this pernicious problem. His unique position allows him to go beyond simply cataloging what people are saying about birth control online. Instead, he delves into how these narratives are constructed and framed within specific online communities, understanding the subtle linguistic tricks that allow them to spread like wildfire across recommendation systems. He’s peeling back the layers of the internet to expose the hidden mechanisms of misinformation, a journey both complex and crucial.
Banerjee’s approach is ambitiously comprehensive, a stark contrast to traditional methods. Instead of painstakingly reviewing a small handful of social media posts by hand, his project will meticulously analyze over a million social media posts related to contraceptive health. This monumental task will be powered by transformer-based language models, sophisticated AI tools that can sift through vast amounts of text to identify recurring themes, rhetorical patterns, and underlying frames that signal misinformation. To further understand the human element, his team will design “AI agents” – essentially digital avatars – to simulate how diverse users might encounter this misleading content. Imagine an AI simulating a curious teenager trying to understand birth control side effects, or a concerned adult in a state with new reproductive health restrictions searching for accurate information. These simulations will provide invaluable insights into the journey of misinformation from the user’s perspective, revealing the pathways of harm.
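To make the flavor of that first analysis step concrete, here is a minimal sketch of how a transformer-based model can tag posts with candidate frames. The Hugging Face zero-shot pipeline, the model name, and the frame labels below are illustrative assumptions, not the project’s actual toolchain.

```python
# Illustrative sketch only: frame-tagging contraception-related posts with an
# off-the-shelf zero-shot classifier. The model and frame labels are assumed
# for this example, not drawn from the project itself.
from transformers import pipeline

# Hypothetical frames a researcher might screen for.
FRAMES = [
    "personal horror story about side effects",
    "claim that hormonal birth control is unsafe",
    "neutral medical information",
    "question seeking advice",
]

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def tag_post(text: str) -> dict:
    """Return the most likely frame and its confidence score for one post."""
    result = classifier(text, candidate_labels=FRAMES)
    return {"frame": result["labels"][0], "score": result["scores"][0]}

if __name__ == "__main__":
    sample = "My friend got an IUD and she says it ruined her hormones forever."
    print(tag_post(sample))
```

Scaled from one post to a million, this kind of tagging gives researchers a first-pass map of which narratives dominate before any human reads a single thread.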
Crucially, the project will also map the journey of misinformation across different digital platforms. Picture a single, misleading TikTok video. Banerjee’s work will trace its evolution, showing how its influence ripples into Reddit discussions, subtly morphing and distorting medical facts, ultimately corrupting the broader online ecosystem. “Computational tools for language analysis let us see the ecosystem, not just a small sample,” Banerjee explains, highlighting the transformative power of his methodology. This isn’t just about spotting individual falsehoods; it’s about understanding the intricate web of deceit and how it ensnares individuals and communities.
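One way to picture that tracing step, as a rough sketch under assumed tooling, is to embed a seed claim and candidate posts from another platform with a sentence-embedding model and flag high-similarity matches as likely echoes. The model, the example texts, and the similarity threshold here are all hypothetical.

```python
# Illustrative sketch: checking whether a claim from one platform resurfaces on
# another via embedding similarity. Model, examples, and threshold are assumed.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

seed_claim = "The pill causes permanent infertility."  # hypothetical viral claim
reddit_comments = [
    "I read somewhere that taking the pill for years can make you infertile for good.",
    "Has anyone switched from the patch to the ring? Looking for advice.",
    "Doctors never tell you that birth control destroys your fertility permanently.",
]

# Encode the seed claim and the candidate comments into the same vector space.
seed_vec = model.encode(seed_claim, convert_to_tensor=True)
comment_vecs = model.encode(reddit_comments, convert_to_tensor=True)

# Cosine similarity above an (assumed) threshold marks a likely echo of the claim.
scores = util.cos_sim(seed_vec, comment_vecs)[0]
for comment, score in zip(reddit_comments, scores):
    if score.item() > 0.5:
        print(f"{score.item():.2f}  {comment}")
```

In a real pipeline the seed claims would come from the million-post corpus itself, and the comparisons would run across platforms and over time, which is what turns isolated matches into a map of how a narrative travels.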
For Banerjee, this digital ecosystem is far from an abstract concept. It’s intimately connected to real people and, regrettably, real harm. Drawing on a rich tapestry of personal and academic experience with cross-cultural health narratives, he has witnessed firsthand how health-related stigma and misinformation disproportionately impact underserved communities. These communities, often already marginalized, bear the brunt of false information, whether it spreads through whispered rumors or algorithmically curated feeds. He also shines a spotlight on a significant gap in existing research: while vaccine misinformation has been meticulously explored and documented, contraception, a topic equally vital to public health, has often been overlooked in rigorous, data-driven studies. This oversight leaves a gaping vulnerability, one that Banerjee is committed to addressing with scientific precision and a deep sense of social responsibility.
By the end of this two-year grant, Banerjee envisions a tangible and practical legacy for his project. The team plans to release open-source natural language processing plugins, readily available tools that health departments – even those with limited resources – can use to spot emerging myths and false narratives before they take hold. Furthermore, they will publish practical API guidelines, offering clear recommendations that social media platforms can adopt to “de-amplify” harmful content, effectively reducing its reach and impact. The project will also establish policy benchmarks, providing actionable frameworks for holding recommendation algorithms more accountable in the sensitive domain of reproductive health.
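What might one of those plugins look like in practice? Here is a minimal, hypothetical sketch of a monitoring hook: it tags a day’s worth of posts with a frame-tagger like the one sketched earlier, counts how often each frame appears, and raises an alert when any single frame dominates the batch. The function name, the alert threshold, and the output format are assumptions, not the project’s published interface.

```python
# Hypothetical monitoring hook a resource-limited health department could run
# daily; none of these names or thresholds come from the project itself.
from collections import Counter
from datetime import date
from typing import Callable

def monitor_batch(posts: list[str],
                  tag_post: Callable[[str], dict],
                  alert_threshold: float = 0.25) -> dict:
    """Tag a batch of posts, count frames, and flag any frame whose share of
    the batch exceeds the alert threshold."""
    counts = Counter(tag_post(p)["frame"] for p in posts)
    total = sum(counts.values()) or 1
    alerts = {frame: round(n / total, 2)
              for frame, n in counts.items()
              if n / total > alert_threshold}
    return {"date": date.today().isoformat(),
            "counts": dict(counts),
            "alerts": alerts}
```

A daily report like this, simple as it is, would let a small public health team notice a spike in, say, infertility-scare posts days before the narrative crests – exactly the kind of early warning the project aims to make widely available.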
“I want a world where a health department with limited resources can use what we build; these tools should not be just for universities or big tech companies,” he asserts, his words radiating a profound commitment to accessibility and equitable access to information. For Ritwik Banerjee, the ultimate goal transcends mere academic understanding of online contraceptive misinformation. It’s about empowering public health practitioners with innovative tools to recognize, actively respond to, and ultimately mitigate the harm of misinformation before it can inflict further damage. His work offers a beacon of hope, promising a future where digital spaces become safer, more reliable sources of health information for everyone.

