The relationship between humanity and the automated state took center stage at Leiden Law School on March 19, 2026, when Dr. Mengchen Dong, a social psychologist and behavioral scientist at the Max Planck Institute for Human Development, delivered a lecture titled “False Consensus Biases AI Against Vulnerable Stakeholders.” Dr. Dong’s research, rooted in AI ethics and governance, examines how power dynamics, individual circumstances, and sociocultural backgrounds shape human-AI interactions. These questions grow more pressing as AI takes on critical societal functions such as welfare benefit allocation. The lecture was part of “Humanity in the Automated State,” a series funded by the Dutch Research Council and supported by Leiden Law School that brings together diverse academic perspectives on the implications of algorithmic governance for our relationship with public authority.
Dr. Dong’s lecture presented findings from a large-scale empirical study of public attitudes towards AI-assisted welfare benefit allocation in the United States and the United Kingdom, drawing on more than 3,200 participants. At the heart of the investigation lay a critical tension: when AI systems promise faster decisions at the cost of higher error rates, whose preferences should hold sway? The research revealed a divergence that aggregate public opinion conceals. While the general population showed a modest willingness to accept minor accuracy losses in exchange for speed, welfare claimants, the individuals most directly affected by these systems, were significantly more resistant to such trade-offs. The finding is urgent because the deployment of AI in welfare systems has already led to wrongful benefit denials and erroneous fraud accusations, with real consequences for vulnerable lives.
A particularly striking finding was what Dr. Dong termed “asymmetric insights.” Non-claimants consistently overestimated welfare claimants’ willingness to accept AI-driven trade-offs, and this overestimation persisted even when non-claimants were financially incentivized to judge claimants’ perspectives accurately. Claimants, by contrast, understood non-claimants’ views far more accurately. The asymmetry is problematic because non-claimants, who form the majority and typically wield greater influence over policy, can inadvertently create a “false consensus”: even well-intentioned advocacy on behalf of vulnerable groups can misfire if it rests on a misreading of those groups’ actual needs and preferences. Good intentions, without genuine understanding, can pave the way to unintended harm.
The implications of this false consensus are far-reaching, especially for the design and deployment of AI systems in settings marked by power imbalances. Dr. Dong closed with a direct call to action: engaging vulnerable stakeholders is not optional but a fundamental necessity. Their preferences cannot be inferred or adequately represented by others, however well-meaning. This argues for a bottom-up approach to AI development, one that prioritizes the voices and lived experiences of those most affected over top-down assumptions. Failing to do so risks perpetuating inequalities and further marginalizing those already on the periphery.
The “Humanity in the Automated State” lecture series, organized by Dr. Melanie Fink and Dr. Daria Morozova and supported by the Dutch Research Council’s VENI grant “Gateways for Humanity: The Duty to Reason in the Automated State,” provides an interdisciplinary forum for scholars from law, management, public administration, and computer science throughout the 2025/2026 academic year. The series examines how algorithmic governance is reshaping human relationships with public authority and advocates a more human-centered approach to technological change.
Looking ahead, upcoming sessions feature Ida Koivisto of the University of Helsinki on April 9 and Natali Helberger of the University of Amsterdam on May 26. These lectures will continue the series’ exploration of the interplay between technology, law, and justice, and of how AI can be made to serve humanity rather than disadvantage its most vulnerable members, reminding us that building an ethical automated state is an ongoing, collaborative, and deeply human endeavor.

