In today’s fast-paced digital world, we’re constantly bombarded with information – so much so that it’s hard to keep up. This overwhelming flood of facts, opinions, and outright fabrications has left many of us scratching our heads, wondering who and what to believe. Here to help us navigate this treacherous landscape is Shyam Sundar, a leading expert in artificial intelligence and misinformation, and co-director of Penn State University’s Media Effects Research Lab. He’s spent years digging into how we consume information and, crucially, how we decide what to trust. After a recent talk on AI at the University of Hawaiʻi, Sundar shared his insights with HPR, offering a much-needed guide to sharpening our critical thinking skills in this era of digital deluge. His work isn’t just academic; it’s about helping everyday people like you and me make sense of the torrent of information thrown our way online.
Sundar paints a vivid picture of why misinformation spreads so easily: it’s all about information overload. Think about it – every day, our screens are overflowing with news, social media posts, ads, and emails. Our brains just can’t process it all in detail. To cope, we become “cognitive misers,” relying on mental shortcuts – what researchers call heuristics – rather than painstakingly verifying every single piece of information. It’s like when you’re in a hurry at the supermarket and just grab the brand you recognize instead of comparing all the nutritional labels. Sundar points to several of these heuristics. For instance, the “authority heuristic” makes us trust someone in a uniform or a white lab coat, even if we don’t know their credentials. Or the “bandwagon heuristic,” where we see a product with tons of five-star reviews on Amazon and immediately assume it’s great, even though those reviews come from anonymous strangers. Perhaps most relevant today is the “machine heuristic”: we tend to believe that anything produced by a machine – an algorithm, or AI – must be objective, unbiased, and therefore true. This inherent trust in machines, while understandable, leaves us vulnerable to sophisticated forms of misinformation.
This brings us to a major problem: the disappearing act of traditional, trusted news sources. Sundar laments that the decline of “general interest intermediaries” – his term for news organizations – has created a vacuum. These organizations, staffed by trained journalists, traditionally acted as gatekeepers, meticulously vetting facts and cross-referencing information from multiple sources. Think of a local reporter who knows her community inside and out, constantly double-checking stories before they go to print. Most of us don’t apply the same rigorous standards to information shared by our friends on Facebook or Nextdoor. We don’t stop to consider whether our friend, however well-meaning, has any training in journalism or has done their due diligence. Instead, we simply absorb what appears in our feeds and often pass it along as gospel. This “news desert” phenomenon, where reliable local news is scarce, makes it incredibly easy for unverified and even outright false information to circulate unchecked, leaving us with a distorted view of reality.
The rise of AI as an information source introduces even more alarming dangers, according to Sundar. He’s particularly concerned about the growing trend of people relying on AI summaries – whether from search engines or other AI tools – rather than digging into the original content. This isn’t just a matter of convenience; Sundar notes a staggering 62% drop in people clicking through search links in the last 8-10 months alone, indicating a widespread reliance on these machine-generated snippets. For him, this is “really scary.” An AI summary, after all, is just that: a summary. It’s a distillation, and like any distillation, it can easily miss nuances or misinterpret context – and AI systems can even invent information outright, a failure commonly known as “hallucination.” Sundar, as a professor, sees this firsthand with his students. They often mistakenly believe an AI-generated summary accurately reflects a complex reading, only to find themselves completely off track. If even educated students are falling prey to this, he wonders, how many everyday news consumers are being misled by these seemingly authoritative AI summaries?
What this boils down to is a critical need for all of us to become savvier information consumers. Sundar’s work isn’t about shaming us for falling for misinformation, but about understanding the psychological shortcuts our brains take and arming ourselves with better tools. It’s about recognizing that in a world awash with information, our natural inclinations to trust authority, popularity, or even technology can sometimes lead us astray. We need to actively push back against the urge to take things at face value, especially when the source isn’t clearly established or doesn’t have a track record of rigorous fact-checking.
Ultimately, Sundar’s message is a call to arms for critical thinking in the digital age. It’s about understanding that the convenience of AI and the sheer volume of online content are a double-edged sword. While technology offers unprecedented access to information, it also demands an unprecedented level of discernment from us. His research, highlighted in his conversation with HPR, serves as a vital reminder that in the absence of traditional gatekeepers, the responsibility for separating truth from fiction increasingly falls on us. By understanding our cognitive biases and the mechanics of misinformation, we can all become better equipped to navigate the complex information landscape and make more informed decisions in our daily lives.

