In our increasingly online world, where every click counts, there’s a big question looming over how we consume information, especially when it comes to science. Imagine you’re scrolling through your feed, and a headline pops up: “Broccoli Cures Cancer!” Your eyes widen, your heart races – this is huge news! You click, you share, you tell your friends. But what if the original study actually said, “A compound in broccoli reduced cancer cell growth – in mice”? Those two tiny words, “in mice,” completely change the meaning. This struggle between grabbing attention and providing complete, accurate information is at the heart of new research from UC San Diego’s Marta Serra-Garcia. She’s a behavioral economist who’s delved deep into how the “attention economy” – that constant battle for your clicks and shares online – shapes what we learn, and sometimes, what we think we’ve learned about science. It turns out, making science more appealing can help some people learn, but it often leaves many others with a less-than-full picture, even if the information itself isn’t technically wrong. This isn’t about malicious intent; it’s about the subtle ways incomplete information can lead us astray.
Serra-Garcia’s study, published in the prestigious American Economic Review, makes it clear: the issue isn’t that eye-catching summaries are factually incorrect. Rather, they tend to omit crucial details, particularly about how scientific studies were conducted. Think of it like this: a catchy summary might be an amazing appetizer, but without the main course, you’re still hungry for real understanding. As Serra-Garcia explains, it’s not a simple case of “clickbait is bad.” We live in a world where getting people’s attention is key to getting them to learn anything at all. Sparking curiosity is a good thing. However, there’s a significant downside: content designed primarily to engage can inadvertently contribute to misunderstandings, which in turn can feed the beast of misinformation. It’s a delicate balancing act, trying to be accessible without sacrificing essential context. Her team’s findings are based on a massive, two-part experiment involving nearly 600 summaries of actual scientific research penned by freelance writers, and over 3,700 participants who were then tested on what they absorbed.
Let’s go back to that “broccoli in mice” example, because it perfectly illustrates the core problem. If a study finds a compound in broccoli reduces cancer cell growth in mice, those last two words are absolutely vital. Removing them makes the finding sound directly applicable to human health, which it isn’t, not yet anyway. Serra-Garcia points out the simplicity of adding “in mice” – it’s just two words. But those two words might deter some readers, making the article less “clickable.” This highlights the tension: the pursuit of engagement can lead to the omission of small but critical pieces of information. The study consistently showed that summaries crafted to attract maximum attention were indeed shorter, easier to digest, and more engaging. However, they consistently lacked detailed information, especially regarding crucial elements like sample sizes and methodologies – the “how” behind the science.
Here’s the kicker: when readers were given the option to delve deeper and access more information, most of them simply didn’t. This behavior isn’t confined to the experiment; it mirrors what we see every day online. Studies on social media habits suggest that a huge amount of content is shared without users ever clicking through to read the full story. For the participants in Serra-Garcia’s study who relied solely on these concise, attention-grabbing summaries, comprehension of the scientific findings dropped by a noticeable 6 to 7 percentage points. Even more concerning, they were significantly more prone to drawing incorrect conclusions, like assuming research done on animals automatically applies to humans, or mistaking preliminary findings for solid medical advice. It’s like reading only the first sentence of a complex novel and then thinking you understand the entire plot.
To really nail down these effects, Serra-Garcia designed a meticulously controlled experiment. In the initial phase, 149 freelance writers were tasked with summarizing the same set of scientific studies, covering a range of topics from cancer and sleep to vaccines and climate change. Crucially, some writers were instructed to summarize purely to inform accurately, while others were told to optimize their summaries for attracting clicks and shares. In the second phase, over 3,700 participants read these summaries under various conditions, including whether they had the opportunity to click through for more in-depth information. The conclusions were consistent across the board: summaries designed for attention did boost engagement, and sometimes even prompted some readers to explore further. For a significant number of others, however, these summaries resulted in a less complete understanding of the science.
What’s even more fascinating (and perhaps a little unsettling) is that this pattern didn’t change when humans weren’t the ones doing the writing. When a large language model was prompted to generate summaries focused on attracting attention, it too produced less detailed content. This strongly suggests that the issue isn’t about whether a human or an AI creates the content. Instead, it’s driven by the underlying objective: if the goal is to maximize engagement above all else, detail often takes a backseat. For Serra-Garcia, these findings pose a profound and ongoing challenge for everyone involved in communicating science – researchers, journalists, and institutions alike. The central question remains: “How do you make science engaging and important to readers without missing the essentials that convey the full picture?” In a world awash with information, the art of communicating science effectively, balancing appeal with essential truth, is more critical than ever.