In our increasingly digital world, where information zips around the globe in milliseconds, a recent study has pulled back the curtain on a surprisingly powerful emotion driving the spread of online falsehoods: moral anger. Published in the journal Cognition and Emotion, the research doesn't just say anger makes us impulsive; it shows that when we're morally angry, we're far more likely to hit the share button on misinformation, often ignoring whether the source is trustworthy. It's a key insight into how our gut reactions on social media can inadvertently make us super-spreaders of untruths. We've long known that social media is a hotbed of deliberately provocative content: posts specifically crafted to stir up strong emotions. While many have pointed to "moral outrage" as the culprit in spreading fake news, this study digs deeper, revealing that moral outrage isn't a single, uniform feeling. It's a mix of distinct emotions, and it turns out that anger and disgust don't play the same role in our online sharing habits.
Leading this exploration is Xiaozhe Peng, an associate professor at Shenzhen University's School of Psychology. As the head of the Emotion and Communication Neuroscience Lab, Peng has spent years fascinated by how our feelings shape the way we talk and share. He readily admits that the inspiration for this project came from witnessing firsthand how emotionally charged content on social media could not only accelerate the spread of misinformation but also escalate into outright online aggression. Peng and his team weren't content with a general understanding of "moral outrage"; they wanted to pinpoint which specific moral emotions were the primary drivers behind the phenomenon. Psychological theories tell us that persuasion happens in different ways: sometimes we carefully weigh the facts, and other times we take mental shortcuts, relying on cues like our emotions or how credible we perceive a source to be. The researchers designed their experiments to catch these mental shortcuts in action, especially when people are scrolling through their social media feeds.
Their first experiment involved 223 participants from China, who were asked to evaluate 24 manipulated news headlines, all designed to represent false information. These headlines varied in how severe the described moral wrongdoings were, ranging from completely neutral scenarios to egregious moral violations. To add another layer of complexity, the headlines were also attributed to sources with varying levels of credibility, from entirely untrustworthy to fully credible. Before deciding whether they’d share each headline, participants were directed to focus on specific details: either the accuracy of the news, the morality of the events, or nothing at all. The scientists observed a few key patterns: generally, people were more inclined to share news from highly credible sources, and severe moral violations also increased sharing willingness. Interestingly, this inclination to share morally provocative content was especially strong when participants were prompted to focus on the moral aspects of the story. Crucially, when people were told to focus on either accuracy or morality, they relied less on the source’s credibility. It seemed that directing their attention inward, to the message’s content, made the external “credibility label” less important in their decision-making.
The second experiment homed in on the nuances between moral anger and moral disgust, involving 116 university students. This time, 18 false news headlines depicting minor or severe moral violations were presented as coming from either a highly credible or a low-credibility source. The goal was to see how different emotional states influenced sharing. Before indicating their willingness to share, participants were asked to rate their current anger, to rate their current disgust, or to maintain a neutral focus. The findings were stark: participants prompted to feel angry were significantly more willing to share headlines from low-credibility sources than those in the disgust or control groups. The disgust prompt, surprisingly, didn't boost sharing willingness compared to the neutral group. This pointed to a powerful conclusion: moral anger actively overrides a person's reliance on source credibility when deciding whether to share. "What surprised us most was how consistently moral anger, rather than moral disgust, drove sharing across studies," Peng noted. He highlighted that while both emotions are often lumped under "moral outrage," they have distinct behavioral consequences: anger seems to propel us to confront, while disgust often makes us want to retreat.
The third experiment delved deeper into the cognitive mechanisms behind how anger influences these sharing decisions. Sixty-three university students evaluated 36 true and false headlines, each paired with a low, ambiguous, or high source-credibility label. To induce a strong emotional state, participants first completed a memory task: they were asked to vividly recall and write about a personal memory that made them intensely angry. After reliving this anger, they rated their willingness to share the headlines. The researchers used mathematical models of the decision process to measure how quickly participants decided and how much mental evidence they required before opting to share; such models are crucial in psychology for distinguishing slow, cautious choices from fast, impulsive ones. The results were clear: the anger induction lowered participants' decision thresholds, meaning the angry students needed less evidence and less time to decide to share a headline, across the board. "We also found that anger was associated with lower decision thresholds, suggesting that it can make people decide to share more quickly and with less caution," Peng confirmed. Importantly, the models also revealed that anger didn't diminish a person's ability to discern true from false information; it simply lowered the mental barrier, the internal "gatekeeper," that typically makes us pause before hitting the share button.
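The study's modeling code isn't published here, but the threshold idea can be sketched with a toy drift-diffusion-style simulation (a standard evidence-accumulation framework; all parameter values below are illustrative, not the study's). The point it demonstrates: lowering the decision threshold shortens decision times without touching the drift, i.e., without changing the quality of the evidence itself.

```python
import random

def simulate_decision(drift, threshold, noise=1.0, dt=0.01, seed=None):
    """Accumulate noisy evidence until it crosses +threshold ("share")
    or -threshold ("withhold"); return (choice, elapsed time in s)."""
    rng = random.Random(seed)
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        # Standard diffusion step: deterministic drift plus Gaussian noise
        evidence += drift * dt + rng.gauss(0, noise) * dt ** 0.5
        t += dt
    return ("share" if evidence > 0 else "withhold", t)

def mean_rt(threshold, n=2000, drift=0.5):
    """Average decision time over n simulated trials at a given threshold."""
    rng = random.Random(42)
    times = [simulate_decision(drift, threshold, seed=rng.randrange(10**9))[1]
             for _ in range(n)]
    return sum(times) / n

calm_rt = mean_rt(threshold=1.5)   # higher bar: more evidence required
angry_rt = mean_rt(threshold=0.8)  # "anger" condition: a lowered bar
print(f"calm mean decision time:  {calm_rt:.2f}s")
print(f"angry mean decision time: {angry_rt:.2f}s")
# The low-threshold condition decides faster (and more error-prone),
# even though the drift -- the evidence quality -- is identical.
```

This mirrors the paper's finding in miniature: the "angry" condition responds faster not because it sees the headlines differently, but because its internal gatekeeper demands less evidence before acting.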
While this research offers compelling insights into the mechanics of online sharing, Peng and his team are quick to point out its limitations. "Our studies were conducted in controlled experimental settings, and we measured willingness to share rather than actual sharing behavior on live social social media platforms," Peng explained. The controlled environment allowed precise identification of mechanisms, but real-world online interactions are far more complex and nuanced. Furthermore, all three experiments recruited participants in China. "Our samples came from a specific cultural context, so future work should examine how broadly these findings generalize across countries and platforms," Peng noted, acknowledging that emotional expression and differentiation can vary significantly across cultures. Looking ahead, the researchers aim to test these mechanisms in more naturalistic settings, broadening their understanding of how specific emotions shape not only whether people share information but also how they weigh cues like accuracy, source credibility, and social signals along the way. The team is also exploring interventions to curb the spread of false content, suggesting "lightweight prompts that warn users when a post contains highly emotion-arousing or outrage-provoking content." Ultimately, Peng concludes that misinformation isn't just about false beliefs; it's deeply intertwined with emotionally charged communication. Moral anger, in particular, is a potent driver because it's "action-oriented," pushing people toward expression, condemnation, and rapid dissemination, which helps explain why some misleading content explodes so quickly online.
The takeaway for everyday social media users is simple yet powerful: “if a post makes you instantly angry, that is exactly the moment to pause before liking, commenting, or sharing.” This simple act of pausing could be our best defense against inadvertently becoming a part of the misinformation problem.

