Negativity spreads faster than positivity online, and news organizations at both ends of the political spectrum are leveraging this tendency on Twitter, according to a new study.
To test whether the broadcast news adage, “If it bleeds, it leads,” persists in the social media realm, Harvard Business School professor Amit Goldenberg and colleagues Nathan Young and Andrea Bellovary of DePaul University analyzed 140,358 tweets posted by 44 news agencies in early 2020. An automated sentiment analysis tool confirmed their hunch: negativity was about 15 percent more prevalent than positivity, and negative tweets engaged more users.
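The study's classification step can be illustrated with a toy lexicon-based scorer. This is only a sketch of the general approach: the word lists and sample tweets below are invented, and the researchers' actual tool was more sophisticated than a simple word count (validated lexicons such as LIWC or VADER are typical in this kind of work).

```python
# Toy illustration of lexicon-based sentiment classification, in the
# spirit of the automated analysis described above. The word lists and
# tweets here are made up for demonstration; real studies use validated
# sentiment lexicons over large tweet corpora.

POSITIVE = {"great", "win", "hope", "recover", "success"}
NEGATIVE = {"crisis", "death", "fear", "collapse", "outbreak"}

def classify(tweet: str) -> str:
    """Label a tweet by comparing counts of positive vs. negative words."""
    words = [w.strip(".,!?") for w in tweet.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if neg > pos:
        return "negative"
    if pos > neg:
        return "positive"
    return "neutral"

tweets = [
    "Hospitals brace for outbreak as fear spreads",
    "Local team celebrates a great win",
    "Markets collapse amid crisis and fear",
]
labels = [classify(t) for t in tweets]
print(labels)  # ['negative', 'positive', 'negative']
```

Aggregating such labels over tens of thousands of tweets is what lets researchers compare the overall prevalence of negative versus positive content across news outlets.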
Originating as internet novelties 15 years ago, social media platforms have altered everything from democracy to human relationships in ways that are hard to overstate. Governments around the world have been debating how to rein in the invective and misinformation on Facebook and Twitter that fuels political extremism and hate crimes. Given the appeal of negative rhetoric, there are no easy solutions.
“Although people produce much more positive content on social media in general, negative content is much more likely to spread,” says Goldenberg.
Social media has also become a crucial channel for news organizations, which must compete for engagement to grow their audiences. In their new paper, Left- and Right-Leaning News Organizations’ Negative Tweets Are More Likely to Be Shared (pdf), Goldenberg, Bellovary, and Young point out that 47 percent of Americans say they access news through Twitter and other social media platforms. Their analysis, which will be published in a forthcoming issue of Affective Science, suggests that, just as violence on television increases viewership and ratings, news organizations have found that negativity reaches more social media users.
Boosting engagement with negativity
Right- and left-leaning news organizations both used negativity to engage their audiences on Twitter at roughly the same rate, and the research shows no significant difference in the levels of reader engagement for negative tweets. Although the team didn’t track political content specifically, they collected tweets during the early days of the COVID-19 pandemic, when the crisis and government response to it dominated news cycles.
Past research shows that negativity spreads fastest in contexts involving two or more rival or competing groups, where negative emotions are more likely to prevail. The polarized nature of American political discourse, particularly in recent years, is a prime example, although it remains unclear to what extent social media has exacerbated such divisions.
“People’s emotions toward their own group have stayed constant over the last 50 years, whereas people’s negative emotions toward the other side have increased,” Goldenberg says. “In the last five years, it’s the first time in US history that negativity toward one’s out-group is stronger than positivity toward one’s in-group.”
Using hate to show belonging
Previous studies suggest that it's easier to discern someone's political affiliation by examining whom they hate than by examining their affection for their own group or community. Because social media has become a primary place to show group membership, and because negativity is an effective signaling tool, Goldenberg says, it makes sense that news outlets would see more engagement from negative tweets.
“It seems like high-arousal emotionality is more engaging,” he says. “There are high-arousal emotions, like anger and excitement, which are often more engaging than low-arousal emotions, like sadness or calmness, but this is more prevalent in Western societies where high-arousal emotions are more prevalent in general.”
These findings have implications for marketers and others who want to spread their messages online.
“Emotions lead to engagement, regardless of whether they’re positive or negative,” explains Goldenberg. “So, if you want to make sure that your message spreads, it has to include a lot of emotion.”
Social media companies are well aware of this phenomenon, which is why their algorithms amplify highly emotional posts. They understand that showing users content that elicits emotion will keep them on their platforms longer.
Social media: a lab for emotion research
People tend to latch on to negative information more readily than positive information. This tendency, known as negativity bias, has been demonstrated empirically across a variety of contexts. But how that negative information makes people feel and behave is not as well understood.
Some of the earliest and strongest evidence that exposure to negativity online makes people experience and share more negative emotions came from Facebook. In a controversial 2012 experiment, the social media giant secretly altered the news feeds of 689,003 users to see if posts about others’ positive life experiences would make users feel better or inspire envy. Some users unknowingly viewed about 95 percent negative content, while another group saw posts that were 95 percent positive.
The results, published in 2014, showed that social media content can stoke emotional contagion, in which one person's emotions transfer to another. Moreover, those who had seen mostly negative content produced more negative posts and fewer positive posts. For those who had seen mostly positive posts, the opposite occurred.
For Goldenberg, the experiment raises interesting questions: “What is the obligation that a body like Facebook has for our wellbeing, and how much responsibility should they have, or how careful should they be, in the way that they manipulate our emotions?”
Asked whether he believes social media is partly responsible for increasing social division, Goldenberg says it definitely doesn’t help. “It’s a pretty bad mix for what we call affective polarization, which is basically our tendency to increase our hate for the out-group,” he says, “but this happened much before social media came into play.”
Send in the positivity police?
Armed with the results of their study, Goldenberg and his colleagues have started thinking about ways to intervene. The researchers are developing bots that could potentially identify and notify people who post high levels of negative content. Social networks have used a similar notification strategy to stem misinformation and discourage people from sharing factually inaccurate posts.
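The core logic of such a bot can be sketched in a few lines. This is a hypothetical illustration, not the researchers' implementation: the account names, post counts, and threshold below are invented, and a real system would need an actual sentiment classifier and access to posting histories.

```python
# Hypothetical sketch of the notification idea described above: flag
# accounts whose fraction of negative posts exceeds a threshold, so
# they could be nudged about their posting patterns. All data here is
# invented for illustration.

def flag_high_negativity(accounts, threshold=0.5):
    """Return names of accounts whose negative-post share exceeds threshold.

    `accounts` maps an account name to a (negative_posts, total_posts) pair.
    """
    flagged = []
    for name, (negative, total) in accounts.items():
        if total and negative / total > threshold:
            flagged.append(name)
    return flagged

history = {
    "@user_a": (8, 10),   # 80% of recent posts negative
    "@user_b": (2, 10),   # 20% negative
    "@user_c": (6, 10),   # 60% negative
}
print(flag_high_negativity(history))  # ['@user_a', '@user_c']
```

In practice the hard part is not the threshold check but the classifier feeding it, and deciding how to word a notification so it informs rather than antagonizes.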
“Obviously, it’s not very good that negativity spreads,” Goldenberg says. “It doesn’t help people’s wellbeing. It doesn’t help the political discourse.”
Goldenberg, a psychologist by training, has been studying conflict and social behavior for six years, first at Stanford University before joining HBS last year. These subjects naturally intersect: People with extreme political views tend to react with more emotion, he says.
“So, if we can find ways to reduce negativity or emotionality by helping people reduce their emotions,” he says, “maybe we can improve the discourse on social media.”