The researchers simulated the behavior of around 350,000 real Twitter users and found that the sharing patterns of some 4 million tweets about voter fraud are consistent with people being much more likely to retweet posts containing stronger negative emotion.
The data for their study came from the VoterFraud2020 dataset, collected between October 23 and December 16, 2020. This dataset includes 7.6 million tweets and 25.6 million retweets that were collected in real time using X’s streaming Application Programming Interface (API), under established guidelines for ethical social media data use.
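For readers curious about the mechanics, a minimal sketch of this kind of real-time collection, using today’s v2 filtered stream via the tweepy library, might look like the following. The bearer token, tracking rule, and output file are illustrative assumptions; the study itself relied on the streaming endpoints available in 2020 and its own curated collection rules.

```python
# Minimal sketch of real-time tweet collection via the v2 filtered stream.
# Assumptions: tweepy >= 4.x, a valid bearer token, and a single
# illustrative keyword rule -- not the study's actual configuration.
import json
import tweepy

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # hypothetical credential

class FraudStreamCollector(tweepy.StreamingClient):
    def on_tweet(self, tweet):
        # Append each matching tweet to a local JSON-lines file.
        with open("voter_fraud_stream.jsonl", "a") as f:
            f.write(json.dumps({"id": tweet.id, "text": tweet.text}) + "\n")

collector = FraudStreamCollector(BEARER_TOKEN)
# Track an illustrative phrase; a real collection would use many rules.
collector.add_rules(tweepy.StreamRule('"voter fraud" lang:en'))
collector.filter()  # blocks, delivering matching tweets as they arrive
```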
“Conspiracy theories about large-scale voter fraud spread widely and rapidly on Twitter during the 2020 U.S. presidential election, but it is unclear what processes are responsible for their amplification,” says Youngblood.
To find out, the team ran simulations of individual users tweeting and retweeting one another under different levels and forms of cognitive bias, then compared the output with the real retweet patterns of voter fraud conspiracy theory proponents during and around the election.
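The published model is considerably richer, but its central mechanism, agents preferentially copying tweets that carry more negative emotion, can be illustrated with a short Python sketch. The negativity scores, bias strength, and event counts below are illustrative assumptions, not parameters from the paper.

```python
# Toy sketch of content-biased retweeting: tweets with more negative
# emotion are exponentially more likely to be copied.
import math
import random

random.seed(42)

N_TWEETS = 1000   # candidate tweets (illustrative)
N_EVENTS = 10000  # retweet events to simulate (illustrative)
BETA = 2.0        # negativity-bias strength; 0 = unbiased copying

# Each candidate tweet gets a random negativity score in [0, 1].
tweets = [{"neg": random.random(), "retweets": 0} for _ in range(N_TWEETS)]

# Exponential weighting: BETA controls how strongly negative
# emotion boosts a tweet's chance of being retweeted.
weights = [math.exp(BETA * t["neg"]) for t in tweets]

for _ in range(N_EVENTS):
    random.choices(tweets, weights=weights, k=1)[0]["retweets"] += 1

# With BETA > 0, the most-retweeted tweets skew markedly negative.
top = sorted(tweets, key=lambda t: t["retweets"], reverse=True)[:10]
print("mean negativity, top 10 tweets:", sum(t["neg"] for t in top) / 10)
print("mean negativity, all tweets:   ", sum(t["neg"] for t in tweets) / N_TWEETS)
```

Comparing simulated retweet distributions like this one against the empirical data, across different bias settings, is what allows the researchers to infer which form of bias best explains the observed spread.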
“Our results suggest that the spread of voter fraud messages on Twitter was driven by a bias for tweets with more negative emotion, and this has important implications for current debates on how to counter the spread of conspiracy theories and misinformation on social media,” Youngblood adds.
The team’s simulations and numerical analysis also align with previous research suggesting that emotionally negative content has an advantage on social media across a variety of domains, including news coverage and political discourse.
The model also showed that even though negative tweets were more likely to be retweeted, quote tweets tended to be emotionally more moderate than the tweets they quoted: people generally did not amplify negativity when commenting on a post.
Youngblood says that because the team’s simulation-based model recreates the patterns in the actual data quite well, it may prove useful for simulating interventions against misinformation in the future. For example, the model could easily be modified to reflect how social media companies or policymakers might try to curb the spread of misinformation, such as by reducing the rate at which tweets hit people’s timelines.
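In the toy sketch above, that kind of intervention reduces to a single extra parameter; the exposure rate below is a hypothetical knob for illustration, not a mechanism from the study.

```python
# Hypothetical rate-limiting intervention added to the toy model above:
# each retweet event only occurs if the tweet actually reaches a timeline.
EXPOSURE_RATE = 0.5  # assumed fraction of tweets surfacing on timelines

for _ in range(N_EVENTS):
    if random.random() < EXPOSURE_RATE:  # tweet shown on a timeline
        random.choices(tweets, weights=weights, k=1)[0]["retweets"] += 1

# Lowering EXPOSURE_RATE slows overall spread while leaving the
# underlying negativity bias untouched.
```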
###