America is dealing with both a pandemic and an infodemic, a term used in a 2020 joint statement by the World Health Organization, the United Nations and other global health groups, says Dr. Candice Lanius, an assistant professor of communication arts and the first author on the paper.
Her co-authors are Dr. William “Ivey” MacKenzie, an associate professor of management, and Dr. Ryan Weber, an associate professor of English.
“The infodemic draws attention to our unique contemporary circumstances, where there is a glut of information flowing through social media and traditional news media,” says Dr. Lanius.
“Some people are naively sharing bad information, but there are also intentional bad actors sharing wrong information to further their own political or financial agendas,” she says.
These bad actors often use robotic – or “bot” – accounts to rapidly share and like misinformation, hastening its spread.
“The infodemic is a global problem, just like the pandemic is a global problem,” says Dr. Lanius. “Our research found that those who consume more news media, in particular right-leaning media, are more susceptible to misinformation in the context of the COVID-19 pandemic.”
Why is that? While the researchers cannot say definitively, they offer some possible explanations.
First, the media these survey respondents consume often relies on ideological and emotional appeals that work well for peripheral persuasion, where a follower decides whether to agree with the message based on cues other than the strength of its ideas or arguments.
A second possible explanation is that credible scientific information has been updated and improved over the past year as more empirical research has been done. As a result, the more skeptical respondents perceived the right-leaning media as consistent in their messaging, while the Centers for Disease Control and Prevention and other expert groups seemed to be changing their story.
Last, the survey found that one factor priming COVID-19 skepticism is geography. According to the American Communities Project, many right-leaning news media consumers tend to be rural rather than urban, so they did not have the firsthand experience with the pandemic that many urban populations had in March 2020.
“Often, attempts to correct people’s misperceptions actually cause them to dig in deeper to their false beliefs, a process that psychological researchers call ‘the backfire effect,’” says Dr. Weber.
“But in this study, to our pleasant surprise, we found that flags worked,” he says. “Flags indicating that a tweet came from a bot and that it may contain misinformation significantly lowered participants’ perceptions that a tweet was credible, useful, accurate, relevant and interesting.”
First, researchers asked the survey respondents their views of COVID-19 numbers. Did they feel there was underreporting, overreporting or accurate reporting, or did they have no opinion?
“We were interested to see how people would respond to bots and flags that echoed their own views,” says Dr. MacKenzie. “So people who believe the numbers were underreported see tweets that claim there is underreporting, and people who believe in overreporting see tweets stating that overreporting is occurring.”
Participants who believed the numbers were accurate or had no opinion were randomly assigned to either an over- or underreporting group. Surveying was done in real time, so as soon as a participant answered the first question about their view of COVID-19 numbers, they were automatically assigned to one of the two groups for the rest of the survey based on their response, Dr. MacKenzie says.
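As a rough sketch of that real-time routing (the condition labels and function below are assumptions for illustration, not details from the paper), the assignment logic might look like this in Python:

```python
import random

# Hypothetical sketch of the real-time group assignment described above.
# The study's actual survey software is not specified in the article.
UNDER = "underreporting"
OVER = "overreporting"

def assign_condition(initial_view: str) -> str:
    """Route a respondent to a tweet condition based on their stated view.

    Respondents who believe counts are under- or overreported see tweets
    that echo that view; those who think the counts are accurate, or who
    have no opinion, are randomly split between the two groups.
    """
    if initial_view == UNDER:
        return UNDER
    if initial_view == OVER:
        return OVER
    # "accurate" or "no opinion" respondents are assigned at random
    return random.choice([UNDER, OVER])

# Example: a respondent with no opinion lands in one of the two groups
print(assign_condition("no opinion"))
```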
Dr. Weber says the researchers presented participants with two types of flags. The first told participants that the tweet came from a suspected bot account. The second told people that the tweet contained misinformation.
“These flags made people believe that the tweet was less credible, trustworthy, accurate, useful, relevant and interesting,” Dr. Weber says. “People also expressed less willingness to engage with the tweet by liking or sharing it after they saw each flag.”
The order in which participants saw the flags wasn’t randomized, so they always saw the flag about a bot account first.
“Therefore, we can’t say whether the order of flags matters, or whether the misinformation flag is useful by itself,” Dr. Weber says. “But we definitely saw that both flags in succession make people much more skeptical of bad tweets.”
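As a purely illustrative sketch of that fixed ordering (the flag wording, measures list and rating procedure below are assumptions, not the authors’ survey instrument), the presentation sequence might look like this:

```python
# Illustrative only: every participant sees the same fixed sequence,
# the plain tweet, then the bot-account flag, then the misinformation flag.
FLAG_SEQUENCE = [
    None,                                              # tweet shown with no flag
    "This tweet comes from a suspected bot account.",  # hypothetical wording
    "This tweet may contain misinformation.",          # hypothetical wording
]

MEASURES = ["credible", "trustworthy", "accurate", "useful", "relevant", "interesting"]

def run_trial(collect_rating):
    """Show the tweet under each flag condition in fixed order and record ratings.

    `collect_rating(flag, measure)` stands in for however the survey records a
    participant's response on each measure (e.g., a Likert-scale item).
    """
    responses = []
    for flag in FLAG_SEQUENCE:
        ratings = {measure: collect_rating(flag, measure) for measure in MEASURES}
        responses.append({"flag": flag, "ratings": ratings})
    return responses

# Example with a dummy rating function that always answers "3" on a 1-5 scale
print(run_trial(lambda flag, measure: 3)[1]["flag"])
```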
Flags also made most respondents say they were less likely to like or retweet the message or follow the account that created it – but not all.
“Some people showed more immunity to the flags than others,” Dr. Weber says. “For instance, Fox News viewers and those who spent more time on social media were less affected by the flags than others.”
The flags were also less effective at changing participants’ minds about COVID-19 numbers overall, so even people who found the tweet less convincing after seeing the flags might not reexamine their opinion about COVID-19 death counts.
“However,” Dr. Weber says, “some people did change their minds, most notably in the group that initially believed that COVID-19 numbers were overcounted.”
People reported that they were more likely to seek out additional information from unflagged tweets than from flagged ones, Dr. MacKenzie says.
“As a whole, our research would suggest that individuals want to consume social media that is factual, and if mechanisms are in place to allow them to disregard false information, they will ignore it,” Dr. MacKenzie says. “I think the most important takeaway from this research is that identifying misinformation and bot accounts will change social media users’ behaviors.”