The research suggests figures such as tech mogul Elon Musk, a self-proclaimed “free-speech absolutist”, are out of step with how the public wants to resolve moral dilemmas regarding misinformation on social media. Its findings showed people largely support intervention to control the spread of misinformation, especially if it is harmful and shared repeatedly.
Content moderation of online speech is a moral minefield, particularly when freedom of expression conflicts with preventing the harm caused by misinformation. By understanding more about how people think these moral dilemmas should be addressed, the research aims to help shape new rules for content moderation that the public will regard as legitimate.
First author Dr Anastasia Kozyreva, Adaptive Rationality Research Scientist from the Max Planck Institute for Human Development, in Berlin, Germany, said: “So far, social media platforms have been the ones making key decisions on moderating misinformation, which effectively puts them in the position of arbiters of free speech. Moreover, discussions about online content moderation often run hot, but are largely uninformed by empirical evidence.”
As part of the study, more than 2,500 people in the USA took part in a survey experiment in which respondents were shown information about hypothetical social media posts containing misinformation. They were asked to make two choices: whether to remove the post and whether to take punitive action against the account that shared it. Topics of the posts included misinformation about the last US presidential election, anti-vaccination content, Holocaust denial, and climate change denial. Respondents were shown key information about the user and their post, as well as the consequences of the misinformation.
The majority chose to take at least some action to prevent the spread of falsehoods. When asked how to deal with the questionable post, two-thirds (66%) expressed support for deleting it across all scenarios. When asked how to deal with the account behind it, nearly four in five (78%) would intervene, with actions ranging from issuing a warning to temporary or indefinite account suspension.
When given the choice of doing nothing, using a warning, temporary suspension or indefinite suspension, most respondents preferred the option to issue a warning (between 31% and 37% across all four topics).
Not all misinformation types were penalised equally: climate change denial was acted on the least (58%), whereas Holocaust denial (71%) and denial that Joe Biden won the US Presidential election (69%) were acted on more often, closely followed by anti-vaccination content (66%). Across these four specific issues, Republicans were less likely to remove posts and punish accounts than Democrats.
Co-author Professor Stephan Lewandowsky, Chair in Cognitive Psychology at the University of Bristol, in the UK said: “Our results show that so-called free-speech absolutists, such as Elon Musk, are out of touch with public opinion. People by and large recognize that there should be limits to free speech, and that content removal or even deplatforming can be appropriate in extreme circumstances, such as Holocaust denial.”
The study helps demonstrate which factors affect people’s decisions regarding content moderation online. In addition to the topic, the severity of the consequences of the misinformation and whether it was a repeat offense most strongly influenced decisions to remove posts and suspend accounts. Characteristics of the account itself, such as the person behind the account and their partisanship, had little to no effect on respondents’ decisions.
While the number of followers of an account had little effect overall, this aspect reveals an interesting split in how people may view the role of platforms in public discourse. Among respondents who said they prioritise free speech (versus stopping misinformation), the more followers an account had, the less the participants wanted to delete the post or sanction the account. The opposite was true for those who prioritise stopping misinformation: accounts with more followers were more likely to be sanctioned, and there was greater support for removing their posts.
The study was conducted by Anastasia Kozyreva, Ralph Hertwig, Philipp Lorenz-Spreen and Stefan M. Herzog, from the Max Planck Institute for Human Development; Mark Leiser, from the Vrije Universiteit Amsterdam in the Netherlands; Stephan Lewandowsky, from the University of Bristol; and Jason Reifler, from the University of Exeter in the UK.
Prof Reifler, Professor of Political Science, said: “We hope our research can inform the design of transparent rules for content moderation of harmful misinformation. People’s preferences are not the only benchmark for making important trade-offs on content moderation, but ignoring the fact that there is support for taking action against misinformation and the accounts that publish it risks undermining the public’s trust in content moderation policies and regulations.”
Professor Ralph Hertwig, Director at the Center for Adaptive Rationality of the Max Planck Institute for Human Development, said: “To deal adequately with conflicts between free speech and harmful misinformation, we need to know how people handle various forms of moral dilemmas when making decisions about content moderation.”
Dr Mark Leiser, Assistant Professor in Internet Law, added: “Effective and meaningful platform regulation requires not only clear and transparent rules for content moderation, but general acceptance of the rules as legitimate constraints on the fundamental right to free expression. This important research goes a long way to informing policy makers about what is and, more importantly, what is not acceptable user-generated content.”
Paper
‘Resolving content moderation dilemmas between free speech and harmful misinformation’ by Anastasia Kozyreva et al. in PNAS