People will share misinformation that sparks moral outrage
People can tell it's not true, but if they're outraged by it, they'll share anyway.

Jacek Krywko | Dec 2, 2024 3:18 pm
Credit: Ricardo Mendoza Garbayo

Rob Bauer, the chair of a NATO military committee, reportedly said, "It is more competent not to wait, but to hit launchers in Russia in case Russia attacks us. We must strike first." These comments, supposedly made in 2024, were later interpreted as suggesting NATO should attempt a preemptive strike against Russia, an idea that lots of people found outrageously dangerous.

But lots of people also missed one thing about the quote: Bauer never said it. It was made up. Despite that, the purported statement got nearly 250,000 views on X and was mindlessly spread further by the likes of Alex Jones.

Why do stories like this get so many views and shares? "The vast majority of misinformation studies assume people want to be accurate, but certain things distract them," says William J. Brady, a researcher at Northwestern University. "Maybe it's the social media environment. Maybe they're not understanding the news, or the sources are confusing them. But what we found is that when content evokes outrage, people are consistently sharing it without even clicking into the article." Brady co-authored a study on how misinformation exploits outrage to spread online. When we get outraged, the study suggests, we simply care far less whether what's got us outraged is even real.

Tracking the outrage

The rapid spread of misinformation on social media has generally been explained by something you might call an error theory: the idea that people share misinformation by mistake. Based on that, most solutions to the misinformation problem relied on prompting users to focus on accuracy and to think carefully about whether they really wanted to share stories from dubious sources. Those prompts, however, haven't worked very well. To get to the root of the problem, Brady's team analyzed data that tracked over 1 million links on Facebook and nearly 45,000 posts on Twitter from periods ranging from 2017 to 2021.

Parsing through the Twitter data, the team used a machine-learning model to predict which posts would cause outrage. It was trained on 26,000 tweets posted around 2018 and 2019. "We got raters from across the political spectrum, we taught them what we meant by outrage, and got them to label the data we later used to train our model," Brady says.

The purpose of the model was to predict whether a message was an expression of moral outrage, an emotional state defined in the study as a mixture of anger and disgust triggered by perceived moral transgressions. After training, the AI was effective. "It performed as good as humans," Brady claims. The Facebook data was a bit trickier because the team did not have access to comments; all they had to work with were reactions, and the reaction they chose as a proxy for outrage was anger. Once the data was sorted into outrageous and not-outrageous categories, Brady and his colleagues went on to determine whether the content was trustworthy news or misinformation.

"We took what is now the most widely used approach in the science of misinformation, which is a domain classification approach," Brady says. The process boiled down to compiling a list of domains with very high and very low trustworthiness based on work done by fact-checking organizations.
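The paper doesn't ship a ready-made pipeline with the article, but the shape of the two ingredients described above is easy to sketch: a text classifier trained on rater-labeled posts, plus a domain-based trust lookup. The snippet below is a rough, hypothetical illustration only; the example tweets, domain lists, helper names, and model choice are invented stand-ins, not the study's actual data or code.

```python
# Hypothetical sketch of an outrage classifier plus domain-based trust labels.
# The tweets, domain lists, and model below are invented stand-ins.
from urllib.parse import urlparse

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1) Rater-labeled posts (1 = expresses moral outrage, 0 = does not).
labeled_tweets = [
    ("This verdict is an absolute disgrace. They should all be in jail.", 1),
    ("New transit schedule starts Monday; check the updated route map.", 0),
    # ... the study used roughly 26,000 rater-labeled tweets
]
texts, labels = zip(*labeled_tweets)

# 2) A simple text classifier standing in for the study's outrage model.
outrage_model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
outrage_model.fit(texts, labels)

# 3) Domain-classification approach: label a link's source using trust lists
#    compiled from fact-checking organizations (illustrative entries only).
HIGH_TRUST = {"chicago.suntimes.com", "apnews.com"}
LOW_TRUST = {"example-misinfo-site.com"}

def source_label(url: str) -> str:
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in HIGH_TRUST:
        return "trustworthy"
    if domain in LOW_TRUST:
        return "misinformation"
    return "unclassified"

# 4) Combine the two signals: is an outrage-evoking post linking to a
#    low-trust source?
post_text = "Can you believe they did this?! Absolutely disgusting."
post_url = "https://example-misinfo-site.com/story"
is_outraged = outrage_model.predict([post_text])[0] == 1
print(is_outraged, source_label(post_url))
```

In the real study, the classifier was trained on the 26,000 rater-labeled tweets mentioned above and judged against human performance; the toy model here simply shows where that labeled data and the fact-checker-derived domain lists would plug in.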
This way, for example, the Chicago Sun-Times ended up classified as trustworthy; Breitbart, not so much. "One of the issues there is that you could have a source that produces misinformation which one time produced a true story. We accepted that. We went with statistics and general rules," Brady acknowledged. His team confirmed that sources classified in the study as misinformation produced news that was fact-checked as false six to eight times more often than reliable domains, which Brady's team thought was good enough to work with.

Finally, the researchers started analyzing the data to answer questions like whether misinformation sources evoke more outrage, whether outrageous news was shared more often than non-outrageous news, and what reasons people had for sharing outrageous content. And that's when the idealized picture of honest, truthful citizens who shared misinformation only because they were too distracted to recognize it started to crack.

Going with the flow

The Facebook and Twitter data analyzed by Brady's team revealed that misinformation evoked more outrage than trustworthy news. At the same time, people were far more likely to share outrageous content, regardless of whether it was misinformation or not. Putting those two trends together led the team to conclude that outrage primarily boosted the spread of fake news, since reliable sources usually produced less outrageous content.

"What we know about human psychology is that our attention is drawn to things rooted in deep biases shaped by evolutionary history," Brady says. "Those things are emotional content, surprising content, and especially content that is related to the domain of morality." Moral outrage is expressed in response to perceived violations of moral norms. "This is our way of signaling to others that the violation has occurred and that we should punish the violators. This is done to establish cooperation in the group," Brady explains.

This is why outrageous content has an advantage in the social media attention economy. It stands out, and standing out is a precursor to sharing. But there are other reasons we share outrageous content. "It serves very particular social functions," Brady says. "It's a cheap way to signal group affiliation or commitment."

Cheap, however, didn't mean completely free. The team found that the penalty for sharing misinformation, outrageous or not, was a loss of reputation; spewing nonsense doesn't make you look good, after all. The question was whether people really shared fake news because they failed to identify it as such, or whether they simply considered signaling their affiliation more important than the truth.

Flawed human nature

Brady's team designed two behavioral experiments in which 1,475 people were presented with a selection of fact-checked news stories curated to contain both outrageous and non-outrageous content; they were also given both reliable news and misinformation. In both experiments, participants were asked to rate how outrageous the headlines were.

The second task was different, though. In the first experiment, people were simply asked to rate how likely they were to share a headline, while in the second they were asked to determine whether the headline was true or not.

It turned out that most people could discern between true and fake news. Yet they were willing to share outrageous news regardless of whether it was true or not, a result in line with the earlier findings from the Facebook and Twitter data.
Many participants were perfectly OK with sharing outrageous headlines, even though they were fully aware those headlines were misinformation.

Brady pointed to an example from the recent campaign, when a reporter pressed J.D. Vance about false claims regarding immigrants eating pets. "When the reporter pushed him, he implied that yes, it was fabrication, but it was outrageous and spoke to the issues his constituents were mad about," Brady says. "These experiments show that this kind of dishonesty is not exclusive to politicians running for office; people do this on social media all the time."

The urge to signal a moral stance quite often takes precedence over truth, but misinformation is not exclusively due to flaws in human nature. "One thing this study was not focused on was the impact of social media algorithms," Brady notes. Those algorithms usually boost content that generates engagement, and we tend to engage more with outrageous content. This, in turn, incentivizes people to make their content more outrageous to get that algorithmic boost.

Science, 2024. DOI: 10.1126/science.adl2829

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.