Does Fact-Checking Work on Social Media?
www.scientificamerican.com
January 15, 2025 | 4 min read

Does Fact-Checking Work? Here's What the Science Says

Communication and misinformation researchers reveal the value of fact-checking, where perceived biases come from and what Meta's decision could mean.

By David Adam & Nature magazine

Meta plans to scrap its third-party fact-checking programme in favour of X-like community notes. PA Images/Alamy Stock Photo

It is said that a lie can fly halfway around the world while the truth is getting its boots on. That trek to challenge online falsehoods and misinformation got a little harder this week, when Facebook's parent company Meta announced plans to scrap the platform's fact-checking programme, which was set up in 2016 and pays independent groups to verify selected articles and posts.

The company said that the move was to counter fact-checkers' political bias and censorship. "Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact-check and how," Meta's chief global-affairs officer Joel Kaplan wrote on 7 January.

Nature spoke to communication and misinformation researchers about the value of fact-checking, where perceived biases come from and what Meta's decision could mean.

Positive influence

In terms of helping to convince people that information is true and trustworthy, fact-checking does work, says Sander van der Linden, a social psychologist at the University of Cambridge, UK, who acted as an unpaid adviser on Facebook's fact-checking programme in 2022. Studies provide very consistent evidence that fact-checking does at least partially reduce misperceptions about false claims.

For example, a 2019 meta-analysis of the effectiveness of fact-checking in more than 20,000 people found a significantly positive overall influence on political beliefs.

"Ideally, we'd want people to not form misperceptions in the first place," adds van der Linden. "But if we have to work with the fact that people are already exposed, then reducing it is almost as good as it's going to get."

Fact-checking is less effective when an issue is polarized, says Jay Van Bavel, a psychologist at New York University in New York City. "If you're fact-checking something around Brexit in the UK or the election in the United States, that's where fact-checks don't work very well," he says. In part, that's because partisans don't want to believe things that make their party look bad.

But even when fact-checks don't seem to change people's minds on contentious issues, they can still be helpful, says Alexios Mantzarlis, a former fact-checker who directs the Security, Trust, and Safety Initiative at Cornell Tech in New York City.

On Facebook, articles and posts deemed false by fact-checkers are currently flagged with a warning. They are also shown to fewer users by the platform's suggestion algorithms, Mantzarlis says, and people are more likely to ignore flagged content than to read and share it.

Flagging posts as problematic could also have knock-on effects on other users that are not captured by studies of the effectiveness of fact-checks, says Kate Starbird, a computer scientist at the University of Washington in Seattle.
"Measuring the direct effect of labels on user beliefs and actions is different from measuring the broader effects of having those fact-checks in the information ecosystem," she adds.

More misinformation, more red flags

Regarding Meta's claims of bias among fact-checkers, Van Bavel agrees that misinformation from the political right does get fact-checked and flagged as problematic on Facebook and other platforms more often than does misinformation from the left. But he offers a simple explanation.

"It's largely because the conservative misinformation is the stuff that is being spread more," he says. "When one party, at least in the United States, is spreading most of the misinformation, it's going to look like fact-checks are biased because they're getting called out way more."

There are data to support this. A study published in Nature last year showed that, although politically conservative people on X, formerly Twitter, were more likely to be suspended from the platform than were liberals, they were also more likely to share information from news sites that were judged as low quality by a representative group of laypeople.

"If you wanted to know whether a person is exposed to misinformation online, knowing if they're politically conservative is your best predictor of that," says Gordon Pennycook, a psychologist at Cornell University in Ithaca, New York, who worked on that analysis.

Implementation matters

Meta's chief executive Mark Zuckerberg has said that, in place of third-party fact-checking, Facebook could adopt a system similar to the community notes used by X, in which corrections and context are crowdsourced from users and added to posts.

Research shows that those systems can also work to correct misinformation, up to a point. "The way it's been implemented on X actually doesn't work very well," says van der Linden. He points to an analysis done last year that found the community notes on X were often added to problematic posts too late to reduce engagement, because they came after false claims had already spread widely. X's vice-president of product Keith Coleman told Reuters last year that community notes "maintains a high bar to make notes effective and maintain trust".

"Crowdsourcing is a useful solution, but in practice it very much depends on how it's implemented," van der Linden adds. "Replacing fact-checking with community notes just seems like it would make things a lot worse."

This article is reproduced with permission and was first published on January 10, 2025.