Why Meta's Fact-Checking Change Could Lead to More Misinformation on Facebook and Instagram
Mark Zuckerberg, chief executive officer of Meta Platforms Inc., during the Meta Connect event in Menlo Park, California, on Wednesday, Sept. 25, 2024. David Paul Morris/Bloomberg via Getty Images

By Andrew R. Chow
January 7, 2025 2:48 PM EST

Less than two weeks before Donald Trump returns to the presidency, Meta is abandoning its fact-checking program in favor of a crowdsourced model that emphasizes free expression. The shift marks a profound change in how the company moderates content on its platforms, and has sparked fierce debate over its implications for misinformation and hate speech online.

Meta, which operates Facebook, Instagram, and Threads, had long funded fact-checking efforts to review content. But many Republicans chafed against those policies, arguing that they disproportionately stifled right-wing thought. Last year, Trump threatened Meta CEO Mark Zuckerberg with prison time if the company interfered in the 2024 election.

Since Trump's electoral victory, Zuckerberg has tried to mend the relationship by donating $1 million (through Meta) to Trump's inaugural fund and promoting longtime conservative Joel Kaplan to become Meta's new global policy chief. This policy change is one of the first major decisions made under Kaplan's leadership, and follows the model of Community Notes championed by Trump ally Elon Musk on X.

Zuckerberg, in a video statement, acknowledged that the policy change might mean that "we're going to catch less bad stuff." When asked at a press conference Tuesday whether he thought Meta's change was a response to his previous threats, Trump said: "Probably."

While conservatives and free-speech activists praised the decision, watchdogs and social media experts warned of its ripple effects on the spread of misinformation. "This type of wisdom-of-the-crowd approach can be really valuable," says Valerie Wirtschafter, a fellow at the Brookings Institution. "But doing so without proper testing and viewing its viability around scale is really, really irresponsible. Meta's already having a hard time dealing with bad content as it is, and it's going to get even worse."

Facebook and misinformation

Meta's checkered history of combating misinformation underscores the challenges ahead. In 2016, the company launched a fact-checking program amid widespread concerns over the platform's impact on the U.S. elections. Researchers would later uncover that the political analysis company Cambridge Analytica had harvested the private data of more than 50 million Facebook users as part of a campaign to support Trump.

As part of the fact-checking program, Facebook relied on outside organizations like the Associated Press and Snopes to review posts and either remove them or add an annotation. But the company's efforts still fell short in many ways. Amnesty International found that Meta's algorithms and lack of content moderation substantially contributed to the violence inflicted on the Rohingya people in Myanmar beginning in 2017.

In 2021, a study found that Facebook could have prevented billions of views on pages that shared misinformation related to the 2020 election, but failed to tweak its algorithms. Some of those pages glorified violence in the lead-up to the Jan. 6, 2021 attack on the U.S. Capitol, the study found. (Facebook called the report's methodology flawed.) The day after the Capitol riot, Zuckerberg banned Trump from Facebook, writing that "the risks of allowing the President to continue to use our service during this period are simply too great."

But as critics clamored for more moderation on Meta platforms, a growing contingent stumped for less.
In particular, some Republicans felt that Meta's fact-checking partners were biased against them. Many were especially incensed when Facebook, under pressure from Biden Administration officials, cracked down on disputed COVID-19 information, including claims that the virus had man-made origins. Some U.S. intelligence officials subsequently lent support to the lab-leak theory, prompting Facebook to reverse the ban. As criticism from both sides grew, Zuckerberg decided to reduce his risk by simply deprioritizing news on Meta platforms.

Pivoting to Community Notes

As Zuckerberg and Meta weathered criticism over their fact-checking tactics, billionaire Tesla CEO Elon Musk took a different approach after acquiring Twitter in 2022, replacing much of its content moderation with the crowdsourced Community Notes program. Twitter, which Musk quickly renamed X, ended free access to its API, making it harder for researchers to study how Community Notes affected the spread of hate speech and misinformation on the platform. But several studies have been conducted on the topic, and their findings are mixed. One scientific study found that Community Notes on X were effective in combating misinformation about COVID-19 vaccines, and that they cited high-quality sources in doing so. Conversely, the Center for Countering Digital Hate found in October that the majority of accurate Community Notes were never shown to all users, allowing the original false claims to spread unchecked. Those misleading posts, which included claims that Democrats were "importing" illegal voters and that the 2020 election was stolen from Trump, racked up billions of views, the study found.

Now, Meta will attempt to replicate a similar system on its own platforms, starting in the U.S. Zuckerberg and Kaplan, in announcing the decision, did little to hide its political valence. Kaplan, previously George W. Bush's deputy chief of staff, announced the decision on Fox & Friends, saying it would "reset the balance in favor of free expression." Zuckerberg, who recently visited Trump at Mar-a-Lago, contended in a video statement that "the fact checkers have just been too politically biased, and have destroyed more trust than they've created." He added that restrictions on controversial topics like immigration and gender would be removed.

Meta's announcement was received positively by Trump. "I thought it was a very good news conference. Honestly, I think they've come a long way," he said on Tuesday about the change. Meta's decision may also alter the calculus for congressional Republicans who have been pushing to pass legislation cracking down on social media, or attempting to rewrite Section 230 of the Communications Decency Act, which protects tech platforms from lawsuits over content posted by their users.

Many journalists and misinformation researchers responded with dismay. "Facebook and Instagram users are about to see a lot more dangerous misinformation in their feeds," Public Citizen wrote on X. The tech journalist Kara Swisher wrote that Zuckerberg's scapegoating of fact-checkers was misplaced: "Toxic floods of lies on social media platforms like Facebook have destroyed trust, not fact checkers," she wrote on Bluesky.

Wirtschafter, at the Brookings Institution, says that Meta's pivot toward Community Notes isn't necessarily dangerous on its own. She wrote a paper in 2023 with Sharanya Majumder which found that although X's Community Notes faced challenges in reaching consensus around political content, the program's quality improved as the company tinkered with it, and as its contributor base expanded.
"It's a very nuanced program with a lot of refinement over years," she says.

Meta, in contrast, seems to be rolling out the program with far less preparation, Wirtschafter says. Adding to the challenge will be creating systems that are fine-tuned to each of Meta's platforms: Facebook, Instagram, and Threads are all distinct in their content and user bases. "Meta already has a spam problem and an AI-generated content problem," Wirtschafter says. "Content moderation is good for business in some sense: It helps clear some of that muck that Meta is already having a hard time dealing with as it is. Thinking that the wisdom-of-the-crowd approach is going to work immediately for the problems they face is pretty naive."

Luca Luceri, a research assistant professor at the University of Southern California, says that Meta's larger pivot away from content moderation, which Zuckerberg signaled in his announcement video, is just as concerning as the removal of fact-checking. "The risk is that any form of manipulation can be exacerbated or amplified, like influence campaigns from foreign actors, or bots which can be used to write Community Notes," he says. "And there are other forms of content besides misinformation, for instance related to eating disorders or mental health or self-harm, that still need some moderation."

The shift may also imperil the fact-checking organizations that relied on Meta's program for funding, according to Poynter. The end of those partnerships could deliver a significant blow to an already underfunded sector.