Bug That Showed Violent Content in Instagram Feeds Is Fixed, Meta Says
www.cnet.com
Meta, the parent company of Instagram, apologized on Thursday for the violent, graphic content some users saw on their Instagram Reels feeds. Meta attributed the problem to an error the company says has been addressed.

"We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended," a Meta spokesperson said in a statement provided to CNET. "We apologize for the mistake."

Meta went on to say that the incident was an error unrelated to any content-policy changes the company has made. At the start of the year, Instagram made some significant changes to its user and content-creation policies, but those changes didn't specifically address content filtering or inappropriate content appearing in feeds.

Meta made its own content-moderation changes more recently and has dismantled its fact-checking department in favor of community-driven moderation. Amnesty International warned earlier this month that Meta's changes could raise the risk of fueling violence.

Read more: Instagram May Spin Off Reels As a Standalone App, Report Says

Meta says that most graphic or disturbing imagery it flags is removed and replaced with a warning label users must click through to view the imagery. Some content, Meta says, is also filtered for users younger than 18. The company says it develops its policies around violent and graphic imagery with the help of international experts and that refining those policies is an ongoing process.

Users posted on social media and on message boards, including Reddit, about some of the unwanted imagery they saw on Instagram, presumably due to the glitch. The imagery included shootings, beheadings, people being struck by vehicles and other violent acts.

Brooke Erin Duffy, a social-media researcher and associate professor at Cornell University, said she's unconvinced by Meta's claims that the violent-content issue was unrelated to its policy changes.

"Content moderation systems -- whether powered by AI or human labor -- are never failsafe," Duffy told CNET. "And while many speculated that Meta's moderation overhaul (announced last month) would create heightened risks and vulnerabilities, yesterday's 'glitch' provided firsthand evidence of the costs of a less-restrained platform."

Duffy added that while moderating social-media platforms is difficult, "platforms' moderation guidelines have served as safety mechanisms for users, especially those from marginalized communities. Meta's replacement of its existing system with a 'community notes' feature represents a step backward in terms of user protection."