Meta allowed pornographic ads that break its content moderation rules
Meta owns social media platforms including Facebook and Instagram (Image: JRdes/Shutterstock)

In 2024, Meta allowed more than 3300 pornographic ads, many featuring AI-generated content, on its social media platforms, including Facebook and Instagram.

The findings come from a report by AI Forensics, a European non-profit organisation focused on investigating tech platform algorithms. The researchers also discovered an inconsistency in Meta's content moderation policies by re-uploading many of the same explicit images as standard posts on Instagram and Facebook. Unlike the ads, those posts were swiftly removed for violating Meta's Community Standards.

"I'm both disappointed and not surprised by the report, given that my research has already exposed double standards in content moderation, particularly in the realms of sexual content," says Carolina Are at Northumbria University's Centre for Digital Citizens in the UK.

The AI Forensics report focused on a small sample of ads aimed at the European Union. It found that the explicit ads allowed by Meta primarily targeted middle-aged and older men with promotions for dubious sexual enhancement products and hook-up dating websites, with a total reach of more than 8.2 million impressions.

Such permissiveness reflects a broader double standard in content moderation, says Are. Tech platforms often block content by and for women, femme-presenting and LGBTQIA+ users, she says. That double standard extends to male and female sexual health. "An example is lingerie and period-related ads being [removed] from Meta, while ads for Viagra are approved," she says.

In addition to finding AI-generated imagery in the ads, the AI Forensics team also discovered audio deepfakes: in some ads for sexual enhancement medication, for example, pornographic visuals were overlaid with the digitally manipulated voice of actor Vincent Cassel.

"Meta prohibits the display of nudity or sexual activity in ads or organic posts on our platforms, and we are removing the violating content that was shared with us," says a Meta spokesperson. "Bad actors are constantly evolving their tactics to avoid enforcement, which is why we continue to invest in the best tools and technology to help identify and remove violating content."

The report coincides with Meta CEO Mark Zuckerberg announcing that the company will be getting rid of its fact-checking teams in favour of crowdsourced community notes.

"If we want to sound really dystopian (and at this stage, given Zuckerberg's latest decision to remove fact-checkers, I think we have reason to be), we can even say that Meta is as quick to strip individual, marginalised users of their agency as it is to take money from dubious ads," says Are.