Why a new anti-revenge porn law has free speech experts alarmed
Privacy and digital rights advocates are raising alarms over a law that many would expect them to cheer: a federal crackdown on revenge porn and AI-generated deepfakes.
The newly signed Take It Down Act makes it illegal to publish nonconsensual explicit images, whether real or AI-generated, and gives platforms just 48 hours to comply with a victim's takedown request or face liability. While widely praised as a long-overdue win for victims, the law has also drawn warnings from experts that its vague language, lax standards for verifying claims, and tight compliance window could pave the way for overreach, censorship of legitimate content, and even surveillance.
"Content moderation at scale is widely problematic and always ends up with important and necessary speech being censored," India McKinney, director of federal affairs at the Electronic Frontier Foundation, a digital rights organization, told TechCrunch.
Online platforms have one year to establish a process for removing nonconsensual intimate imagery (NCII). While the law requires that takedown requests come from victims or their representatives, it asks only for a physical or electronic signature; no photo ID or other form of verification is needed. That likely aims to reduce barriers for victims, but it could also create an opportunity for abuse.
"I really want to be wrong about this, but I think there are going to be more requests to take down images depicting queer and trans people in relationships, and even more than that, I think it's gonna be consensual porn," McKinney said.
Senator Marsha Blackburn, a co-sponsor of the Take It Down Act, also sponsored the Kids Online Safety Act, which puts the onus on platforms to protect children from harmful content online. Blackburn has said she believes content related to transgender people is harmful to kids. Similarly, the Heritage Foundation, the conservative think tank behind Project 2025, has said that "keeping trans content away from children is protecting kids."
Because of the liability that platforms face if they don't take down an image within 48 hours of receiving a request, "the default is going to be that they just take it down without doing any investigation to see if this actually is NCII or if it's another type of protected speech, or if it's even relevant to the person who's making the request," said McKinney.
Snapchat and Meta have both said they are supportive of the law, but neither responded to TechCrunch's requests for more information about how they'll verify whether the person requesting a takedown is a victim.
Mastodon, a decentralized platform that hosts its own flagship server that others can join, told TechCrunch it would lean toward removal if verifying a victim proved too difficult.
Mastodon and other decentralized platforms like Bluesky or Pixelfed may be especially vulnerable to the chilling effect of the 48-hour takedown rule. These networks rely on independently operated servers, often run by nonprofits or individuals. Under the law, the FTC can treat any platform that doesn't "reasonably comply" with takedown demands as committing an "unfair or deceptive act or practice," even if the host isn't a commercial entity.
"This is troubling on its face, but it is particularly so at a moment when the chair of the FTC has taken unprecedented steps to politicize the agency and has explicitly promised to use the power of the agency to punish platforms and services on an ideological, as opposed to principled, basis," the Cyber Civil Rights Initiative, a nonprofit dedicated to ending revenge porn, said in a statement.
Proactive monitoring
McKinney predicts that platforms will start moderating content before it's disseminated so they have fewer problematic posts to take down in the future.
Platforms are already using AI to monitor for harmful content.
Kevin Guo, CEO and co-founder of AI-generated content detection startup Hive, said his company works with online platforms to detect deepfakes and child sexual abuse material. Some of Hive's customers include Reddit, Giphy, Vevo, Bluesky, and BeReal.
"We were actually one of the tech companies that endorsed that bill," Guo told TechCrunch. "It'll help solve some pretty important problems and compel these platforms to adopt solutions more proactively."
Hive's model is software-as-a-service, so the startup doesn't control how platforms use its product to flag or remove content. But Guo said many clients insert Hive's API at the point of upload, so content is screened before it is ever sent out to the community.
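To make that upload-time pattern concrete, here is a minimal sketch of how a platform might gate uploads on a synchronous moderation call. This is a hypothetical illustration, not Hive's actual API: the endpoint, field names, and threshold are invented, and a real integration would follow the vendor's documentation.

```python
import requests

# Hypothetical moderation endpoint and credentials -- placeholders, not a real API.
MODERATION_URL = "https://moderation.example.com/v1/classify"
API_KEY = "YOUR_API_KEY"
BLOCK_THRESHOLD = 0.9  # confidence above which an upload is blocked; tuned per platform

def is_upload_allowed(image_bytes: bytes) -> bool:
    """Screen an image with a moderation API before it is published."""
    try:
        resp = requests.post(
            MODERATION_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": image_bytes},
            timeout=5,
        )
        resp.raise_for_status()
        scores = resp.json()  # e.g. {"ncii": 0.02, "deepfake": 0.11}
    except requests.RequestException:
        # Fail closed: if moderation is unreachable, don't publish.
        return False
    return all(score < BLOCK_THRESHOLD for score in scores.values())
```

The fail-closed default in the sketch reflects the incentive critics describe: when liability attaches to leaving content up, a platform's safest engineering choice is to block first and review later.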
A Reddit spokesperson told TechCrunch the platform uses "sophisticated internal tools, processes, and teams to address and remove" NCII. Reddit also partners with the nonprofit SWGfL to deploy its StopNCII tool, which scans live traffic for matches against a database of known NCII and removes accurate matches. The company did not share how it would ensure that the person requesting the takedown is the victim.
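Hash-matching tools like StopNCII generally work by comparing uploads against fingerprints of previously reported images, so the images themselves never have to be shared. The sketch below shows the basic idea under simplifying assumptions; production systems use perceptual hashes such as PDQ, which survive resizing and re-encoding, rather than the exact-match cryptographic hash used here for brevity.

```python
import hashlib

# Fingerprints of known NCII, distributed by a clearinghouse.
# Placeholder value; only hashes are shared, never the images.
KNOWN_NCII_HASHES: set[str] = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(image_bytes: bytes) -> str:
    """Hash an image. SHA-256 only catches byte-identical copies;
    a real deployment would use a perceptual hash instead."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes) -> bool:
    """Block an upload whose fingerprint matches a known-NCII entry."""
    return fingerprint(image_bytes) in KNOWN_NCII_HASHES
```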
McKinney warns this kind of monitoring could extend into encrypted messages in the future. While the law focuses on public or semi-public dissemination, it also requires platforms to "remove and make reasonable efforts to prevent the reupload" of nonconsensual intimate images. She argues this could incentivize proactive scanning of all content, even in encrypted spaces. The law doesn't include any carve-outs for end-to-end encrypted messaging services like WhatsApp, Signal, or iMessage.
Meta, Signal, and Apple have not responded to TechCrunch's request for more information on their plans for encrypted messaging.
Broader free speech implications
On March 4, Trump delivered a joint address to Congress in which he praised the Take It Down Act and said he looked forward to signing it into law.
"And I'm going to use that bill for myself, too, if you don't mind," he added. "There's nobody who gets treated worse than I do online."
While the audience laughed at the comment, not everyone took it as a joke. Trump hasn't been shy about suppressing or retaliating against unfavorable speech, whether that's labeling mainstream media outlets "enemies of the people," barring The Associated Press from the Oval Office despite a court order, or pulling funding from NPR and PBS.
On Thursday, the Trump administration barred Harvard University from enrolling foreign students, escalating a conflict that began after Harvard refused to adhere to Trump's demands that it change its curriculum and eliminate DEI-related content, among other things. In retaliation, Trump has frozen federal funding to Harvard and threatened to revoke the university's tax-exempt status.
"At a time when we're already seeing school boards try to ban books and we're seeing certain politicians be very explicit about the types of content they don't want people to ever see, whether it's critical race theory or abortion information or information about climate change…it is deeply uncomfortable for us with our past work on content moderation to see members of both parties openly advocating for content moderation at this scale," McKinney said.