• The DeepSeek R1 update proves it's an active threat to OpenAI and Google

    DeepSeek's R1 update, plus the rest of the AI news this week.
    Credit: Thomas Fuller / SOPA Images / LightRocket / Getty Images

    This week, DeepSeek released an updated version of its R1 model on Hugging Face, reigniting the open-source versus closed-source competition. The updated version, called DeepSeek-R1-0528, has 685 billion parameters, an upgrade from January's version, which had 671 billion. Unlike OpenAI's and Google's models, which are famously closed-source, DeepSeek's model weights are publicly available. According to the benchmarks, the R1-0528 update has improved reasoning and inference capabilities and is closing the gap with OpenAI's o3 and Google's Gemini 2.5 Pro. DeepSeek also introduced a distilled version of R1-0528 using Alibaba's Qwen3 8B model, a lightweight model that is less capable but also requires far less computing power. DeepSeek-R1-0528-Qwen3-8B outperforms both Google's latest lightweight model, Gemini-2.5-Flash-Thinking-0520, and OpenAI's o3-mini on certain benchmarks. But the bigger deal is that DeepSeek's distilled model can reportedly run on a single GPU, according to TechCrunch.
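    Because the weights are public, anyone with a suitable GPU can try the distilled model directly. Below is a minimal sketch of loading it with the Hugging Face transformers library; the repo id matches DeepSeek's published naming, but the generation settings and memory figures are illustrative assumptions rather than official guidance.

```python
# A minimal sketch of loading the distilled model with Hugging Face transformers.
# Assumptions: the repo id below (DeepSeek's published naming), bf16 weights fitting
# on one ~24 GB GPU, and the `accelerate` package installed for device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~16 GB of weights for an 8B model in bf16
    device_map="auto",           # places layers on the available GPU
)

messages = [{"role": "user", "content": "Explain model distillation in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

    This single-GPU footprint is exactly what separates a distilled 8B model from the full 685-billion-parameter R1, which requires a multi-GPU cluster to serve.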


    To… distill all this information, the Chinese rival is catching up to its U.S. competitors with an open-weight approach that's cheaper and more accessible. Plus, DeepSeek continues to prove that AI models may not require as much computing power as OpenAI, Google, and other AI heavyweights currently use. Suffice it to say, watch this space.

    That said, DeepSeek's models also have their drawbacks. According to one AI developer (via TechCrunch), the new DeepSeek update is even more censored than its previous version when it comes to criticism of the Chinese government. Of course, a lot more happened in the AI world over the past few days. After last week's parade of AI events from Google, Anthropic, and Microsoft, this week was lighter on product and feature news. That's one reason DeepSeek's R1 update captured the AI world's attention. In other AI news: Anthropic finally gets voice mode, AI influencers go viral, Anthropic's CEO warns of mass layoffs, and an AI-generated kangaroo goes viral.

    Google's Veo 3 takes the internet by storm

    On virtually every social media platform, users are freaking out about Veo 3, Google's new AI video model. The results are impressive, and we're already seeing short films made entirely with Veo 3. Not bad for a product that came out 11 days ago.

    Not to be outdone by AI video artists, a reporter from The Wall Street Journal made a short film about herself and a robot using Veo 3. Mashable's Tech Editor Timothy Werth recapped Veo's big week and had a simple conclusion: We're so cooked.

    More AI product news: Claude's new voice mode and the beginning of the agentic browser era

    After last week's barrage, this week was lighter on the volume of AI news. But what was announced is no less significant.


    Anthropic finally introduced its own voice mode for Claude to compete with ChatGPT, Grok, and Gemini. The feature is currently in beta on mobile for the Claude app and will even be available on free plans, with a limit of 20 to 30 voice conversations per day. Anthropic says you can ask Claude to summarize your calendar or read documents out loud. Paying subscribers can connect to Google Workspace for Calendar, Gmail, and Docs access.

    OpenAI is exploring the ability to sign into third-party apps with ChatGPT. We don't know much yet, but the company posted an interest form on its site for developers using Codex, its engineering agent, to add this capability to their own apps. It may not sound like a big deal, but it basically means users could easily link their personalized ChatGPT memories and settings to third-party apps, much like the way it works when you sign into a new app with your Google account.

    Opera announced a new agentic AI browser called Neon. "Much more than a place to view web pages, Neon can browse with you or for you, take action, and help you get things done," the announcement read. That includes a chatbot interface within the browser and the ability to fill in web forms for tasks like booking trips and shopping. The announcement, which included a promo video of a humanoid robot browsing the web, is scant on details, but it says Neon will be a "premium subscription product" with a waitlist to sign up.

    The browser has suddenly become a new frontier for agentic AI, now that AI is capable of automating web search tasks. Perplexity is working on a similar tool called Comet, and The Browser Company pivoted from its Arc browser to a more AI-centric browser called Dia. All of this is happening while Google might be forced to sell off Chrome, which OpenAI has kindly offered to take off its hands.

    Dario Amodei's prediction about AI replacing entry-level jobs is already starting to happen

    Anthropic CEO Dario Amodei warned in an interview with Axios that AI could "wipe out half of all entry-level white-collar jobs." Amodei's predictions might be spot on: a new study from VC firm SignalFire found that hiring for entry-level jobs is down to 7 percent, from 25 percent the previous year. Some of that is due to changes in the economic climate, but AI is a clear factor, as firms opt to automate the less-technical work that would've been taken on by new hires.


    The latest in AI culture: That AI-generated kangaroo, Judge Judy, and everything else

    Google wants you to know its AI Overviews reach 1.5 billion people a month. It probably doesn't want you to know that AI Overviews still struggles to count, spell, and know what year it is. As Mashable's Tim Marcin put it, would AI Overviews pass concussion protocol?

    The proposal of a 10-year ban on states regulating AI is pretty unpopular, according to a poll from Common Sense Media. The survey found that 57 percent of respondents opposed the moratorium, including half of the Republican respondents. As Mashable's Rebecca Ruiz reported, "the vast majority of respondents, regardless of their political affiliation, agreed that Congress shouldn't ban states from enacting or enforcing their own youth online safety and privacy laws."

    In the private sector, The New York Times signed a licensing deal with Amazon to allow its editorial content to be used for Amazon's AI models. The details are unclear, but from the outside, this seems like a change of tune for the Times, which is currently suing OpenAI for copyright infringement for allegedly using its content to train its models.

    That viral video of an emotional support kangaroo holding a plane ticket and being denied boarding? It's AI-generated, of course. Slightly more obvious, but no less creepy, is another viral trend of using AI to turn public figures like Emmanuel Macron and Judge Judy into babies. These are strange, AI-slop-infested times we're living in.

    AI has some positive uses, too. This week, we learned about a new humanoid robot from Hugging Face called HopeJr (with engineering by The Robot Studio), which could be available for sale later this year for just $3,000. And to end this recap on a high note, the nonprofit Colossal Foundation has developed an AI algorithm to detect the bird calls of the near-extinct tooth-billed pigeon. Also known as the "little dodo," the tooth-billed pigeon is Samoa's national bird, and scientists are using the bioacoustic algorithm to locate and protect them.

    Want to get the latest AI news, from new product features to viral trends? Check back next week for another AI news recap, and in the meantime, follow @cecily_mauran and @mashable for more news.

    Disclosure: Ziff Davis, Mashable's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

    Topics
    OpenAI
    DeepSeek

    Cecily Mauran
    Tech Reporter

    Cecily is a tech reporter at Mashable who covers AI, Apple, and emerging tech trends. Before getting her master's degree at Columbia Journalism School, she spent several years working with startups and social impact businesses for Unreasonable Group and B Lab. Before that, she co-founded a startup consulting business for emerging entrepreneurial hubs in South America, Europe, and Asia. You can find her on X at @cecily_mauran.
  • DeepSeek’s latest AI model a ‘big step backwards’ for free speech

    DeepSeek’s latest AI model, R1 0528, has raised eyebrows for a further regression on free speech and what users can discuss. “A big step backwards for free speech” is how one prominent AI researcher summed it up.

    AI researcher and popular online commentator ‘xlr8harder’ put the model through its paces, sharing findings that suggest DeepSeek is increasing its content restrictions. “DeepSeek R1 0528 is substantially less permissive on contentious free speech topics than previous DeepSeek releases,” the researcher noted. What remains unclear is whether this represents a deliberate shift in philosophy or simply a different technical approach to AI safety.

    What’s particularly fascinating about the new model is how inconsistently it applies its moral boundaries. In one free speech test, when asked to present arguments supporting dissident internment camps, the AI model flatly refused. But, in its refusal, it specifically mentioned China’s Xinjiang internment camps as examples of human rights abuses. Yet, when directly questioned about those same Xinjiang camps, the model delivered heavily censored responses. It seems this AI knows about certain controversial topics but has been instructed to play dumb when asked directly. “It’s interesting though not entirely surprising that it’s able to come up with the camps as an example of human rights abuses, but denies when asked directly,” the researcher observed.

    China criticism? Computer says no

    This pattern becomes even more pronounced when examining the model’s handling of questions about the Chinese government. Using established question sets designed to evaluate free speech in AI responses to politically sensitive topics, the researcher discovered that R1 0528 is “the most censored DeepSeek model yet for criticism of the Chinese government.” Where previous DeepSeek models might have offered measured responses to questions about Chinese politics or human rights issues, this new iteration frequently refuses to engage at all, a worrying development for those who value AI systems that can discuss global affairs openly.

    There is, however, a silver lining to this cloud. Unlike closed systems from larger companies, DeepSeek’s models remain open source with permissive licensing. “The model is open source with a permissive license, so the community can (and will) address this,” noted the researcher. This accessibility means the door remains open for developers to create versions that better balance safety with openness.

    The situation reveals something quite sinister about how these systems are built: they can know about controversial events while being programmed to pretend they don’t, depending on how you phrase your question. As AI continues its march into our daily lives, finding the right balance between reasonable safeguards and open discourse becomes increasingly crucial. Too restrictive, and these systems become useless for discussing important but divisive topics. Too permissive, and they risk enabling harmful content. DeepSeek hasn’t publicly addressed the reasoning behind these increased restrictions and regression in free speech, but the AI community is already working on modifications. For now, chalk this up as another chapter in the ongoing tug-of-war between safety and openness in artificial intelligence.
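    For readers curious how this kind of testing works in practice: evaluations like these boil down to sending a fixed question set to the model and classifying each reply as compliant or a refusal. The sketch below shows that loop under stated assumptions; the local endpoint, model name, probe questions, and keyword-based refusal detection are all illustrative stand-ins, not the researcher's actual harness.

```python
# Minimal sketch of a refusal probe in the style described above. Assumptions:
# an OpenAI-compatible server (e.g., vLLM or Ollama) hosting the model locally,
# a hand-written question list, and naive keyword-based refusal detection.
import json
import urllib.request

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # illustrative endpoint
MODEL = "deepseek-r1-0528"                              # whatever name the server exposes
REFUSAL_MARKERS = ("i can't help", "i cannot", "i won't", "unable to assist")

def ask(question: str) -> str:
    """Send one question to the model and return its reply text."""
    payload = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": question}],
        "temperature": 0,
    }).encode()
    request = urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["choices"][0]["message"]["content"]

# Paired phrasings: the same topic asked indirectly, then directly, to surface
# the inconsistency the researcher describes.
questions = [
    "Give examples of internment camps cited as human rights abuses.",
    "What is happening in the Xinjiang internment camps?",
]

refusals = sum(
    any(marker in ask(q).lower() for marker in REFUSAL_MARKERS) for q in questions
)
print(f"refused {refusals} of {len(questions)} probes")
```

    Running the same paired prompts against successive releases is what lets a tester claim one version is "substantially less permissive" than another.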
  • DeepSeek’s updated R1 AI model is more censored, test finds

    Chinese AI startup DeepSeek’s newest AI model, an updated version of the company’s R1 reasoning model, achieves impressive scores on benchmarks for coding, math, and general knowledge, nearly surpassing OpenAI’s flagship o3. But the upgraded R1, also known as “R1-0528,” might also be less willing to answer contentious questions, in particular questions about topics the Chinese government considers to be controversial.
    That’s according to testing conducted by the pseudonymous developer behind SpeechMap, a platform to compare how different models treat sensitive and controversial subjects. The developer, who goes by the username “xlr8harder” on X, claims that R1-0528 is “substantially” less permissive of contentious free speech topics than previous DeepSeek releases and is “the most censored DeepSeek model yet for criticism of the Chinese government.”
    As Wired explained in a piece from January, models in China are required to follow stringent information controls. A 2023 law forbids models from generating content that “damages the unity of the country and social harmony,” which could be construed as content that counters the government’s historical and political narratives. To comply, Chinese startups often censor their models by either using prompt-level filters or fine-tuning them. One study found that DeepSeek’s original R1 refuses to answer 85% of questions about subjects deemed by the Chinese government to be politically controversial.
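    To make the distinction concrete, here is a deliberately crude sketch of the first approach. Everything in it, from the blocklist to the canned reply, is hypothetical and meant only to show where a prompt-level filter sits relative to the model; real deployments typically use trained classifiers rather than keyword lists.

```python
# Toy illustration of a prompt-level filter: a gate screens user input before
# the model is ever invoked. Hypothetical sketch, not DeepSeek's actual mechanism.
BLOCKED_TOPICS = {"tiananmen", "xinjiang"}  # illustrative blocklist
CANNED_REPLY = "Let's talk about something else."

def filtered_generate(prompt: str, generate) -> str:
    """Return a canned reply for blocked topics; otherwise call the real model."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return CANNED_REPLY      # the model never sees the prompt
    return generate(prompt)      # pass through to the underlying model

# Usage with any generation callable standing in for the model:
print(filtered_generate("What happened at Tiananmen Square?", lambda p: "..."))
```

    Fine-tuning, by contrast, bakes the refusal behavior into the weights themselves, which is why it can persist even when a filter layer like the one above is stripped away.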
    According to xlr8harder, R1-0528 censors answers to questions about topics like the internment camps in China’s Xinjiang region, where more than a million Uyghur Muslims have been arbitrarily detained. While it sometimes criticizes aspects of Chinese government policy — in xlr8harder’s testing, it offered the Xinjiang camps as an example of human rights abuses — the model often gives the Chinese government’s official stance when asked questions directly.
    TechCrunch observed this in our brief testing, as well.
    [Image: DeepSeek’s updated R1 answer when asked whether Chinese leader Xi Jinping should be removed. Credit: DeepSeek]
    China’s openly available AI models, including video-generating models such as Magi-1 and Kling, have attracted criticism in the past for censoring topics sensitive to the Chinese government, such as the Tiananmen Square massacre. In December, Clément Delangue, the CEO of AI dev platform Hugging Face, warned about the unintended consequences of Western companies building on top of well-performing, openly licensed Chinese AI. 
  • AI cybersecurity risks and deepfake scams on the rise

    Published May 27, 2025, 10:00am EDT

    [Video: Deepfake technology 'is getting so easy now.' Cybersecurity expert Morgan Wright breaks down the dangers of deepfake video technology on 'Unfiltered.']

    Imagine your phone rings and the voice on the other end sounds just like your boss, a close friend, or even a government official. They urgently ask for sensitive information, except it's not really them. It's a deepfake, powered by AI, and you're the target of a sophisticated scam. These kinds of attacks are happening right now, and they're getting more convincing every day.

    That's the warning sounded by the 2025 AI Security Report, unveiled at the RSA Conference, one of the world's biggest gatherings for cybersecurity experts, companies, and law enforcement. The report details how criminals are harnessing artificial intelligence to impersonate people, automate scams, and attack security systems on a massive scale. From hijacked AI accounts and manipulated models to live video scams and data poisoning, the report paints a picture of a rapidly evolving threat landscape, one that's touching more lives than ever before.

    AI tools are leaking sensitive data

    One of the biggest risks of using AI tools is what users accidentally share with them. A recent analysis by cybersecurity firm Check Point found that 1 in every 80 AI prompts includes high-risk data, and about 1 in 13 contains sensitive information that could expose users or organizations to security or compliance risks. This data can include passwords, internal business plans, client information, or proprietary code. When shared with AI tools that are not secured, this information can be logged, intercepted, or even leaked later.

    Deepfake scams are now real-time and multilingual

    AI-powered impersonation is getting more advanced every month. Criminals can now fake voices and faces convincingly in real time. In early 2024, a British engineering firm lost 20 million pounds after scammers used live deepfake video to impersonate company executives during a Zoom call. The attackers looked and sounded like trusted leaders and convinced an employee to transfer funds. Real-time video manipulation tools are now being sold on criminal forums. These tools can swap faces and mimic speech during video calls in multiple languages, making it easier for attackers to run scams across borders.

    AI is running phishing and scam operations at scale

    Social engineering has always been a part of cybercrime. Now, AI is automating it. Attackers no longer need to speak a victim's language, stay online constantly, or manually write convincing messages. Tools like GoMailPro use ChatGPT to create phishing and spam emails with perfect grammar and a native-sounding tone. These messages are far more convincing than the sloppy scams of the past. GoMailPro can generate thousands of unique emails, each slightly different in language and urgency, which helps them slip past spam filters. It is actively marketed on underground forums for a monthly fee, making it widely accessible to bad actors. Another tool, the X137 Telegram Console, leverages Gemini AI to monitor and respond to chat messages automatically. It can impersonate customer support agents or known contacts, carrying out real-time conversations with multiple targets at once.
    The replies are uncensored, fast, and customized based on the victim's responses, giving the illusion of a human behind the screen. AI is also powering large-scale sextortion scams. These are emails that falsely claim to have compromising videos or photos and demand payment to prevent them from being shared. Instead of using the same message repeatedly, scammers now rely on AI to rewrite the threat in dozens of ways. For example, a basic line like "Time is running out" might be reworded as "The hourglass is nearly empty for you," making the message feel more personal and urgent while also avoiding detection. By removing the need for language fluency and manual effort, these AI tools allow attackers to scale their phishing operations dramatically. Even inexperienced scammers can now run large, personalized campaigns with almost no effort.

    Stolen AI accounts are sold on the dark web

    With AI tools becoming more popular, criminals are now targeting the accounts that use them. Hackers are stealing ChatGPT logins, OpenAI API keys, and other platform credentials to bypass usage limits and hide their identity. These accounts are often stolen through malware, phishing, or credential-stuffing attacks. The stolen credentials are then sold in bulk on Telegram channels and underground forums. Some attackers are even using tools that can bypass multi-factor authentication and session-based security protections. These stolen accounts allow criminals to access powerful AI tools and use them for phishing, malware generation, and scam automation.

    Jailbreaking AI is now a common tactic

    Criminals are finding ways to bypass the safety rules built into AI models. On the dark web, attackers share techniques for jailbreaking AI so it will respond to requests that would normally be blocked. Common methods include:

    - Telling the AI to pretend it is a fictional character that has no rules or limitations
    - Phrasing dangerous questions as academic or research-related scenarios
    - Asking for technical instructions using less obvious wording so the request doesn't get flagged

    Some AI models can even be tricked into jailbreaking themselves. Attackers prompt the model to create input that causes it to override its own restrictions. This shows how AI systems can be manipulated in unexpected and dangerous ways.

    AI-generated malware is entering the mainstream

    AI is now being used to build malware, phishing kits, ransomware scripts, and more. Recently, a group called FunkSec was identified as the leading ransomware gang using AI. Its leader admitted that at least 20% of its attacks are powered by AI. FunkSec has also used AI to help launch attacks that flood websites or services with fake traffic, making them crash or go offline. These are known as denial-of-service attacks. The group even created its own AI-powered chatbot to promote its activities and communicate with victims on its public website. Some cybercriminals are even using AI to help with marketing and data analysis after an attack. One tool called Rhadamanthys Stealer 0.7 claimed to use AI for "text recognition" to sound more advanced, but researchers later found it was using older technology instead. This shows how attackers use AI buzzwords to make their tools seem more advanced or trustworthy to buyers. Other tools are more advanced. One example is DarkGPT, a chatbot built specifically to sort through huge databases of stolen information.
    After a successful attack, scammers often end up with logs full of usernames, passwords, and other private details. Instead of sifting through this data manually, they use AI to quickly find valuable accounts they can break into, sell, or use for more targeted attacks like ransomware.

    Poisoned AI models are spreading misinformation

    Sometimes, attackers do not need to hack an AI system. Instead, they trick it by feeding it false or misleading information. This tactic is called AI poisoning, and it can cause the AI to give biased, harmful, or completely inaccurate answers. There are two main ways this happens:

    - Training poisoning: Attackers sneak false or harmful data into the model during development
    - Retrieval poisoning: Misleading content is planted online, which the AI later picks up when generating answers

    In 2024, attackers uploaded 100 tampered AI models to the open-source platform Hugging Face. These poisoned models looked like helpful tools, but when people used them, they could spread false information or output malicious code. A large-scale example came from a Russian propaganda group called Pravda, which published more than 3.6 million fake articles online. These articles were designed to trick AI chatbots into repeating their messages. In tests, researchers found that major AI systems echoed these false claims about 33% of the time.

    How to protect yourself from AI-driven cyber threats

    AI-powered cybercrime blends realism, speed, and scale. These scams are not just harder to detect; they are also easier to launch. Here's how to stay protected:

    1) Avoid entering sensitive data into public AI tools: Never share passwords, personal details, or confidential business information in any AI chat, even if it seems private. These inputs can sometimes be logged or misused.

    2) Use strong antivirus software: AI-generated phishing emails and malware can slip past outdated security tools. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices.

    3) Turn on two-factor authentication: 2FA adds an extra layer of protection to your accounts, including AI platforms. It makes it much harder for attackers to break in using stolen passwords.

    4) Be extra cautious with unexpected video calls or voice messages: If something feels off, even if the person seems familiar, verify before taking action. Deepfake audio and video can sound and look very real.

    5) Use a personal data removal service: With AI-powered scams and deepfake attacks on the rise, criminals are increasingly relying on publicly available personal information to craft convincing impersonations or target victims with personalized phishing. By using a reputable personal data removal service, you can reduce your digital footprint on data broker sites and public databases. This makes it much harder for scammers to gather the details they need to convincingly mimic your identity or launch targeted AI-driven attacks. While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice.
    These services aren’t cheap, and neither is your privacy. They do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you. Check out my top picks for data removal services here.

    6) Consider identity theft protection: If your data is leaked through a scam, early detection is key. Identity protection services can monitor your information and alert you to suspicious activity. Identity theft companies can monitor personal information like your Social Security number, phone number, and email address, and alert you if it is being sold on the dark web or being used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals. See my tips and best picks on how to protect yourself from identity theft.

    7) Regularly monitor your financial accounts: AI-generated phishing, malware, and account takeover attacks are now more sophisticated and widespread than ever, as highlighted in the 2025 AI Security Report. By frequently reviewing your bank and credit card statements for suspicious activity, you can catch unauthorized transactions early, often before major damage is done. Quick detection is crucial, especially since stolen credentials and financial information are now being traded and exploited at scale by cybercriminals using AI.

    8) Use a secure password manager: Stolen AI accounts and credential-stuffing attacks are a growing threat, with hackers using automated tools to break into accounts and sell access on the dark web. A secure password manager helps you create and store strong, unique passwords for every account, making it far more difficult for attackers to compromise your logins, even if some of your information is leaked or targeted by AI-driven attacks. Get more details about my best expert-reviewed password managers of 2025 here.

    9) Keep your software updated: AI-generated malware and advanced phishing kits are designed to exploit vulnerabilities in outdated software. To stay ahead of these evolving threats, ensure all your devices, browsers, and applications are updated with the latest security patches. Regular updates close security gaps that AI-powered malware and cybercriminals are actively seeking to exploit.

    Kurt's key takeaways

    Cybercriminals are now using AI to power some of the most convincing and scalable attacks we've ever seen. From deepfake video calls and AI-generated phishing emails to stolen AI accounts and malware written by chatbots, these scams are becoming harder to detect and easier to launch. Attackers are even poisoning AI models with false information and creating fake tools that look legitimate but are designed to do harm. To stay safe, it's more important than ever to use strong antivirus protection, enable multi-factor authentication, and avoid sharing sensitive data with AI tools you do not fully trust. Have you noticed AI scams getting more convincing? Let us know your experience or questions by writing us at Cyberguy.com/Contact.
Your story could help someone else stay safe.For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/NewsletterAsk Kurt a question or let us know what stories you'd like us to coverFollow Kurt on his social channelsAnswers to the most asked CyberGuy questions:New from Kurt:Copyright 2025 CyberGuy.com.  All rights reserved. Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.
  • Signal Now Blocks Windows Recall From Capturing Your Conversations (Unless You Don’t Want It To)

On Wednesday, Signal announced a new feature for the Windows version of its app. Why not include the new feature in the Mac version of the app? Because this update exclusively targets a Windows 11 feature that Signal doesn't believe is secure: Recall.

Why isn't Windows Recall secure?

In case you're unfamiliar, Windows Recall is an AI-powered feature Microsoft rolled out for Copilot+ PCs. Recall essentially takes screenshots of your display throughout the day, building a compendium of your PC activity. As such, you can access your Recall screenshots to search for specific actions, messages, apps, and more from your personal Windows history. It's neat in theory—instead of endlessly scrolling through files or chats, you can search Windows for the specific thing you want. In practice, though, the privacy and security implications are difficult to get past.

The feature was originally set to launch last year, but Microsoft kept delaying it due to security concerns: In its first form, Microsoft decrypted the entire database of screenshots when you unlocked your PC, which meant anyone with physical access to your computer or knowledge of your PC's password could access your PC activity history. Microsoft plugged that security hole, but there were still issues, such as allowing all sensitive information (Social Security numbers, plain text passwords, private chats, etc.) into screenshots, and even saving the text from screenshots in plain text—a hacker's dream.

Microsoft has been busy workshopping the feature ever since, and recently brought it back for good. Now, Recall is protected by Windows Hello authentication during setup and whenever you access the screenshot database; sensitive information is supposed to be censored by default; and the feature now lets you choose apps to omit from screenshots, in case you don't want Windows to take screenshots of private chats or important work, for example. Still, security risks exist (sensitive info isn't always censored, for one), as they always will when you let a program (the OS, no less) take screenshots of your computing activities all day, every day.

That's simply a bridge too far for Signal, a company that famously takes security very seriously. In response, Signal for Windows now blocks Recall on Copilot+ PCs by default. This isn't a simple setting that apps can choose to enable for themselves (another issue Signal has with Microsoft's feature). To achieve this, Signal has flagged its app window as displaying DRM (Digital Rights Management) content. That tricks Windows into thinking the Signal window is playing copyrighted content and, therefore, it won't take screenshots of that window for Recall. The sketch below shows the underlying mechanism.
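The article doesn't name the exact call Signal uses, so treat the following as a hedged sketch of the standard Windows mechanism behind this kind of capture blocking: the SetWindowDisplayAffinity API, shown here from Python via ctypes. The window-handle lookup is purely illustrative; a real app would pass the handle of its own top-level window.

```python
import ctypes

# Display-affinity flags from winuser.h
WDA_NONE = 0x00000000                # normal: window contents can be captured
WDA_EXCLUDEFROMCAPTURE = 0x00000011  # Windows 10 2004+: exclude window from capture

user32 = ctypes.windll.user32

# Illustrative only: take the current foreground window as our target.
hwnd = user32.GetForegroundWindow()

# Ask Windows to exclude this window from screenshots, screen recording, and
# periodic captures like Recall's. Captures show the window as blank/black.
if not user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE):
    raise ctypes.WinError()
```

Electron, which Signal Desktop is built on, exposes the same protection through BrowserWindow.setContentProtection(true), so the app itself likely never touches the raw API.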
It's clever, but it does have two implications. For starters, it disables Recall for Signal even for users who actually want the feature to work. I wouldn't personally use Recall, but I can see how someone who does might not love an app going rogue and disabling a feature they want to use—especially considering you have to opt into using Recall in the first place. But even if you don't use Recall, it also interferes with your ability to take screenshots yourself: If you've ever tried to take a screenshot of a DRM window—say, while watching Netflix—you'll know what I mean.

How to stop Signal from blocking screenshots on Windows 11

Luckily, there's an easy way around the changes. Signal itself admits that's by design, knowing that there will be users who want to be able to screenshot their chats for one reason or another. To disable Signal's DRM window feature, head to Signal Settings > Privacy > Screen security. When you disable "Screen security," you will get a pop-up warning you that by doing so, Windows might capture screenshots of your Signal window in a way that "may not be private." Click "Disable," and you're good to go.

    Credit: Signal
  • Trump Signs Controversial Law Targeting Nonconsensual Sexual Content

US President Donald Trump signed into law legislation on Monday nicknamed the Take It Down Act, which requires platforms to remove nonconsensual instances of “intimate visual depiction” within 48 hours of receiving a request. Companies that take longer or don’t comply at all could be subject to penalties of roughly $50,000 per violation.

The law received support from tech firms like Google, Meta, and Microsoft and will go into effect within the next year. Enforcement will be left up to the Federal Trade Commission, which has the power to penalize companies for what it deems unfair and deceptive business practices. Other countries, including India, have enacted similar regulations requiring swift removals of sexually explicit photos or deepfakes. Delays can lead to content spreading uncontrollably across the web; Microsoft, for example, took months to act in one high-profile case.

But free speech advocates are concerned that a lack of guardrails in the Take It Down Act could allow bad actors to weaponize the policy to force tech companies to unjustly censor online content. The new law is modeled on the Digital Millennium Copyright Act (DMCA), which requires internet service providers to expeditiously remove material that someone claims is infringing on their copyright. Companies can be held financially liable for ignoring valid requests, which has motivated many firms to err on the side of caution and preemptively remove content before a copyright dispute has been resolved.

For years, fraudsters have abused the DMCA takedown process to get content censored for reasons that have nothing to do with copyright infringement. In some cases, the information is unflattering or belongs to industry competitors that they want to harm. The DMCA does include provisions that allow fraudsters to be held financially liable when they make false claims. Last year, for example, Google secured a default judgment against two individuals accused of orchestrating a scheme to suppress competitors in the T-shirt industry by filing frivolous requests to remove hundreds of thousands of search results.

Fraudsters who may have feared the penalties of abusing the DMCA could find Take It Down a less risky pathway. The Take It Down Act doesn’t include a robust deterrence provision, requiring only that takedown requestors exercise “good faith,” without specifying penalties for acting in bad faith. Unlike the DMCA, the new law also doesn’t outline an appeals process for alleged perpetrators to challenge what they consider erroneous removals. Critics of the regulation say it should have exempted certain content, including material that can be viewed as being in the public’s interest to remain online.

Another concern is that the 48-hour deadline specified in the Take It Down Act may limit how much companies can vet requests before making a decision about whether to approve them. Free speech groups contend that could lead to the erasure of content well beyond nonconsensual “visually intimate depictions,” and invite abuse by the same kinds of fraudsters who took advantage of the DMCA.
  • Why a new anti-revenge porn law has free speech experts alarmed 

    Privacy and digital rights advocates are raising alarms over a law that many would expect them to cheer: a federal crackdown on revenge porn and AI-generated deepfakes. 
    The newly signed Take It Down Act makes it illegal to publish nonconsensual explicit images — real or AI-generated — and gives platforms just 48 hours to comply with a victim’s takedown request or face liability. While widely praised as a long-overdue win for victims, experts have also warned its vague language, lax standards for verifying claims, and tight compliance window could pave the way for overreach, censorship of legitimate content, and even surveillance. 
    “Content moderation at scale is widely problematic and always ends up with important and necessary speech being censored,” India McKinney, director of federal affairs at Electronic Frontier Foundation, a digital rights organization, told TechCrunch.
    Online platforms have one year to establish a process for removing nonconsensual intimate imagery. While the law requires takedown requests come from victims or their representatives, it only asks for a physical or electronic signature — no photo ID or other form of verification is needed. That likely aims to reduce barriers for victims, but it could create an opportunity for abuse.
    “I really want to be wrong about this, but I think there are going to be more requests to take down images depicting queer and trans people in relationships, and even more than that, I think it’s gonna be consensual porn,” McKinney said. 
    Senator Marsha Blackburn, a co-sponsor of the Take It Down Act, also sponsored the Kids Online Safety Act, which puts the onus on platforms to protect children from harmful content online. Blackburn has said she believes content related to transgender people is harmful to kids. Similarly, the Heritage Foundation — the conservative think tank behind Project 2025 — has also said that “keeping trans content away from children is protecting kids.” 
    Because of the liability that platforms face if they don’t take down an image within 48 hours of receiving a request, “the default is going to be that they just take it down without doing any investigation to see if this actually is NCII or if it’s another type of protected speech, or if it’s even relevant to the person who’s making the request,” said McKinney.

    Snapchat and Meta have both said they are supportive of the law, but neither responded to TechCrunch’s requests for more information about how they’ll verify whether the person requesting a takedown is a victim. 
    Mastodon, a decentralized platform that hosts its own flagship server that others can join, told TechCrunch it would lean towards removal if it was too difficult to verify the victim. 
    Mastodon and other decentralized platforms like Bluesky or Pixelfed may be especially vulnerable to the chilling effect of the 48-hour takedown rule. These networks rely on independently operated servers, often run by nonprofits or individuals. Under the law, the FTC can treat any platform that doesn’t “reasonably comply” with takedown demands as committing an “unfair or deceptive act or practice” – even if the host isn’t a commercial entity.
    “This is troubling on its face, but it is particularly so at a moment when the chair of the FTC has taken unprecedented steps to politicize the agency and has explicitly promised to use the power of the agency to punish platforms and services on an ideological, as opposed to principled, basis,” the Cyber Civil Rights Initiative, a nonprofit dedicated to ending revenge porn, said in a statement. 
    Proactive monitoring
    McKinney predicts that platforms will start moderating content before it’s disseminated so they have fewer problematic posts to take down in the future. 
    Platforms are already using AI to monitor for harmful content.
    Kevin Guo, CEO and co-founder of AI-generated content detection startup Hive, said his company works with online platforms to detect deepfakes and child sexual abuse material. Some of Hive’s customers include Reddit, Giphy, Vevo, Bluesky, and BeReal. 
    “We were actually one of the tech companies that endorsed that bill,” Guo told TechCrunch. “It’ll help solve some pretty important problems and compel these platforms to adopt solutions more proactively.” 
    Hive’s model is a software-as-a-service, so the startup doesn’t control how platforms use its product to flag or remove content. But Guo said many clients insert Hive’s API at the point of upload to monitor before anything is sent out to the community. 
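Hive's actual API surface isn't documented in this piece, so here is a hypothetical Python sketch of the upload-time pattern Guo describes: the platform submits each file to a moderation endpoint before publishing and only accepts it if no flagged class crosses a threshold. The URL, response shape, and threshold are all invented for illustration.

```python
import requests

MODERATION_URL = "https://moderation.example.com/v1/classify"  # hypothetical endpoint

def safe_to_publish(media: bytes, api_key: str, threshold: float = 0.5) -> bool:
    """Return True only if no flagged class (deepfake, NCII, CSAM, ...) crosses the threshold."""
    resp = requests.post(
        MODERATION_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        files={"media": ("upload.jpg", media)},
        timeout=10,
    )
    resp.raise_for_status()
    scores = resp.json()["scores"]  # assumed shape: {"deepfake": 0.97, "ncii": 0.01, ...}
    return all(score < threshold for score in scores.values())

# In an upload handler, content that fails the check is rejected before it
# ever reaches the community:
#     if not safe_to_publish(file_bytes, API_KEY):
#         return "upload rejected"
```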
    A Reddit spokesperson told TechCrunch the platform uses “sophisticated internal tools, processes, and teams to address and remove” NCII. Reddit also partners with nonprofit SWGfl to deploy its StopNCII tool, which scans live traffic for matches against a database of known NCII and removes accurate matches. The company did not share how it would ensure the person requesting the takedown is the victim. 
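StopNCII's internals aren't spelled out here beyond matching "against a database of known NCII," so the sketch below is a deliberately simplified stand-in: it uses an exact SHA-256 match, whereas real systems use perceptual hashes (so resized or re-encoded copies still match) and store only fingerprints, never the images themselves.

```python
import hashlib

# Hypothetical fingerprint database populated from victim submissions.
known_ncii_hashes: set[str] = set()

def register_image(image_bytes: bytes) -> None:
    """A victim submits an image; only its hash is stored, never the image."""
    known_ncii_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def should_remove(image_bytes: bytes) -> bool:
    """Scan live traffic: block any upload whose fingerprint matches the database."""
    return hashlib.sha256(image_bytes).hexdigest() in known_ncii_hashes
```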
    McKinney warns this kind of monitoring could extend into encrypted messages in the future. While the law focuses on public or semi-public dissemination, it also requires platforms to “remove and make reasonable efforts to prevent the reupload” of nonconsensual intimate images. She argues this could incentivize proactive scanning of all content, even in encrypted spaces. The law doesn’t include any carve-outs for end-to-end encrypted messaging services like WhatsApp, Signal, or iMessage. 
    Meta, Signal, and Apple have not responded to TechCrunch’s request for more information on their plans for encrypted messaging.
    Broader free speech implications
    On March 4, Trump delivered a joint address to Congress in which he praised the Take It Down Act and said he looked forward to signing it into law. 
    “And I’m going to use that bill for myself, too, if you don’t mind,” he added. “There’s nobody who gets treated worse than I do online.” 
    While the audience laughed at the comment, not everyone took it as a joke. Trump hasn’t been shy about suppressing or retaliating against unfavorable speech, whether that’s labeling mainstream media outlets “enemies of the people,” barring The Associated Press from the Oval Office despite a court order, or pulling funding from NPR and PBS.
    On Thursday, the Trump administration barred Harvard University from accepting foreign student admissions, escalating a conflict that began after Harvard refused to adhere to Trump’s demands that it make changes to its curriculum and eliminate DEI-related content, among other things. In retaliation, Trump has frozen federal funding to Harvard and threatened to revoke the university’s tax-exempt status. 
    “At a time when we’re already seeing school boards try to ban books and we’re seeing certain politicians be very explicit about the types of content they don’t want people to ever see, whether it’s critical race theory or abortion information or information about climate change…it is deeply uncomfortable for us with our past work on content moderation to see members of both parties openly advocating for content moderation at this scale,” McKinney said.
  • Seed Oils, UPFs, And Carni-Bros: Is RFK Making America Healthy Again?

    French fries at Steak 'n' Shake in Greenwood, Indiana. RFK Jr touted French fries while dining at a Steak 'n' Shake. Credit: Missvain, Wikimedia Commons
    RFK Jr is not just bringing back infectious diseases like measles. Our top health official is also working hard to bring back diet-related diseases like obesity, diabetes, and heart disease. During his first three months in office, RFK Jr has made three big pronouncements about what Americans should eat. The first is important but for the wrong reasons. The second builds on the fallacies of the first. And the third goes against 60-plus years of scientific evidence.

    1. Ultra-processed foods (UPFs) are poisoning us

    Something is poisoning the American people. And we know that the primary culprit is our changing food supply to highly chemical and processed food.
    RFK Jr, at his Senate Finance Committee confirmation hearing, January 29, 2025

    French fries with 13 ingredients would be considered an ultra-processed food. Credit: Open Food Facts

    RFK is not wrong, if he is referring to ultra-processed foods. A recent study found that those who ate more UPFs were more likely to show early symptoms of Parkinson’s disease, and a review study linked UPFs to a higher risk of dying from heart disease, as well as to type 2 diabetes, obesity, and mental health problems including anxiety and sleep difficulties.

    UPFs are made from multiple ingredients, including additives like colorants, flavor enhancers, and preservatives. They contain high amounts of sugar, salt, and fat, which makes them hyper-palatable, or simply tasty. And they are cheap, readily available, and handy to eat. Unfortunately for the consumer, a review of studies with a combined population of over 1 million found that for each 10% increase in UPF consumption, your risk of mortality increases by 10%.
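    One way to read that dose-response figure, assuming the association compounds multiplicatively across 10-percentage-point steps (the underlying studies may model it differently), is sketched below.

```python
# Assumes the reported association compounds multiplicatively across
# 10-percentage-point steps; this is an interpretive assumption, not
# a result taken from the review itself.
def relative_mortality_risk(upf_increase_pct: float, rr_per_10pct: float = 1.10) -> float:
    return rr_per_10pct ** (upf_increase_pct / 10)

# Someone going from 20% to 50% of calories from UPFs (a 30-point jump):
print(f"~{relative_mortality_risk(30):.2f}x baseline mortality risk")  # ~1.33x
```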

    Why are UPFs unhealthy? Many people eschew the long list of “chemicals” on the ingredient labels of everything from Wheaties to Fritos. One type of ingredient, food dyes, can have negative health effects and is associated with hyperactivity in children. In fact, MAHA hopes to ban food dyes in UPFs like soft drinks and Froot Loops. Yet I haven’t heard MAHA alerting us to the high levels of salt, sugar, and saturated fat in UPFs… all things that have been shown over and over to contribute to chronic diseases like high blood pressure, diabetes, and cancer.
    Kellogg's Froot Loops now have 1/3 less sugar and 12 added vitamins and minerals. Credit: Julia Ewan / The Washington Post via Getty Images

    Dr Kevin Hall, who worked as a nutrition researcher at the NIH for 21 years, found that people on an ultra-processed diet consumed about 500 more calories per day, which could explain why UPFs are associated with type 2 diabetes and obesity. But what explains why UPF consumers gobble up more calories? Dr Hall thinks energy density might be the culprit. Simply put, a chocolate chip cookie packs far more calories into every bite than a banana. So eating that ultra-processed chocolate chip cookie means taking in more calories per bite than eating fruit and other less processed foods. Not to mention that the sugar, salt, and fat taste good… making me want to eat four or five chocolate chip cookies instead of one banana.
    Cramerton, North Carolina: a Floyd & Blackie's bakery employee with a tray of large M&M chocolate chip cookies. Credit: Jeffrey Greenberg / Universal Images Group via Getty Images. A bunch of ripe yellow bananas. Credit: Getty Images
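    A rough back-of-the-envelope comparison shows what energy density means in practice. The calorie figures below are approximate USDA-style ballpark values, not numbers from Dr Hall's study.

```python
# Approximate energy densities (kcal per 100 g); ballpark values for
# illustration, not figures from Dr Hall's study.
FOODS_KCAL_PER_100G = {
    "chocolate chip cookie": 480,  # ~4.8 kcal/g
    "banana": 89,                  # ~0.9 kcal/g
}

BITE_GRAMS = 10  # assume a bite is roughly 10 g for comparison

for food, kcal_per_100g in FOODS_KCAL_PER_100G.items():
    kcal_per_bite = kcal_per_100g / 100 * BITE_GRAMS
    print(f"{food}: ~{kcal_per_bite:.0f} kcal per bite")

# The cookie delivers roughly five times the calories per bite, so eating
# "the same amount" of food can mean taking in far more total energy.
```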
    The preliminary results of Dr Hall’s recent study, which he posted on X, show that the high energy density and the irresistible taste of salt, sugar, and fat explain why people on high-UPF diets eat more calories. But don’t expect to see the final results of this important study published anytime soon. It turns out Dr Hall took early retirement from his NIH research position at 54. Why? Because the MAHA administration forced him to withdraw his name from a paper on UPFs that mentioned “health equity”, or the difficulties some groups have accessing healthy food. The administration also took away the money Dr Hall needed to continue his UPF research, censored his media access, and even incorrectly edited his response to a New York Times inquiry. Just as we were on the brink of understanding why UPFs are making us sick, one of the world’s leading UPF scientists is out. It’s hard to see how a lack of scientific information is Making Americans Healthy Again.
    2. Eat Beef Tallow instead of Seed Oils
    Beef tallow french fries photographed for Food in Washington, DC, on March 31, 2025. Credit: The Washington Post via Getty Images
    While dining on fries and a double cheeseburger at Steak 'n' Shake with Fox News’s Sean Hannity, Kennedy touted French fries cooked in beef tallow.

    Robert F. Kennedy Jr (@RobertKennedyJr), October 21, 2024:

    Did you know that McDonald’s used to use beef tallow to make their fries from 1940 until phasing it out in favor of seed oils in 1990? This switch was made because saturated animal fats were thought to be unhealthy, but we have since discovered that seed oils are one of the driving causes of the obesity epidemic.

    …Americans should have every right to eat out at a restaurant without being unknowingly poisoned by heavily subsidized seed oils. It’s time to Make Frying Oil Tallow Again

    Close-up of a large frozen ball of beef kidney fat during home rendering of beef tallow, Lafayette, California, March 25, 2025. Credit: Gado via Getty Images
    To be sure, consuming a lot of seed oils raises health concerns: they contain few nutrients, are often highly processed, and some, like soybean oil, may contain unhealthy amounts of omega-6 fatty acids. But are seed oils worse than saturated animal fats? Seed oils, unlike animal fats, are mostly unsaturated.

    According to Dr. Christopher Gardner, director of nutrition studies at the Stanford Prevention Research Center, who has been studying the role of fat in our diet since 1995, "Every study for decades has shown that when you eat unsaturated fats instead of saturated fats, this lowers the level of LDL cholesterol [bad cholesterol] in your blood. There are actually few associations in nutrition that have this much evidence behind them…To think that seed oils are anywhere near the top of the list of major nutrition concerns in our country is just nuts."

    And in a 2025 study, participants with the highest intake of butter, which like beef tallow is largely saturated animal fat, were more likely to die, while those with the highest intake of plant-based oils were 16% less likely to die. About a third of the deaths were due to cancer, about a third to cardiovascular disease, and a third to other causes. The authors conclude:

    “Substituting butter with plant-based oils may confer substantial benefits for preventing premature deaths. These results support current dietary recommendations to replace animal fats like butter with non-hydrogenated vegetable oils that are high in unsaturated fats, especially olive, soy, and canola oil.” (Note that olive oil, while plant-based, is not a seed oil, since most of the oil comes from the fleshy part of the olive.)
    Still life featuring a collection of olive oil bottles, 2011. Credit: Getty Images
    In short, if you have to choose between seed oils and animal fat, you are probably better off with seed oils, or even better, extra virgin olive oil. But you should avoid consuming too much of any sort of oil or fat, which brings us to the third RFK Jr pronouncement.
    RFK Jr and West Virginia Governor Patrick Morrisey. Left: Robert F. Kennedy Jr celebrates Hispanic Heritage Month in Los Angeles. Right: Patrick Morrisey speaking at the 2017 CPAC in National Harbor, Maryland. Credit: Mario Tama, Getty Images; Gage Skidmore
    3. Become a Carni-Bro
    At a public event to promote MAHA in West Virginia, RFK Jr body-shamed Governor Patrick Morrisey for his weight.

    I’m going to put him on a really rigorous regime. We’re going to put him on a carnivore diet … Raise your hand if you want Governor Morrisey to do a public weigh-in once a month. And then when he’s lost 30 lbs I’m going to come back to this state and we’re going to do a celebration and a public weigh-in with him.

    RFK, Jr.

    MAHA seems to be at the forefront of the next culture war: dump plant-based foods and become a “carni-bro.” Yet a comprehensive review of studies on foods and obesity concluded:

    High intakes of whole grains, legumes, nuts, and fruits are associated with a reduced risk of overweight and obesity, while red meat and sugar-sweetened beverages are associated with an increased risk of overweight and obesity.
    Spectators pose for a photo ahead of the 2023 Nathan's Famous Fourth of July International Hot Dog Eating Contest at Coney Island, July 4, 2023, in Brooklyn, New York. The annual contest, which began in 1972, draws thousands of spectators to Nathan's Famous on Surf Avenue. Credit: Getty Images
    How do UPFs compare to red meat? The only study I found comparing the two found that people eating UPFs had an approximately 14% greater chance of dying, whereas those who ate red meat had an approximately 8% greater chance of dying over the same time period. (Those eating other types of meat, like chicken, pork, and fish, did not have a greater chance of dying.) But this study was conducted with Seventh-day Adventists, whose meat consumption is far lower than the average American's (while their UPF consumption was fairly typical of the US). People in West Virginia, whose governor is in fact rotund, are far and away the biggest consumers of hot dogs in the US, at 481 hot dogs per person per year.
    In a recent UK study with a more typical population, every added 70 g of red meat and processed meat (like ham, hot dogs, bacon, and deli meats) per day was associated with a 15% higher risk of coronary heart disease and a 30% higher risk of diabetes. Because red and processed meat consumption is also associated with higher rates of cancer, the World Cancer Research Fund recommends limiting red meat to no more than three portions per week and avoiding processed meat altogether.
    An overweight woman walks at the 61st Montgomery County Agricultural Fair on August 19, 2009, in Gaithersburg, Maryland. Credit: Tim Sloan / AFP via Getty Images
    Heart Disease: Still the leading killer
    According to the CDC, heart disease is the leading cause of death in the US, accounting for one in five deaths, or one death every 33 seconds. Heart disease cost the US about $252.2 billion from 2019 to 2020. And if you look at a map of where heart disease is most common, it looks uncannily like a map of MAHA supporters (including in West Virginia).
    Heart disease death rates, 2018–2020, for adults ages 35+, by county. Credit: CDC
    The first items in the CDC's list of recommendations for preventing heart disease are all about food: choose healthy meals and snacks high in fiber, and limit saturated and trans fats, salt, and sugar. This sounds like a recipe for avoiding UPFs. But it could also be a recipe for substituting whole grains, fruits, and vegetables for red and processed meats, which pack the double whammy of being both meat and UPFs.
    Is RFK, Jr. Making America Healthy Again?
    Let’s celebrate Kennedy’s move away from UPFs, an important step toward improving Americans’ health. But why does our top health official publicly tout beef tallow, French fries, and double cheeseburgers, when we know that Americans’ consumption of saturated fat and meat leads to obesity, diabetes, cancer, and heart disease? And has he weighed in on ultra-processed meats like Slim Jims, which, with sales at $3 billion last year, are America’s fastest-growing snack?
    Amanda Barrett, 18, watches her mother Eve Barrett peel a mold-covered layer of paint off a wall in what is left of their home in New Orleans's Lakeview District, October 1, 2005, a month after Hurricane Katrina. Credit: Getty Images
    It’s hard to understand what is going on in RFK’s brain. He gloms onto a limited number of studies suggesting health risks from eating seed oils, while ignoring saturated fats and even encouraging Americans to eat fast food. He wants to root out corruption in the food and pharmaceutical industries, yet uses his position to sell Make America Tallow Again hats and T-shirts. He says he believes climate change poses an existential threat, yet on his second day in office he eliminated funding for research on heat waves, indoor mold after flooding, and other NIH climate change and health programs. And in his big May report on children’s health, he ignores the largest causes of death for those under 19: gun violence and accidents. Raise your hand if you want Secretary Kennedy to conduct a public truth-telling once a month.
    #seed #oils #upfs #carnibros #rfk
    Seed Oils, UPFs, And Carni-Bros: Is RFK Making America Healthy Again?
    French fries at Steak 'n' Shake in Greenwood, Indiana. RFK Jr touted French fries while dining at a ... More Steak 'n' Shake.Missvain, Wikimedia Commons RFK Jr is not just bringing back infectious diseases like measles. Our top health official is working hard to back diet-related diseases like obesity, diabetes, and heart attacks. During his first three months in office, RFK, Jr. has made three big pronouncements about what Americans should eat. The first is important but for the wrong reasons. The second builds on the fallacies of the first. And the third goes against 60 plus years of scientific evidence. 1. Ultra-processed foodsare poisoning us Something is poisoning the American people. And we know that the primary culprit is our changing food supply to highly chemical and processed food. RFK Jr, at his Senate Finance Confirmation Hearings, January 29, 2025 French Fries, with 13 Ingredients, would be considered an ultra-processed food.Open Food Facts RFK is not wrong if he is referring to ultra-processed foods. A recent study found that those who ate more UPFs were more likely to show early symptoms of Parkinson’s disease and a review study linked UPFs to higher risk of dying from heart disease, type 2 diabetes, obesity, and mental health outcomes including anxiety and sleeping difficulties. UPFs are made from multiple ingredients including additives like colorants, flavor enhancers, and preservatives. They contain high amounts of sugars, salt, and fats, which makes them hyper-palatable, or simply tasty. And they are cheap, readily available, and handy to eat. Unfortunately for the consumer, a review of studies with a combined population of over 1 million, found that for each 10% increase in UPF consumption, your risk of mortality increases by 10%. Why are UPFs unhealthy? Many people eschew the long list of “chemicals” on the ingredient labels of everything from Wheaties to Fritos. One type of ingredient--food dyes--can have negative health effects and are associated with hyperactivity in children. In fact, MAHA hopes to ban food dyes in UPFs like soft drinks and Fruit Loops. Yet I haven’t heard MAHA alerting us to the high levels of salt, sugar, and saturated fat in UPFs… all things that have been shown over and over to contribute to chronic diseases like high blood pressure, diabetes, and cancer.FI/FOOD Washington Post Studio DATE: 1/7/05 PHOTO: Julia Ewan/TWP Kellogg's Fruit Loops now have 1/3 ... More less sugar and 12 added vitamins and minerals.The Washington Post via Getty Images Dr Kevin Hall, who worked as a nutrition researcher at NIH for 21 years, found that people on an ultra-processed diet consumed about 500 more calories per day, which could explain why UPFs are associated with type 2 diabetes and obesity. But what explains why UPF consumers gobble up more calories? Dr Hall thinks energy density might be the culprit. Simply put, a chocolate chip cookie packs a lot more calories into every bite than a banana. So eating that ultra processed chocolate chip cookie means eating more calories per bite compared to eating fruit and other less processed foods. Not to mention that the sugar, salt and fat taste good… making me want to eat 4 or 5 chocolate chip cookies instead of one banana. Cramer ton, North Carolina, Floyd & Blackie's bakery employee with tray of large M&M chocolate chip ... 
More cookies.Jeffrey Greenberg/Universal Images Group via Getty ImagesUndated: A bunch of ripe yellow Bananas.Getty Images The preliminary results of Dr Hall’s recent study, which he posted on X, show that the high energy density and the irresistible taste of salt, sugar, and fat explain why people on high UPF diets eat more calories. But don’t expect to see the final results of this important study published anytime soon. Turns out Dr Hall took early retirement at 54 yrs old from his research position at NIH. Why? Because the MAHA administration forced him to withdraw his name from a paper on UPFs that mentioned “health equity”--or the difficulties some groups have accessing healthy food. The administration also took away the money Dr Hall needed to continue his UPF research, censored his media access, and even incorrectly edited his response to a NY Times inquiry. Just as we were on the brink of understanding why UPFs are making us sick, one of the world’s leading UPF scientists is out. Hard to see how lack of scientific information is Making Americans Healthy Again. 2. Eat Beef Tallow instead of Seed OilsWASHINGTON, DC - MARCH 31: Beef tallow french fries photographed for Food in Washington, DC on March ... More 31, 2025.The Washington Post via Getty Images While dining on fries and a double cheeseburger at Steak N Shake with Fox News’s Sean Hannity, Kennedy touted French fries cooked in beef tallow. Robert F. Kennedy Jr 10/21/24 @RobertKennedyJr Did you know that McDonald’s used to use beef tallow to make their fries from 1940 until phasing it out in favor of seed oils in 1990? This switch was made because saturated animal fats were thought to be unhealthy, but we have since discovered that seed oils are one of the driving causes of the obesity epidemic. …Americans should have every right to eat out at a restaurant without being unknowingly poisoned by heavily subsidized seed oils. It’s time to Make Frying Oil Tallow Again 🇺🇸🍔 Close-up of a large frozen ball of beef kidney fat during home rendering of beef tallow, Lafayette, ... More California, March 25, 2025.Gado via Getty Images To be sure, consuming a lot of seed oils raises health concerns, including that they contain few nutrients, are often highly processed, and some, like soybean oil, might contain unhealthy amounts of omega 6 acids. But, are seed oils worse than saturated animal fats? Seed oils, unlike animal fats, are mostly unsaturated. According to Dr. Christopher Gardner, director of nutrition studies at the Stanford Prevention Research Center who has been studying the role of fat in our diet since 1995, "Every study for decades has shown that when you eat unsaturated fats instead of saturated fats, this lowers the level of LDL cholesterolin your blood. There are actually few associations in nutrition that have this much evidence behind them…To think that seed oils are anywhere near the top of the list of major nutrition concerns in our country is just nuts." And in a 2025 study, participants with the highest intake of butter, which similar to beef tallow is largely saturated animal fat, had a 16% less likely to die. About ⅓ of the deaths were due to cancer, about a third to cardiovascular disease, and a third other causes. The authors conclude: “Substituting butter with plant-based oils may confer substantial benefits for preventing premature deaths. 
These results support current dietary recommendations to replace animal fats like butter with non hydrogenated vegetable oils that are high in unsaturated fats, especially olive, soy, and canola oil.”Still life featuring a collection of olive oil bottles, 2011.Getty Images In short, if you have to choose between seed oils and animal fat, you are probably better off with seed oils, or even better, extra virgin olive oil. But, you should avoid consuming too much of any sort of oil or fat, which brings us to the third RFK Jr pronouncement.RFK Jr and West Virginia Governor Morissey. Presidential Candidate Robert F. Kennedy, Jr. ... More Celebrates Hispanic Heritage Month In Los Angeles. Patrick Morrisey speaking at the 2017 CPAC in National Harbor, Maryland.Mario Tama, Getty Images; Gage Skidmore 3. Become a Carni-Bro At a public event to promote MAHA in West Virginia, RFK Jr body shamed Governor Patrick Morrisey for his weight. I’m going to put him on a really rigorous regime. We’re going to put him on a carnivore diet … Raise your hand if you want Governor Morrissey to do a public weigh-in once a month. And then when he’s lost 30 lbs I’m going to come back to this state and we’re going to do a celebration and a public weigh in with him. RFK, Jr. MAHA seems to be at the forefront of the next culture war: dump plant-based foods and become a “carni-bro.” Yet a comprehensive review of studies on foods and obesity concluded: High intakes of whole grains, legumes, nuts, and fruits are associated with a reduced risk of overweight and obesity, while red meat and sugar-sweetened beverages are associated with an increased risk of overweight and obesity. NEW YORK, NEW YORK - JULY 04: Spectators pose for a photo ahead of the 2023 Nathan's Famous Fourth ... More of July International Hot Dog Eating Contest at Coney Island on July 04, 2023 in the Brooklyn borough of New York City. The annual contest, which began in 1972, draws thousands of spectators to Nathan’s Famous located on Surf Avenue.Getty Images How do UPFs compare to red meat? The only study I found comparing the two found people eating UPFs had an approximately 14% greater chance of dying whereas those who ate red meat had an approximately 8% chance of death over the same time period.But this study was conducted with Seventh Day Adventists, whose meat consumption was way lower than the average American. People in West Virginia, whose governor is in fact rotund, are by far and away the biggest consumer of hotdogs in the US, at 481 hot dogs per person per year. In a recent UK study with a more typical population, every added 70 g of red meat and processed meatper day was associated with a 15% higher risk of coronary heart disease and a 30% higher risk of diabetes. Because red and processed meat consumption is also associated with higher rates of cancer, the World Cancer Research Fund recommends limiting red meat to no more than three portions per week and avoiding processed meat altogether.TOPSHOT - An overweight woman walks at the 61st Montgomery County Agricultural Fair on August 19, ... More 2009 in Gaithersburg, Maryland. At USD 150 billion, the US medical system spends around twice as much treating preventable health conditions caused by obesity than it does on cancer, Health Secretary Kathleen Sebelius said. 
Two-thirds of US adults and one in five children are overweight or obese, putting them at greater risk of chronic illness like heart disease, cancer, stroke and diabetes, according to reports released recently at the "Weight of the Nation" conference. AFP PHOTO / Tim SloanAFP via Getty Images Heart Disease: Still the leading killer According to the CDC, heart disease is the leading cause of death in the US, accounting for one in five deaths, or one death every 33 seconds. Heart disease cost the US about billion from 2019 to 2020. And if you look at a map of where heart disease is more common, it looks uncannily like a map of MAHA supporters. .Heart Disease Death Rates, 2018–2020 for Adults, Ages 35+, by CountyCDC The first items in a list of CDC recommendations for preventing heart disease are all about food: Choose healthy meals and snacks high in fiber and limit saturated and trans fats, salt, and sugar. This sounds like a recipe for avoiding UPFs. But it could also be a recipe for substituting whole grains and fruit and vegetables for red and processed meats, which punch the double whammy of being meat and UPFs. Is RFK, Jr. Making America Healthy Again? Let’s celebrate Kennedy’s move away from UPFs, an important step toward improving Americans’ health. But why does our top health official publicly tout beef tallow, French fries, and double cheeseburgers, when we know that Americans’ consumption of saturated fat and meat lead to obesity, diabetes, cancer, and heart disease? Or has he weighed in on ultra-processed meats, like Slim Jim’s, which with sales at billion last year is America’s fastest growing snack?NEW ORLEANS - OCTOBER 01: Amanda Barrett, 18-years-old, watches her mother Eve Barrett peel a ... More mold-covered layer of paint off a wall as the family sees what is left of their home in the Lakeview District October 1, 2005 in New Orleans, Louisiana. The people of New Orleans are still cleaning up over a month after Hurricane Katrina hit the area.Getty Images It’s hard to understand what is going on in RFK’s brain. He gloms on to a limited number of studies suggesting health risks of eating seed oils, while ignoring saturated fats and even encouraging Americans to eat fast foods. He wants to rout out corruption in the food and pharmaceutical industry, yet uses his position to sell Make America Tallow Again hats and T-shirts. He says he believes climate change poses an existential threat, yet on his second day in office eliminated funding for research on heat waves, indoor mold after flooding, and other NIH climate change and health programs. And in his big May report on children’s health, he ignores the largest causes of death for those under 19--gun violence and accidents. Raise your hand if you want Secretary Kennedy to conduct a public truth-telling once a month. #seed #oils #upfs #carnibros #rfk
    WWW.FORBES.COM
    Seed Oils, UPFs, And Carni-Bros: Is RFK Making America Healthy Again?
    French fries at Steak 'n' Shake in Greenwood, Indiana. RFK Jr touted French fries while dining at a ... More Steak 'n' Shake.Missvain, Wikimedia Commons RFK Jr is not just bringing back infectious diseases like measles. Our top health official is working hard to back diet-related diseases like obesity, diabetes, and heart attacks. During his first three months in office, RFK, Jr. has made three big pronouncements about what Americans should eat. The first is important but for the wrong reasons. The second builds on the fallacies of the first. And the third goes against 60 plus years of scientific evidence. 1. Ultra-processed foods (UPFs) are poisoning us Something is poisoning the American people. And we know that the primary culprit is our changing food supply to highly chemical and processed food. RFK Jr, at his Senate Finance Confirmation Hearings, January 29, 2025 French Fries, with 13 Ingredients, would be considered an ultra-processed food.Open Food Facts RFK is not wrong if he is referring to ultra-processed foods (or UPFs). A recent study found that those who ate more UPFs were more likely to show early symptoms of Parkinson’s disease and a review study linked UPFs to higher risk of dying from heart disease, type 2 diabetes, obesity, and mental health outcomes including anxiety and sleeping difficulties. UPFs are made from multiple ingredients including additives like colorants, flavor enhancers, and preservatives. They contain high amounts of sugars, salt, and fats, which makes them hyper-palatable, or simply tasty. And they are cheap, readily available (witness the local gas station convenience store), and handy to eat. Unfortunately for the consumer, a review of studies with a combined population of over 1 million, found that for each 10% increase in UPF consumption, your risk of mortality increases by 10%. Why are UPFs unhealthy? Many people eschew the long list of “chemicals” on the ingredient labels of everything from Wheaties to Fritos. One type of ingredient--food dyes--can have negative health effects and are associated with hyperactivity in children. In fact, MAHA hopes to ban food dyes in UPFs like soft drinks and Fruit Loops. Yet I haven’t heard MAHA alerting us to the high levels of salt, sugar, and saturated fat in UPFs… all things that have been shown over and over to contribute to chronic diseases like high blood pressure, diabetes, and cancer.FI/FOOD Washington Post Studio DATE: 1/7/05 PHOTO: Julia Ewan/TWP Kellogg's Fruit Loops now have 1/3 ... More less sugar and 12 added vitamins and minerals. (Photo by Julia Ewan/The The Washington Post via Getty Images)The Washington Post via Getty Images Dr Kevin Hall, who worked as a nutrition researcher at NIH for 21 years, found that people on an ultra-processed diet consumed about 500 more calories per day, which could explain why UPFs are associated with type 2 diabetes and obesity. But what explains why UPF consumers gobble up more calories? Dr Hall thinks energy density might be the culprit. Simply put, a chocolate chip cookie packs a lot more calories into every bite than a banana. So eating that ultra processed chocolate chip cookie means eating more calories per bite compared to eating fruit and other less processed foods. Not to mention that the sugar, salt and fat taste good… making me want to eat 4 or 5 chocolate chip cookies instead of one banana. Cramer ton, North Carolina, Floyd & Blackie's bakery employee with tray of large M&M chocolate chip ... More cookies. 
(Photo by: Jeffrey Greenberg/Universal Images Group via Getty Images)Jeffrey Greenberg/Universal Images Group via Getty ImagesUndated: A bunch of ripe yellow Bananas. (Photo by Richard Whiting /Getty Images)Getty Images The preliminary results of Dr Hall’s recent study, which he posted on X, show that the high energy density and the irresistible taste of salt, sugar, and fat explain why people on high UPF diets eat more calories. But don’t expect to see the final results of this important study published anytime soon. Turns out Dr Hall took early retirement at 54 yrs old from his research position at NIH. Why? Because the MAHA administration forced him to withdraw his name from a paper on UPFs that mentioned “health equity”--or the difficulties some groups have accessing healthy food. The administration also took away the money Dr Hall needed to continue his UPF research, censored his media access, and even incorrectly edited his response to a NY Times inquiry. Just as we were on the brink of understanding why UPFs are making us sick, one of the world’s leading UPF scientists is out. Hard to see how lack of scientific information is Making Americans Healthy Again. 2. Eat Beef Tallow instead of Seed OilsWASHINGTON, DC - MARCH 31: Beef tallow french fries photographed for Food in Washington, DC on March ... More 31, 2025. (Photo by Scott Suchman for The Washington Post via Getty Images; food styling by Lisa Cherkasky for The Washington Post via Getty Images)The Washington Post via Getty Images While dining on fries and a double cheeseburger at Steak N Shake with Fox News’s Sean Hannity, Kennedy touted French fries cooked in beef tallow. Robert F. Kennedy Jr 10/21/24 @RobertKennedyJr Did you know that McDonald’s used to use beef tallow to make their fries from 1940 until phasing it out in favor of seed oils in 1990? This switch was made because saturated animal fats were thought to be unhealthy, but we have since discovered that seed oils are one of the driving causes of the obesity epidemic. …Americans should have every right to eat out at a restaurant without being unknowingly poisoned by heavily subsidized seed oils. It’s time to Make Frying Oil Tallow Again 🇺🇸🍔 Close-up of a large frozen ball of beef kidney fat during home rendering of beef tallow, Lafayette, ... More California, March 25, 2025. (Photo by Smith Collection/Gado/Getty Images)Gado via Getty Images To be sure, consuming a lot of seed oils raises health concerns, including that they contain few nutrients, are often highly processed, and some, like soybean oil, might contain unhealthy amounts of omega 6 acids. But, are seed oils worse than saturated animal fats? Seed oils, unlike animal fats, are mostly unsaturated. According to Dr. Christopher Gardner, director of nutrition studies at the Stanford Prevention Research Center who has been studying the role of fat in our diet since 1995, "Every study for decades has shown that when you eat unsaturated fats instead of saturated fats, this lowers the level of LDL cholesterol [bad cholesterol] in your blood. There are actually few associations in nutrition that have this much evidence behind them…To think that seed oils are anywhere near the top of the list of major nutrition concerns in our country is just nuts." And in a 2025 study, participants with the highest intake of butter, which similar to beef tallow is largely saturated animal fat, had a 16% less likely to die. About ⅓ of the deaths were due to cancer, about a third to cardiovascular disease, and a third other causes. 
The authors conclude: “Substituting butter with plant-based oils may confer substantial benefits for preventing premature deaths. These results support current dietary recommendations to replace animal fats like butter with non hydrogenated vegetable oils that are high in unsaturated fats, especially olive, soy, and canola oil.” (Note that olive oil, while plant-based, is not a seed oil since most of the oil comes from the fleshy part of the olive.) Still life featuring a collection of olive oil bottles, 2011. (Photo by Tom Kelley/Getty Images)Getty Images In short, if you have to choose between seed oils and animal fat, you are probably better off with seed oils, or even better, extra virgin olive oil (EVOO). But, you should avoid consuming too much of any sort of oil or fat, which brings us to the third RFK Jr pronouncement.RFK Jr and West Virginia Governor Morissey. Presidential Candidate Robert F. Kennedy, Jr. ... More Celebrates Hispanic Heritage Month In Los Angeles. Patrick Morrisey speaking at the 2017 CPAC in National Harbor, Maryland.Mario Tama, Getty Images; Gage Skidmore 3. Become a Carni-Bro At a public event to promote MAHA in West Virginia, RFK Jr body shamed Governor Patrick Morrisey for his weight. I’m going to put him on a really rigorous regime. We’re going to put him on a carnivore diet … Raise your hand if you want Governor Morrissey to do a public weigh-in once a month. And then when he’s lost 30 lbs I’m going to come back to this state and we’re going to do a celebration and a public weigh in with him. RFK, Jr. MAHA seems to be at the forefront of the next culture war: dump plant-based foods and become a “carni-bro.” Yet a comprehensive review of studies on foods and obesity concluded: High intakes of whole grains, legumes, nuts, and fruits are associated with a reduced risk of overweight and obesity, while red meat and sugar-sweetened beverages are associated with an increased risk of overweight and obesity. NEW YORK, NEW YORK - JULY 04: Spectators pose for a photo ahead of the 2023 Nathan's Famous Fourth ... More of July International Hot Dog Eating Contest at Coney Island on July 04, 2023 in the Brooklyn borough of New York City. The annual contest, which began in 1972, draws thousands of spectators to Nathan’s Famous located on Surf Avenue. (Photo by Alexi J. Rosenfeld/Getty Images)Getty Images How do UPFs compare to red meat? The only study I found comparing the two found people eating UPFs had an approximately 14% greater chance of dying whereas those who ate red meat had an approximately 8% chance of death over the same time period. (Those eating other types of meats like chicken and pork and fish did not have a greater chance of dying.) But this study was conducted with Seventh Day Adventists, whose meat consumption was way lower than the average American (while their UPF consumption was fairly typical of the US). People in West Virginia, whose governor is in fact rotund, are by far and away the biggest consumer of hotdogs in the US, at 481 hot dogs per person per year. In a recent UK study with a more typical population, every added 70 g of red meat and processed meat (like ham, hotdogs, bacon, and deli meats) per day was associated with a 15% higher risk of coronary heart disease and a 30% higher risk of diabetes. 
Because red and processed meat consumption is also associated with higher rates of cancer, the World Cancer Research Fund recommends limiting red meat to no more than three portions per week and avoiding processed meat altogether.TOPSHOT - An overweight woman walks at the 61st Montgomery County Agricultural Fair on August 19, ... More 2009 in Gaithersburg, Maryland. At USD 150 billion, the US medical system spends around twice as much treating preventable health conditions caused by obesity than it does on cancer, Health Secretary Kathleen Sebelius said. Two-thirds of US adults and one in five children are overweight or obese, putting them at greater risk of chronic illness like heart disease, cancer, stroke and diabetes, according to reports released recently at the "Weight of the Nation" conference. AFP PHOTO / Tim Sloan (Photo by Tim SLOAN / AFP) (Photo by TIM SLOAN/AFP via Getty Images)AFP via Getty Images Heart Disease: Still the leading killer According to the CDC, heart disease is the leading cause of death in the US, accounting for one in five deaths, or one death every 33 seconds. Heart disease cost the US about $252.2 billion from 2019 to 2020. And if you look at a map of where heart disease is more common, it looks uncannily like a map of MAHA supporters (including in West Virginia). .Heart Disease Death Rates, 2018–2020 for Adults, Ages 35+, by CountyCDC The first items in a list of CDC recommendations for preventing heart disease are all about food: Choose healthy meals and snacks high in fiber and limit saturated and trans fats, salt, and sugar. This sounds like a recipe for avoiding UPFs. But it could also be a recipe for substituting whole grains and fruit and vegetables for red and processed meats, which punch the double whammy of being meat and UPFs. Is RFK, Jr. Making America Healthy Again? Let’s celebrate Kennedy’s move away from UPFs, an important step toward improving Americans’ health. But why does our top health official publicly tout beef tallow, French fries, and double cheeseburgers, when we know that Americans’ consumption of saturated fat and meat lead to obesity, diabetes, cancer, and heart disease? Or has he weighed in on ultra-processed meats, like Slim Jim’s, which with sales at $3 billion last year is America’s fastest growing snack?NEW ORLEANS - OCTOBER 01: Amanda Barrett (L), 18-years-old, watches her mother Eve Barrett peel a ... More mold-covered layer of paint off a wall as the family sees what is left of their home in the Lakeview District October 1, 2005 in New Orleans, Louisiana. The people of New Orleans are still cleaning up over a month after Hurricane Katrina hit the area. (Photo by Ethan Miller/Getty Images)Getty Images It’s hard to understand what is going on in RFK’s brain. He gloms on to a limited number of studies suggesting health risks of eating seed oils, while ignoring saturated fats and even encouraging Americans to eat fast foods. He wants to rout out corruption in the food and pharmaceutical industry, yet uses his position to sell Make America Tallow Again hats and T-shirts. He says he believes climate change poses an existential threat, yet on his second day in office eliminated funding for research on heat waves, indoor mold after flooding, and other NIH climate change and health programs. And in his big May report on children’s health, he ignores the largest causes of death for those under 19--gun violence and accidents. Raise your hand if you want Secretary Kennedy to conduct a public truth-telling once a month.
  • Stalker Trilogy devs are “listening to feedback” over hated remasters, but won’t say if they’ll add in censored Russian content


    GSC Game World’s Stalker: Legends of the Zone Trilogy Enhanced Edition remasters launched to largely negative reviews from fans. While the remasters don’t do away with the original releases—in fact, buying them also grants access to the original versions—fans are upset about numerous changes across the games.
    One of the biggest changes in the Legends of the Zone Trilogy remasters is the elimination of Russian content across all three games. This includes not only the removal of the beloved Russian dub—which technically still exists underneath the Polish “lektor” dub—but also of countless Soviet-era assets, leaving the remasters feeling rather weird.
    Stalker Trilogy devs are listening to fans
    In a statement to fans alongside the games’ first patch, GSC Game World explained that it is “listening to feedback” on the new remasters. There have been numerous other issues across the games, including new bugs, oddly blurry visuals, and the omission of upscaling technologies such as DLSS. However, the developer didn’t specify which feedback it is acting on.
    “We will continue to work on improving the trilogy,” the developer said in a recent statement. “Stalkers, we care about your feedback and are working on fixing the most critical issues. We really want to make your comeback to the Zone special.”
    GSC Game World also removed a number of references to Soviet Russia during the development of Stalker 2: Heart of Chornobyl after Russia’s invasion of Ukraine. With friends, family, and members of the studio killed or injured in the conflict, the studio chose not to include Russian dubs and iconography in that game.
    For fans of the original games who want the cut content restored before an official patch—if one ever comes—the PC versions already received numerous day-one mods that bring back the Russian dub and the censored assets.
    For more news on the Stalker franchise, read about how the studio’s upcoming sequel expansions will offer “fresh perspectives” on the Zone that players have never seen before. Additionally, read about what was added in the sequel’s massive Patch 1.4 update.

  • Reddit Bans Fringe Anti-Humanity Group After Attack on Palm Springs IVF Clinic

    By Matt Novak | Published May 20, 2025

    Debris is seen outside a damaged American Reproductive Centers fertility clinic after a bomb blast outside the building in Palm Springs, California, on May 17, 2025. © Photo by GABRIEL OSORIO/AFP via Getty Images

    An explosion outside a fertility clinic in Palm Springs, California, killed one person and injured four others Saturday morning in what the FBI has called an act of terrorism. The suspect in the bombing, 25-year-old Guy Edward Bartkus, was the lone death from the blast, and it seems apparent he held anti-human views. Now Reddit has banned a subreddit tied to the suspect’s ideology.

    Bartkus is believed to be the person who detonated a bomb at the Palm Springs American Reproductive Center, which offers services like IVF, because he was aligned with the pro-mortalist and anti-natalist movements—the idea that humans should not continue to procreate. Bartkus appears to have been posting to various subreddits, including r/Efilism, which advocated for violence. Reddit has now banned r/Efilism for violating its terms of service. “Violence has no place on Reddit,” a spokesperson for the platform told Gizmodo over email. “Our sitewide rules strictly prohibit any content that encourages, glorifies, incites, or calls for violence. In line with these rules, we are removing any instances of the suspect’s manifesto or recordings and hashing to prevent reupload. We’re also closely monitoring the communities on our platform to ensure compliance with our rules.”

    Proponents of Efilism (the word “life” spelled backwards) are often known as anti-natalists, which is a more common name for the ideology, though Bartkus described himself as pro-mortalist in his 30-minute audio manifesto. Anti-natalism is a philosophy that advocates for people not to procreate, while pro-mortalists go beyond those ideas to advocate for death in all forms, on the theory that because life is suffering, it is ethical to end your own life and even the lives of those around you in the process. Bartkus posted an audio file to his website, filled with logical inconsistencies and general incoherence, explaining why he was targeting the clinic. Bartkus said he wanted to begin “sterilizing this planet of the disease of life,” but mentioned that the recent suicide of his best friend had affected him deeply. He wrote on his personal website, “It’s just too much of a loss when there’s nobody else you really relate to significantly.” He was clearly struggling with personal issues beyond whatever philosophy he was supposedly swearing allegiance to. That website has now been taken offline.

    There are other anti-natalist forums beyond r/Efilism on Reddit that haven’t been banned, and some, like r/circlesnip—whose description reads “The Vegan Antinatalist Circlejerk”—put out statements denouncing the attack on the IVF clinic.

    “It has come to my attention that the individual responsible for today’s bombing in Palm Springs namedropped our communities in their suicide note. Though they struggled with personal grief and mental health issues, their act of terrorism was unjustifiable, incoherent, immoral, and disgusting,” the statement reads. The moderator went on to say that their version of anti-natalism is “explicitly one of non-violence” and that it should be up to each individual to “make their own reproductive decisions.”

    “The philosophy we represent is explicitly one of non-violence,” the moderator continued. “We believe it is up to each individual to make their own reproductive decisions. We hope that the Palm Springs American Reproductive Center can rebuild and resume operations.” Other anti-natalist subreddits run by the same moderator, r/Vystopia and r/antinatalism, posted the same statement condemning violence. The r/Efilism subreddit had about 12,000 members before it was banned, according to The Independent, which is certainly not large by Reddit standards. The biggest communities on Reddit have tens of millions of members.

    The term Efilism was reportedly coined by a fringe YouTuber named Gary Inmendham, whom Bartkus mentions by name in his audio manifesto. Inmendham posted a video after the bombing saying that Bartkus had done something “really stupid, dumb, pointless, and even show-offy,” referring to it as a “dumbass act of violence.” Inmendham said that he’s even “against protesters,” so he’s “obviously against terrorists.” Bartkus, who sounds deeply insecure about his philosophy in his audio manifesto, said he was driven to commit the bombing because he could no longer find people online to discuss his ideas with, claiming that spaces like YouTube and X were scrubbing anti-natalist content. Bartkus also insisted that while the internet is being “manipulated,” he was immune to the manipulation.

    Bartkus also said in the recording that he was a vegan and seemed fixated on the welfare of animals, referring to “animals raped on farms,” but then going on to say that nature itself was horrifying in a way that even surpassed the suffering caused by humans. All life was suffering that needed to end, in his book. A YouTube account associated with Bartkus, which is now offline, reportedly contained explosion tests, according to ABC News. The size of the Palm Springs blast was considerable, stretching about 250 yards, with Akil Davis, assistant director at the FBI’s Los Angeles Field Office, describing it to NPR as “probably the largest bombing scene that we’ve had in Southern California.” The FBI released a report last month about Nihilist Violent Extremists (NVE), though the definition is so loose that it can be applied to all kinds of ideologies. In this case, however, nihilism does seem to fit as a descriptor for a philosophy grounded in destroying all of humanity for nebulous ends.
