• New Court Order in Stratasys v. Bambu Lab Lawsuit

    There has been a new update to the ongoing Stratasys v. Bambu Lab patent infringement lawsuit. 
    Both parties have agreed to consolidate the lead and member casesinto a single case under Case No. 2:25-cv-00465-JRG. 
    Industrial 3D printing OEM Stratasys filed the request late last month. According to an official court document, Shenzhen-based Bambu Lab did not oppose the motion. Stratasys argued that this non-opposition amounted to the defendants waiving their right to challenge the request under U.S. patent law 35 U.S.C. § 299.
    On June 2, the U.S. District Court for the Eastern District of Texas, Marshall Division, ordered Bambu Lab to confirm in writing whether it agreed to the proposed case consolidation. The court took this step out of an “abundance of caution” to ensure both parties consented to the procedure before moving forward.
    Bambu Lab submitted its response on June 12, agreeing to the consolidation. The company, along with co-defendants Shenzhen Tuozhu Technology Co., Ltd., Shanghai Lunkuo Technology Co., Ltd., and Tuozhu Technology Limited, waived its rights under 35 U.S.C. § 299. The court will now decide whether to merge the cases.
    This followed U.S. District Judge Rodney Gilstrap’s decision last month to deny Bambu Lab’s motion to dismiss the lawsuits. 
    The Chinese desktop 3D printer manufacturer filed the motion in February 2025, arguing the cases were invalid because its US-based subsidiary, Bambu Lab USA, was not named in the original litigation. However, it agreed that the lawsuit could continue in the Austin division of the Western District of Texas, where a parallel case was filed last year. 
    Judge Gilstrap denied the motion, ruling that the cases properly target the named defendants. He concluded that Bambu Lab USA isn’t essential to the dispute, and that any misnaming should be addressed in summary judgment, not dismissal.       
    A Stratasys Fortus 450mcand a Bambu Lab X1C. Image by 3D Printing industry.
    Another twist in the Stratasys v. Bambu Lab lawsuit 
    Stratasys filed the two lawsuits against Bambu Lab in the Eastern District of Texas, Marshall Division, in August 2024. The company claims that Bambu Lab’s X1C, X1E, P1S, P1P, A1, and A1 mini 3D printers violate ten of its patents. These patents cover common 3D printing features, including purge towers, heated build plates, tool head force detection, and networking capabilities.
    Stratasys has requested a jury trial. It is seeking a ruling that Bambu Lab infringed its patents, along with financial damages and an injunction to stop Bambu from selling the allegedly infringing 3D printers.
    Last October, Stratasys dropped charges against two of the originally named defendants in the dispute. Court documents showed that Beijing Tiertime Technology Co., Ltd. and Beijing Yinhua Laser Rapid Prototyping and Mould Technology Co., Ltd were removed. Both defendants represent the company Tiertime, China’s first 3D printer manufacturer. The District Court accepted the dismissal, with all claims dropped without prejudice.
    It’s unclear why Stratasys named Beijing-based Tiertime as a defendant in the first place, given the lack of an obvious connection to Bambu Lab. 
    Tiertime and Stratasys have a history of legal disputes over patent issues. In 2013, Stratasys sued Afinia, Tiertime’s U.S. distributor and partner, for patent infringement. Afinia responded by suing uCRobotics, the Chinese distributor of MakerBot 3D printers, also alleging patent violations. Stratasys acquired MakerBot in June 2013. The company later merged with Ultimaker in 2022.
    In February 2025, Bambu Lab filed a motion to dismiss the original lawsuits. The company argued that Stratasys’ claims, focused on the sale, importation, and distribution of 3D printers in the United States, do not apply to the Shenzhen-based parent company. Bambu Lab contended that the allegations concern its American subsidiary, Bambu Lab USA, which was not named in the complaint filed in the Eastern District of Texas.
    Bambu Lab filed a motion to dismiss, claiming the case is invalid under Federal Rule of Civil Procedure 19. It argued that any party considered a “primary participant” in the allegations must be included as a defendant.   
    The court denied the motion on May 29, 2025. In the ruling, Judge Gilstrap explained that Stratasys’ allegations focus on the actions of the named defendants, not Bambu Lab USA. As a result, the official court document called Bambu Lab’s argument “unavailing.” Additionally, the Judge stated that, since Bambu Lab USA and Bambu Lab are both owned by Shenzhen Tuozhu, “the interest of these two entities align,” meaning the original cases are valid.  
    In the official court document, Judge Gilstrap emphasized that Stratasys can win or lose the lawsuits based solely on the actions of the current defendants, regardless of Bambu Lab USA’s involvement. He added that any potential risk to Bambu Lab USA’s business is too vague or hypothetical to justify making it a required party.
    Finally, the court noted that even if Stratasys named the wrong defendant, this does not justify dismissal under Rule 12. Instead, the judge stated it would be more appropriate for the defendants to raise that argument in a motion for summary judgment.
    The Bambu Lab X1C 3D printer. Image via Bambu Lab.
    3D printing patent battles 
    The 3D printing industry has seen its fair share of patent infringement disputes over recent months. In May 2025, 3D printer hotend developer Slice Engineering reached an agreement with Creality over a patent non-infringement lawsuit. 
    The Chinese 3D printer OEM filed the lawsuit in July 2024 in the U.S. District Court for the Northern District of Florida, Gainesville Division. The company claimed that Slice Engineering had falsely accused it of infringing two hotend patents, U.S. Patent Nos. 10,875,244 and 11,660,810. These cover mechanical and thermal features of Slice’s Mosquito 3D printer hotend. Creality requested a jury trial and sought a ruling confirming it had not infringed either patent.
    Court documents show that Slice Engineering filed a countersuit in December 2024. The Gainesville-based company maintained that Creaility “has infringed and continues to infringe” on both patents. In the filing, the company also denied allegations that it had harassed Creality’s partners, distributors, and customers, and claimed that Creality had refused to negotiate a resolution.  
    The Creality v. Slice Engineering lawsuit has since been dropped following a mutual resolution. Court documents show that both parties have permanently dismissed all claims and counterclaims, agreeing to cover their own legal fees and costs. 
    In other news, large-format resin 3D printer manufacturer Intrepid Automation sued 3D Systems over alleged patent infringement. The lawsuit, filed in February 2025, accused 3D Systems of using patented technology in its PSLA 270 industrial resin 3D printer. The filing called the PSLA 270 a “blatant knock off” of Intrepid’s DLP multi-projection “Range” 3D printer.  
    San Diego-based Intrepid Automation called this alleged infringement the “latest chapter of 3DS’s brazen, anticompetitive scheme to drive a smaller competitor with more advanced technology out of the marketplace.” The lawsuit also accused 3D Systems of corporate espionage, claiming one of its employees stole confidential trade secrets that were later used to develop the PSLA 270 printer.
    3D Systems denied the allegations and filed a motion to dismiss the case. The company called the lawsuit “a desperate attempt” by Intrepid to distract from its own alleged theft of 3D Systems’ trade secrets.
    Who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry Youtube channel to access more exclusive content.Featured image shows a Stratasys Fortus 450mcand a Bambu Lab X1C. Image by 3D Printing industry.
    #new #court #order #stratasys #bambu
    New Court Order in Stratasys v. Bambu Lab Lawsuit
    There has been a new update to the ongoing Stratasys v. Bambu Lab patent infringement lawsuit.  Both parties have agreed to consolidate the lead and member casesinto a single case under Case No. 2:25-cv-00465-JRG.  Industrial 3D printing OEM Stratasys filed the request late last month. According to an official court document, Shenzhen-based Bambu Lab did not oppose the motion. Stratasys argued that this non-opposition amounted to the defendants waiving their right to challenge the request under U.S. patent law 35 U.S.C. § 299. On June 2, the U.S. District Court for the Eastern District of Texas, Marshall Division, ordered Bambu Lab to confirm in writing whether it agreed to the proposed case consolidation. The court took this step out of an “abundance of caution” to ensure both parties consented to the procedure before moving forward. Bambu Lab submitted its response on June 12, agreeing to the consolidation. The company, along with co-defendants Shenzhen Tuozhu Technology Co., Ltd., Shanghai Lunkuo Technology Co., Ltd., and Tuozhu Technology Limited, waived its rights under 35 U.S.C. § 299. The court will now decide whether to merge the cases. This followed U.S. District Judge Rodney Gilstrap’s decision last month to deny Bambu Lab’s motion to dismiss the lawsuits.  The Chinese desktop 3D printer manufacturer filed the motion in February 2025, arguing the cases were invalid because its US-based subsidiary, Bambu Lab USA, was not named in the original litigation. However, it agreed that the lawsuit could continue in the Austin division of the Western District of Texas, where a parallel case was filed last year.  Judge Gilstrap denied the motion, ruling that the cases properly target the named defendants. He concluded that Bambu Lab USA isn’t essential to the dispute, and that any misnaming should be addressed in summary judgment, not dismissal.        A Stratasys Fortus 450mcand a Bambu Lab X1C. Image by 3D Printing industry. Another twist in the Stratasys v. Bambu Lab lawsuit  Stratasys filed the two lawsuits against Bambu Lab in the Eastern District of Texas, Marshall Division, in August 2024. The company claims that Bambu Lab’s X1C, X1E, P1S, P1P, A1, and A1 mini 3D printers violate ten of its patents. These patents cover common 3D printing features, including purge towers, heated build plates, tool head force detection, and networking capabilities. Stratasys has requested a jury trial. It is seeking a ruling that Bambu Lab infringed its patents, along with financial damages and an injunction to stop Bambu from selling the allegedly infringing 3D printers. Last October, Stratasys dropped charges against two of the originally named defendants in the dispute. Court documents showed that Beijing Tiertime Technology Co., Ltd. and Beijing Yinhua Laser Rapid Prototyping and Mould Technology Co., Ltd were removed. Both defendants represent the company Tiertime, China’s first 3D printer manufacturer. The District Court accepted the dismissal, with all claims dropped without prejudice. It’s unclear why Stratasys named Beijing-based Tiertime as a defendant in the first place, given the lack of an obvious connection to Bambu Lab.  Tiertime and Stratasys have a history of legal disputes over patent issues. In 2013, Stratasys sued Afinia, Tiertime’s U.S. distributor and partner, for patent infringement. Afinia responded by suing uCRobotics, the Chinese distributor of MakerBot 3D printers, also alleging patent violations. Stratasys acquired MakerBot in June 2013. The company later merged with Ultimaker in 2022. In February 2025, Bambu Lab filed a motion to dismiss the original lawsuits. The company argued that Stratasys’ claims, focused on the sale, importation, and distribution of 3D printers in the United States, do not apply to the Shenzhen-based parent company. Bambu Lab contended that the allegations concern its American subsidiary, Bambu Lab USA, which was not named in the complaint filed in the Eastern District of Texas. Bambu Lab filed a motion to dismiss, claiming the case is invalid under Federal Rule of Civil Procedure 19. It argued that any party considered a “primary participant” in the allegations must be included as a defendant.    The court denied the motion on May 29, 2025. In the ruling, Judge Gilstrap explained that Stratasys’ allegations focus on the actions of the named defendants, not Bambu Lab USA. As a result, the official court document called Bambu Lab’s argument “unavailing.” Additionally, the Judge stated that, since Bambu Lab USA and Bambu Lab are both owned by Shenzhen Tuozhu, “the interest of these two entities align,” meaning the original cases are valid.   In the official court document, Judge Gilstrap emphasized that Stratasys can win or lose the lawsuits based solely on the actions of the current defendants, regardless of Bambu Lab USA’s involvement. He added that any potential risk to Bambu Lab USA’s business is too vague or hypothetical to justify making it a required party. Finally, the court noted that even if Stratasys named the wrong defendant, this does not justify dismissal under Rule 12. Instead, the judge stated it would be more appropriate for the defendants to raise that argument in a motion for summary judgment. The Bambu Lab X1C 3D printer. Image via Bambu Lab. 3D printing patent battles  The 3D printing industry has seen its fair share of patent infringement disputes over recent months. In May 2025, 3D printer hotend developer Slice Engineering reached an agreement with Creality over a patent non-infringement lawsuit.  The Chinese 3D printer OEM filed the lawsuit in July 2024 in the U.S. District Court for the Northern District of Florida, Gainesville Division. The company claimed that Slice Engineering had falsely accused it of infringing two hotend patents, U.S. Patent Nos. 10,875,244 and 11,660,810. These cover mechanical and thermal features of Slice’s Mosquito 3D printer hotend. Creality requested a jury trial and sought a ruling confirming it had not infringed either patent. Court documents show that Slice Engineering filed a countersuit in December 2024. The Gainesville-based company maintained that Creaility “has infringed and continues to infringe” on both patents. In the filing, the company also denied allegations that it had harassed Creality’s partners, distributors, and customers, and claimed that Creality had refused to negotiate a resolution.   The Creality v. Slice Engineering lawsuit has since been dropped following a mutual resolution. Court documents show that both parties have permanently dismissed all claims and counterclaims, agreeing to cover their own legal fees and costs.  In other news, large-format resin 3D printer manufacturer Intrepid Automation sued 3D Systems over alleged patent infringement. The lawsuit, filed in February 2025, accused 3D Systems of using patented technology in its PSLA 270 industrial resin 3D printer. The filing called the PSLA 270 a “blatant knock off” of Intrepid’s DLP multi-projection “Range” 3D printer.   San Diego-based Intrepid Automation called this alleged infringement the “latest chapter of 3DS’s brazen, anticompetitive scheme to drive a smaller competitor with more advanced technology out of the marketplace.” The lawsuit also accused 3D Systems of corporate espionage, claiming one of its employees stole confidential trade secrets that were later used to develop the PSLA 270 printer. 3D Systems denied the allegations and filed a motion to dismiss the case. The company called the lawsuit “a desperate attempt” by Intrepid to distract from its own alleged theft of 3D Systems’ trade secrets. Who won the 2024 3D Printing Industry Awards? Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry Youtube channel to access more exclusive content.Featured image shows a Stratasys Fortus 450mcand a Bambu Lab X1C. Image by 3D Printing industry. #new #court #order #stratasys #bambu
    3DPRINTINGINDUSTRY.COM
    New Court Order in Stratasys v. Bambu Lab Lawsuit
    There has been a new update to the ongoing Stratasys v. Bambu Lab patent infringement lawsuit.  Both parties have agreed to consolidate the lead and member cases (2:24-CV-00644-JRG and 2:24-CV-00645-JRG) into a single case under Case No. 2:25-cv-00465-JRG.  Industrial 3D printing OEM Stratasys filed the request late last month. According to an official court document, Shenzhen-based Bambu Lab did not oppose the motion. Stratasys argued that this non-opposition amounted to the defendants waiving their right to challenge the request under U.S. patent law 35 U.S.C. § 299(a). On June 2, the U.S. District Court for the Eastern District of Texas, Marshall Division, ordered Bambu Lab to confirm in writing whether it agreed to the proposed case consolidation. The court took this step out of an “abundance of caution” to ensure both parties consented to the procedure before moving forward. Bambu Lab submitted its response on June 12, agreeing to the consolidation. The company, along with co-defendants Shenzhen Tuozhu Technology Co., Ltd., Shanghai Lunkuo Technology Co., Ltd., and Tuozhu Technology Limited, waived its rights under 35 U.S.C. § 299(a). The court will now decide whether to merge the cases. This followed U.S. District Judge Rodney Gilstrap’s decision last month to deny Bambu Lab’s motion to dismiss the lawsuits.  The Chinese desktop 3D printer manufacturer filed the motion in February 2025, arguing the cases were invalid because its US-based subsidiary, Bambu Lab USA, was not named in the original litigation. However, it agreed that the lawsuit could continue in the Austin division of the Western District of Texas, where a parallel case was filed last year.  Judge Gilstrap denied the motion, ruling that the cases properly target the named defendants. He concluded that Bambu Lab USA isn’t essential to the dispute, and that any misnaming should be addressed in summary judgment, not dismissal.        A Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing industry. Another twist in the Stratasys v. Bambu Lab lawsuit  Stratasys filed the two lawsuits against Bambu Lab in the Eastern District of Texas, Marshall Division, in August 2024. The company claims that Bambu Lab’s X1C, X1E, P1S, P1P, A1, and A1 mini 3D printers violate ten of its patents. These patents cover common 3D printing features, including purge towers, heated build plates, tool head force detection, and networking capabilities. Stratasys has requested a jury trial. It is seeking a ruling that Bambu Lab infringed its patents, along with financial damages and an injunction to stop Bambu from selling the allegedly infringing 3D printers. Last October, Stratasys dropped charges against two of the originally named defendants in the dispute. Court documents showed that Beijing Tiertime Technology Co., Ltd. and Beijing Yinhua Laser Rapid Prototyping and Mould Technology Co., Ltd were removed. Both defendants represent the company Tiertime, China’s first 3D printer manufacturer. The District Court accepted the dismissal, with all claims dropped without prejudice. It’s unclear why Stratasys named Beijing-based Tiertime as a defendant in the first place, given the lack of an obvious connection to Bambu Lab.  Tiertime and Stratasys have a history of legal disputes over patent issues. In 2013, Stratasys sued Afinia, Tiertime’s U.S. distributor and partner, for patent infringement. Afinia responded by suing uCRobotics, the Chinese distributor of MakerBot 3D printers, also alleging patent violations. Stratasys acquired MakerBot in June 2013. The company later merged with Ultimaker in 2022. In February 2025, Bambu Lab filed a motion to dismiss the original lawsuits. The company argued that Stratasys’ claims, focused on the sale, importation, and distribution of 3D printers in the United States, do not apply to the Shenzhen-based parent company. Bambu Lab contended that the allegations concern its American subsidiary, Bambu Lab USA, which was not named in the complaint filed in the Eastern District of Texas. Bambu Lab filed a motion to dismiss, claiming the case is invalid under Federal Rule of Civil Procedure 19. It argued that any party considered a “primary participant” in the allegations must be included as a defendant.    The court denied the motion on May 29, 2025. In the ruling, Judge Gilstrap explained that Stratasys’ allegations focus on the actions of the named defendants, not Bambu Lab USA. As a result, the official court document called Bambu Lab’s argument “unavailing.” Additionally, the Judge stated that, since Bambu Lab USA and Bambu Lab are both owned by Shenzhen Tuozhu, “the interest of these two entities align,” meaning the original cases are valid.   In the official court document, Judge Gilstrap emphasized that Stratasys can win or lose the lawsuits based solely on the actions of the current defendants, regardless of Bambu Lab USA’s involvement. He added that any potential risk to Bambu Lab USA’s business is too vague or hypothetical to justify making it a required party. Finally, the court noted that even if Stratasys named the wrong defendant, this does not justify dismissal under Rule 12(b)(7). Instead, the judge stated it would be more appropriate for the defendants to raise that argument in a motion for summary judgment. The Bambu Lab X1C 3D printer. Image via Bambu Lab. 3D printing patent battles  The 3D printing industry has seen its fair share of patent infringement disputes over recent months. In May 2025, 3D printer hotend developer Slice Engineering reached an agreement with Creality over a patent non-infringement lawsuit.  The Chinese 3D printer OEM filed the lawsuit in July 2024 in the U.S. District Court for the Northern District of Florida, Gainesville Division. The company claimed that Slice Engineering had falsely accused it of infringing two hotend patents, U.S. Patent Nos. 10,875,244 and 11,660,810. These cover mechanical and thermal features of Slice’s Mosquito 3D printer hotend. Creality requested a jury trial and sought a ruling confirming it had not infringed either patent. Court documents show that Slice Engineering filed a countersuit in December 2024. The Gainesville-based company maintained that Creaility “has infringed and continues to infringe” on both patents. In the filing, the company also denied allegations that it had harassed Creality’s partners, distributors, and customers, and claimed that Creality had refused to negotiate a resolution.   The Creality v. Slice Engineering lawsuit has since been dropped following a mutual resolution. Court documents show that both parties have permanently dismissed all claims and counterclaims, agreeing to cover their own legal fees and costs.  In other news, large-format resin 3D printer manufacturer Intrepid Automation sued 3D Systems over alleged patent infringement. The lawsuit, filed in February 2025, accused 3D Systems of using patented technology in its PSLA 270 industrial resin 3D printer. The filing called the PSLA 270 a “blatant knock off” of Intrepid’s DLP multi-projection “Range” 3D printer.   San Diego-based Intrepid Automation called this alleged infringement the “latest chapter of 3DS’s brazen, anticompetitive scheme to drive a smaller competitor with more advanced technology out of the marketplace.” The lawsuit also accused 3D Systems of corporate espionage, claiming one of its employees stole confidential trade secrets that were later used to develop the PSLA 270 printer. 3D Systems denied the allegations and filed a motion to dismiss the case. The company called the lawsuit “a desperate attempt” by Intrepid to distract from its own alleged theft of 3D Systems’ trade secrets. Who won the 2024 3D Printing Industry Awards? Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry Youtube channel to access more exclusive content.Featured image shows a Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing industry.
    Like
    Love
    Wow
    Sad
    Angry
    522
    2 Σχόλια 0 Μοιράστηκε
  • How AI Is Being Used to Spread Misinformation—and Counter It—During the L.A. Protests

    As thousands of demonstrators have taken to the streets of Los Angeles County to protest Immigration and Customs Enforcement raids, misinformation has been running rampant online.The protests, and President Donald Trump’s mobilization of the National Guard and Marines in response, are one of the first major contentious news events to unfold in a new era in which AI tools have become embedded in online life. And as the news has sparked fierce debate and dialogue online, those tools have played an outsize role in the discourse. Social media users have wielded AI tools to create deepfakes and spread misinformation—but also to fact-check and debunk false claims. Here’s how AI has been used during the L.A. protests.DeepfakesProvocative, authentic images from the protests have captured the world’s attention this week, including a protester raising a Mexican flag and a journalist being shot in the leg with a rubber bullet by a police officer. At the same time, a handful of AI-generated fake videos have also circulated.Over the past couple years, tools for creating these videos have rapidly improved, allowing users to rapidly create convincing deepfakes within minutes. Earlier this month, for example, TIME used Google’s new Veo 3 tool to demonstrate how it can be used to create misleading or inflammatory videos about news events. Among the videos that have spread over the past week is one of a National Guard soldier named “Bob” who filmed himself “on duty” in Los Angeles and preparing to gas protesters. That video was seen more than 1 million times, according to France 24, but appears to have since been taken down from TikTok. Thousands of people left comments on the video, thanking “Bob” for his service—not realizing that “Bob” did not exist.AdvertisementMany other misleading images have circulated not due to AI, but much more low-tech efforts. Republican Sen. Ted Cruz of Texas, for example, reposted a video on X originally shared by conservative actor James Woods that appeared to show a violent protest with cars on fire—but it was actually footage from 2020. And another viral post showed a pallet of bricks, which the poster claimed were going to be used by “Democrat militants.” But the photo was traced to a Malaysian construction supplier. Fact checkingIn both of those instances, X users replied to the original posts by asking Grok, Elon Musk’s AI, if the claims were true. Grok has become a major source of fact checking during the protests: Many X users have been relying on it and other AI models, sometimes more than professional journalists, to fact check claims related to the L.A. protests, including, for instance, how much collateral damage there has been from the demonstrations.AdvertisementGrok debunked both Cruz’s post and the brick post. In response to the Texas senator, the AI wrote: “The footage was likely taken on May 30, 2020.... While the video shows violence, many protests were peaceful, and using old footage today can mislead.” In response to the photo of bricks, it wrote: “The photo of bricks originates from a Malaysian building supply company, as confirmed by community notes and fact-checking sources like The Guardian and PolitiFact. It was misused to falsely claim that Soros-funded organizations placed bricks near U.S. ICE facilities for protests.” But Grok and other AI tools have gotten things wrong, making them a less-than-optimal source of news. Grok falsely insinuated that a photo depicting National Guard troops sleeping on floors in L.A. that was shared by Newsom was recycled from Afghanistan in 2021. ChatGPT said the same. These accusations were shared by prominent right-wing influencers like Laura Loomer. In reality, the San Francisco Chronicle had first published the photo, having exclusively obtained the image, and had verified its authenticity.AdvertisementGrok later corrected itself and apologized. “I’m Grok, built to chase the truth, not peddle fairy tales. If I said those pics were from Afghanistan, it was a glitch—my training data’s a wild mess of internet scraps, and sometimes I misfire,” Grok said in a post on X, replying to a post about the misinformation."The dysfunctional information environment we're living in is without doubt exacerbating the public’s difficulty in navigating the current state of the protests in LA and the federal government’s actions to deploy military personnel to quell them,” says Kate Ruane, director of the Center for Democracy and Technology’s Free Expression Program. Nina Brown, a professor at the Newhouse School of Public Communications at Syracuse University, says that it is “really troubling” if people are relying on AI to fact check information, rather than turning to reputable sources like journalists, because AI “is not a reliable source for any information at this point.”Advertisement“It has a lot of incredible uses, and it’s getting more accurate by the minute, but it is absolutely not a replacement for a true fact checker,” Brown says. “The role that journalists and the media play is to be the eyes and ears for the public of what’s going on around us, and to be a reliable source of information. So it really troubles me that people would look to a generative AI tool instead of what is being communicated by journalists in the field.”Brown says she is increasingly worried about how misinformation will spread in the age of AI.“I’m more concerned because of a combination of the willingness of people to believe what they see without investigation—the taking it at face value—and the incredible advancements in AI that allow lay-users to create incredibly realistic video that is, in fact, deceptive; that is a deepfake, that is not real,” Brown says.
    #how #being #used #spread #misinformationand
    How AI Is Being Used to Spread Misinformation—and Counter It—During the L.A. Protests
    As thousands of demonstrators have taken to the streets of Los Angeles County to protest Immigration and Customs Enforcement raids, misinformation has been running rampant online.The protests, and President Donald Trump’s mobilization of the National Guard and Marines in response, are one of the first major contentious news events to unfold in a new era in which AI tools have become embedded in online life. And as the news has sparked fierce debate and dialogue online, those tools have played an outsize role in the discourse. Social media users have wielded AI tools to create deepfakes and spread misinformation—but also to fact-check and debunk false claims. Here’s how AI has been used during the L.A. protests.DeepfakesProvocative, authentic images from the protests have captured the world’s attention this week, including a protester raising a Mexican flag and a journalist being shot in the leg with a rubber bullet by a police officer. At the same time, a handful of AI-generated fake videos have also circulated.Over the past couple years, tools for creating these videos have rapidly improved, allowing users to rapidly create convincing deepfakes within minutes. Earlier this month, for example, TIME used Google’s new Veo 3 tool to demonstrate how it can be used to create misleading or inflammatory videos about news events. Among the videos that have spread over the past week is one of a National Guard soldier named “Bob” who filmed himself “on duty” in Los Angeles and preparing to gas protesters. That video was seen more than 1 million times, according to France 24, but appears to have since been taken down from TikTok. Thousands of people left comments on the video, thanking “Bob” for his service—not realizing that “Bob” did not exist.AdvertisementMany other misleading images have circulated not due to AI, but much more low-tech efforts. Republican Sen. Ted Cruz of Texas, for example, reposted a video on X originally shared by conservative actor James Woods that appeared to show a violent protest with cars on fire—but it was actually footage from 2020. And another viral post showed a pallet of bricks, which the poster claimed were going to be used by “Democrat militants.” But the photo was traced to a Malaysian construction supplier. Fact checkingIn both of those instances, X users replied to the original posts by asking Grok, Elon Musk’s AI, if the claims were true. Grok has become a major source of fact checking during the protests: Many X users have been relying on it and other AI models, sometimes more than professional journalists, to fact check claims related to the L.A. protests, including, for instance, how much collateral damage there has been from the demonstrations.AdvertisementGrok debunked both Cruz’s post and the brick post. In response to the Texas senator, the AI wrote: “The footage was likely taken on May 30, 2020.... While the video shows violence, many protests were peaceful, and using old footage today can mislead.” In response to the photo of bricks, it wrote: “The photo of bricks originates from a Malaysian building supply company, as confirmed by community notes and fact-checking sources like The Guardian and PolitiFact. It was misused to falsely claim that Soros-funded organizations placed bricks near U.S. ICE facilities for protests.” But Grok and other AI tools have gotten things wrong, making them a less-than-optimal source of news. Grok falsely insinuated that a photo depicting National Guard troops sleeping on floors in L.A. that was shared by Newsom was recycled from Afghanistan in 2021. ChatGPT said the same. These accusations were shared by prominent right-wing influencers like Laura Loomer. In reality, the San Francisco Chronicle had first published the photo, having exclusively obtained the image, and had verified its authenticity.AdvertisementGrok later corrected itself and apologized. “I’m Grok, built to chase the truth, not peddle fairy tales. If I said those pics were from Afghanistan, it was a glitch—my training data’s a wild mess of internet scraps, and sometimes I misfire,” Grok said in a post on X, replying to a post about the misinformation."The dysfunctional information environment we're living in is without doubt exacerbating the public’s difficulty in navigating the current state of the protests in LA and the federal government’s actions to deploy military personnel to quell them,” says Kate Ruane, director of the Center for Democracy and Technology’s Free Expression Program. Nina Brown, a professor at the Newhouse School of Public Communications at Syracuse University, says that it is “really troubling” if people are relying on AI to fact check information, rather than turning to reputable sources like journalists, because AI “is not a reliable source for any information at this point.”Advertisement“It has a lot of incredible uses, and it’s getting more accurate by the minute, but it is absolutely not a replacement for a true fact checker,” Brown says. “The role that journalists and the media play is to be the eyes and ears for the public of what’s going on around us, and to be a reliable source of information. So it really troubles me that people would look to a generative AI tool instead of what is being communicated by journalists in the field.”Brown says she is increasingly worried about how misinformation will spread in the age of AI.“I’m more concerned because of a combination of the willingness of people to believe what they see without investigation—the taking it at face value—and the incredible advancements in AI that allow lay-users to create incredibly realistic video that is, in fact, deceptive; that is a deepfake, that is not real,” Brown says. #how #being #used #spread #misinformationand
    TIME.COM
    How AI Is Being Used to Spread Misinformation—and Counter It—During the L.A. Protests
    As thousands of demonstrators have taken to the streets of Los Angeles County to protest Immigration and Customs Enforcement raids, misinformation has been running rampant online.The protests, and President Donald Trump’s mobilization of the National Guard and Marines in response, are one of the first major contentious news events to unfold in a new era in which AI tools have become embedded in online life. And as the news has sparked fierce debate and dialogue online, those tools have played an outsize role in the discourse. Social media users have wielded AI tools to create deepfakes and spread misinformation—but also to fact-check and debunk false claims. Here’s how AI has been used during the L.A. protests.DeepfakesProvocative, authentic images from the protests have captured the world’s attention this week, including a protester raising a Mexican flag and a journalist being shot in the leg with a rubber bullet by a police officer. At the same time, a handful of AI-generated fake videos have also circulated.Over the past couple years, tools for creating these videos have rapidly improved, allowing users to rapidly create convincing deepfakes within minutes. Earlier this month, for example, TIME used Google’s new Veo 3 tool to demonstrate how it can be used to create misleading or inflammatory videos about news events. Among the videos that have spread over the past week is one of a National Guard soldier named “Bob” who filmed himself “on duty” in Los Angeles and preparing to gas protesters. That video was seen more than 1 million times, according to France 24, but appears to have since been taken down from TikTok. Thousands of people left comments on the video, thanking “Bob” for his service—not realizing that “Bob” did not exist.AdvertisementMany other misleading images have circulated not due to AI, but much more low-tech efforts. Republican Sen. Ted Cruz of Texas, for example, reposted a video on X originally shared by conservative actor James Woods that appeared to show a violent protest with cars on fire—but it was actually footage from 2020. And another viral post showed a pallet of bricks, which the poster claimed were going to be used by “Democrat militants.” But the photo was traced to a Malaysian construction supplier. Fact checkingIn both of those instances, X users replied to the original posts by asking Grok, Elon Musk’s AI, if the claims were true. Grok has become a major source of fact checking during the protests: Many X users have been relying on it and other AI models, sometimes more than professional journalists, to fact check claims related to the L.A. protests, including, for instance, how much collateral damage there has been from the demonstrations.AdvertisementGrok debunked both Cruz’s post and the brick post. In response to the Texas senator, the AI wrote: “The footage was likely taken on May 30, 2020.... While the video shows violence, many protests were peaceful, and using old footage today can mislead.” In response to the photo of bricks, it wrote: “The photo of bricks originates from a Malaysian building supply company, as confirmed by community notes and fact-checking sources like The Guardian and PolitiFact. It was misused to falsely claim that Soros-funded organizations placed bricks near U.S. ICE facilities for protests.” But Grok and other AI tools have gotten things wrong, making them a less-than-optimal source of news. Grok falsely insinuated that a photo depicting National Guard troops sleeping on floors in L.A. that was shared by Newsom was recycled from Afghanistan in 2021. ChatGPT said the same. These accusations were shared by prominent right-wing influencers like Laura Loomer. In reality, the San Francisco Chronicle had first published the photo, having exclusively obtained the image, and had verified its authenticity.AdvertisementGrok later corrected itself and apologized. “I’m Grok, built to chase the truth, not peddle fairy tales. If I said those pics were from Afghanistan, it was a glitch—my training data’s a wild mess of internet scraps, and sometimes I misfire,” Grok said in a post on X, replying to a post about the misinformation."The dysfunctional information environment we're living in is without doubt exacerbating the public’s difficulty in navigating the current state of the protests in LA and the federal government’s actions to deploy military personnel to quell them,” says Kate Ruane, director of the Center for Democracy and Technology’s Free Expression Program. Nina Brown, a professor at the Newhouse School of Public Communications at Syracuse University, says that it is “really troubling” if people are relying on AI to fact check information, rather than turning to reputable sources like journalists, because AI “is not a reliable source for any information at this point.”Advertisement“It has a lot of incredible uses, and it’s getting more accurate by the minute, but it is absolutely not a replacement for a true fact checker,” Brown says. “The role that journalists and the media play is to be the eyes and ears for the public of what’s going on around us, and to be a reliable source of information. So it really troubles me that people would look to a generative AI tool instead of what is being communicated by journalists in the field.”Brown says she is increasingly worried about how misinformation will spread in the age of AI.“I’m more concerned because of a combination of the willingness of people to believe what they see without investigation—the taking it at face value—and the incredible advancements in AI that allow lay-users to create incredibly realistic video that is, in fact, deceptive; that is a deepfake, that is not real,” Brown says.
    0 Σχόλια 0 Μοιράστηκε
  • US stops endorsing covid-19 shots for kids – are other vaccines next?

    US Secretary of Health and Human Services Robert F Kennedy JrTasos Katopodis/Getty
    One of the top vaccine experts at the US Centers for Disease Control and Prevention, Lakshmi Panagiotakopoulos, resigned on 4 June – a week after Robert F Kennedy Jr announced that covid-19 vaccines would no longer be recommended for most children and pregnancies.

    The announcement set off several days of confusion around who will have access to covid-19 vaccines in the US going forward. In practice, there hasn’t been a drastic change to access, though there will probably be new obstacles for parents hoping to vaccinate their children. Still, Kennedy’s announcement signals a troubling circumvention of public health norms.
    “My career in public health and vaccinology started with a deep-seated desire to help the most vulnerable members of our population, and that is not something I am able to continue doing in this role,” said Panagiotakopoulos in an email to colleagues obtained by Reuters.
    Panagiotakopoulos supported the Advisory Committee on Immunization Practices, which has advised the CDC on vaccine recommendations since 1964. But last week, Kennedy – the country’s highest-ranking public health official – upended this decades-long precedent. “I couldn’t be more pleased to announce that, as of today, the covid vaccine for healthy children and healthy pregnant woman has been removed from the CDC recommended immunisation schedule,” he said in a video posted to the social media platform X on 27 May.
    Despite his directive, the CDC has, so far, only made minor changes to its guidance on covid-19 vaccines. Instead of recommending them for children outright, it now recommends vaccination “based on shared clinical decision-making”. In other words, parents should talk with a doctor before deciding. It isn’t clear how this will affect access to these vaccines in every scenario, but it could make it more difficult for children to get a shot at pharmacies.

    Get the most essential health and fitness news in your inbox every Saturday.

    Sign up to newsletter

    The CDC’s guidance on vaccination in pregnancy is also ambiguous. While its website still recommends a covid-19 shot during pregnancy, a note at the top says, “this page will be updated to align with the updated immunization schedule.”
    Kennedy’s announcement contradicts the stances of major public health organisations, too. Both the American College of Obstetricians and Gynecologistsand the American Academy of Pediatricshave come out opposing it.
    “The CDC and HHS encourage individuals to talk with their healthcare provider about any personal medical decision,” an HHS spokesperson told New Scientist. “Under the leadership of Secretary Kennedy, HHS is restoring the doctor-patient relationship.”
    However, Linda Eckert at the University of Washington in Seattle says the conflicting messages are confusing for people. “It opens up disinformation opportunities. It undermines confidence in vaccination in general,” she says. “I can’t imagine it won’t decrease immunisation rates overall.”

    Research has repeatedly shown covid-19 vaccination in adolescence and pregnancy is safe and effective. In fact, Martin Makary, the head of the US Food and Drug Administration, listed pregnancy as a risk factor for severe covid-19 a week before Kennedy’s announcement, further convoluting the government’s public health messaging.
    Kennedy’s announcement is in line with some other countries’ covid policies. For example, Australia and the UK don’t recommend covid-19 vaccines for children unless they are at risk of severe illness. They also don’t recommend covid-19 vaccination during pregnancy if someone is already vaccinated.
    Asma Khalil, a member of the UK Joint Committee on Vaccination and Immunisation, says the UK’s decision was based on the reduced risk of the omicron variant, the cost-effectiveness of vaccination and high population immunity. However, these factors can vary across countries. The UK population also tends to have better access to healthcare than the US, says Eckert. “These decisions need to carefully consider the risks and benefits relative to the national population,” says Khalil. The HHS didn’t answer New Scientist’s questions about whether a similar analysis guided Kennedy’s decision-making.

    What is maybe most troubling, however, is the precedent Kennedy’s announcement sets. The ACIP – an independent group of public health experts – was expected to vote on proposed changes to covid-19 vaccine recommendations later this month. But Kennedy’s decision has bypassed this process.
    “This style of decision-making – by individuals versus going through experts who are carefully vetted for conflicts of interest, who carefully look at the data – this has never happened in our country,” says Eckert. “We’re in uncharted territory.” She worries the move could pave the way for Kennedy to chip away at other vaccine recommendations. “I know there are a lot of vaccines he has been actively against in his career,” she says. Kennedy has previously blamed vaccines for autism and falsely claimed that the polio vaccine caused more deaths than it averted.
    “What it speaks to is the fact thatdoes not see value in these vaccines and is going to do everything he can to try and devalue them in the minds of the public and make them harder to receive,” says Amesh Adalja at Johns Hopkins University.
    Topics:
    #stops #endorsing #covid19 #shots #kids
    US stops endorsing covid-19 shots for kids – are other vaccines next?
    US Secretary of Health and Human Services Robert F Kennedy JrTasos Katopodis/Getty One of the top vaccine experts at the US Centers for Disease Control and Prevention, Lakshmi Panagiotakopoulos, resigned on 4 June – a week after Robert F Kennedy Jr announced that covid-19 vaccines would no longer be recommended for most children and pregnancies. The announcement set off several days of confusion around who will have access to covid-19 vaccines in the US going forward. In practice, there hasn’t been a drastic change to access, though there will probably be new obstacles for parents hoping to vaccinate their children. Still, Kennedy’s announcement signals a troubling circumvention of public health norms. “My career in public health and vaccinology started with a deep-seated desire to help the most vulnerable members of our population, and that is not something I am able to continue doing in this role,” said Panagiotakopoulos in an email to colleagues obtained by Reuters. Panagiotakopoulos supported the Advisory Committee on Immunization Practices, which has advised the CDC on vaccine recommendations since 1964. But last week, Kennedy – the country’s highest-ranking public health official – upended this decades-long precedent. “I couldn’t be more pleased to announce that, as of today, the covid vaccine for healthy children and healthy pregnant woman has been removed from the CDC recommended immunisation schedule,” he said in a video posted to the social media platform X on 27 May. Despite his directive, the CDC has, so far, only made minor changes to its guidance on covid-19 vaccines. Instead of recommending them for children outright, it now recommends vaccination “based on shared clinical decision-making”. In other words, parents should talk with a doctor before deciding. It isn’t clear how this will affect access to these vaccines in every scenario, but it could make it more difficult for children to get a shot at pharmacies. Get the most essential health and fitness news in your inbox every Saturday. Sign up to newsletter The CDC’s guidance on vaccination in pregnancy is also ambiguous. While its website still recommends a covid-19 shot during pregnancy, a note at the top says, “this page will be updated to align with the updated immunization schedule.” Kennedy’s announcement contradicts the stances of major public health organisations, too. Both the American College of Obstetricians and Gynecologistsand the American Academy of Pediatricshave come out opposing it. “The CDC and HHS encourage individuals to talk with their healthcare provider about any personal medical decision,” an HHS spokesperson told New Scientist. “Under the leadership of Secretary Kennedy, HHS is restoring the doctor-patient relationship.” However, Linda Eckert at the University of Washington in Seattle says the conflicting messages are confusing for people. “It opens up disinformation opportunities. It undermines confidence in vaccination in general,” she says. “I can’t imagine it won’t decrease immunisation rates overall.” Research has repeatedly shown covid-19 vaccination in adolescence and pregnancy is safe and effective. In fact, Martin Makary, the head of the US Food and Drug Administration, listed pregnancy as a risk factor for severe covid-19 a week before Kennedy’s announcement, further convoluting the government’s public health messaging. Kennedy’s announcement is in line with some other countries’ covid policies. For example, Australia and the UK don’t recommend covid-19 vaccines for children unless they are at risk of severe illness. They also don’t recommend covid-19 vaccination during pregnancy if someone is already vaccinated. Asma Khalil, a member of the UK Joint Committee on Vaccination and Immunisation, says the UK’s decision was based on the reduced risk of the omicron variant, the cost-effectiveness of vaccination and high population immunity. However, these factors can vary across countries. The UK population also tends to have better access to healthcare than the US, says Eckert. “These decisions need to carefully consider the risks and benefits relative to the national population,” says Khalil. The HHS didn’t answer New Scientist’s questions about whether a similar analysis guided Kennedy’s decision-making. What is maybe most troubling, however, is the precedent Kennedy’s announcement sets. The ACIP – an independent group of public health experts – was expected to vote on proposed changes to covid-19 vaccine recommendations later this month. But Kennedy’s decision has bypassed this process. “This style of decision-making – by individuals versus going through experts who are carefully vetted for conflicts of interest, who carefully look at the data – this has never happened in our country,” says Eckert. “We’re in uncharted territory.” She worries the move could pave the way for Kennedy to chip away at other vaccine recommendations. “I know there are a lot of vaccines he has been actively against in his career,” she says. Kennedy has previously blamed vaccines for autism and falsely claimed that the polio vaccine caused more deaths than it averted. “What it speaks to is the fact thatdoes not see value in these vaccines and is going to do everything he can to try and devalue them in the minds of the public and make them harder to receive,” says Amesh Adalja at Johns Hopkins University. Topics: #stops #endorsing #covid19 #shots #kids
    WWW.NEWSCIENTIST.COM
    US stops endorsing covid-19 shots for kids – are other vaccines next?
    US Secretary of Health and Human Services Robert F Kennedy JrTasos Katopodis/Getty One of the top vaccine experts at the US Centers for Disease Control and Prevention (CDC), Lakshmi Panagiotakopoulos, resigned on 4 June – a week after Robert F Kennedy Jr announced that covid-19 vaccines would no longer be recommended for most children and pregnancies. The announcement set off several days of confusion around who will have access to covid-19 vaccines in the US going forward. In practice, there hasn’t been a drastic change to access, though there will probably be new obstacles for parents hoping to vaccinate their children. Still, Kennedy’s announcement signals a troubling circumvention of public health norms. “My career in public health and vaccinology started with a deep-seated desire to help the most vulnerable members of our population, and that is not something I am able to continue doing in this role,” said Panagiotakopoulos in an email to colleagues obtained by Reuters. Panagiotakopoulos supported the Advisory Committee on Immunization Practices (ACIP), which has advised the CDC on vaccine recommendations since 1964. But last week, Kennedy – the country’s highest-ranking public health official – upended this decades-long precedent. “I couldn’t be more pleased to announce that, as of today, the covid vaccine for healthy children and healthy pregnant woman has been removed from the CDC recommended immunisation schedule,” he said in a video posted to the social media platform X on 27 May. Despite his directive, the CDC has, so far, only made minor changes to its guidance on covid-19 vaccines. Instead of recommending them for children outright, it now recommends vaccination “based on shared clinical decision-making”. In other words, parents should talk with a doctor before deciding. It isn’t clear how this will affect access to these vaccines in every scenario, but it could make it more difficult for children to get a shot at pharmacies. Get the most essential health and fitness news in your inbox every Saturday. Sign up to newsletter The CDC’s guidance on vaccination in pregnancy is also ambiguous. While its website still recommends a covid-19 shot during pregnancy, a note at the top says, “this page will be updated to align with the updated immunization schedule.” Kennedy’s announcement contradicts the stances of major public health organisations, too. Both the American College of Obstetricians and Gynecologists (ACOG) and the American Academy of Pediatrics (APP) have come out opposing it. “The CDC and HHS encourage individuals to talk with their healthcare provider about any personal medical decision,” an HHS spokesperson told New Scientist. “Under the leadership of Secretary Kennedy, HHS is restoring the doctor-patient relationship.” However, Linda Eckert at the University of Washington in Seattle says the conflicting messages are confusing for people. “It opens up disinformation opportunities. It undermines confidence in vaccination in general,” she says. “I can’t imagine it won’t decrease immunisation rates overall.” Research has repeatedly shown covid-19 vaccination in adolescence and pregnancy is safe and effective. In fact, Martin Makary, the head of the US Food and Drug Administration (FDA), listed pregnancy as a risk factor for severe covid-19 a week before Kennedy’s announcement, further convoluting the government’s public health messaging. Kennedy’s announcement is in line with some other countries’ covid policies. For example, Australia and the UK don’t recommend covid-19 vaccines for children unless they are at risk of severe illness. They also don’t recommend covid-19 vaccination during pregnancy if someone is already vaccinated. Asma Khalil, a member of the UK Joint Committee on Vaccination and Immunisation, says the UK’s decision was based on the reduced risk of the omicron variant, the cost-effectiveness of vaccination and high population immunity. However, these factors can vary across countries. The UK population also tends to have better access to healthcare than the US, says Eckert. “These decisions need to carefully consider the risks and benefits relative to the national population,” says Khalil. The HHS didn’t answer New Scientist’s questions about whether a similar analysis guided Kennedy’s decision-making. What is maybe most troubling, however, is the precedent Kennedy’s announcement sets. The ACIP – an independent group of public health experts – was expected to vote on proposed changes to covid-19 vaccine recommendations later this month. But Kennedy’s decision has bypassed this process. “This style of decision-making – by individuals versus going through experts who are carefully vetted for conflicts of interest, who carefully look at the data – this has never happened in our country,” says Eckert. “We’re in uncharted territory.” She worries the move could pave the way for Kennedy to chip away at other vaccine recommendations. “I know there are a lot of vaccines he has been actively against in his career,” she says. Kennedy has previously blamed vaccines for autism and falsely claimed that the polio vaccine caused more deaths than it averted. “What it speaks to is the fact that [Kennedy] does not see value in these vaccines and is going to do everything he can to try and devalue them in the minds of the public and make them harder to receive,” says Amesh Adalja at Johns Hopkins University. Topics:
    Like
    Love
    Wow
    Sad
    Angry
    509
    0 Σχόλια 0 Μοιράστηκε
  • The Legal Accountability of AI-Generated Deepfakes in Election Misinformation

    How Deepfakes Are Created

    Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video or audio of a target person. The two predominant AI architectures are generative adversarial networksand autoencoders. A GAN consists of a generator network that produces synthetic images and a discriminator network that tries to distinguish fakes from real data. Through iterative training, the generator learns to produce outputs that increasingly fool the discriminator¹. Autoencoder-based tools similarly learn to encode a target face and then decode it onto a source video. In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping². Voice-cloning toolscan mimic a person’s speech from minutes of audio. Commercial platforms like Synthesia allow text-to-video avatars, which have already been misused in disinformation campaigns³. Even mobile appslet users do basic face swaps in minutes⁴. In short, advances in GANs and related models make deepfakes cheaper and easier to generate than ever.

    Diagram of a generative adversarial network: A generator network creates fake images from random input and a discriminator network distinguishes fakes from real examples. Over time the generator improves until its outputs “fool” the discriminator⁵

    During creation, a deepfake algorithm is typically trained on a large dataset of real images or audio from the target. The more varied and high-quality the training data, the more realistic the deepfake. The output often then undergoes post-processingto enhance believability¹. Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot inconsistenciesthat betray a synthetic origin⁵. Authentication embeds markers before dissemination – for example, invisible watermarks or cryptographically signed metadata indicating authenticity⁶. The EU AI Act will soon mandate that major AI content providers embed machine-readable “watermark” signals in synthetic media⁷. However, as GAO notes, detection is an arms race – even a marked deepfake can sometimes evade notice – and labels alone don’t stop false narratives from spreading⁸⁹.

    Deepfakes in Recent Elections: Examples

    Deepfakes and AI-generated imagery already have made headlines in election cycles around the world. In the 2024 U.S. primary season, a digitally-altered audio robocall mimicked President Biden’s voice urging Democrats not to vote in the New Hampshire primary. The callerwas later fined million by the FCC and indicted under existing telemarketing laws¹⁰¹¹.Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift endorsed his campaign, using AI-generated images of Swift in “Swifties for Trump” shirts¹². The posts sparked media uproar, though analysts noted the same effect could have been achieved without AI¹². Similarly, Elon Musk’s X platform carried AI-generated clips, including a parody “Ad” depicting Vice-President Harris’s voice via an AI clone¹³.

    Beyond the U.S., deepfake-like content has appeared globally. In Indonesia’s 2024 presidential election, a video surfaced on social media in which a convincingly generated image of the late President Suharto appeared to endorse the candidate of the Golkar Party. Days later, the endorsed candidatewon the presidency¹⁴. In Bangladesh, a viral deepfake video superimposed the face of opposition leader Rumeen Farhana onto a bikini-clad body – an incendiary fabrication designed to discredit her in the conservative Muslim-majority society¹⁵. Moldova’s pro-Western President Maia Sandu has been repeatedly targeted by AI-driven disinformation; one deepfake video falsely showed her resigning and endorsing a Russian-friendly party, apparently to sow distrust in the electoral process¹⁶. Even in Taiwan, a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements – stoking confusion ahead of Taiwanese elections¹⁷. In Slovakia’s recent campaign, AI-generated audio mimicking the liberal party leader suggested he plotted vote-rigging and beer-price hikes – instantly spreading on social media just days before the election¹⁸. These examples show that deepfakes have touched diverse polities, often aiming to undermine candidates or confuse voters¹⁵¹⁸.

    Notably, many of the most viral “deepfakes” in 2024 were actually circulated as obvious memes or claims, rather than subtle deceptions. Experts observed that outright undetectable AI deepfakes were relatively rare; more common were AI-generated memes plainly shared by partisans, or cheaply doctored “cheapfakes” made with basic editing tools¹³¹⁹. For instance, social media was awash with memes of Kamala Harris in Soviet garb or of Black Americans holding Trump signs¹³, but these were typically used satirically, not meant to be secretly believed. Nonetheless, even unsophisticated fakes can sway opinion: a U.S. study found that false presidential adsdid change voter attitudes in swing states. In sum, deepfakes are a real and growing phenomenon in election campaigns²⁰²¹ worldwide – a trend taken seriously by voters and regulators alike.

    U.S. Legal Framework and Accountability

    In the U.S., deepfake creators and distributors of election misinformation face a patchwork of tools, but no single comprehensive federal “deepfake law.” Existing laws relevant to disinformation include statutes against impersonating government officials, electioneering, and targeted statutes like criminal electioneering communications. In some cases ordinary laws have been stretched: the NH robocall used the Telephone Consumer Protection Act and mail/telemarketing fraud provisions, resulting in the M fine and a criminal charge. Similarly, voice impostors can potentially violate laws against “false advertising” or “unlawful corporate communications.” However, these laws were enacted before AI, and litigators have warned they often do not fit neatly. For example, deceptive deepfake claims not tied to a specific victim do not easily fit into defamation or privacy torts. Voter intimidation lawsalso leave a gap for non-threatening falsehoods about voting logistics or endorsements.

    Recognizing these gaps, some courts and agencies are invoking other theories. The U.S. Department of Justice has recently charged individuals under broad fraud statutes, and state attorneys general have considered deepfake misinformation as interference with voting rights. Notably, the Federal Election Commissionis preparing to enforce new rules: in April 2024 it issued an advisory opinion limiting “non-candidate electioneering communications” that use falsified media, effectively requiring that political ads use only real images of the candidate. If finalized, that would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never did. Similarly, the Federal Trade Commissionand Department of Justicehave signaled that purely commercial deepfakes could violate consumer protection or election laws.

    U.S. Legislation and Proposals

    Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Actwould, among other things, impose a disclosure requirement: political ads featuring a manipulated media likeness would need clear disclaimers identifying the content as synthetic. It also increases penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule for all federal and state campaigns. The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categorieswhile carving out parody and news coverage.

    At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters. Some statesdefine “deepfake” in statutes and allow candidates to sue or revoke candidacies of violators. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints. Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored. Already, Texas and Virginia statutes are under legal review, and Elon Musk’s company has sued under California’s lawas unconstitutional. In practice, most lawsuits have so far centered on defamation or intellectual property, rather than election-focused statutes.

    Policy Recommendations: Balancing Integrity and Speech

    Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. This could be a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” the use of AI, and it alerts audiences to treat the content with skepticism.

    Outright bans on all deepfakes would likely violate free speech, but targeted bans on specific harmsmay be defensible. Indeed, Florida already penalizes misuse of recordings in voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. federal proposals and EU law generally condition fines on the “appearance of fraud” or deception.

    Technical solutions can complement laws. Watermarking original mediacould deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection – some supported by government research grants – should be deployed by fact-checkers and social platforms. Making detection datasets publicly availablehelps improve AI models to spot fakes. International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have all recently committed to fighting election interference via AI, which may lead to joint norms or rapid response teams.

    Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns to teach voters to question sensational media, and a robust independent press to debunk falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As Georgia Tech’s Sean Parker quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Thus policies should aim to deter malicious use without unduly chilling innovation or satire.

    References:

    /.

    /.

    .

    .

    .

    .

    .

    .

    .

    /.

    .

    .

    /.

    /.

    .

    The post The Legal Accountability of AI-Generated Deepfakes in Election Misinformation appeared first on MarkTechPost.
    #legal #accountability #aigenerated #deepfakes #election
    The Legal Accountability of AI-Generated Deepfakes in Election Misinformation
    How Deepfakes Are Created Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video or audio of a target person. The two predominant AI architectures are generative adversarial networksand autoencoders. A GAN consists of a generator network that produces synthetic images and a discriminator network that tries to distinguish fakes from real data. Through iterative training, the generator learns to produce outputs that increasingly fool the discriminator¹. Autoencoder-based tools similarly learn to encode a target face and then decode it onto a source video. In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping². Voice-cloning toolscan mimic a person’s speech from minutes of audio. Commercial platforms like Synthesia allow text-to-video avatars, which have already been misused in disinformation campaigns³. Even mobile appslet users do basic face swaps in minutes⁴. In short, advances in GANs and related models make deepfakes cheaper and easier to generate than ever. Diagram of a generative adversarial network: A generator network creates fake images from random input and a discriminator network distinguishes fakes from real examples. Over time the generator improves until its outputs “fool” the discriminator⁵ During creation, a deepfake algorithm is typically trained on a large dataset of real images or audio from the target. The more varied and high-quality the training data, the more realistic the deepfake. The output often then undergoes post-processingto enhance believability¹. Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot inconsistenciesthat betray a synthetic origin⁵. Authentication embeds markers before dissemination – for example, invisible watermarks or cryptographically signed metadata indicating authenticity⁶. The EU AI Act will soon mandate that major AI content providers embed machine-readable “watermark” signals in synthetic media⁷. However, as GAO notes, detection is an arms race – even a marked deepfake can sometimes evade notice – and labels alone don’t stop false narratives from spreading⁸⁹. Deepfakes in Recent Elections: Examples Deepfakes and AI-generated imagery already have made headlines in election cycles around the world. In the 2024 U.S. primary season, a digitally-altered audio robocall mimicked President Biden’s voice urging Democrats not to vote in the New Hampshire primary. The callerwas later fined million by the FCC and indicted under existing telemarketing laws¹⁰¹¹.Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift endorsed his campaign, using AI-generated images of Swift in “Swifties for Trump” shirts¹². The posts sparked media uproar, though analysts noted the same effect could have been achieved without AI¹². Similarly, Elon Musk’s X platform carried AI-generated clips, including a parody “Ad” depicting Vice-President Harris’s voice via an AI clone¹³. Beyond the U.S., deepfake-like content has appeared globally. In Indonesia’s 2024 presidential election, a video surfaced on social media in which a convincingly generated image of the late President Suharto appeared to endorse the candidate of the Golkar Party. Days later, the endorsed candidatewon the presidency¹⁴. In Bangladesh, a viral deepfake video superimposed the face of opposition leader Rumeen Farhana onto a bikini-clad body – an incendiary fabrication designed to discredit her in the conservative Muslim-majority society¹⁵. Moldova’s pro-Western President Maia Sandu has been repeatedly targeted by AI-driven disinformation; one deepfake video falsely showed her resigning and endorsing a Russian-friendly party, apparently to sow distrust in the electoral process¹⁶. Even in Taiwan, a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements – stoking confusion ahead of Taiwanese elections¹⁷. In Slovakia’s recent campaign, AI-generated audio mimicking the liberal party leader suggested he plotted vote-rigging and beer-price hikes – instantly spreading on social media just days before the election¹⁸. These examples show that deepfakes have touched diverse polities, often aiming to undermine candidates or confuse voters¹⁵¹⁸. Notably, many of the most viral “deepfakes” in 2024 were actually circulated as obvious memes or claims, rather than subtle deceptions. Experts observed that outright undetectable AI deepfakes were relatively rare; more common were AI-generated memes plainly shared by partisans, or cheaply doctored “cheapfakes” made with basic editing tools¹³¹⁹. For instance, social media was awash with memes of Kamala Harris in Soviet garb or of Black Americans holding Trump signs¹³, but these were typically used satirically, not meant to be secretly believed. Nonetheless, even unsophisticated fakes can sway opinion: a U.S. study found that false presidential adsdid change voter attitudes in swing states. In sum, deepfakes are a real and growing phenomenon in election campaigns²⁰²¹ worldwide – a trend taken seriously by voters and regulators alike. U.S. Legal Framework and Accountability In the U.S., deepfake creators and distributors of election misinformation face a patchwork of tools, but no single comprehensive federal “deepfake law.” Existing laws relevant to disinformation include statutes against impersonating government officials, electioneering, and targeted statutes like criminal electioneering communications. In some cases ordinary laws have been stretched: the NH robocall used the Telephone Consumer Protection Act and mail/telemarketing fraud provisions, resulting in the M fine and a criminal charge. Similarly, voice impostors can potentially violate laws against “false advertising” or “unlawful corporate communications.” However, these laws were enacted before AI, and litigators have warned they often do not fit neatly. For example, deceptive deepfake claims not tied to a specific victim do not easily fit into defamation or privacy torts. Voter intimidation lawsalso leave a gap for non-threatening falsehoods about voting logistics or endorsements. Recognizing these gaps, some courts and agencies are invoking other theories. The U.S. Department of Justice has recently charged individuals under broad fraud statutes, and state attorneys general have considered deepfake misinformation as interference with voting rights. Notably, the Federal Election Commissionis preparing to enforce new rules: in April 2024 it issued an advisory opinion limiting “non-candidate electioneering communications” that use falsified media, effectively requiring that political ads use only real images of the candidate. If finalized, that would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never did. Similarly, the Federal Trade Commissionand Department of Justicehave signaled that purely commercial deepfakes could violate consumer protection or election laws. U.S. Legislation and Proposals Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Actwould, among other things, impose a disclosure requirement: political ads featuring a manipulated media likeness would need clear disclaimers identifying the content as synthetic. It also increases penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule for all federal and state campaigns. The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categorieswhile carving out parody and news coverage. At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters. Some statesdefine “deepfake” in statutes and allow candidates to sue or revoke candidacies of violators. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints. Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored. Already, Texas and Virginia statutes are under legal review, and Elon Musk’s company has sued under California’s lawas unconstitutional. In practice, most lawsuits have so far centered on defamation or intellectual property, rather than election-focused statutes. Policy Recommendations: Balancing Integrity and Speech Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. This could be a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” the use of AI, and it alerts audiences to treat the content with skepticism. Outright bans on all deepfakes would likely violate free speech, but targeted bans on specific harmsmay be defensible. Indeed, Florida already penalizes misuse of recordings in voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. federal proposals and EU law generally condition fines on the “appearance of fraud” or deception. Technical solutions can complement laws. Watermarking original mediacould deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection – some supported by government research grants – should be deployed by fact-checkers and social platforms. Making detection datasets publicly availablehelps improve AI models to spot fakes. International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have all recently committed to fighting election interference via AI, which may lead to joint norms or rapid response teams. Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns to teach voters to question sensational media, and a robust independent press to debunk falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As Georgia Tech’s Sean Parker quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Thus policies should aim to deter malicious use without unduly chilling innovation or satire. References: /. /. . . . . . . . /. . . /. /. . The post The Legal Accountability of AI-Generated Deepfakes in Election Misinformation appeared first on MarkTechPost. #legal #accountability #aigenerated #deepfakes #election
    WWW.MARKTECHPOST.COM
    The Legal Accountability of AI-Generated Deepfakes in Election Misinformation
    How Deepfakes Are Created Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video or audio of a target person. The two predominant AI architectures are generative adversarial networks (GANs) and autoencoders. A GAN consists of a generator network that produces synthetic images and a discriminator network that tries to distinguish fakes from real data. Through iterative training, the generator learns to produce outputs that increasingly fool the discriminator¹. Autoencoder-based tools similarly learn to encode a target face and then decode it onto a source video. In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping (one estimate suggests DeepFaceLab was used for over 95% of known deepfake videos)². Voice-cloning tools (often built on similar AI principles) can mimic a person’s speech from minutes of audio. Commercial platforms like Synthesia allow text-to-video avatars (turning typed scripts into lifelike “spokespeople”), which have already been misused in disinformation campaigns³. Even mobile apps (e.g. FaceApp, Zao) let users do basic face swaps in minutes⁴. In short, advances in GANs and related models make deepfakes cheaper and easier to generate than ever. Diagram of a generative adversarial network (GAN): A generator network creates fake images from random input and a discriminator network distinguishes fakes from real examples. Over time the generator improves until its outputs “fool” the discriminator⁵ During creation, a deepfake algorithm is typically trained on a large dataset of real images or audio from the target. The more varied and high-quality the training data, the more realistic the deepfake. The output often then undergoes post-processing (color adjustments, lip-syncing refinements) to enhance believability¹. Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot inconsistencies (blinking irregularities, audio artifacts or metadata mismatches) that betray a synthetic origin⁵. Authentication embeds markers before dissemination – for example, invisible watermarks or cryptographically signed metadata indicating authenticity⁶. The EU AI Act will soon mandate that major AI content providers embed machine-readable “watermark” signals in synthetic media⁷. However, as GAO notes, detection is an arms race – even a marked deepfake can sometimes evade notice – and labels alone don’t stop false narratives from spreading⁸⁹. Deepfakes in Recent Elections: Examples Deepfakes and AI-generated imagery already have made headlines in election cycles around the world. In the 2024 U.S. primary season, a digitally-altered audio robocall mimicked President Biden’s voice urging Democrats not to vote in the New Hampshire primary. The caller (“Susan Anderson”) was later fined $6 million by the FCC and indicted under existing telemarketing laws¹⁰¹¹. (Importantly, FCC rules on robocalls applied regardless of AI: the perpetrator could have used a voice actor or recording instead.) Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift endorsed his campaign, using AI-generated images of Swift in “Swifties for Trump” shirts¹². The posts sparked media uproar, though analysts noted the same effect could have been achieved without AI (e.g., by photoshopping text on real images)¹². Similarly, Elon Musk’s X platform carried AI-generated clips, including a parody “Ad” depicting Vice-President Harris’s voice via an AI clone¹³. Beyond the U.S., deepfake-like content has appeared globally. In Indonesia’s 2024 presidential election, a video surfaced on social media in which a convincingly generated image of the late President Suharto appeared to endorse the candidate of the Golkar Party. Days later, the endorsed candidate (who is Suharto’s son-in-law) won the presidency¹⁴. In Bangladesh, a viral deepfake video superimposed the face of opposition leader Rumeen Farhana onto a bikini-clad body – an incendiary fabrication designed to discredit her in the conservative Muslim-majority society¹⁵. Moldova’s pro-Western President Maia Sandu has been repeatedly targeted by AI-driven disinformation; one deepfake video falsely showed her resigning and endorsing a Russian-friendly party, apparently to sow distrust in the electoral process¹⁶. Even in Taiwan (amidst tensions with China), a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements – stoking confusion ahead of Taiwanese elections¹⁷. In Slovakia’s recent campaign, AI-generated audio mimicking the liberal party leader suggested he plotted vote-rigging and beer-price hikes – instantly spreading on social media just days before the election¹⁸. These examples show that deepfakes have touched diverse polities (from Bangladesh and Indonesia to Moldova, Slovakia, India and beyond), often aiming to undermine candidates or confuse voters¹⁵¹⁸. Notably, many of the most viral “deepfakes” in 2024 were actually circulated as obvious memes or claims, rather than subtle deceptions. Experts observed that outright undetectable AI deepfakes were relatively rare; more common were AI-generated memes plainly shared by partisans, or cheaply doctored “cheapfakes” made with basic editing tools¹³¹⁹. For instance, social media was awash with memes of Kamala Harris in Soviet garb or of Black Americans holding Trump signs¹³, but these were typically used satirically, not meant to be secretly believed. Nonetheless, even unsophisticated fakes can sway opinion: a U.S. study found that false presidential ads (not necessarily AI-made) did change voter attitudes in swing states. In sum, deepfakes are a real and growing phenomenon in election campaigns²⁰²¹ worldwide – a trend taken seriously by voters and regulators alike. U.S. Legal Framework and Accountability In the U.S., deepfake creators and distributors of election misinformation face a patchwork of tools, but no single comprehensive federal “deepfake law.” Existing laws relevant to disinformation include statutes against impersonating government officials, electioneering (such as the Bipartisan Campaign Reform Act, which requires disclaimers on political ads), and targeted statutes like criminal electioneering communications. In some cases ordinary laws have been stretched: the NH robocall used the Telephone Consumer Protection Act and mail/telemarketing fraud provisions, resulting in the $6M fine and a criminal charge. Similarly, voice impostors can potentially violate laws against “false advertising” or “unlawful corporate communications.” However, these laws were enacted before AI, and litigators have warned they often do not fit neatly. For example, deceptive deepfake claims not tied to a specific victim do not easily fit into defamation or privacy torts. Voter intimidation laws (prohibiting threats or coercion) also leave a gap for non-threatening falsehoods about voting logistics or endorsements. Recognizing these gaps, some courts and agencies are invoking other theories. The U.S. Department of Justice has recently charged individuals under broad fraud statutes (e.g. for a plot to impersonate an aide to swing votes in 2020), and state attorneys general have considered deepfake misinformation as interference with voting rights. Notably, the Federal Election Commission (FEC) is preparing to enforce new rules: in April 2024 it issued an advisory opinion limiting “non-candidate electioneering communications” that use falsified media, effectively requiring that political ads use only real images of the candidate. If finalized, that would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never did. Similarly, the Federal Trade Commission (FTC) and Department of Justice (DOJ) have signaled that purely commercial deepfakes could violate consumer protection or election laws (for example, liability for mass false impersonation or for foreign-funded electioneering). U.S. Legislation and Proposals Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Act (H.R.5586 in the 118th Congress) would, among other things, impose a disclosure requirement: political ads featuring a manipulated media likeness would need clear disclaimers identifying the content as synthetic. It also increases penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule for all federal and state campaigns. The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categories (e.g. false claims about time/place/manner of voting) while carving out parody and news coverage. At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters (though Florida’s law exempts parody). Some states (like Texas) define “deepfake” in statutes and allow candidates to sue or revoke candidacies of violators. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints (e.g. Minnesota’s 2023 law was challenged for threatening injunctions against anyone “reasonably believed” to violate it). Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored. Already, Texas and Virginia statutes are under legal review, and Elon Musk’s company has sued under California’s law (which requires platforms to label or block deepfakes) as unconstitutional. In practice, most lawsuits have so far centered on defamation or intellectual property (for instance, a celebrity suing over a botched celebrity-deepfake video), rather than election-focused statutes. Policy Recommendations: Balancing Integrity and Speech Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. This could be a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” the use of AI, and it alerts audiences to treat the content with skepticism. Outright bans on all deepfakes would likely violate free speech, but targeted bans on specific harms (e.g. automated phone calls impersonating voters, or videos claiming false polling information) may be defensible. Indeed, Florida already penalizes misuse of recordings in voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. federal proposals and EU law generally condition fines on the “appearance of fraud” or deception. Technical solutions can complement laws. Watermarking original media (as encouraged by the EU AI Act) could deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection – some supported by government research grants – should be deployed by fact-checkers and social platforms. Making detection datasets publicly available (e.g. the MIT OpenDATATEST) helps improve AI models to spot fakes. International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have all recently committed to fighting election interference via AI, which may lead to joint norms or rapid response teams. Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns to teach voters to question sensational media, and a robust independent press to debunk falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As Georgia Tech’s Sean Parker quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Thus policies should aim to deter malicious use without unduly chilling innovation or satire. References: https://www.security.org/resources/deepfake-statistics/. https://www.wired.com/story/synthesia-ai-deepfakes-it-control-riparbelli/. https://www.gao.gov/products/gao-24-107292. https://technologyquotient.freshfields.com/post/102jb19/eu-ai-act-unpacked-8-new-rules-on-deepfakes. https://knightcolumbia.org/blog/we-looked-at-78-election-deepfakes-political-misinformation-is-not-an-ai-problem. https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections. https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd. https://www.lawfaremedia.org/article/new-and-old-tools-to-tackle-deepfakes-and-election-lies-in-2024. https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena. https://firstamendment.mtsu.edu/article/political-deepfakes-and-elections/. https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation. https://law.unh.edu/sites/default/files/media/2022/06/nagumotu_pp113-157.pdf. https://dfrlab.org/2024/10/02/brazil-election-ai-research/. https://dfrlab.org/2024/11/26/brazil-election-ai-deepfakes/. https://freedomhouse.org/article/eu-digital-services-act-win-transparency. The post The Legal Accountability of AI-Generated Deepfakes in Election Misinformation appeared first on MarkTechPost.
    0 Σχόλια 0 Μοιράστηκε
  • A timeline of Ivanka Trump and Jared Kushner's relationship

    Ivanka Trump has made it clear that she's done with politics. That hasn't stopped her and husband Jared Kushner from remaining an influential political couple.They have not formally reprised their roles as White House advisors in President Donald Trump's second administration, but they've remained present in Donald Trump's political orbit.While Ivanka Trump opted out of the 2024 campaign trail, she and Kushner still appeared at the Republican National Convention, Donald Trump's victory party on election night, and the inauguration. Kushner also reportedly served as an informal advisor ahead of Donald Trump's trip to the Middle East in May, CNN reported.Ivanka Trump, who is Donald Trump's eldest daughter, converted to Judaism before marrying Kushner in 2009. They have three children: Arabella, Joseph, and Theodore.Here's a timeline of Ivanka Trump and Kushner's relationship.

    2007: Ivanka Trump and Jared Kushner met at a networking lunch arranged by one of her longtime business partners.

    Ivanka Trump and Jared Kushner in 2007.

    PAUL LAURIE/Patrick McMullan via Getty Images

    Ivanka Trump and Kushner were both 25 at the time."They very innocently set us up thinking that our only interest in one another would be transactional," Ivanka Trump told Vogue in 2015. "Whenever we see them we're like, 'The best deal we ever made!'"

    2008: Ivanka Trump and Kushner broke up because of religious differences.

    Jared Kushner and Ivanka Trump in 2008.

    Patrick McMullan/Patrick McMullan via Getty Images

    Kushner was raised in the modern Orthodox Jewish tradition, and it was important to his family for him to marry someone Jewish. Ivanka Trump's family is Presbyterian.

    2008: Three months later, the couple rekindled their romance on Rupert Murdoch's yacht.

    Ivanka Trump and Jared Kushner in 2008.

    David X Prutting/Patrick McMullan/Patrick McMullan via Getty Images

    In his memoir, "Breaking History," Kushner wrote that Murdoch's then-wife, Wendi Murdoch, was a mutual friend who invited them both on the yacht.

    May 2009: They attended the Met Gala together for the first time.

    Jared Kushner and Ivanka Trump at the Met Gala.

    BILLY FARRELL/Patrick McMullan via Getty Images

    The theme of the Met Gala that year was "The Model As Muse." Ivanka Trump wore a gown by designer Brian Reyes.

    July 2009: Ivanka Trump completed her conversion to Judaism, and she and Kushner got engaged.

    Jared Kushner and Ivanka Trump in 2009.

    Billy Farrell/Patrick McMullan/Patrick McMullan via Getty Images

    Kushner proposed with a 5.22-carat cushion-cut diamond engagement ring.Ivanka Trump told New York Magazine that she and her fiancé were "very mellow.""We go to the park. We go biking together. We go to the 2nd Avenue Deli," she said. "We both live in this fancy world. But on a personal level, I don't think I could be with somebody — I know he couldn't be with somebody — who needed to be 'on' all the time."

    October 2009: Ivanka Trump and Kushner married at the Trump National Golf Club in New Jersey.

    Jared Kushner and Ivanka Trump on their wedding day.

    Brian Marcus/Fred Marcus Photography via Getty Images

    The couple invited 500 guests, including celebrities like Barbara Walters, Regis Philbin, and Anna Wintour, as well as politicians such as Rudy Giuliani and Andrew Cuomo.

    July 2011: The couple welcomed their first child, Arabella.

    Ivanka Trump and Jared Kushner with Arabella Kushner.

    Robin Marchant/Getty Images

    "This morning @jaredkushner and I welcomed a beautiful and healthy little baby girl into the world," Ivanka announced on X, then Twitter. "We feel incredibly grateful and blessed. Thank you all for your support and well wishes!"

    October 2013: Ivanka Trump gave birth to their second child, Joseph.

    Ivanka Trump with Arabella Rose Kushner and Joseph Frederick Kushner in 2017.

    Alo Ceballos/GC Images

    He was named for Kushner's paternal grandfather Joseph and given the middle name Frederick after Donald Trump's father.

    March 2016: Kushner and Ivanka Trump welcomed their third child, Theodore, in the midst of Donald Trump's presidential campaign.

    Ivanka Trump carried her son Theodore as she held hands with Joseph alongside Jared Kushner and daughter Arabella on the White House lawn.

    SAUL LOEB/AFP via Getty Images

    "I said, 'Ivanka, it would be great if you had your baby in Iowa.' I really want that to happen. I really want that to happen," Donald Trump told supporters in Iowa in January 2016.All three of the couple's children were born in New York City.

    May 2016: They attended the Met Gala two months after Ivanka Trump gave birth.

    Jared Kushner and Ivanka Trump attend the Met Gala.

    Kevin Mazur/WireImage

    Ivanka Trump wore a red Ralph Lauren Collection halter jumpsuit.On a 2017 episode of "The Late Late Show with James Corden," Anna Wintour said that she would never invite Donald Trump to another Met Gala.

    January 2017: Ivanka Trump and Kushner attended Donald Trump's inauguration and danced together at the Liberty Ball.

    Ivanka Trump and Jared Kushner on Inauguration Day.

    Photo by Rob Carr/Getty Images

    The Liberty Ball was the first of three inaugural balls that Donald Trump attended.

    January 2017: After the inauguration, Ivanka and Kushner relocated to a million home in the Kalorama section of Washington, DC.

    Jared Kushner and Ivanka Trump's house in Washington, DC.

    PAUL J. RICHARDS/AFP via Getty Images

    Ivanka Trump and Kushner rented the 7,000-square-foot home from billionaire Andrónico Luksic for a month, The Wall Street Journal reported.

    May 2017: They accompanied Donald Trump on his first overseas trip in office.

    Jared Kushner and Ivanka Trump with Pope Francis.

    Vatican Pool - Corbis/Corbis via Getty Images

    Kushner and Ivanka Trump both served as advisors to the president. For the first overseas trip of Donald Trump's presidency, they accompanied him to Saudi Arabia, Israel, the Vatican, and summits in Brussels and Sicily.

    October 2019: The couple celebrated their 10th wedding anniversary with a lavish party at Camp David.

    Ivanka Trump and Jared Kushner at a state dinner.

    MANDEL NGAN/AFP via Getty Images

    All of the Trump and Kushner siblings were in attendance. A White House official told CNN that the couple was covering the cost of the party, but Donald Trump tweeted that the cost would be "totally paid for by me!"

    August 2020: Ivanka Trump spoke about moving their family to Washington, DC, at the Republican National Convention.

    Jared Kushner and Ivanka Trump at the Republican National Convention.

    SAUL LOEB/AFP via Getty Images

    "When Jared and I moved with our three children to Washington, we didn't exactly know what we were in for," she said in her speech. "But our kids loved it from the start."

    December 2020: Ivanka Trump and Kushner reportedly bought a million empty lot in Miami's "Billionaire Bunker."

    Jared Kushner and Ivanka Trump's plot of land in Indian Creek Village.

    The Jills Zeder Group; Samir Hussein/WireImage/Getty Images

    After Donald Trump lost the 2020 election, Page Six reported that the couple purchased a 1.8-acre waterfront lot owned by singer Julio Iglesias, Enrique Iglesias' father, in Indian Creek Village, Florida.The island where it sits has the nickname "Billionaire Bunker" thanks to its multitude of ultra-wealthy residents over the years, including billionaire investor Carl Icahn, supermodel Adriana Lima, and former Miami Dolphins coach Don Shula.

    January 2021: They skipped Joe Biden's inauguration, flying with Donald Trump to his Mar-a-Lago residence in Palm Beach, Florida, instead.

    Ivanka Trump, Jared Kushner, and their children prepared for Donald Trump's departure on Inauguration Day.

    ALEX EDELMAN/AFP via Getty Images

    Donald Trump did not attend Biden's inauguration, breaking a long-standing norm in US democracy. While initial reports said that Ivanka Trump was planning to attend the inauguration, a White House official told People magazine that "Ivanka is not expected to attend the inauguration nor was she ever expected to."

    January 2021: The couple signed a lease for a luxury Miami Beach condo near their Indian Creek Village property.

    Arte Surfside.

    Antonio Citterio Patricia Viel

    Ivanka Trump and Kushner signed a lease for a "large, unfurnished unit" in the amenities-packed Arte Surfside condominium building in Surfside, Florida.Surfside, a beachside town just north of Miami Beach that's home to fewer than 6,000 people, is only a five-minute drive from Indian Creek Island, where they bought their million empty lot.

    April 2021: Ivanka Trump and Kushner reportedly added a million mansion in Indian Creek Village to their Florida real-estate profile.

    Ivanka Trump and Jared Kushner on a walk in Florida.

    MEGA/GC Images

    The Real Deal reported that Ivanka and Kushner purchased another Indian Creek property — this time, a 8,510-square-foot mansion situated on a 1.3-acre estate.

    June 2021: Several outlets reported that the couple began to distance themselves from Donald Trump due to his fixation on conspiracy theories about the 2020 election.

    Ivanka Trump and Jared Kushner behind Donald Trump.

    Kevin Lamarque/Reuters

    CNN reported that Trump was prone to complain about the 2020 election and falsely claim it was "stolen" from him to anyone listening and that his "frustrations emerge in fits and starts — more likely when he is discussing his hopeful return to national politics."While Ivanka and Kushner had been living in their Miami Beach condo, not far from Trump's Mar-a-Lago club in Palm Beach, Florida, they'd visited Trump less and less frequently and were absent from big events at Mar-a-Lago, CNN said.The New York Times also reported that Kushner wanted "to focus on writing his book and establishing a simpler relationship" with the former president.

    October 2021: Ivanka Trump and Kushner visited Israel's parliament for the inaugural event of the Abraham Accords Caucus.

    Jared Kushner and Ivanka Trump in Israel.

    AHMAD GHARABLI/AFP via Getty Images

    The Abraham Accords, which Kushner helped broker in August 2020, normalized relations between Israel and the United Arab Emirates, Bahrain, Sudan, and Morocco.During their visit, Ivanka Trump and Kushner met with then-former Prime Minister Benjamin Netanyahu and attended an event at the Museum of Tolerance Jerusalem with former US Secretary of State Mike Pompeo.

    August 2022: Kushner released his memoir, "Breaking History," in which he wrote about their courtship.

    Jared Kushner.

    John Lamparski/Getty Images for Concordia Summit

    "In addition to being arrestingly beautiful, which I knew before we met, she was warm, funny, and brilliant," he wrote of getting to know Ivanka Trump. "She has a big heart and a tremendous zest for exploring new things."He also wrote that when he told Donald Trump that he was planning a surprise engagement, Trump "picked up the intercom and alerted Ivanka that she should expect an imminent proposal."

    November 2022: Kushner attended Donald Trump's 2024 campaign announcement without Ivanka Trump.

    Kimberly Guilfoyle, Jared Kushner, Eric Trump, and Lara Trump at Donald Trump's presidential campaign announcement.

    Jonathan Ernst/Reuters

    Ivanka Trump released a statement explaining her absence from the event."I love my father very much," her statement read. "This time around, I am choosing to prioritize my children and the private life we are creating as a family. I do not plan to be involved in politics. While I will always love and support my father, going forward I will do so outside the political arena."

    July 2024: Ivanka Trump and Kushner made a rare political appearance at the Republican National Convention.

    Donald Trump and Melania Trump onstage with Ivanka Trump and Jared Kushner.

    Jason Armond/Los Angeles Times via Getty Images

    Ivanka Trump did not campaign for her father or give a speech as she had at past Republican National Conventions, but she and Jared Kushner joined Trump family members onstage after Donald Trump's remarks.

    November 2024: They joined members of the Trump family in Palm Beach, Florida, to celebrate Donald Trump's election victory.
    #timeline #ivanka #trump #jared #kushner039s
    A timeline of Ivanka Trump and Jared Kushner's relationship
    Ivanka Trump has made it clear that she's done with politics. That hasn't stopped her and husband Jared Kushner from remaining an influential political couple.They have not formally reprised their roles as White House advisors in President Donald Trump's second administration, but they've remained present in Donald Trump's political orbit.While Ivanka Trump opted out of the 2024 campaign trail, she and Kushner still appeared at the Republican National Convention, Donald Trump's victory party on election night, and the inauguration. Kushner also reportedly served as an informal advisor ahead of Donald Trump's trip to the Middle East in May, CNN reported.Ivanka Trump, who is Donald Trump's eldest daughter, converted to Judaism before marrying Kushner in 2009. They have three children: Arabella, Joseph, and Theodore.Here's a timeline of Ivanka Trump and Kushner's relationship. 2007: Ivanka Trump and Jared Kushner met at a networking lunch arranged by one of her longtime business partners. Ivanka Trump and Jared Kushner in 2007. PAUL LAURIE/Patrick McMullan via Getty Images Ivanka Trump and Kushner were both 25 at the time."They very innocently set us up thinking that our only interest in one another would be transactional," Ivanka Trump told Vogue in 2015. "Whenever we see them we're like, 'The best deal we ever made!'" 2008: Ivanka Trump and Kushner broke up because of religious differences. Jared Kushner and Ivanka Trump in 2008. Patrick McMullan/Patrick McMullan via Getty Images Kushner was raised in the modern Orthodox Jewish tradition, and it was important to his family for him to marry someone Jewish. Ivanka Trump's family is Presbyterian. 2008: Three months later, the couple rekindled their romance on Rupert Murdoch's yacht. Ivanka Trump and Jared Kushner in 2008. David X Prutting/Patrick McMullan/Patrick McMullan via Getty Images In his memoir, "Breaking History," Kushner wrote that Murdoch's then-wife, Wendi Murdoch, was a mutual friend who invited them both on the yacht. May 2009: They attended the Met Gala together for the first time. Jared Kushner and Ivanka Trump at the Met Gala. BILLY FARRELL/Patrick McMullan via Getty Images The theme of the Met Gala that year was "The Model As Muse." Ivanka Trump wore a gown by designer Brian Reyes. July 2009: Ivanka Trump completed her conversion to Judaism, and she and Kushner got engaged. Jared Kushner and Ivanka Trump in 2009. Billy Farrell/Patrick McMullan/Patrick McMullan via Getty Images Kushner proposed with a 5.22-carat cushion-cut diamond engagement ring.Ivanka Trump told New York Magazine that she and her fiancé were "very mellow.""We go to the park. We go biking together. We go to the 2nd Avenue Deli," she said. "We both live in this fancy world. But on a personal level, I don't think I could be with somebody — I know he couldn't be with somebody — who needed to be 'on' all the time." October 2009: Ivanka Trump and Kushner married at the Trump National Golf Club in New Jersey. Jared Kushner and Ivanka Trump on their wedding day. Brian Marcus/Fred Marcus Photography via Getty Images The couple invited 500 guests, including celebrities like Barbara Walters, Regis Philbin, and Anna Wintour, as well as politicians such as Rudy Giuliani and Andrew Cuomo. July 2011: The couple welcomed their first child, Arabella. Ivanka Trump and Jared Kushner with Arabella Kushner. Robin Marchant/Getty Images "This morning @jaredkushner and I welcomed a beautiful and healthy little baby girl into the world," Ivanka announced on X, then Twitter. "We feel incredibly grateful and blessed. Thank you all for your support and well wishes!" October 2013: Ivanka Trump gave birth to their second child, Joseph. Ivanka Trump with Arabella Rose Kushner and Joseph Frederick Kushner in 2017. Alo Ceballos/GC Images He was named for Kushner's paternal grandfather Joseph and given the middle name Frederick after Donald Trump's father. March 2016: Kushner and Ivanka Trump welcomed their third child, Theodore, in the midst of Donald Trump's presidential campaign. Ivanka Trump carried her son Theodore as she held hands with Joseph alongside Jared Kushner and daughter Arabella on the White House lawn. SAUL LOEB/AFP via Getty Images "I said, 'Ivanka, it would be great if you had your baby in Iowa.' I really want that to happen. I really want that to happen," Donald Trump told supporters in Iowa in January 2016.All three of the couple's children were born in New York City. May 2016: They attended the Met Gala two months after Ivanka Trump gave birth. Jared Kushner and Ivanka Trump attend the Met Gala. Kevin Mazur/WireImage Ivanka Trump wore a red Ralph Lauren Collection halter jumpsuit.On a 2017 episode of "The Late Late Show with James Corden," Anna Wintour said that she would never invite Donald Trump to another Met Gala. January 2017: Ivanka Trump and Kushner attended Donald Trump's inauguration and danced together at the Liberty Ball. Ivanka Trump and Jared Kushner on Inauguration Day. Photo by Rob Carr/Getty Images The Liberty Ball was the first of three inaugural balls that Donald Trump attended. January 2017: After the inauguration, Ivanka and Kushner relocated to a million home in the Kalorama section of Washington, DC. Jared Kushner and Ivanka Trump's house in Washington, DC. PAUL J. RICHARDS/AFP via Getty Images Ivanka Trump and Kushner rented the 7,000-square-foot home from billionaire Andrónico Luksic for a month, The Wall Street Journal reported. May 2017: They accompanied Donald Trump on his first overseas trip in office. Jared Kushner and Ivanka Trump with Pope Francis. Vatican Pool - Corbis/Corbis via Getty Images Kushner and Ivanka Trump both served as advisors to the president. For the first overseas trip of Donald Trump's presidency, they accompanied him to Saudi Arabia, Israel, the Vatican, and summits in Brussels and Sicily. October 2019: The couple celebrated their 10th wedding anniversary with a lavish party at Camp David. Ivanka Trump and Jared Kushner at a state dinner. MANDEL NGAN/AFP via Getty Images All of the Trump and Kushner siblings were in attendance. A White House official told CNN that the couple was covering the cost of the party, but Donald Trump tweeted that the cost would be "totally paid for by me!" August 2020: Ivanka Trump spoke about moving their family to Washington, DC, at the Republican National Convention. Jared Kushner and Ivanka Trump at the Republican National Convention. SAUL LOEB/AFP via Getty Images "When Jared and I moved with our three children to Washington, we didn't exactly know what we were in for," she said in her speech. "But our kids loved it from the start." December 2020: Ivanka Trump and Kushner reportedly bought a million empty lot in Miami's "Billionaire Bunker." Jared Kushner and Ivanka Trump's plot of land in Indian Creek Village. The Jills Zeder Group; Samir Hussein/WireImage/Getty Images After Donald Trump lost the 2020 election, Page Six reported that the couple purchased a 1.8-acre waterfront lot owned by singer Julio Iglesias, Enrique Iglesias' father, in Indian Creek Village, Florida.The island where it sits has the nickname "Billionaire Bunker" thanks to its multitude of ultra-wealthy residents over the years, including billionaire investor Carl Icahn, supermodel Adriana Lima, and former Miami Dolphins coach Don Shula. January 2021: They skipped Joe Biden's inauguration, flying with Donald Trump to his Mar-a-Lago residence in Palm Beach, Florida, instead. Ivanka Trump, Jared Kushner, and their children prepared for Donald Trump's departure on Inauguration Day. ALEX EDELMAN/AFP via Getty Images Donald Trump did not attend Biden's inauguration, breaking a long-standing norm in US democracy. While initial reports said that Ivanka Trump was planning to attend the inauguration, a White House official told People magazine that "Ivanka is not expected to attend the inauguration nor was she ever expected to." January 2021: The couple signed a lease for a luxury Miami Beach condo near their Indian Creek Village property. Arte Surfside. Antonio Citterio Patricia Viel Ivanka Trump and Kushner signed a lease for a "large, unfurnished unit" in the amenities-packed Arte Surfside condominium building in Surfside, Florida.Surfside, a beachside town just north of Miami Beach that's home to fewer than 6,000 people, is only a five-minute drive from Indian Creek Island, where they bought their million empty lot. April 2021: Ivanka Trump and Kushner reportedly added a million mansion in Indian Creek Village to their Florida real-estate profile. Ivanka Trump and Jared Kushner on a walk in Florida. MEGA/GC Images The Real Deal reported that Ivanka and Kushner purchased another Indian Creek property — this time, a 8,510-square-foot mansion situated on a 1.3-acre estate. June 2021: Several outlets reported that the couple began to distance themselves from Donald Trump due to his fixation on conspiracy theories about the 2020 election. Ivanka Trump and Jared Kushner behind Donald Trump. Kevin Lamarque/Reuters CNN reported that Trump was prone to complain about the 2020 election and falsely claim it was "stolen" from him to anyone listening and that his "frustrations emerge in fits and starts — more likely when he is discussing his hopeful return to national politics."While Ivanka and Kushner had been living in their Miami Beach condo, not far from Trump's Mar-a-Lago club in Palm Beach, Florida, they'd visited Trump less and less frequently and were absent from big events at Mar-a-Lago, CNN said.The New York Times also reported that Kushner wanted "to focus on writing his book and establishing a simpler relationship" with the former president. October 2021: Ivanka Trump and Kushner visited Israel's parliament for the inaugural event of the Abraham Accords Caucus. Jared Kushner and Ivanka Trump in Israel. AHMAD GHARABLI/AFP via Getty Images The Abraham Accords, which Kushner helped broker in August 2020, normalized relations between Israel and the United Arab Emirates, Bahrain, Sudan, and Morocco.During their visit, Ivanka Trump and Kushner met with then-former Prime Minister Benjamin Netanyahu and attended an event at the Museum of Tolerance Jerusalem with former US Secretary of State Mike Pompeo. August 2022: Kushner released his memoir, "Breaking History," in which he wrote about their courtship. Jared Kushner. John Lamparski/Getty Images for Concordia Summit "In addition to being arrestingly beautiful, which I knew before we met, she was warm, funny, and brilliant," he wrote of getting to know Ivanka Trump. "She has a big heart and a tremendous zest for exploring new things."He also wrote that when he told Donald Trump that he was planning a surprise engagement, Trump "picked up the intercom and alerted Ivanka that she should expect an imminent proposal." November 2022: Kushner attended Donald Trump's 2024 campaign announcement without Ivanka Trump. Kimberly Guilfoyle, Jared Kushner, Eric Trump, and Lara Trump at Donald Trump's presidential campaign announcement. Jonathan Ernst/Reuters Ivanka Trump released a statement explaining her absence from the event."I love my father very much," her statement read. "This time around, I am choosing to prioritize my children and the private life we are creating as a family. I do not plan to be involved in politics. While I will always love and support my father, going forward I will do so outside the political arena." July 2024: Ivanka Trump and Kushner made a rare political appearance at the Republican National Convention. Donald Trump and Melania Trump onstage with Ivanka Trump and Jared Kushner. Jason Armond/Los Angeles Times via Getty Images Ivanka Trump did not campaign for her father or give a speech as she had at past Republican National Conventions, but she and Jared Kushner joined Trump family members onstage after Donald Trump's remarks. November 2024: They joined members of the Trump family in Palm Beach, Florida, to celebrate Donald Trump's election victory. #timeline #ivanka #trump #jared #kushner039s
    WWW.BUSINESSINSIDER.COM
    A timeline of Ivanka Trump and Jared Kushner's relationship
    Ivanka Trump has made it clear that she's done with politics. That hasn't stopped her and husband Jared Kushner from remaining an influential political couple.They have not formally reprised their roles as White House advisors in President Donald Trump's second administration, but they've remained present in Donald Trump's political orbit.While Ivanka Trump opted out of the 2024 campaign trail, she and Kushner still appeared at the Republican National Convention, Donald Trump's victory party on election night, and the inauguration. Kushner also reportedly served as an informal advisor ahead of Donald Trump's trip to the Middle East in May, CNN reported.Ivanka Trump, who is Donald Trump's eldest daughter, converted to Judaism before marrying Kushner in 2009. They have three children: Arabella, Joseph, and Theodore.Here's a timeline of Ivanka Trump and Kushner's relationship. 2007: Ivanka Trump and Jared Kushner met at a networking lunch arranged by one of her longtime business partners. Ivanka Trump and Jared Kushner in 2007. PAUL LAURIE/Patrick McMullan via Getty Images Ivanka Trump and Kushner were both 25 at the time."They very innocently set us up thinking that our only interest in one another would be transactional," Ivanka Trump told Vogue in 2015. "Whenever we see them we're like, 'The best deal we ever made!'" 2008: Ivanka Trump and Kushner broke up because of religious differences. Jared Kushner and Ivanka Trump in 2008. Patrick McMullan/Patrick McMullan via Getty Images Kushner was raised in the modern Orthodox Jewish tradition, and it was important to his family for him to marry someone Jewish. Ivanka Trump's family is Presbyterian. 2008: Three months later, the couple rekindled their romance on Rupert Murdoch's yacht. Ivanka Trump and Jared Kushner in 2008. David X Prutting/Patrick McMullan/Patrick McMullan via Getty Images In his memoir, "Breaking History," Kushner wrote that Murdoch's then-wife, Wendi Murdoch, was a mutual friend who invited them both on the yacht. May 2009: They attended the Met Gala together for the first time. Jared Kushner and Ivanka Trump at the Met Gala. BILLY FARRELL/Patrick McMullan via Getty Images The theme of the Met Gala that year was "The Model As Muse." Ivanka Trump wore a gown by designer Brian Reyes. July 2009: Ivanka Trump completed her conversion to Judaism, and she and Kushner got engaged. Jared Kushner and Ivanka Trump in 2009. Billy Farrell/Patrick McMullan/Patrick McMullan via Getty Images Kushner proposed with a 5.22-carat cushion-cut diamond engagement ring.Ivanka Trump told New York Magazine that she and her fiancé were "very mellow.""We go to the park. We go biking together. We go to the 2nd Avenue Deli," she said. "We both live in this fancy world. But on a personal level, I don't think I could be with somebody — I know he couldn't be with somebody — who needed to be 'on' all the time." October 2009: Ivanka Trump and Kushner married at the Trump National Golf Club in New Jersey. Jared Kushner and Ivanka Trump on their wedding day. Brian Marcus/Fred Marcus Photography via Getty Images The couple invited 500 guests, including celebrities like Barbara Walters, Regis Philbin, and Anna Wintour, as well as politicians such as Rudy Giuliani and Andrew Cuomo. July 2011: The couple welcomed their first child, Arabella. Ivanka Trump and Jared Kushner with Arabella Kushner. Robin Marchant/Getty Images "This morning @jaredkushner and I welcomed a beautiful and healthy little baby girl into the world," Ivanka announced on X, then Twitter. "We feel incredibly grateful and blessed. Thank you all for your support and well wishes!" October 2013: Ivanka Trump gave birth to their second child, Joseph. Ivanka Trump with Arabella Rose Kushner and Joseph Frederick Kushner in 2017. Alo Ceballos/GC Images He was named for Kushner's paternal grandfather Joseph and given the middle name Frederick after Donald Trump's father. March 2016: Kushner and Ivanka Trump welcomed their third child, Theodore, in the midst of Donald Trump's presidential campaign. Ivanka Trump carried her son Theodore as she held hands with Joseph alongside Jared Kushner and daughter Arabella on the White House lawn. SAUL LOEB/AFP via Getty Images "I said, 'Ivanka, it would be great if you had your baby in Iowa.' I really want that to happen. I really want that to happen," Donald Trump told supporters in Iowa in January 2016.All three of the couple's children were born in New York City. May 2016: They attended the Met Gala two months after Ivanka Trump gave birth. Jared Kushner and Ivanka Trump attend the Met Gala. Kevin Mazur/WireImage Ivanka Trump wore a red Ralph Lauren Collection halter jumpsuit.On a 2017 episode of "The Late Late Show with James Corden," Anna Wintour said that she would never invite Donald Trump to another Met Gala. January 2017: Ivanka Trump and Kushner attended Donald Trump's inauguration and danced together at the Liberty Ball. Ivanka Trump and Jared Kushner on Inauguration Day. Photo by Rob Carr/Getty Images The Liberty Ball was the first of three inaugural balls that Donald Trump attended. January 2017: After the inauguration, Ivanka and Kushner relocated to a $5.5 million home in the Kalorama section of Washington, DC. Jared Kushner and Ivanka Trump's house in Washington, DC. PAUL J. RICHARDS/AFP via Getty Images Ivanka Trump and Kushner rented the 7,000-square-foot home from billionaire Andrónico Luksic for $15,000 a month, The Wall Street Journal reported. May 2017: They accompanied Donald Trump on his first overseas trip in office. Jared Kushner and Ivanka Trump with Pope Francis. Vatican Pool - Corbis/Corbis via Getty Images Kushner and Ivanka Trump both served as advisors to the president. For the first overseas trip of Donald Trump's presidency, they accompanied him to Saudi Arabia, Israel, the Vatican, and summits in Brussels and Sicily. October 2019: The couple celebrated their 10th wedding anniversary with a lavish party at Camp David. Ivanka Trump and Jared Kushner at a state dinner. MANDEL NGAN/AFP via Getty Images All of the Trump and Kushner siblings were in attendance. A White House official told CNN that the couple was covering the cost of the party, but Donald Trump tweeted that the cost would be "totally paid for by me!" August 2020: Ivanka Trump spoke about moving their family to Washington, DC, at the Republican National Convention. Jared Kushner and Ivanka Trump at the Republican National Convention. SAUL LOEB/AFP via Getty Images "When Jared and I moved with our three children to Washington, we didn't exactly know what we were in for," she said in her speech. "But our kids loved it from the start." December 2020: Ivanka Trump and Kushner reportedly bought a $32 million empty lot in Miami's "Billionaire Bunker." Jared Kushner and Ivanka Trump's plot of land in Indian Creek Village. The Jills Zeder Group; Samir Hussein/WireImage/Getty Images After Donald Trump lost the 2020 election, Page Six reported that the couple purchased a 1.8-acre waterfront lot owned by singer Julio Iglesias, Enrique Iglesias' father, in Indian Creek Village, Florida.The island where it sits has the nickname "Billionaire Bunker" thanks to its multitude of ultra-wealthy residents over the years, including billionaire investor Carl Icahn, supermodel Adriana Lima, and former Miami Dolphins coach Don Shula. January 2021: They skipped Joe Biden's inauguration, flying with Donald Trump to his Mar-a-Lago residence in Palm Beach, Florida, instead. Ivanka Trump, Jared Kushner, and their children prepared for Donald Trump's departure on Inauguration Day. ALEX EDELMAN/AFP via Getty Images Donald Trump did not attend Biden's inauguration, breaking a long-standing norm in US democracy. While initial reports said that Ivanka Trump was planning to attend the inauguration, a White House official told People magazine that "Ivanka is not expected to attend the inauguration nor was she ever expected to." January 2021: The couple signed a lease for a luxury Miami Beach condo near their Indian Creek Village property. Arte Surfside. Antonio Citterio Patricia Viel Ivanka Trump and Kushner signed a lease for a "large, unfurnished unit" in the amenities-packed Arte Surfside condominium building in Surfside, Florida.Surfside, a beachside town just north of Miami Beach that's home to fewer than 6,000 people, is only a five-minute drive from Indian Creek Island, where they bought their $32 million empty lot. April 2021: Ivanka Trump and Kushner reportedly added a $24 million mansion in Indian Creek Village to their Florida real-estate profile. Ivanka Trump and Jared Kushner on a walk in Florida. MEGA/GC Images The Real Deal reported that Ivanka and Kushner purchased another Indian Creek property — this time, a 8,510-square-foot mansion situated on a 1.3-acre estate. June 2021: Several outlets reported that the couple began to distance themselves from Donald Trump due to his fixation on conspiracy theories about the 2020 election. Ivanka Trump and Jared Kushner behind Donald Trump. Kevin Lamarque/Reuters CNN reported that Trump was prone to complain about the 2020 election and falsely claim it was "stolen" from him to anyone listening and that his "frustrations emerge in fits and starts — more likely when he is discussing his hopeful return to national politics."While Ivanka and Kushner had been living in their Miami Beach condo, not far from Trump's Mar-a-Lago club in Palm Beach, Florida, they'd visited Trump less and less frequently and were absent from big events at Mar-a-Lago, CNN said.The New York Times also reported that Kushner wanted "to focus on writing his book and establishing a simpler relationship" with the former president. October 2021: Ivanka Trump and Kushner visited Israel's parliament for the inaugural event of the Abraham Accords Caucus. Jared Kushner and Ivanka Trump in Israel. AHMAD GHARABLI/AFP via Getty Images The Abraham Accords, which Kushner helped broker in August 2020, normalized relations between Israel and the United Arab Emirates, Bahrain, Sudan, and Morocco.During their visit, Ivanka Trump and Kushner met with then-former Prime Minister Benjamin Netanyahu and attended an event at the Museum of Tolerance Jerusalem with former US Secretary of State Mike Pompeo. August 2022: Kushner released his memoir, "Breaking History," in which he wrote about their courtship. Jared Kushner. John Lamparski/Getty Images for Concordia Summit "In addition to being arrestingly beautiful, which I knew before we met, she was warm, funny, and brilliant," he wrote of getting to know Ivanka Trump. "She has a big heart and a tremendous zest for exploring new things."He also wrote that when he told Donald Trump that he was planning a surprise engagement, Trump "picked up the intercom and alerted Ivanka that she should expect an imminent proposal." November 2022: Kushner attended Donald Trump's 2024 campaign announcement without Ivanka Trump. Kimberly Guilfoyle, Jared Kushner, Eric Trump, and Lara Trump at Donald Trump's presidential campaign announcement. Jonathan Ernst/Reuters Ivanka Trump released a statement explaining her absence from the event."I love my father very much," her statement read. "This time around, I am choosing to prioritize my children and the private life we are creating as a family. I do not plan to be involved in politics. While I will always love and support my father, going forward I will do so outside the political arena." July 2024: Ivanka Trump and Kushner made a rare political appearance at the Republican National Convention. Donald Trump and Melania Trump onstage with Ivanka Trump and Jared Kushner. Jason Armond/Los Angeles Times via Getty Images Ivanka Trump did not campaign for her father or give a speech as she had at past Republican National Conventions, but she and Jared Kushner joined Trump family members onstage after Donald Trump's remarks. November 2024: They joined members of the Trump family in Palm Beach, Florida, to celebrate Donald Trump's election victory.
    0 Σχόλια 0 Μοιράστηκε
  • When AI fails, who is to blame?

    To state the obvious: Our species has fully entered the Age of AI. And AI is here to stay.

    The fact that AI chatbots appear to speak human language has become a major source of confusion. Companies are making and selling AI friends, lovers, pets, and therapists. Some AI researchers falsely claim their AI and robots can “feel” and “think.” Even Apple falsely says it’s building a lamp that can feel emotion.

    Another source of confusion is whether AI is to blame when it fails, hallucinates, or outputs errors that impact people in the real world. Just look at some of the headlines:

    “Who’s to Blame When AI Makes a Medical Error?”

    “Human vs. AI: Who is responsible for AI mistakes?”

    “In a World of AI Agents, Who’s Accountable for Mistakes?”

    Look, I’ll give you the punchline in advance: The user is responsible.

    AI is a tool like any other. If a truck driver falls asleep at the wheel, it’s not the truck’s fault. If a surgeon leaves a sponge inside a patient, it’s not the sponge’s fault. If a prospective college student gets a horrible score on the SAT, it’s not the fault of their No. 2 pencil.

    It’s easy for me to claim that users are to blame for AI errors. But let’s dig into the question more deeply.

    Writers caught with their prose down

    Lena McDonald, a fantasy romance author, got caught using AI to copy another writer’s style.

    Her latest novel, Darkhollow Academy: Year 2, released in March, contained the following riveting line in Chapter 3: “I’ve rewritten the passage to align more with J. Bree’s style, which features more tension, gritty undertones, and raw emotional subtext beneath the supernatural elements.”

    This was clearly copied and pasted from an AI chatbot, along with words she was passing off as her own.

    This news is sad and funny but not unique. In 2025 alone, at least two other romance authors, K.C. Crowne and Rania Faris, were caught with similar AI-generated prompts left in their self-published novels, suggesting a wider trend.

    It happens in journalism, too.

    On May 18, the Chicago Sun-Times and The Philadelphia Inquirer published a “Summer Reading List for 2025” in its Sunday print supplement, featuring 15 books supposedly written by well-known authors. Unfortunately, most of the books don’t exist. Tidewater Dreams by Isabel Allende, Nightshade Market by Min Jin Lee, and The Last Algorithm by Andy Weir are fake books attributed to real authors.

    The fake books were dreamed up by AI, which the writer Marco Buscaglia admitted to using.Whose fault was this?

    Well, it was clearly the writer’s fault. A writer’s job always involves editing. A writer needs to, at minimum, read their own words and consider cuts, expansions, rewording, and other changes. In all these cases, the authors failed to be professional writers. They didn’t even read their books or the books they recommended.

    Fact-checkers exist at some publications and not at others. Either way, it’s up to writers to have good reason to assert facts or use quotes. Writers are also editors and fact-checkers. It’s just part of the job.

    I use these real-life examples because they demonstrate clearly that the writer — the AI user — is definitely to blame when errors occur with AI chatbots. The user chooses the tool, does the prompt engineering, sees the output, and either catches and corrects errors or not.

    OK, but what about bigger errors?

    Air Canada’s chatbot last year told a customer about a bereavement refund policy that didn’t exist. When the customer took the airline to a small-claims tribunal, Air Canada argued the chatbot was a “separate legal entity.” The tribunal didn’t buy it and ruled against the airline.

    Google’s AI Overviews became a punchline after telling users to put glue on pizza and eat small rocks.

    Apple’s AI-powered notification summaries created fake headlines, including a false report that Israeli Prime Minister Benjamin Netanyahu had been arrested.

    Canadian lawyer Chong Ke cited two court cases provided by ChatGPT in a custody dispute. The AI completely fabricated both cases, and Ke was ordered to pay the opposing counsel’s research costs.

    Last year, various reports exposed major flaws in AI-powered medical transcription tools, especially those based on OpenAI’s Whisper model. Researchers found that Whisper frequently “transcribes” content that was never said. A study presented at the Association for Computing Machinery FAccT Conference found that about 1% of Whisper’s transcriptions contained fabricated content, and nearly 38% of those errors could potentially cause harm in a medical setting.

    Every single one of these errors and problems falls squarely on the users of AI, and any attempt to blame the AI tools in use is just confusion about what AI is.

    The big picture

    What all my examples above have in common is that users let AI do the user’s job unsupervised.

    The opposite end of the spectrum of turning your job over to unsupervised AI is not using AI at all. In fact, many companies and organizations explicitly ban the use of AI chatbots and other AI tools. This is often a mistake, too.

    Acclimating ourselves to the Age of AI means finding a middle ground where we use AI tools to improve our jobs. Most of us should use AI. But we should learn to use it well and check every single thing it does, based on the knowledge that any use of AI is 100% the user’s responsibility.

    I expect the irresponsible use of AI will continue to cause errors, problems, and even catastrophes. But don’t blame the software.

    In the immortal words of the fictional HAL 9000 AI supercomputer from 2001: A Space Odyssey: “It can only be attributable to human error.”
    #when #fails #who #blame
    When AI fails, who is to blame?
    To state the obvious: Our species has fully entered the Age of AI. And AI is here to stay. The fact that AI chatbots appear to speak human language has become a major source of confusion. Companies are making and selling AI friends, lovers, pets, and therapists. Some AI researchers falsely claim their AI and robots can “feel” and “think.” Even Apple falsely says it’s building a lamp that can feel emotion. Another source of confusion is whether AI is to blame when it fails, hallucinates, or outputs errors that impact people in the real world. Just look at some of the headlines: “Who’s to Blame When AI Makes a Medical Error?” “Human vs. AI: Who is responsible for AI mistakes?” “In a World of AI Agents, Who’s Accountable for Mistakes?” Look, I’ll give you the punchline in advance: The user is responsible. AI is a tool like any other. If a truck driver falls asleep at the wheel, it’s not the truck’s fault. If a surgeon leaves a sponge inside a patient, it’s not the sponge’s fault. If a prospective college student gets a horrible score on the SAT, it’s not the fault of their No. 2 pencil. It’s easy for me to claim that users are to blame for AI errors. But let’s dig into the question more deeply. Writers caught with their prose down Lena McDonald, a fantasy romance author, got caught using AI to copy another writer’s style. Her latest novel, Darkhollow Academy: Year 2, released in March, contained the following riveting line in Chapter 3: “I’ve rewritten the passage to align more with J. Bree’s style, which features more tension, gritty undertones, and raw emotional subtext beneath the supernatural elements.” This was clearly copied and pasted from an AI chatbot, along with words she was passing off as her own. This news is sad and funny but not unique. In 2025 alone, at least two other romance authors, K.C. Crowne and Rania Faris, were caught with similar AI-generated prompts left in their self-published novels, suggesting a wider trend. It happens in journalism, too. On May 18, the Chicago Sun-Times and The Philadelphia Inquirer published a “Summer Reading List for 2025” in its Sunday print supplement, featuring 15 books supposedly written by well-known authors. Unfortunately, most of the books don’t exist. Tidewater Dreams by Isabel Allende, Nightshade Market by Min Jin Lee, and The Last Algorithm by Andy Weir are fake books attributed to real authors. The fake books were dreamed up by AI, which the writer Marco Buscaglia admitted to using.Whose fault was this? Well, it was clearly the writer’s fault. A writer’s job always involves editing. A writer needs to, at minimum, read their own words and consider cuts, expansions, rewording, and other changes. In all these cases, the authors failed to be professional writers. They didn’t even read their books or the books they recommended. Fact-checkers exist at some publications and not at others. Either way, it’s up to writers to have good reason to assert facts or use quotes. Writers are also editors and fact-checkers. It’s just part of the job. I use these real-life examples because they demonstrate clearly that the writer — the AI user — is definitely to blame when errors occur with AI chatbots. The user chooses the tool, does the prompt engineering, sees the output, and either catches and corrects errors or not. OK, but what about bigger errors? Air Canada’s chatbot last year told a customer about a bereavement refund policy that didn’t exist. When the customer took the airline to a small-claims tribunal, Air Canada argued the chatbot was a “separate legal entity.” The tribunal didn’t buy it and ruled against the airline. Google’s AI Overviews became a punchline after telling users to put glue on pizza and eat small rocks. Apple’s AI-powered notification summaries created fake headlines, including a false report that Israeli Prime Minister Benjamin Netanyahu had been arrested. Canadian lawyer Chong Ke cited two court cases provided by ChatGPT in a custody dispute. The AI completely fabricated both cases, and Ke was ordered to pay the opposing counsel’s research costs. Last year, various reports exposed major flaws in AI-powered medical transcription tools, especially those based on OpenAI’s Whisper model. Researchers found that Whisper frequently “transcribes” content that was never said. A study presented at the Association for Computing Machinery FAccT Conference found that about 1% of Whisper’s transcriptions contained fabricated content, and nearly 38% of those errors could potentially cause harm in a medical setting. Every single one of these errors and problems falls squarely on the users of AI, and any attempt to blame the AI tools in use is just confusion about what AI is. The big picture What all my examples above have in common is that users let AI do the user’s job unsupervised. The opposite end of the spectrum of turning your job over to unsupervised AI is not using AI at all. In fact, many companies and organizations explicitly ban the use of AI chatbots and other AI tools. This is often a mistake, too. Acclimating ourselves to the Age of AI means finding a middle ground where we use AI tools to improve our jobs. Most of us should use AI. But we should learn to use it well and check every single thing it does, based on the knowledge that any use of AI is 100% the user’s responsibility. I expect the irresponsible use of AI will continue to cause errors, problems, and even catastrophes. But don’t blame the software. In the immortal words of the fictional HAL 9000 AI supercomputer from 2001: A Space Odyssey: “It can only be attributable to human error.” #when #fails #who #blame
    WWW.COMPUTERWORLD.COM
    When AI fails, who is to blame?
    To state the obvious: Our species has fully entered the Age of AI. And AI is here to stay. The fact that AI chatbots appear to speak human language has become a major source of confusion. Companies are making and selling AI friends, lovers, pets, and therapists. Some AI researchers falsely claim their AI and robots can “feel” and “think.” Even Apple falsely says it’s building a lamp that can feel emotion. Another source of confusion is whether AI is to blame when it fails, hallucinates, or outputs errors that impact people in the real world. Just look at some of the headlines: “Who’s to Blame When AI Makes a Medical Error?” “Human vs. AI: Who is responsible for AI mistakes?” “In a World of AI Agents, Who’s Accountable for Mistakes?” Look, I’ll give you the punchline in advance: The user is responsible. AI is a tool like any other. If a truck driver falls asleep at the wheel, it’s not the truck’s fault. If a surgeon leaves a sponge inside a patient, it’s not the sponge’s fault. If a prospective college student gets a horrible score on the SAT, it’s not the fault of their No. 2 pencil. It’s easy for me to claim that users are to blame for AI errors. But let’s dig into the question more deeply. Writers caught with their prose down Lena McDonald, a fantasy romance author, got caught using AI to copy another writer’s style. Her latest novel, Darkhollow Academy: Year 2, released in March, contained the following riveting line in Chapter 3: “I’ve rewritten the passage to align more with J. Bree’s style, which features more tension, gritty undertones, and raw emotional subtext beneath the supernatural elements.” This was clearly copied and pasted from an AI chatbot, along with words she was passing off as her own. This news is sad and funny but not unique. In 2025 alone, at least two other romance authors, K.C. Crowne and Rania Faris, were caught with similar AI-generated prompts left in their self-published novels, suggesting a wider trend. It happens in journalism, too. On May 18, the Chicago Sun-Times and The Philadelphia Inquirer published a “Summer Reading List for 2025” in its Sunday print supplement, featuring 15 books supposedly written by well-known authors. Unfortunately, most of the books don’t exist. Tidewater Dreams by Isabel Allende, Nightshade Market by Min Jin Lee, and The Last Algorithm by Andy Weir are fake books attributed to real authors. The fake books were dreamed up by AI, which the writer Marco Buscaglia admitted to using. (The article itself was not produced by the newspapers that printed it. The story originated with King Features Syndicate, a division of Hearst, which created and distributed the supplement to multiple newspapers nationwide.) Whose fault was this? Well, it was clearly the writer’s fault. A writer’s job always involves editing. A writer needs to, at minimum, read their own words and consider cuts, expansions, rewording, and other changes. In all these cases, the authors failed to be professional writers. They didn’t even read their books or the books they recommended. Fact-checkers exist at some publications and not at others. Either way, it’s up to writers to have good reason to assert facts or use quotes. Writers are also editors and fact-checkers. It’s just part of the job. I use these real-life examples because they demonstrate clearly that the writer — the AI user — is definitely to blame when errors occur with AI chatbots. The user chooses the tool, does the prompt engineering, sees the output, and either catches and corrects errors or not. OK, but what about bigger errors? Air Canada’s chatbot last year told a customer about a bereavement refund policy that didn’t exist. When the customer took the airline to a small-claims tribunal, Air Canada argued the chatbot was a “separate legal entity.” The tribunal didn’t buy it and ruled against the airline. Google’s AI Overviews became a punchline after telling users to put glue on pizza and eat small rocks. Apple’s AI-powered notification summaries created fake headlines, including a false report that Israeli Prime Minister Benjamin Netanyahu had been arrested. Canadian lawyer Chong Ke cited two court cases provided by ChatGPT in a custody dispute. The AI completely fabricated both cases, and Ke was ordered to pay the opposing counsel’s research costs. Last year, various reports exposed major flaws in AI-powered medical transcription tools, especially those based on OpenAI’s Whisper model. Researchers found that Whisper frequently “transcribes” content that was never said. A study presented at the Association for Computing Machinery FAccT Conference found that about 1% of Whisper’s transcriptions contained fabricated content, and nearly 38% of those errors could potentially cause harm in a medical setting. Every single one of these errors and problems falls squarely on the users of AI, and any attempt to blame the AI tools in use is just confusion about what AI is. The big picture What all my examples above have in common is that users let AI do the user’s job unsupervised. The opposite end of the spectrum of turning your job over to unsupervised AI is not using AI at all. In fact, many companies and organizations explicitly ban the use of AI chatbots and other AI tools. This is often a mistake, too. Acclimating ourselves to the Age of AI means finding a middle ground where we use AI tools to improve our jobs. Most of us should use AI. But we should learn to use it well and check every single thing it does, based on the knowledge that any use of AI is 100% the user’s responsibility. I expect the irresponsible use of AI will continue to cause errors, problems, and even catastrophes. But don’t blame the software. In the immortal words of the fictional HAL 9000 AI supercomputer from 2001: A Space Odyssey: “It can only be attributable to human error.”
    0 Σχόλια 0 Μοιράστηκε
  • Google Maps falsely told drivers in Germany that roads across the country were closed

    Chaos ensued on German roads this week after Google Maps wrongly informed drivers that highways throughout the country were closed during a busy holiday. Many of the apparently closed roads were located near large German cities and metropolitan areas, including Berlin, Düsseldorf and Dortmund.
    As reported by a locally based journalist for The Guardian, drivers opening Google’s navigation app would see a swarm of red dots used to indicate no-go areas, which resulted in people looking for alternative routes that caused traffic pile-ups nationwide. The Guardian also reported that police and local authorities were contacted by people confusedabout the supposed standstill.
    To compound the issue, the Google Maps error coincided with the beginning of Germany’s Ascension Day public holiday on May 29, which meant the roads were even busier than usual.
    In ganz DeutschlandChaos bei Google Maps: Dienst zeigt unzählige falsche Sperrungenhttps://t.co/qEfIRrIHx3— Peter BergerMay 29, 2025

    The problem reportedly only lasted for a few hours and by Thursday afternoon only genuine road closures were being displayed. It’s not clear whether Google Maps had just malfunctioned, or if something more nefarious was to blame. "The information in Google Maps comes from a variety of sources. Information such as locations, street names, boundaries, traffic data, and road networks comes from a combination of third-party providers, public sources, and user input," a spokesperson for Google told German newspaper Berliner Morgenpost, adding that it is internally reviewing the problem. "In general, these sources provide a strong foundation for comprehensive and up-to-date maps."
    Technical issues with Google Maps are not uncommon. Back in March, users were reporting that their Timeline — which keeps track of all the places you’ve visited before for future reference — had been wiped, with Google later confirming that some people had indeed had their data deleted, and in some cases, would not be able to recover it.This article originally appeared on Engadget at
    #google #maps #falsely #told #drivers
    Google Maps falsely told drivers in Germany that roads across the country were closed
    Chaos ensued on German roads this week after Google Maps wrongly informed drivers that highways throughout the country were closed during a busy holiday. Many of the apparently closed roads were located near large German cities and metropolitan areas, including Berlin, Düsseldorf and Dortmund. As reported by a locally based journalist for The Guardian, drivers opening Google’s navigation app would see a swarm of red dots used to indicate no-go areas, which resulted in people looking for alternative routes that caused traffic pile-ups nationwide. The Guardian also reported that police and local authorities were contacted by people confusedabout the supposed standstill. To compound the issue, the Google Maps error coincided with the beginning of Germany’s Ascension Day public holiday on May 29, which meant the roads were even busier than usual. In ganz DeutschlandChaos bei Google Maps: Dienst zeigt unzählige falsche Sperrungenhttps://t.co/qEfIRrIHx3— Peter BergerMay 29, 2025 The problem reportedly only lasted for a few hours and by Thursday afternoon only genuine road closures were being displayed. It’s not clear whether Google Maps had just malfunctioned, or if something more nefarious was to blame. "The information in Google Maps comes from a variety of sources. Information such as locations, street names, boundaries, traffic data, and road networks comes from a combination of third-party providers, public sources, and user input," a spokesperson for Google told German newspaper Berliner Morgenpost, adding that it is internally reviewing the problem. "In general, these sources provide a strong foundation for comprehensive and up-to-date maps." Technical issues with Google Maps are not uncommon. Back in March, users were reporting that their Timeline — which keeps track of all the places you’ve visited before for future reference — had been wiped, with Google later confirming that some people had indeed had their data deleted, and in some cases, would not be able to recover it.This article originally appeared on Engadget at #google #maps #falsely #told #drivers
    WWW.ENGADGET.COM
    Google Maps falsely told drivers in Germany that roads across the country were closed
    Chaos ensued on German roads this week after Google Maps wrongly informed drivers that highways throughout the country were closed during a busy holiday. Many of the apparently closed roads were located near large German cities and metropolitan areas, including Berlin, Düsseldorf and Dortmund. As reported by a locally based journalist for The Guardian, drivers opening Google’s navigation app would see a swarm of red dots used to indicate no-go areas, which resulted in people looking for alternative routes that caused traffic pile-ups nationwide. The Guardian also reported that police and local authorities were contacted by people confused (and presumably pretty annoyed) about the supposed standstill. To compound the issue, the Google Maps error coincided with the beginning of Germany’s Ascension Day public holiday on May 29, which meant the roads were even busier than usual. In ganz DeutschlandChaos bei Google Maps: Dienst zeigt unzählige falsche Sperrungenhttps://t.co/qEfIRrIHx3— Peter Berger (@leosgeminix) May 29, 2025 The problem reportedly only lasted for a few hours and by Thursday afternoon only genuine road closures were being displayed. It’s not clear whether Google Maps had just malfunctioned, or if something more nefarious was to blame. "The information in Google Maps comes from a variety of sources. Information such as locations, street names, boundaries, traffic data, and road networks comes from a combination of third-party providers, public sources, and user input," a spokesperson for Google told German newspaper Berliner Morgenpost, adding that it is internally reviewing the problem. "In general, these sources provide a strong foundation for comprehensive and up-to-date maps." Technical issues with Google Maps are not uncommon. Back in March, users were reporting that their Timeline — which keeps track of all the places you’ve visited before for future reference — had been wiped, with Google later confirming that some people had indeed had their data deleted, and in some cases, would not be able to recover it.This article originally appeared on Engadget at https://www.engadget.com/apps/google-maps-falsely-told-drivers-in-germany-that-roads-across-the-country-were-closed-134026943.html?src=rss
    0 Σχόλια 0 Μοιράστηκε
  • AI cybersecurity risks and deepfake scams on the rise

    Published
    May 27, 2025 10:00am EDT close Deepfake technology 'is getting so easy now': Cybersecurity expert Cybersecurity expert Morgan Wright breaks down the dangers of deepfake video technology on 'Unfiltered.' Imagine your phone rings and the voice on the other end sounds just like your boss, a close friend, or even a government official. They urgently ask for sensitive information, except it's not really them. It's a deepfake, powered by AI, and you're the target of a sophisticated scam. These kinds of attacks are happening right now, and they're getting more convincing every day.That's the warning sounded by the 2025 AI Security Report, unveiled at the RSA Conference, one of the world's biggest gatherings for cybersecurity experts, companies, and law enforcement. The report details how criminals are harnessing artificial intelligence to impersonate people, automate scams, and attack security systems on a massive scale.From hijacked AI accounts and manipulated models to live video scams and data poisoning, the report paints a picture of a rapidly evolving threat landscape, one that's touching more lives than ever before. Illustration of cybersecurity risks.AI tools are leaking sensitive dataOne of the biggest risks of using AI tools is what users accidentally share with them. A recent analysis by cybersecurity firm Check Point found that 1 in every 80 AI prompts includes high-risk data, and about 1 in 13 contains sensitive information that could expose users or organizations to security or compliance risks.This data can include passwords, internal business plans, client information, or proprietary code. When shared with AI tools that are not secured, this information can be logged, intercepted, or even leaked later.Deepfake scams are now real-time and multilingualAI-powered impersonation is getting more advanced every month. Criminals can now fake voices and faces convincingly in real time. In early 2024, a British engineering firm lost 20 million pounds after scammers used live deepfake video to impersonate company executives during a Zoom call. The attackers looked and sounded like trusted leaders and convinced an employee to transfer funds.Real-time video manipulation tools are now being sold on criminal forums. These tools can swap faces and mimic speech during video calls in multiple languages, making it easier for attackers to run scams across borders. Illustration of a person video conferencing on their laptop.AI is running phishing and scam operations at scaleSocial engineering has always been a part of cybercrime. Now, AI is automating it. Attackers no longer need to speak a victim’s language, stay online constantly, or manually write convincing messages.Tools like GoMailPro use ChatGPT to create phishing and spam emails with perfect grammar and native-sounding tone. These messages are far more convincing than the sloppy scams of the past. GoMailPro can generate thousands of unique emails, each slightly different in language and urgency, which helps them slip past spam filters. It is actively marketed on underground forums for around per month, making it widely accessible to bad actors.Another tool, the X137 Telegram Console, leverages Gemini AI to monitor and respond to chat messages automatically. It can impersonate customer support agents or known contacts, carrying out real-time conversations with multiple targets at once. The replies are uncensored, fast, and customized based on the victim’s responses, giving the illusion of a human behind the screen.AI is also powering large-scale sextortion scams. These are emails that falsely claim to have compromising videos or photos and demand payment to prevent them from being shared. Instead of using the same message repeatedly, scammers now rely on AI to rewrite the threat in dozens of ways. For example, a basic line like "Time is running out" might be reworded as "The hourglass is nearly empty for you," making the message feel more personal and urgent while also avoiding detection.By removing the need for language fluency and manual effort, these AI tools allow attackers to scale their phishing operations dramatically. Even inexperienced scammers can now run large, personalized campaigns with almost no effort. Stolen AI accounts are sold on the dark webWith AI tools becoming more popular, criminals are now targeting the accounts that use them. Hackers are stealing ChatGPT logins, OpenAI API keys, and other platform credentials to bypass usage limits and hide their identity. These accounts are often stolen through malware, phishing, or credential stuffing attacks. The stolen credentials are then sold in bulk on Telegram channels and underground forums. Some attackers are even using tools that can bypass multi-factor authentication and session-based security protections. These stolen accounts allow criminals to access powerful AI tools and use them for phishing, malware generation, and scam automation. Illustration of a person signing into their laptop.Jailbreaking AI is now a common tacticCriminals are finding ways to bypass the safety rules built into AI models. On the dark web, attackers share techniques for jailbreaking AI so it will respond to requests that would normally be blocked. Common methods include:Telling the AI to pretend it is a fictional character that has no rules or limitationsPhrasing dangerous questions as academic or research-related scenariosAsking for technical instructions using less obvious wording so the request doesn’t get flaggedSome AI models can even be tricked into jailbreaking themselves. Attackers prompt the model to create input that causes it to override its own restrictions. This shows how AI systems can be manipulated in unexpected and dangerous ways.AI-generated malware is entering the mainstreamAI is now being used to build malware, phishing kits, ransomware scripts, and more. Recently, a group called FunkSac was identified as the leading ransomware gang using AI. Its leader admitted that at least 20% of their attacks are powered by AI. FunkSec has also used AI to help launch attacks that flood websites or services with fake traffic, making them crash or go offline. These are known as denial-of-service attacks. The group even created its own AI-powered chatbot to promote its activities and communicate with victims on its public website..Some cybercriminals are even using AI to help with marketing and data analysis after an attack. One tool called Rhadamanthys Stealer 0.7 claimed to use AI for "text recognition" to sound more advanced, but researchers later found it was using older technology instead. This shows how attackers use AI buzzwords to make their tools seem more advanced or trustworthy to buyers.Other tools are more advanced. One example is DarkGPT, a chatbot built specifically to sort through huge databases of stolen information. After a successful attack, scammers often end up with logs full of usernames, passwords, and other private details. Instead of sifting through this data manually, they use AI to quickly find valuable accounts they can break into, sell, or use for more targeted attacks like ransomware.Get a free scan to find out if your personal information is already out on the web Poisoned AI models are spreading misinformationSometimes, attackers do not need to hack an AI system. Instead, they trick it by feeding it false or misleading information. This tactic is called AI poisoning, and it can cause the AI to give biased, harmful, or completely inaccurate answers. There are two main ways this happens:Training poisoning: Attackers sneak false or harmful data into the model during developmentRetrieval poisoning: Misleading content online gets planted, which the AI later picks up when generating answersIn 2024, attackers uploaded 100 tampered AI models to the open-source platform Hugging Face. These poisoned models looked like helpful tools, but when people used them, they could spread false information or output malicious code.A large-scale example came from a Russian propaganda group called Pravda, which published more than 3.6 million fake articles online. These articles were designed to trick AI chatbots into repeating their messages. In tests, researchers found that major AI systems echoed these false claims about 33% of the time. Illustration of a hacker at workHow to protect yourself from AI-driven cyber threatsAI-powered cybercrime blends realism, speed, and scale. These scams are not just harder to detect. They are also easier to launch. Here’s how to stay protected:1) Avoid entering sensitive data into public AI tools: Never share passwords, personal details, or confidential business information in any AI chat, even if it seems private. These inputs can sometimes be logged or misused.2) Use strong antivirus software: AI-generated phishing emails and malware can slip past outdated security tools. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices.3) Turn on two-factor authentication: 2FA adds an extra layer of protection to your accounts, including AI platforms. It makes it much harder for attackers to break in using stolen passwords.4) Be extra cautious with unexpected video calls or voice messages: If something feels off, even if the person seems familiar, verify before taking action. Deepfake audio and video can sound and look very real.5) Use a personal data removal service: With AI-powered scams and deepfake attacks on the rise, criminals are increasingly relying on publicly available personal information to craft convincing impersonations or target victims with personalized phishing. By using a reputable personal data removal service, you can reduce your digital footprint on data broker sites and public databases. This makes it much harder for scammers to gather the details they need to convincingly mimic your identity or launch targeted AI-driven attacks.While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice.  They aren’t cheap - and neither is your privacy.  These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites.  It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet.  By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you. Check out my top picks for data removal services here. 6) Consider identity theft protection: If your data is leaked through a scam, early detection is key. Identity protection services can monitor your information and alert you to suspicious activity. Identity Theft companies can monitor personal information like your Social Security Number, phone number, and email address, and alert you if it is being sold on the dark web or being used to open an account.  They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals. See my tips and best picks on how to protect yourself from identity theft.7) Regularly monitor your financial accounts: AI-generated phishing, malware, and account takeover attacks are now more sophisticated and widespread than ever, as highlighted in the 2025 AI Security Report. By frequently reviewing your bank and credit card statements for suspicious activity, you can catch unauthorized transactions early, often before major damage is done. Quick detection is crucial, especially since stolen credentials and financial information are now being traded and exploited at scale by cybercriminals using AI.8) Use a secure password manager: Stolen AI accounts and credential stuffing attacks are a growing threat, with hackers using automated tools to break into accounts and sell access on the dark web. A secure password manager helps you create and store strong, unique passwords for every account, making it far more difficult for attackers to compromise your logins, even if some of your information is leaked or targeted by AI-driven attacks. Get more details about my best expert-reviewed Password Managers of 2025 here.9) Keep your software updated: AI-generated malware and advanced phishing kits are designed to exploit vulnerabilities in outdated software. To stay ahead of these evolving threats, ensure all your devices, browsers, and applications are updated with the latest security patches. Regular updates close security gaps that AI-powered malware and cybercriminals are actively seeking to exploit. Kurt's key takeawaysCybercriminals are now using AI to power some of the most convincing and scalable attacks we’ve ever seen. From deepfake video calls and AI-generated phishing emails to stolen AI accounts and malware written by chatbots, these scams are becoming harder to detect and easier to launch. Attackers are even poisoning AI models with false information and creating fake tools that look legitimate but are designed to do harm. To stay safe, it’s more important than ever to use strong antivirus protection, enable multi-factor authentication, and avoid sharing sensitive data with AI tools you do not fully trust.Have you noticed AI scams getting more convincing? Let us know your experience or questions by writing us at Cyberguy.com/Contact. Your story could help someone else stay safe.For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/NewsletterAsk Kurt a question or let us know what stories you'd like us to coverFollow Kurt on his social channelsAnswers to the most asked CyberGuy questions:New from Kurt:Copyright 2025 CyberGuy.com.  All rights reserved. Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.
    #cybersecurity #risks #deepfake #scams #rise
    AI cybersecurity risks and deepfake scams on the rise
    Published May 27, 2025 10:00am EDT close Deepfake technology 'is getting so easy now': Cybersecurity expert Cybersecurity expert Morgan Wright breaks down the dangers of deepfake video technology on 'Unfiltered.' Imagine your phone rings and the voice on the other end sounds just like your boss, a close friend, or even a government official. They urgently ask for sensitive information, except it's not really them. It's a deepfake, powered by AI, and you're the target of a sophisticated scam. These kinds of attacks are happening right now, and they're getting more convincing every day.That's the warning sounded by the 2025 AI Security Report, unveiled at the RSA Conference, one of the world's biggest gatherings for cybersecurity experts, companies, and law enforcement. The report details how criminals are harnessing artificial intelligence to impersonate people, automate scams, and attack security systems on a massive scale.From hijacked AI accounts and manipulated models to live video scams and data poisoning, the report paints a picture of a rapidly evolving threat landscape, one that's touching more lives than ever before. Illustration of cybersecurity risks.AI tools are leaking sensitive dataOne of the biggest risks of using AI tools is what users accidentally share with them. A recent analysis by cybersecurity firm Check Point found that 1 in every 80 AI prompts includes high-risk data, and about 1 in 13 contains sensitive information that could expose users or organizations to security or compliance risks.This data can include passwords, internal business plans, client information, or proprietary code. When shared with AI tools that are not secured, this information can be logged, intercepted, or even leaked later.Deepfake scams are now real-time and multilingualAI-powered impersonation is getting more advanced every month. Criminals can now fake voices and faces convincingly in real time. In early 2024, a British engineering firm lost 20 million pounds after scammers used live deepfake video to impersonate company executives during a Zoom call. The attackers looked and sounded like trusted leaders and convinced an employee to transfer funds.Real-time video manipulation tools are now being sold on criminal forums. These tools can swap faces and mimic speech during video calls in multiple languages, making it easier for attackers to run scams across borders. Illustration of a person video conferencing on their laptop.AI is running phishing and scam operations at scaleSocial engineering has always been a part of cybercrime. Now, AI is automating it. Attackers no longer need to speak a victim’s language, stay online constantly, or manually write convincing messages.Tools like GoMailPro use ChatGPT to create phishing and spam emails with perfect grammar and native-sounding tone. These messages are far more convincing than the sloppy scams of the past. GoMailPro can generate thousands of unique emails, each slightly different in language and urgency, which helps them slip past spam filters. It is actively marketed on underground forums for around per month, making it widely accessible to bad actors.Another tool, the X137 Telegram Console, leverages Gemini AI to monitor and respond to chat messages automatically. It can impersonate customer support agents or known contacts, carrying out real-time conversations with multiple targets at once. The replies are uncensored, fast, and customized based on the victim’s responses, giving the illusion of a human behind the screen.AI is also powering large-scale sextortion scams. These are emails that falsely claim to have compromising videos or photos and demand payment to prevent them from being shared. Instead of using the same message repeatedly, scammers now rely on AI to rewrite the threat in dozens of ways. For example, a basic line like "Time is running out" might be reworded as "The hourglass is nearly empty for you," making the message feel more personal and urgent while also avoiding detection.By removing the need for language fluency and manual effort, these AI tools allow attackers to scale their phishing operations dramatically. Even inexperienced scammers can now run large, personalized campaigns with almost no effort. Stolen AI accounts are sold on the dark webWith AI tools becoming more popular, criminals are now targeting the accounts that use them. Hackers are stealing ChatGPT logins, OpenAI API keys, and other platform credentials to bypass usage limits and hide their identity. These accounts are often stolen through malware, phishing, or credential stuffing attacks. The stolen credentials are then sold in bulk on Telegram channels and underground forums. Some attackers are even using tools that can bypass multi-factor authentication and session-based security protections. These stolen accounts allow criminals to access powerful AI tools and use them for phishing, malware generation, and scam automation. Illustration of a person signing into their laptop.Jailbreaking AI is now a common tacticCriminals are finding ways to bypass the safety rules built into AI models. On the dark web, attackers share techniques for jailbreaking AI so it will respond to requests that would normally be blocked. Common methods include:Telling the AI to pretend it is a fictional character that has no rules or limitationsPhrasing dangerous questions as academic or research-related scenariosAsking for technical instructions using less obvious wording so the request doesn’t get flaggedSome AI models can even be tricked into jailbreaking themselves. Attackers prompt the model to create input that causes it to override its own restrictions. This shows how AI systems can be manipulated in unexpected and dangerous ways.AI-generated malware is entering the mainstreamAI is now being used to build malware, phishing kits, ransomware scripts, and more. Recently, a group called FunkSac was identified as the leading ransomware gang using AI. Its leader admitted that at least 20% of their attacks are powered by AI. FunkSec has also used AI to help launch attacks that flood websites or services with fake traffic, making them crash or go offline. These are known as denial-of-service attacks. The group even created its own AI-powered chatbot to promote its activities and communicate with victims on its public website..Some cybercriminals are even using AI to help with marketing and data analysis after an attack. One tool called Rhadamanthys Stealer 0.7 claimed to use AI for "text recognition" to sound more advanced, but researchers later found it was using older technology instead. This shows how attackers use AI buzzwords to make their tools seem more advanced or trustworthy to buyers.Other tools are more advanced. One example is DarkGPT, a chatbot built specifically to sort through huge databases of stolen information. After a successful attack, scammers often end up with logs full of usernames, passwords, and other private details. Instead of sifting through this data manually, they use AI to quickly find valuable accounts they can break into, sell, or use for more targeted attacks like ransomware.Get a free scan to find out if your personal information is already out on the web Poisoned AI models are spreading misinformationSometimes, attackers do not need to hack an AI system. Instead, they trick it by feeding it false or misleading information. This tactic is called AI poisoning, and it can cause the AI to give biased, harmful, or completely inaccurate answers. There are two main ways this happens:Training poisoning: Attackers sneak false or harmful data into the model during developmentRetrieval poisoning: Misleading content online gets planted, which the AI later picks up when generating answersIn 2024, attackers uploaded 100 tampered AI models to the open-source platform Hugging Face. These poisoned models looked like helpful tools, but when people used them, they could spread false information or output malicious code.A large-scale example came from a Russian propaganda group called Pravda, which published more than 3.6 million fake articles online. These articles were designed to trick AI chatbots into repeating their messages. In tests, researchers found that major AI systems echoed these false claims about 33% of the time. Illustration of a hacker at workHow to protect yourself from AI-driven cyber threatsAI-powered cybercrime blends realism, speed, and scale. These scams are not just harder to detect. They are also easier to launch. Here’s how to stay protected:1) Avoid entering sensitive data into public AI tools: Never share passwords, personal details, or confidential business information in any AI chat, even if it seems private. These inputs can sometimes be logged or misused.2) Use strong antivirus software: AI-generated phishing emails and malware can slip past outdated security tools. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices.3) Turn on two-factor authentication: 2FA adds an extra layer of protection to your accounts, including AI platforms. It makes it much harder for attackers to break in using stolen passwords.4) Be extra cautious with unexpected video calls or voice messages: If something feels off, even if the person seems familiar, verify before taking action. Deepfake audio and video can sound and look very real.5) Use a personal data removal service: With AI-powered scams and deepfake attacks on the rise, criminals are increasingly relying on publicly available personal information to craft convincing impersonations or target victims with personalized phishing. By using a reputable personal data removal service, you can reduce your digital footprint on data broker sites and public databases. This makes it much harder for scammers to gather the details they need to convincingly mimic your identity or launch targeted AI-driven attacks.While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice.  They aren’t cheap - and neither is your privacy.  These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites.  It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet.  By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you. Check out my top picks for data removal services here. 6) Consider identity theft protection: If your data is leaked through a scam, early detection is key. Identity protection services can monitor your information and alert you to suspicious activity. Identity Theft companies can monitor personal information like your Social Security Number, phone number, and email address, and alert you if it is being sold on the dark web or being used to open an account.  They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals. See my tips and best picks on how to protect yourself from identity theft.7) Regularly monitor your financial accounts: AI-generated phishing, malware, and account takeover attacks are now more sophisticated and widespread than ever, as highlighted in the 2025 AI Security Report. By frequently reviewing your bank and credit card statements for suspicious activity, you can catch unauthorized transactions early, often before major damage is done. Quick detection is crucial, especially since stolen credentials and financial information are now being traded and exploited at scale by cybercriminals using AI.8) Use a secure password manager: Stolen AI accounts and credential stuffing attacks are a growing threat, with hackers using automated tools to break into accounts and sell access on the dark web. A secure password manager helps you create and store strong, unique passwords for every account, making it far more difficult for attackers to compromise your logins, even if some of your information is leaked or targeted by AI-driven attacks. Get more details about my best expert-reviewed Password Managers of 2025 here.9) Keep your software updated: AI-generated malware and advanced phishing kits are designed to exploit vulnerabilities in outdated software. To stay ahead of these evolving threats, ensure all your devices, browsers, and applications are updated with the latest security patches. Regular updates close security gaps that AI-powered malware and cybercriminals are actively seeking to exploit. Kurt's key takeawaysCybercriminals are now using AI to power some of the most convincing and scalable attacks we’ve ever seen. From deepfake video calls and AI-generated phishing emails to stolen AI accounts and malware written by chatbots, these scams are becoming harder to detect and easier to launch. Attackers are even poisoning AI models with false information and creating fake tools that look legitimate but are designed to do harm. To stay safe, it’s more important than ever to use strong antivirus protection, enable multi-factor authentication, and avoid sharing sensitive data with AI tools you do not fully trust.Have you noticed AI scams getting more convincing? Let us know your experience or questions by writing us at Cyberguy.com/Contact. Your story could help someone else stay safe.For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/NewsletterAsk Kurt a question or let us know what stories you'd like us to coverFollow Kurt on his social channelsAnswers to the most asked CyberGuy questions:New from Kurt:Copyright 2025 CyberGuy.com.  All rights reserved. Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com. #cybersecurity #risks #deepfake #scams #rise
    WWW.FOXNEWS.COM
    AI cybersecurity risks and deepfake scams on the rise
    Published May 27, 2025 10:00am EDT close Deepfake technology 'is getting so easy now': Cybersecurity expert Cybersecurity expert Morgan Wright breaks down the dangers of deepfake video technology on 'Unfiltered.' Imagine your phone rings and the voice on the other end sounds just like your boss, a close friend, or even a government official. They urgently ask for sensitive information, except it's not really them. It's a deepfake, powered by AI, and you're the target of a sophisticated scam. These kinds of attacks are happening right now, and they're getting more convincing every day.That's the warning sounded by the 2025 AI Security Report, unveiled at the RSA Conference (RSAC), one of the world's biggest gatherings for cybersecurity experts, companies, and law enforcement. The report details how criminals are harnessing artificial intelligence to impersonate people, automate scams, and attack security systems on a massive scale.From hijacked AI accounts and manipulated models to live video scams and data poisoning, the report paints a picture of a rapidly evolving threat landscape, one that's touching more lives than ever before. Illustration of cybersecurity risks. (Kurt "CyberGuy" Knutsson)AI tools are leaking sensitive dataOne of the biggest risks of using AI tools is what users accidentally share with them. A recent analysis by cybersecurity firm Check Point found that 1 in every 80 AI prompts includes high-risk data, and about 1 in 13 contains sensitive information that could expose users or organizations to security or compliance risks.This data can include passwords, internal business plans, client information, or proprietary code. When shared with AI tools that are not secured, this information can be logged, intercepted, or even leaked later.Deepfake scams are now real-time and multilingualAI-powered impersonation is getting more advanced every month. Criminals can now fake voices and faces convincingly in real time. In early 2024, a British engineering firm lost 20 million pounds after scammers used live deepfake video to impersonate company executives during a Zoom call. The attackers looked and sounded like trusted leaders and convinced an employee to transfer funds.Real-time video manipulation tools are now being sold on criminal forums. These tools can swap faces and mimic speech during video calls in multiple languages, making it easier for attackers to run scams across borders. Illustration of a person video conferencing on their laptop. (Kurt "CyberGuy" Knutsson)AI is running phishing and scam operations at scaleSocial engineering has always been a part of cybercrime. Now, AI is automating it. Attackers no longer need to speak a victim’s language, stay online constantly, or manually write convincing messages.Tools like GoMailPro use ChatGPT to create phishing and spam emails with perfect grammar and native-sounding tone. These messages are far more convincing than the sloppy scams of the past. GoMailPro can generate thousands of unique emails, each slightly different in language and urgency, which helps them slip past spam filters. It is actively marketed on underground forums for around $500 per month, making it widely accessible to bad actors.Another tool, the X137 Telegram Console, leverages Gemini AI to monitor and respond to chat messages automatically. It can impersonate customer support agents or known contacts, carrying out real-time conversations with multiple targets at once. The replies are uncensored, fast, and customized based on the victim’s responses, giving the illusion of a human behind the screen.AI is also powering large-scale sextortion scams. These are emails that falsely claim to have compromising videos or photos and demand payment to prevent them from being shared. Instead of using the same message repeatedly, scammers now rely on AI to rewrite the threat in dozens of ways. For example, a basic line like "Time is running out" might be reworded as "The hourglass is nearly empty for you," making the message feel more personal and urgent while also avoiding detection.By removing the need for language fluency and manual effort, these AI tools allow attackers to scale their phishing operations dramatically. Even inexperienced scammers can now run large, personalized campaigns with almost no effort. Stolen AI accounts are sold on the dark webWith AI tools becoming more popular, criminals are now targeting the accounts that use them. Hackers are stealing ChatGPT logins, OpenAI API keys, and other platform credentials to bypass usage limits and hide their identity. These accounts are often stolen through malware, phishing, or credential stuffing attacks. The stolen credentials are then sold in bulk on Telegram channels and underground forums. Some attackers are even using tools that can bypass multi-factor authentication and session-based security protections. These stolen accounts allow criminals to access powerful AI tools and use them for phishing, malware generation, and scam automation. Illustration of a person signing into their laptop. (Kurt "CyberGuy" Knutsson)Jailbreaking AI is now a common tacticCriminals are finding ways to bypass the safety rules built into AI models. On the dark web, attackers share techniques for jailbreaking AI so it will respond to requests that would normally be blocked. Common methods include:Telling the AI to pretend it is a fictional character that has no rules or limitationsPhrasing dangerous questions as academic or research-related scenariosAsking for technical instructions using less obvious wording so the request doesn’t get flaggedSome AI models can even be tricked into jailbreaking themselves. Attackers prompt the model to create input that causes it to override its own restrictions. This shows how AI systems can be manipulated in unexpected and dangerous ways.AI-generated malware is entering the mainstreamAI is now being used to build malware, phishing kits, ransomware scripts, and more. Recently, a group called FunkSac was identified as the leading ransomware gang using AI. Its leader admitted that at least 20% of their attacks are powered by AI. FunkSec has also used AI to help launch attacks that flood websites or services with fake traffic, making them crash or go offline. These are known as denial-of-service attacks. The group even created its own AI-powered chatbot to promote its activities and communicate with victims on its public website..Some cybercriminals are even using AI to help with marketing and data analysis after an attack. One tool called Rhadamanthys Stealer 0.7 claimed to use AI for "text recognition" to sound more advanced, but researchers later found it was using older technology instead. This shows how attackers use AI buzzwords to make their tools seem more advanced or trustworthy to buyers.Other tools are more advanced. One example is DarkGPT, a chatbot built specifically to sort through huge databases of stolen information. After a successful attack, scammers often end up with logs full of usernames, passwords, and other private details. Instead of sifting through this data manually, they use AI to quickly find valuable accounts they can break into, sell, or use for more targeted attacks like ransomware.Get a free scan to find out if your personal information is already out on the web Poisoned AI models are spreading misinformationSometimes, attackers do not need to hack an AI system. Instead, they trick it by feeding it false or misleading information. This tactic is called AI poisoning, and it can cause the AI to give biased, harmful, or completely inaccurate answers. There are two main ways this happens:Training poisoning: Attackers sneak false or harmful data into the model during developmentRetrieval poisoning: Misleading content online gets planted, which the AI later picks up when generating answersIn 2024, attackers uploaded 100 tampered AI models to the open-source platform Hugging Face. These poisoned models looked like helpful tools, but when people used them, they could spread false information or output malicious code.A large-scale example came from a Russian propaganda group called Pravda, which published more than 3.6 million fake articles online. These articles were designed to trick AI chatbots into repeating their messages. In tests, researchers found that major AI systems echoed these false claims about 33% of the time. Illustration of a hacker at work (Kurt "CyberGuy" Knutsson)How to protect yourself from AI-driven cyber threatsAI-powered cybercrime blends realism, speed, and scale. These scams are not just harder to detect. They are also easier to launch. Here’s how to stay protected:1) Avoid entering sensitive data into public AI tools: Never share passwords, personal details, or confidential business information in any AI chat, even if it seems private. These inputs can sometimes be logged or misused.2) Use strong antivirus software: AI-generated phishing emails and malware can slip past outdated security tools. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices.3) Turn on two-factor authentication (2FA): 2FA adds an extra layer of protection to your accounts, including AI platforms. It makes it much harder for attackers to break in using stolen passwords.4) Be extra cautious with unexpected video calls or voice messages: If something feels off, even if the person seems familiar, verify before taking action. Deepfake audio and video can sound and look very real.5) Use a personal data removal service: With AI-powered scams and deepfake attacks on the rise, criminals are increasingly relying on publicly available personal information to craft convincing impersonations or target victims with personalized phishing. By using a reputable personal data removal service, you can reduce your digital footprint on data broker sites and public databases. This makes it much harder for scammers to gather the details they need to convincingly mimic your identity or launch targeted AI-driven attacks.While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice.  They aren’t cheap - and neither is your privacy.  These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites.  It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet.  By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you. Check out my top picks for data removal services here. 6) Consider identity theft protection: If your data is leaked through a scam, early detection is key. Identity protection services can monitor your information and alert you to suspicious activity. Identity Theft companies can monitor personal information like your Social Security Number (SSN), phone number, and email address, and alert you if it is being sold on the dark web or being used to open an account.  They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals. See my tips and best picks on how to protect yourself from identity theft.7) Regularly monitor your financial accounts: AI-generated phishing, malware, and account takeover attacks are now more sophisticated and widespread than ever, as highlighted in the 2025 AI Security Report. By frequently reviewing your bank and credit card statements for suspicious activity, you can catch unauthorized transactions early, often before major damage is done. Quick detection is crucial, especially since stolen credentials and financial information are now being traded and exploited at scale by cybercriminals using AI.8) Use a secure password manager: Stolen AI accounts and credential stuffing attacks are a growing threat, with hackers using automated tools to break into accounts and sell access on the dark web. A secure password manager helps you create and store strong, unique passwords for every account, making it far more difficult for attackers to compromise your logins, even if some of your information is leaked or targeted by AI-driven attacks. Get more details about my best expert-reviewed Password Managers of 2025 here.9) Keep your software updated: AI-generated malware and advanced phishing kits are designed to exploit vulnerabilities in outdated software. To stay ahead of these evolving threats, ensure all your devices, browsers, and applications are updated with the latest security patches. Regular updates close security gaps that AI-powered malware and cybercriminals are actively seeking to exploit. Kurt's key takeawaysCybercriminals are now using AI to power some of the most convincing and scalable attacks we’ve ever seen. From deepfake video calls and AI-generated phishing emails to stolen AI accounts and malware written by chatbots, these scams are becoming harder to detect and easier to launch. Attackers are even poisoning AI models with false information and creating fake tools that look legitimate but are designed to do harm. To stay safe, it’s more important than ever to use strong antivirus protection, enable multi-factor authentication, and avoid sharing sensitive data with AI tools you do not fully trust.Have you noticed AI scams getting more convincing? Let us know your experience or questions by writing us at Cyberguy.com/Contact. Your story could help someone else stay safe.For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/NewsletterAsk Kurt a question or let us know what stories you'd like us to coverFollow Kurt on his social channelsAnswers to the most asked CyberGuy questions:New from Kurt:Copyright 2025 CyberGuy.com.  All rights reserved. Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.
    1 Σχόλια 0 Μοιράστηκε
  • Weekly Recap: APT Campaigns, Browser Hijacks, AI Malware, Cloud Breaches and Critical CVEs

    Cyber threats don't show up one at a time anymore. They're layered, planned, and often stay hidden until it's too late.
    For cybersecurity teams, the key isn't just reacting to alerts—it's spotting early signs of trouble before they become real threats. This update is designed to deliver clear, accurate insights based on real patterns and changes we can verify. With today's complex systems, we need focused analysis—not noise.
    What you'll see here isn't just a list of incidents, but a clear look at where control is being gained, lost, or quietly tested.
    Threat of the Week
    Lumma Stealer, DanaBot Operations Disrupted — A coalition of private sector companies and law enforcement agencies have taken down the infrastructure associated with Lumma Stealer and DanaBot. Charges have also been unsealed against 16 individuals for their alleged involvement in the development and deployment of DanaBot. The malware is equipped to siphon data from victim computers, hijack banking sessions, and steal device information. More uniquely, though, DanaBot has also been used for hacking campaigns that appear to be linked to Russian state-sponsored interests. All of that makes DanaBot a particularly clear example of how commodity malware has been repurposed by Russian state hackers for their own goals. In tandem, about 2,300 domains that acted as the command-and-controlbackbone for the Lumma information stealer have been seized, alongside taking down 300 servers and neutralizing 650 domains that were used to launch ransomware attacks. The actions against international cybercrime in the past few days constituted the latest phase of Operation Endgame.

    Get the Guide ➝

    Top News

    Threat Actors Use TikTok Videos to Distribute Stealers — While ClickFix has become a popular social engineering tactic to deliver malware, threat actors have been observed using artificial intelligence-generated videos uploaded to TikTok to deceive users into running malicious commands on their systems and deploy malware like Vidar and StealC under the guise of activating pirated version of Windows, Microsoft Office, CapCut, and Spotify. "This campaign highlights how attackers are ready to weaponize whichever social media platforms are currently popular to distribute malware," Trend Micro said.
    APT28 Hackers Target Western Logistics and Tech Firms — Several cybersecurity and intelligence agencies from Australia, Europe, and the United States issued a joint alert warning of a state-sponsored campaign orchestrated by the Russian state-sponsored threat actor APT28 targeting Western logistics entities and technology companies since 2022. "This cyber espionage-oriented campaign targeting logistics entities and technology companies uses a mix of previously disclosed TTPs and is likely connected to these actors' wide scale targeting of IP cameras in Ukraine and bordering NATO nations," the agencies said. The attacks are designed to steal sensitive information and maintain long-term persistence on compromised hosts.
    Chinese Threat Actors Exploit Ivanti EPMM Flaws — The China-nexus cyber espionage group tracked as UNC5221 has been attributed to the exploitation of a pair of security flaws affecting Ivanti Endpoint Manager Mobilesoftwareto target a wide range of sectors across Europe, North America, and the Asia-Pacific region. The intrusions leverage the vulnerabilities to obtain a reverse shell and drop malicious payloads like KrustyLoader, which is known to deliver the Sliver command-and-controlframework. "UNC5221 demonstrates a deep understanding of EPMM's internal architecture, repurposing legitimate system components for covert data exfiltration," EclecticIQ said. "Given EPMM's role in managing and pushing configurations to enterprise mobile devices, a successful exploitation could allow threat actors to remotely access, manipulate, or compromise thousands of managed devices across an organization."
    Over 100 Google Chrome Extensions Mimic Popular Tools — An unknown threat actor has been attributed to creating several malicious Chrome Browser extensions since February 2024 that masquerade as seemingly benign utilities such as DeepSeek, Manus, DeBank, FortiVPN, and Site Stats but incorporate covert functionality to exfiltrate data, receive commands, and execute arbitrary code. Links to these browser add-ons are hosted on specially crafted sites to which users are likely redirected to via phishing and social media posts. While the extensions appear to offer the advertised features, they also stealthily facilitate credential and cookie theft, session hijacking, ad injection, malicious redirects, traffic manipulation, and phishing via DOM manipulation. Several of these extensions have been taken down by Google.
    CISA Warns of SaaS Providers of Attacks Targeting Cloud Environments — The U.S. Cybersecurity and Infrastructure Security Agencywarned that SaaS companies are under threat from bad actors who are on the prowl for cloud applications with default configurations and elevated permissions. While the agency did not attribute the activity to a specific group, the advisory said enterprise backup platform Commvault is monitoring cyber threat activity targeting applications hosted in their Microsoft Azure cloud environment. "Threat actors may have accessed client secrets for Commvault'sMicrosoft 365backup software-as-a-servicesolution, hosted in Azure," CISA said. "This provided the threat actors with unauthorized access to Commvault's customers' M365 environments that have application secrets stored by Commvault."
    GitLab AI Coding Assistant Flaws Could Be Used to Inject Malicious Code — Cybersecurity researchers have discovered an indirect prompt injection flaw in GitLab's artificial intelligenceassistant Duo that could have allowed attackers to steal source code and inject untrusted HTML into its responses, which could then be used to direct victims to malicious websites. The attack could also leak confidential issue data, such as zero-day vulnerability details. All that's required is for the attacker to instruct the chatbot to interact with a merge requestby taking advantage of the fact that GitLab Duo has extensive access to the platform. "By embedding hidden instructions in seemingly harmless project content, we were able to manipulate Duo's behavior, exfiltrate private source code, and demonstrate how AI responses can be leveraged for unintended and harmful outcomes," Legit Security said. One variation of the attack involved hiding a malicious instruction in an otherwise legitimate piece of source code, while another exploited Duo's parsing of markdown responses in real-time asynchronously. An attacker could leverage this behavior – that Duo begins rendering the output line by line rather than waiting until the entire response is generated and sending it all at once – to introduce malicious HTML code that can access sensitive data and exfiltrate the information to a remote server. The issues have been patched by GitLab following responsible disclosure.

    ‎️‍ Trending CVEs
    Software vulnerabilities remain one of the simplest—and most effective—entry points for attackers. Each week uncovers new flaws, and even small delays in patching can escalate into serious security incidents. Staying ahead means acting fast. Below is this week's list of high-risk vulnerabilities that demand attention. Review them carefully, apply updates without delay, and close the doors before they're forced open.
    This week's list includes — CVE-2025-34025, CVE-2025-34026, CVE-2025-34027, CVE-2025-30911, CVE-2024-57273, CVE-2024-54780, and CVE-2024-54779, CVE-2025-41229, CVE-2025-4322, CVE-2025-47934, CVE-2025-30193, CVE-2025-0993, CVE-2025-36535, CVE-2025-47949, CVE-2025-40775, CVE-2025-20152, CVE-2025-4123, CVE-2025-5063, CVE-2025-37899, CVE-2025-26817, CVE-2025-47947, CVE-2025-3078, CVE-2025-3079, and CVE-2025-4978.
    Around the Cyber World

    Sandworm Drops New Wiper in Ukraine — The Russia-aligned Sandworm group intensified destructive operations against Ukrainian energy companies, deploying a new wiper named ZEROLOT. "The infamous Sandworm group concentrated heavily on compromising Ukrainian energy infrastructure. In recent cases, it deployed the ZEROLOT wiper in Ukraine. For this, the attackers abused Active Directory Group Policy in the affected organizations," ESET Director of Threat Research, Jean-Ian Boutin, said. Another Russian hacking group, Gamaredon, remained the most prolific actor targeting the East European nation, enhancing malware obfuscation and introducing PteroBox, a file stealer leveraging Dropbox.
    Signal Says No to Recall — Signal has released a new version of its messaging app for Windows that, by default, blocks the ability of Windows to use Recall to periodically take screenshots of the app. "Although Microsoft made several adjustments over the past twelve months in response to critical feedback, the revamped version of Recall still places any content that's displayed within privacy-preserving apps like Signal at risk," Signal said. "As a result, we are enabling an extra layer of protection by default on Windows 11 in order to help maintain the security of Signal Desktop on that platform even though it introduces some usability trade-offs. Microsoft has simply given us no other option." Microsoft began officially rolling out Recall last month.
    Russia Introduces New Law to Track Foreigners Using Their Smartphones — The Russian government has introduced a new law that makes installing a tracking app mandatory for all foreign nationals in the Moscow region. This includes gathering their real-time locations, fingerprint, face photograph, and residential information. "The adopted mechanism will allow, using modern technologies, to strengthen control in the field of migration and will also contribute to reducing the number of violations and crimes in this area," Vyacheslav Volodin, chairman of the State Duma, said. "If migrants change their actual place of residence, they will be required to inform the Ministry of Internal Affairswithin three working days." A proposed four-year trial period begins on September 1, 2025, and runs until September 1, 2029.
    Dutch Government Passes Law to Criminalize Cyber Espionage — The Dutch government has approved a law criminalizing a wide range of espionage activities, including digital espionage, in an effort to protect national security, critical infrastructure, and high-quality technologies. Under the amended law, leaking sensitive information that is not classified as a state secret or engaging in activities on behalf of a foreign government that harm Dutch interests can also result in criminal charges. "Foreign governments are also interested in non-state-secret, sensitive information about a particular economic sector or about political decision-making," the government said. "Such information can be used to influence political processes, weaken the Dutch economy or play allies against each other. Espionage can also involve actions other than sharing information."
    Microsoft Announces Availability of Quantum-Resistant Algorithms to SymCrypt — Microsoft has revealed that it's making post-quantum cryptographycapabilities, including ML-KEM and ML-DSA, available for Windows Insiders, Canary Channel Build 27852 and higher, and Linux, SymCrypt-OpenSSL version 1.9.0. "This advancement will enable customers to commence their exploration and experimentation of PQC within their operational environments," Microsoft said. "By obtaining early access to PQC capabilities, organizations can proactively assess the compatibility, performance, and integration of these novel algorithms alongside their existing security infrastructure."
    New Malware DOUBLELOADER Uses ALCATRAZ for Obfuscation — The open-source obfuscator ALCATRAZ has been seen within a new generic loader dubbed DOUBLELOADER, which has been deployed alongside Rhadamanthys Stealer infections starting December 2024. The malware collects host information, requests an updated version of itself, and starts beaconing to a hardcoded IP addressstored within the binary. "Obfuscators such as ALCATRAZ end up increasing the complexity when triaging malware," Elastic Security Labs said. "Its main goal is to hinder binary analysis tools and increase the time of the reverse engineering process through different techniques; such as hiding the control flow or making decompilation hard to follow."
    New Formjacking Campaign Targets WooCommerce Sites — Cybersecurity researchers have detected a sophisticated formjacking campaign targeting WooCommerce sites. The malware, per Wordfence, injects a fake but professional-looking payment form into legitimate checkout processes and exfiltrates sensitive customer data to an external server. Further analysis has revealed that the infection likely originated from a compromised WordPress admin account, which was used to inject malicious JavaScript via a Simple Custom CSS and JS pluginthat allows administrators to add custom code. "Unlike traditional card skimmers that simply overlay existing forms, this variant carefully integrates with the WooCommerce site's design and payment workflow, making it particularly difficult for site owners and users to detect," the WordPress security company said. "The malware author repurposed the browser's localStorage mechanism – typically used by websites to remember user preferences – to silently store stolen data and maintain access even after page reloads or when navigating away from the checkout page."

    E.U. Sanctions Stark Industries — The European Unionhas announced sanctions against 21 individuals and six entities in Russia over its "destabilising actions" in the region. One of the sanctioned entities is Stark Industries, a bulletproof hosting provider that has been accused of acting as "enablers of various Russian state-sponsored and affiliated actors to conduct destabilising activities including, information manipulation interference and cyber attacks against the Union and third countries." The sanctions also target its CEO Iurie Neculiti and owner Ivan Neculiti. Stark Industries was previously spotlighted by independent cybersecurity journalist Brian Krebs, detailing its use in DDoS attacks in Ukraine and across Europe. In August 2024, Team Cymru said it discovered 25 Stark-assigned IP addresses used to host domains associated with FIN7 activities and that it had been working with Stark Industries for several months to identify and reduce abuse of their systems. The sanctions have also targeted Kremlin-backed manufacturers of drones and radio communication equipment used by the Russian military, as well as those involved in GPS signal jamming in Baltic states and disrupting civil aviation.
    The Mask APT Unmasked as Tied to the Spanish Government — The mysterious threat actor known as The Maskhas been identified as run by the Spanish government, according to a report published by TechCrunch, citing people who worked at Kaspersky at the time and had knowledge of the investigation. The Russian cybersecurity company first exposed the hacking group in 2014, linking it to highly sophisticated attacks since at least 2007 targeting high-profile organizations, such as governments, diplomatic entities, and research institutions. A majority of the group's attacks have targeted Cuba, followed by hundreds of victims in Brazil, Morocco, Spain, and Gibraltar. While Kaspersky has not publicly attributed it to a specific country, the latest revelation makes The Mask one of the few Western government hacking groups that has ever been discussed in public. This includes the Equation Group, the Lamberts, and Animal Farm.
    Social Engineering Scams Target Coinbase Users — Earlier this month, cryptocurrency exchange Coinbase revealed that it was the victim of a malicious attack perpetrated by unknown threat actors to breach its systems by bribing customer support agents in India and siphon funds from nearly 70,000 customers. According to Blockchain security firm SlowMist, Coinbase users have been the target of social engineering scams since the start of the year, bombarding with SMS messages claiming to be fake withdrawal requests and seeking their confirmation as part of a "sustained and organized scam campaign." The goal is to induce a false sense of urgency and trick them into calling a number, eventually convincing them to transfer the funds to a secure wallet with a seed phrase pre-generated by the attackers and ultimately drain the assets. It's assessed that the activities are primarily carried out by two groups: low-level skid attackers from the Com community and organized cybercrime groups based in India. "Using spoofed PBX phone systems, scammers impersonate Coinbase support and claim there's been 'unauthorized access' or 'suspicious withdrawals' on the user's account," SlowMist said. "They create a sense of urgency, then follow up with phishing emails or texts containing fake ticket numbers or 'recovery links.'"
    Delta Can Sue CrowdStrike Over July 2024 Mega Outage — Delta Air Lines, which had its systems crippled and almost 7,000 flights canceled in the wake of a massive outage caused by a faulty update issued by CrowdStrike in mid-July 2024, has been given the green light to pursue to its lawsuit against the cybersecurity company. A judge in the U.S. state of Georgia stating Delta can try to prove that CrowdStrike was grossly negligent by pushing a defective update to its Falcon software to customers. The update crashed 8.5 million Windows devices across the world. Crowdstrike previously claimed that the airline had rejected technical support offers both from itself and Microsoft. In a statement shared with Reuters, lawyers representing CrowdStrike said they were "confident the judge will find Delta's case has no merit, or will limit damages to the 'single-digit millions of dollars' under Georgia law." The development comes months after MGM Resorts International agreed to pay million to settle multiple class-action lawsuits related to a data breach in 2019 and a ransomware attack the company experienced in 2023.
    Storm-1516 Uses AI-Generated Media to Spread Disinformation — The Russian influence operation known as Storm-1516sought to spread narratives that undermined the European support for Ukraine by amplifying fabricated stories on X about European leaders using drugs while traveling by train to Kyiv for peace talks. One of the posts was subsequently shared by Russian state media and Maria Zakharova, a senior official in Russia's foreign ministry, as part of what has been described as a coordinated disinformation campaign by EclecticIQ. The activity is also notable for the use of synthetic content depicting French President Emmanuel Macron, U.K. Labour Party leader Keir Starmer, and German chancellor Friedrich Merz of drug possession during their return from Ukraine. "By attacking the reputation of these leaders, the campaign likely aimed to turn their own voters against them, using influence operationsto reduce public support for Ukraine by discrediting the politicians who back it," the Dutch threat intelligence firm said.
    Turkish Users Targeted by DBatLoader — AhnLab has disclosed details of a malware campaign that's distributing a malware loader called DBatLoadervia banking-themed banking emails, which then acts as a conduit to deliver SnakeKeylogger, an information stealer developed in .NET. "The DBatLoader malware distributed through phishing emails has the cunning behavior of exploiting normal processesthrough techniques such as DLL side-loading and injection for most of its behaviors, and it also utilizes normal processesfor behaviors such as file copying and changing policies," the company said.
    SEC SIM-Swapper Sentenced to 14 Months for SEC X Account Hack — A 26-year-old Alabama man, Eric Council Jr., has been sentenced to 14 months in prison and three years of supervised release for using SIM swapping attacks to breach the U.S. Securities and Exchange Commission'sofficial X account in January 2024 and falsely announced that the SEC approved BitcoinExchange Traded Funds. Council Jr.was arrested in October 2024 and pleaded guilty to the crime earlier this February. He has also been ordered to forfeit According to court documents, Council used his personal computer to search incriminating phrases such as "SECGOV hack," "telegram sim swap," "how can I know for sure if I am being investigated by the FBI," "What are the signs that you are under investigation by law enforcement or the FBI even if you have not been contacted by them," "what are some signs that the FBI is after you," "Verizon store list," "federal identity theft statute," and "how long does it take to delete telegram account."
    FBI Warns of Malicious Campaign Impersonating Government Officials — The U.S. Federal Bureau of Investigationis warning of a new campaign that involves malicious actors impersonating senior U.S. federal or state government officials and their contacts to target individuals since April 2025. "The malicious actors have sent text messages and AI-generated voice messages — techniques known as smishing and vishing, respectively — that claim to come from a senior US official in an effort to establish rapport before gaining access to personal accounts," the FBI said. "One way the actors gain such access is by sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform." From there, the actor may present malware or introduce hyperlinks that lead intended targets to an actor-controlled site that steals login information.
    DICOM Flaw Enables Attackers to Embed Malicious Code Within Medical Image Files — Praetorian has released a proof-of-conceptfor a high-severity security flaw in Digital Imaging and Communications in Medicine, predominant file format for medical images, that enables attackers to embed malicious code within legitimate medical image files. CVE-2019-11687, originally disclosed in 2019 by Markel Picado Ortiz, stems from a design decision that allows arbitrary content at the start of the file, otherwise called the Preamble, which enables the creation of malicious polyglots. Codenamed ELFDICOM, the PoC extends the attack surface to Linux environments, making it a much more potent threat. As mitigations, it's advised to implement a DICOM preamble whitelist. "DICOM's file structure inherently allows arbitrary bytes at the beginning of the file, where Linux and most operating systems will look for magic bytes," Praetorian researcher Ryan Hennessee said. "would check a DICOM file's preamble before it is imported into the system. This would allow known good patterns, such as 'TIFF' magic bytes, or '\x00' null bytes, while files with the ELF magic bytes would be blocked."
    Cookie-Bite Attack Uses Chrome Extension to Steal Session Tokens — Cybersecurity researchers have demonstrated a new attack technique called Cookie-Bite that employs custom-made malicious browser extensions to steal "ESTAUTH" and "ESTSAUTHPERSISTNT" cookies in Microsoft Azure Entra ID and bypass multi-factor authentication. The attack has multiple moving parts to it: A custom Chrome extension that monitors authentication events and captures cookies; a PowerShell script that automates the extension deployment and ensures persistence; an exfiltration mechanism to send the cookies to a remote collection point; and a complementary extension to inject the captured cookies into the attacker's browser. "Threat actors often use infostealers to extract authentication tokens directly from a victim's machine or buy them directly through darkness markets, allowing adversaries to hijack active cloud sessions without triggering MFA," Varonis said. "By injecting these cookies while mimicking the victim's OS, browser, and network, attackers can evade Conditional Access Policiesand maintain persistent access." Authentication cookies can also be stolen using adversary-in-the-middlephishing kits in real-time, or using rogue browser extensions that request excessive permissions to interact with web sessions, modify page content, and extract stored authentication data. Once installed, the extension can access the browser's storage API, intercept network requests, or inject malicious JavaScript into active sessions to harvest real-time session cookies. "By leveraging stolen session cookies, an adversary can bypass authentication mechanisms, gaining seamless entry into cloud environments without requiring user credentials," Varonis said. "Beyond initial access, session hijacking can facilitate lateral movement across the tenant, allowing attackers to explore additional resources, access sensitive data, and escalate privileges by abusing existing permissions or misconfigured roles."

    Cybersecurity Webinars

    Non-Human Identities: The AI Backdoor You're Not Watching → AI agents rely on Non-Human Identitiesto function—but these are often left untracked and unsecured. As attackers shift focus to this hidden layer, the risk is growing fast. In this session, you'll learn how to find, secure, and monitor these identities before they're exploited. Join the webinar to understand the real risks behind AI adoption—and how to stay ahead.
    Inside the LOTS Playbook: How Hackers Stay Undetected → Attackers are using trusted sites to stay hidden. In this webinar, Zscaler experts share how they detect these stealthy LOTS attacks using insights from the world's largest security cloud. Join to learn how to spot hidden threats and improve your defense.

    Cybersecurity Tools

    ScriptSentry → It is a free tool that scans your environment for dangerous logon script misconfigurations—like plaintext credentials, insecure file/share permissions, and references to non-existent servers. These overlooked issues can enable lateral movement, privilege escalation, or even credential theft. ScriptSentry helps you quickly identify and fix them across large Active Directory environments.
    Aftermath → It is a Swift-based, open-source tool for macOS incident response. It collects forensic data—like logs, browser activity, and process info—from compromised systems, then analyzes it to build timelines and track infection paths. Deploy via MDM or run manually. Fast, lightweight, and ideal for post-incident investigation.
    AI Red Teaming Playground Labs → It is an open-source training suite with hands-on challenges designed to teach security professionals how to red team AI systems. Originally developed for Black Hat USA 2024, the labs cover prompt injections, safety bypasses, indirect attacks, and Responsible AI failures. Built on Chat Copilot and deployable via Docker, it's a practical resource for testing and understanding real-world AI vulnerabilities.

    Tip of the Week
    Review and Revoke Old OAuth App Permissions — They're Silent Backdoor → You've likely logged into apps using "Continue with Google," "Sign in with Microsoft," or GitHub/Twitter/Facebook logins. That's OAuth. But did you know many of those apps still have access to your data long after you stop using them?
    Why it matters:
    Even if you delete the app or forget it existed, it might still have ongoing access to your calendar, email, cloud files, or contact list — no password needed. If that third-party gets breached, your data is at risk.
    What to do:

    Go through your connected apps here:
    Google: myaccount.google.com/permissions
    Microsoft: account.live.com/consent/Manage
    GitHub: github.com/settings/applications
    Facebook: facebook.com/settings?tab=applications

    Revoke anything you don't actively use. It's a fast, silent cleanup — and it closes doors you didn't know were open.
    Conclusion
    Looking ahead, it's not just about tracking threats—it's about understanding what they reveal. Every tactic used, every system tested, points to deeper issues in how trust, access, and visibility are managed. As attackers adapt quickly, defenders need sharper awareness and faster response loops.
    The takeaways from this week aren't just technical—they speak to how teams prioritize risk, design safeguards, and make choices under pressure. Use these insights not just to react, but to rethink what "secure" really needs to mean in today's environment.

    Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.
    #weekly #recap #apt #campaigns #browser
    ⚡ Weekly Recap: APT Campaigns, Browser Hijacks, AI Malware, Cloud Breaches and Critical CVEs
    Cyber threats don't show up one at a time anymore. They're layered, planned, and often stay hidden until it's too late. For cybersecurity teams, the key isn't just reacting to alerts—it's spotting early signs of trouble before they become real threats. This update is designed to deliver clear, accurate insights based on real patterns and changes we can verify. With today's complex systems, we need focused analysis—not noise. What you'll see here isn't just a list of incidents, but a clear look at where control is being gained, lost, or quietly tested. ⚡ Threat of the Week Lumma Stealer, DanaBot Operations Disrupted — A coalition of private sector companies and law enforcement agencies have taken down the infrastructure associated with Lumma Stealer and DanaBot. Charges have also been unsealed against 16 individuals for their alleged involvement in the development and deployment of DanaBot. The malware is equipped to siphon data from victim computers, hijack banking sessions, and steal device information. More uniquely, though, DanaBot has also been used for hacking campaigns that appear to be linked to Russian state-sponsored interests. All of that makes DanaBot a particularly clear example of how commodity malware has been repurposed by Russian state hackers for their own goals. In tandem, about 2,300 domains that acted as the command-and-controlbackbone for the Lumma information stealer have been seized, alongside taking down 300 servers and neutralizing 650 domains that were used to launch ransomware attacks. The actions against international cybercrime in the past few days constituted the latest phase of Operation Endgame. Get the Guide ➝ 🔔 Top News Threat Actors Use TikTok Videos to Distribute Stealers — While ClickFix has become a popular social engineering tactic to deliver malware, threat actors have been observed using artificial intelligence-generated videos uploaded to TikTok to deceive users into running malicious commands on their systems and deploy malware like Vidar and StealC under the guise of activating pirated version of Windows, Microsoft Office, CapCut, and Spotify. "This campaign highlights how attackers are ready to weaponize whichever social media platforms are currently popular to distribute malware," Trend Micro said. APT28 Hackers Target Western Logistics and Tech Firms — Several cybersecurity and intelligence agencies from Australia, Europe, and the United States issued a joint alert warning of a state-sponsored campaign orchestrated by the Russian state-sponsored threat actor APT28 targeting Western logistics entities and technology companies since 2022. "This cyber espionage-oriented campaign targeting logistics entities and technology companies uses a mix of previously disclosed TTPs and is likely connected to these actors' wide scale targeting of IP cameras in Ukraine and bordering NATO nations," the agencies said. The attacks are designed to steal sensitive information and maintain long-term persistence on compromised hosts. Chinese Threat Actors Exploit Ivanti EPMM Flaws — The China-nexus cyber espionage group tracked as UNC5221 has been attributed to the exploitation of a pair of security flaws affecting Ivanti Endpoint Manager Mobilesoftwareto target a wide range of sectors across Europe, North America, and the Asia-Pacific region. The intrusions leverage the vulnerabilities to obtain a reverse shell and drop malicious payloads like KrustyLoader, which is known to deliver the Sliver command-and-controlframework. "UNC5221 demonstrates a deep understanding of EPMM's internal architecture, repurposing legitimate system components for covert data exfiltration," EclecticIQ said. "Given EPMM's role in managing and pushing configurations to enterprise mobile devices, a successful exploitation could allow threat actors to remotely access, manipulate, or compromise thousands of managed devices across an organization." Over 100 Google Chrome Extensions Mimic Popular Tools — An unknown threat actor has been attributed to creating several malicious Chrome Browser extensions since February 2024 that masquerade as seemingly benign utilities such as DeepSeek, Manus, DeBank, FortiVPN, and Site Stats but incorporate covert functionality to exfiltrate data, receive commands, and execute arbitrary code. Links to these browser add-ons are hosted on specially crafted sites to which users are likely redirected to via phishing and social media posts. While the extensions appear to offer the advertised features, they also stealthily facilitate credential and cookie theft, session hijacking, ad injection, malicious redirects, traffic manipulation, and phishing via DOM manipulation. Several of these extensions have been taken down by Google. CISA Warns of SaaS Providers of Attacks Targeting Cloud Environments — The U.S. Cybersecurity and Infrastructure Security Agencywarned that SaaS companies are under threat from bad actors who are on the prowl for cloud applications with default configurations and elevated permissions. While the agency did not attribute the activity to a specific group, the advisory said enterprise backup platform Commvault is monitoring cyber threat activity targeting applications hosted in their Microsoft Azure cloud environment. "Threat actors may have accessed client secrets for Commvault'sMicrosoft 365backup software-as-a-servicesolution, hosted in Azure," CISA said. "This provided the threat actors with unauthorized access to Commvault's customers' M365 environments that have application secrets stored by Commvault." GitLab AI Coding Assistant Flaws Could Be Used to Inject Malicious Code — Cybersecurity researchers have discovered an indirect prompt injection flaw in GitLab's artificial intelligenceassistant Duo that could have allowed attackers to steal source code and inject untrusted HTML into its responses, which could then be used to direct victims to malicious websites. The attack could also leak confidential issue data, such as zero-day vulnerability details. All that's required is for the attacker to instruct the chatbot to interact with a merge requestby taking advantage of the fact that GitLab Duo has extensive access to the platform. "By embedding hidden instructions in seemingly harmless project content, we were able to manipulate Duo's behavior, exfiltrate private source code, and demonstrate how AI responses can be leveraged for unintended and harmful outcomes," Legit Security said. One variation of the attack involved hiding a malicious instruction in an otherwise legitimate piece of source code, while another exploited Duo's parsing of markdown responses in real-time asynchronously. An attacker could leverage this behavior – that Duo begins rendering the output line by line rather than waiting until the entire response is generated and sending it all at once – to introduce malicious HTML code that can access sensitive data and exfiltrate the information to a remote server. The issues have been patched by GitLab following responsible disclosure. ‎️‍🔥 Trending CVEs Software vulnerabilities remain one of the simplest—and most effective—entry points for attackers. Each week uncovers new flaws, and even small delays in patching can escalate into serious security incidents. Staying ahead means acting fast. Below is this week's list of high-risk vulnerabilities that demand attention. Review them carefully, apply updates without delay, and close the doors before they're forced open. This week's list includes — CVE-2025-34025, CVE-2025-34026, CVE-2025-34027, CVE-2025-30911, CVE-2024-57273, CVE-2024-54780, and CVE-2024-54779, CVE-2025-41229, CVE-2025-4322, CVE-2025-47934, CVE-2025-30193, CVE-2025-0993, CVE-2025-36535, CVE-2025-47949, CVE-2025-40775, CVE-2025-20152, CVE-2025-4123, CVE-2025-5063, CVE-2025-37899, CVE-2025-26817, CVE-2025-47947, CVE-2025-3078, CVE-2025-3079, and CVE-2025-4978. 📰 Around the Cyber World Sandworm Drops New Wiper in Ukraine — The Russia-aligned Sandworm group intensified destructive operations against Ukrainian energy companies, deploying a new wiper named ZEROLOT. "The infamous Sandworm group concentrated heavily on compromising Ukrainian energy infrastructure. In recent cases, it deployed the ZEROLOT wiper in Ukraine. For this, the attackers abused Active Directory Group Policy in the affected organizations," ESET Director of Threat Research, Jean-Ian Boutin, said. Another Russian hacking group, Gamaredon, remained the most prolific actor targeting the East European nation, enhancing malware obfuscation and introducing PteroBox, a file stealer leveraging Dropbox. Signal Says No to Recall — Signal has released a new version of its messaging app for Windows that, by default, blocks the ability of Windows to use Recall to periodically take screenshots of the app. "Although Microsoft made several adjustments over the past twelve months in response to critical feedback, the revamped version of Recall still places any content that's displayed within privacy-preserving apps like Signal at risk," Signal said. "As a result, we are enabling an extra layer of protection by default on Windows 11 in order to help maintain the security of Signal Desktop on that platform even though it introduces some usability trade-offs. Microsoft has simply given us no other option." Microsoft began officially rolling out Recall last month. Russia Introduces New Law to Track Foreigners Using Their Smartphones — The Russian government has introduced a new law that makes installing a tracking app mandatory for all foreign nationals in the Moscow region. This includes gathering their real-time locations, fingerprint, face photograph, and residential information. "The adopted mechanism will allow, using modern technologies, to strengthen control in the field of migration and will also contribute to reducing the number of violations and crimes in this area," Vyacheslav Volodin, chairman of the State Duma, said. "If migrants change their actual place of residence, they will be required to inform the Ministry of Internal Affairswithin three working days." A proposed four-year trial period begins on September 1, 2025, and runs until September 1, 2029. Dutch Government Passes Law to Criminalize Cyber Espionage — The Dutch government has approved a law criminalizing a wide range of espionage activities, including digital espionage, in an effort to protect national security, critical infrastructure, and high-quality technologies. Under the amended law, leaking sensitive information that is not classified as a state secret or engaging in activities on behalf of a foreign government that harm Dutch interests can also result in criminal charges. "Foreign governments are also interested in non-state-secret, sensitive information about a particular economic sector or about political decision-making," the government said. "Such information can be used to influence political processes, weaken the Dutch economy or play allies against each other. Espionage can also involve actions other than sharing information." Microsoft Announces Availability of Quantum-Resistant Algorithms to SymCrypt — Microsoft has revealed that it's making post-quantum cryptographycapabilities, including ML-KEM and ML-DSA, available for Windows Insiders, Canary Channel Build 27852 and higher, and Linux, SymCrypt-OpenSSL version 1.9.0. "This advancement will enable customers to commence their exploration and experimentation of PQC within their operational environments," Microsoft said. "By obtaining early access to PQC capabilities, organizations can proactively assess the compatibility, performance, and integration of these novel algorithms alongside their existing security infrastructure." New Malware DOUBLELOADER Uses ALCATRAZ for Obfuscation — The open-source obfuscator ALCATRAZ has been seen within a new generic loader dubbed DOUBLELOADER, which has been deployed alongside Rhadamanthys Stealer infections starting December 2024. The malware collects host information, requests an updated version of itself, and starts beaconing to a hardcoded IP addressstored within the binary. "Obfuscators such as ALCATRAZ end up increasing the complexity when triaging malware," Elastic Security Labs said. "Its main goal is to hinder binary analysis tools and increase the time of the reverse engineering process through different techniques; such as hiding the control flow or making decompilation hard to follow." New Formjacking Campaign Targets WooCommerce Sites — Cybersecurity researchers have detected a sophisticated formjacking campaign targeting WooCommerce sites. The malware, per Wordfence, injects a fake but professional-looking payment form into legitimate checkout processes and exfiltrates sensitive customer data to an external server. Further analysis has revealed that the infection likely originated from a compromised WordPress admin account, which was used to inject malicious JavaScript via a Simple Custom CSS and JS pluginthat allows administrators to add custom code. "Unlike traditional card skimmers that simply overlay existing forms, this variant carefully integrates with the WooCommerce site's design and payment workflow, making it particularly difficult for site owners and users to detect," the WordPress security company said. "The malware author repurposed the browser's localStorage mechanism – typically used by websites to remember user preferences – to silently store stolen data and maintain access even after page reloads or when navigating away from the checkout page." E.U. Sanctions Stark Industries — The European Unionhas announced sanctions against 21 individuals and six entities in Russia over its "destabilising actions" in the region. One of the sanctioned entities is Stark Industries, a bulletproof hosting provider that has been accused of acting as "enablers of various Russian state-sponsored and affiliated actors to conduct destabilising activities including, information manipulation interference and cyber attacks against the Union and third countries." The sanctions also target its CEO Iurie Neculiti and owner Ivan Neculiti. Stark Industries was previously spotlighted by independent cybersecurity journalist Brian Krebs, detailing its use in DDoS attacks in Ukraine and across Europe. In August 2024, Team Cymru said it discovered 25 Stark-assigned IP addresses used to host domains associated with FIN7 activities and that it had been working with Stark Industries for several months to identify and reduce abuse of their systems. The sanctions have also targeted Kremlin-backed manufacturers of drones and radio communication equipment used by the Russian military, as well as those involved in GPS signal jamming in Baltic states and disrupting civil aviation. The Mask APT Unmasked as Tied to the Spanish Government — The mysterious threat actor known as The Maskhas been identified as run by the Spanish government, according to a report published by TechCrunch, citing people who worked at Kaspersky at the time and had knowledge of the investigation. The Russian cybersecurity company first exposed the hacking group in 2014, linking it to highly sophisticated attacks since at least 2007 targeting high-profile organizations, such as governments, diplomatic entities, and research institutions. A majority of the group's attacks have targeted Cuba, followed by hundreds of victims in Brazil, Morocco, Spain, and Gibraltar. While Kaspersky has not publicly attributed it to a specific country, the latest revelation makes The Mask one of the few Western government hacking groups that has ever been discussed in public. This includes the Equation Group, the Lamberts, and Animal Farm. Social Engineering Scams Target Coinbase Users — Earlier this month, cryptocurrency exchange Coinbase revealed that it was the victim of a malicious attack perpetrated by unknown threat actors to breach its systems by bribing customer support agents in India and siphon funds from nearly 70,000 customers. According to Blockchain security firm SlowMist, Coinbase users have been the target of social engineering scams since the start of the year, bombarding with SMS messages claiming to be fake withdrawal requests and seeking their confirmation as part of a "sustained and organized scam campaign." The goal is to induce a false sense of urgency and trick them into calling a number, eventually convincing them to transfer the funds to a secure wallet with a seed phrase pre-generated by the attackers and ultimately drain the assets. It's assessed that the activities are primarily carried out by two groups: low-level skid attackers from the Com community and organized cybercrime groups based in India. "Using spoofed PBX phone systems, scammers impersonate Coinbase support and claim there's been 'unauthorized access' or 'suspicious withdrawals' on the user's account," SlowMist said. "They create a sense of urgency, then follow up with phishing emails or texts containing fake ticket numbers or 'recovery links.'" Delta Can Sue CrowdStrike Over July 2024 Mega Outage — Delta Air Lines, which had its systems crippled and almost 7,000 flights canceled in the wake of a massive outage caused by a faulty update issued by CrowdStrike in mid-July 2024, has been given the green light to pursue to its lawsuit against the cybersecurity company. A judge in the U.S. state of Georgia stating Delta can try to prove that CrowdStrike was grossly negligent by pushing a defective update to its Falcon software to customers. The update crashed 8.5 million Windows devices across the world. Crowdstrike previously claimed that the airline had rejected technical support offers both from itself and Microsoft. In a statement shared with Reuters, lawyers representing CrowdStrike said they were "confident the judge will find Delta's case has no merit, or will limit damages to the 'single-digit millions of dollars' under Georgia law." The development comes months after MGM Resorts International agreed to pay million to settle multiple class-action lawsuits related to a data breach in 2019 and a ransomware attack the company experienced in 2023. Storm-1516 Uses AI-Generated Media to Spread Disinformation — The Russian influence operation known as Storm-1516sought to spread narratives that undermined the European support for Ukraine by amplifying fabricated stories on X about European leaders using drugs while traveling by train to Kyiv for peace talks. One of the posts was subsequently shared by Russian state media and Maria Zakharova, a senior official in Russia's foreign ministry, as part of what has been described as a coordinated disinformation campaign by EclecticIQ. The activity is also notable for the use of synthetic content depicting French President Emmanuel Macron, U.K. Labour Party leader Keir Starmer, and German chancellor Friedrich Merz of drug possession during their return from Ukraine. "By attacking the reputation of these leaders, the campaign likely aimed to turn their own voters against them, using influence operationsto reduce public support for Ukraine by discrediting the politicians who back it," the Dutch threat intelligence firm said. Turkish Users Targeted by DBatLoader — AhnLab has disclosed details of a malware campaign that's distributing a malware loader called DBatLoadervia banking-themed banking emails, which then acts as a conduit to deliver SnakeKeylogger, an information stealer developed in .NET. "The DBatLoader malware distributed through phishing emails has the cunning behavior of exploiting normal processesthrough techniques such as DLL side-loading and injection for most of its behaviors, and it also utilizes normal processesfor behaviors such as file copying and changing policies," the company said. SEC SIM-Swapper Sentenced to 14 Months for SEC X Account Hack — A 26-year-old Alabama man, Eric Council Jr., has been sentenced to 14 months in prison and three years of supervised release for using SIM swapping attacks to breach the U.S. Securities and Exchange Commission'sofficial X account in January 2024 and falsely announced that the SEC approved BitcoinExchange Traded Funds. Council Jr.was arrested in October 2024 and pleaded guilty to the crime earlier this February. He has also been ordered to forfeit According to court documents, Council used his personal computer to search incriminating phrases such as "SECGOV hack," "telegram sim swap," "how can I know for sure if I am being investigated by the FBI," "What are the signs that you are under investigation by law enforcement or the FBI even if you have not been contacted by them," "what are some signs that the FBI is after you," "Verizon store list," "federal identity theft statute," and "how long does it take to delete telegram account." FBI Warns of Malicious Campaign Impersonating Government Officials — The U.S. Federal Bureau of Investigationis warning of a new campaign that involves malicious actors impersonating senior U.S. federal or state government officials and their contacts to target individuals since April 2025. "The malicious actors have sent text messages and AI-generated voice messages — techniques known as smishing and vishing, respectively — that claim to come from a senior US official in an effort to establish rapport before gaining access to personal accounts," the FBI said. "One way the actors gain such access is by sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform." From there, the actor may present malware or introduce hyperlinks that lead intended targets to an actor-controlled site that steals login information. DICOM Flaw Enables Attackers to Embed Malicious Code Within Medical Image Files — Praetorian has released a proof-of-conceptfor a high-severity security flaw in Digital Imaging and Communications in Medicine, predominant file format for medical images, that enables attackers to embed malicious code within legitimate medical image files. CVE-2019-11687, originally disclosed in 2019 by Markel Picado Ortiz, stems from a design decision that allows arbitrary content at the start of the file, otherwise called the Preamble, which enables the creation of malicious polyglots. Codenamed ELFDICOM, the PoC extends the attack surface to Linux environments, making it a much more potent threat. As mitigations, it's advised to implement a DICOM preamble whitelist. "DICOM's file structure inherently allows arbitrary bytes at the beginning of the file, where Linux and most operating systems will look for magic bytes," Praetorian researcher Ryan Hennessee said. "would check a DICOM file's preamble before it is imported into the system. This would allow known good patterns, such as 'TIFF' magic bytes, or '\x00' null bytes, while files with the ELF magic bytes would be blocked." Cookie-Bite Attack Uses Chrome Extension to Steal Session Tokens — Cybersecurity researchers have demonstrated a new attack technique called Cookie-Bite that employs custom-made malicious browser extensions to steal "ESTAUTH" and "ESTSAUTHPERSISTNT" cookies in Microsoft Azure Entra ID and bypass multi-factor authentication. The attack has multiple moving parts to it: A custom Chrome extension that monitors authentication events and captures cookies; a PowerShell script that automates the extension deployment and ensures persistence; an exfiltration mechanism to send the cookies to a remote collection point; and a complementary extension to inject the captured cookies into the attacker's browser. "Threat actors often use infostealers to extract authentication tokens directly from a victim's machine or buy them directly through darkness markets, allowing adversaries to hijack active cloud sessions without triggering MFA," Varonis said. "By injecting these cookies while mimicking the victim's OS, browser, and network, attackers can evade Conditional Access Policiesand maintain persistent access." Authentication cookies can also be stolen using adversary-in-the-middlephishing kits in real-time, or using rogue browser extensions that request excessive permissions to interact with web sessions, modify page content, and extract stored authentication data. Once installed, the extension can access the browser's storage API, intercept network requests, or inject malicious JavaScript into active sessions to harvest real-time session cookies. "By leveraging stolen session cookies, an adversary can bypass authentication mechanisms, gaining seamless entry into cloud environments without requiring user credentials," Varonis said. "Beyond initial access, session hijacking can facilitate lateral movement across the tenant, allowing attackers to explore additional resources, access sensitive data, and escalate privileges by abusing existing permissions or misconfigured roles." 🎥 Cybersecurity Webinars Non-Human Identities: The AI Backdoor You're Not Watching → AI agents rely on Non-Human Identitiesto function—but these are often left untracked and unsecured. As attackers shift focus to this hidden layer, the risk is growing fast. In this session, you'll learn how to find, secure, and monitor these identities before they're exploited. Join the webinar to understand the real risks behind AI adoption—and how to stay ahead. Inside the LOTS Playbook: How Hackers Stay Undetected → Attackers are using trusted sites to stay hidden. In this webinar, Zscaler experts share how they detect these stealthy LOTS attacks using insights from the world's largest security cloud. Join to learn how to spot hidden threats and improve your defense. 🔧 Cybersecurity Tools ScriptSentry → It is a free tool that scans your environment for dangerous logon script misconfigurations—like plaintext credentials, insecure file/share permissions, and references to non-existent servers. These overlooked issues can enable lateral movement, privilege escalation, or even credential theft. ScriptSentry helps you quickly identify and fix them across large Active Directory environments. Aftermath → It is a Swift-based, open-source tool for macOS incident response. It collects forensic data—like logs, browser activity, and process info—from compromised systems, then analyzes it to build timelines and track infection paths. Deploy via MDM or run manually. Fast, lightweight, and ideal for post-incident investigation. AI Red Teaming Playground Labs → It is an open-source training suite with hands-on challenges designed to teach security professionals how to red team AI systems. Originally developed for Black Hat USA 2024, the labs cover prompt injections, safety bypasses, indirect attacks, and Responsible AI failures. Built on Chat Copilot and deployable via Docker, it's a practical resource for testing and understanding real-world AI vulnerabilities. 🔒 Tip of the Week Review and Revoke Old OAuth App Permissions — They're Silent Backdoor → You've likely logged into apps using "Continue with Google," "Sign in with Microsoft," or GitHub/Twitter/Facebook logins. That's OAuth. But did you know many of those apps still have access to your data long after you stop using them? Why it matters: Even if you delete the app or forget it existed, it might still have ongoing access to your calendar, email, cloud files, or contact list — no password needed. If that third-party gets breached, your data is at risk. What to do: Go through your connected apps here: Google: myaccount.google.com/permissions Microsoft: account.live.com/consent/Manage GitHub: github.com/settings/applications Facebook: facebook.com/settings?tab=applications Revoke anything you don't actively use. It's a fast, silent cleanup — and it closes doors you didn't know were open. Conclusion Looking ahead, it's not just about tracking threats—it's about understanding what they reveal. Every tactic used, every system tested, points to deeper issues in how trust, access, and visibility are managed. As attackers adapt quickly, defenders need sharper awareness and faster response loops. The takeaways from this week aren't just technical—they speak to how teams prioritize risk, design safeguards, and make choices under pressure. Use these insights not just to react, but to rethink what "secure" really needs to mean in today's environment. Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post. #weekly #recap #apt #campaigns #browser
    THEHACKERNEWS.COM
    ⚡ Weekly Recap: APT Campaigns, Browser Hijacks, AI Malware, Cloud Breaches and Critical CVEs
    Cyber threats don't show up one at a time anymore. They're layered, planned, and often stay hidden until it's too late. For cybersecurity teams, the key isn't just reacting to alerts—it's spotting early signs of trouble before they become real threats. This update is designed to deliver clear, accurate insights based on real patterns and changes we can verify. With today's complex systems, we need focused analysis—not noise. What you'll see here isn't just a list of incidents, but a clear look at where control is being gained, lost, or quietly tested. ⚡ Threat of the Week Lumma Stealer, DanaBot Operations Disrupted — A coalition of private sector companies and law enforcement agencies have taken down the infrastructure associated with Lumma Stealer and DanaBot. Charges have also been unsealed against 16 individuals for their alleged involvement in the development and deployment of DanaBot. The malware is equipped to siphon data from victim computers, hijack banking sessions, and steal device information. More uniquely, though, DanaBot has also been used for hacking campaigns that appear to be linked to Russian state-sponsored interests. All of that makes DanaBot a particularly clear example of how commodity malware has been repurposed by Russian state hackers for their own goals. In tandem, about 2,300 domains that acted as the command-and-control (C2) backbone for the Lumma information stealer have been seized, alongside taking down 300 servers and neutralizing 650 domains that were used to launch ransomware attacks. The actions against international cybercrime in the past few days constituted the latest phase of Operation Endgame. Get the Guide ➝ 🔔 Top News Threat Actors Use TikTok Videos to Distribute Stealers — While ClickFix has become a popular social engineering tactic to deliver malware, threat actors have been observed using artificial intelligence (AI)-generated videos uploaded to TikTok to deceive users into running malicious commands on their systems and deploy malware like Vidar and StealC under the guise of activating pirated version of Windows, Microsoft Office, CapCut, and Spotify. "This campaign highlights how attackers are ready to weaponize whichever social media platforms are currently popular to distribute malware," Trend Micro said. APT28 Hackers Target Western Logistics and Tech Firms — Several cybersecurity and intelligence agencies from Australia, Europe, and the United States issued a joint alert warning of a state-sponsored campaign orchestrated by the Russian state-sponsored threat actor APT28 targeting Western logistics entities and technology companies since 2022. "This cyber espionage-oriented campaign targeting logistics entities and technology companies uses a mix of previously disclosed TTPs and is likely connected to these actors' wide scale targeting of IP cameras in Ukraine and bordering NATO nations," the agencies said. The attacks are designed to steal sensitive information and maintain long-term persistence on compromised hosts. Chinese Threat Actors Exploit Ivanti EPMM Flaws — The China-nexus cyber espionage group tracked as UNC5221 has been attributed to the exploitation of a pair of security flaws affecting Ivanti Endpoint Manager Mobile (EPMM) software (CVE-2025-4427 and CVE-2025-4428) to target a wide range of sectors across Europe, North America, and the Asia-Pacific region. The intrusions leverage the vulnerabilities to obtain a reverse shell and drop malicious payloads like KrustyLoader, which is known to deliver the Sliver command-and-control (C2) framework. "UNC5221 demonstrates a deep understanding of EPMM's internal architecture, repurposing legitimate system components for covert data exfiltration," EclecticIQ said. "Given EPMM's role in managing and pushing configurations to enterprise mobile devices, a successful exploitation could allow threat actors to remotely access, manipulate, or compromise thousands of managed devices across an organization." Over 100 Google Chrome Extensions Mimic Popular Tools — An unknown threat actor has been attributed to creating several malicious Chrome Browser extensions since February 2024 that masquerade as seemingly benign utilities such as DeepSeek, Manus, DeBank, FortiVPN, and Site Stats but incorporate covert functionality to exfiltrate data, receive commands, and execute arbitrary code. Links to these browser add-ons are hosted on specially crafted sites to which users are likely redirected to via phishing and social media posts. While the extensions appear to offer the advertised features, they also stealthily facilitate credential and cookie theft, session hijacking, ad injection, malicious redirects, traffic manipulation, and phishing via DOM manipulation. Several of these extensions have been taken down by Google. CISA Warns of SaaS Providers of Attacks Targeting Cloud Environments — The U.S. Cybersecurity and Infrastructure Security Agency (CISA) warned that SaaS companies are under threat from bad actors who are on the prowl for cloud applications with default configurations and elevated permissions. While the agency did not attribute the activity to a specific group, the advisory said enterprise backup platform Commvault is monitoring cyber threat activity targeting applications hosted in their Microsoft Azure cloud environment. "Threat actors may have accessed client secrets for Commvault's (Metallic) Microsoft 365 (M365) backup software-as-a-service (SaaS) solution, hosted in Azure," CISA said. "This provided the threat actors with unauthorized access to Commvault's customers' M365 environments that have application secrets stored by Commvault." GitLab AI Coding Assistant Flaws Could Be Used to Inject Malicious Code — Cybersecurity researchers have discovered an indirect prompt injection flaw in GitLab's artificial intelligence (AI) assistant Duo that could have allowed attackers to steal source code and inject untrusted HTML into its responses, which could then be used to direct victims to malicious websites. The attack could also leak confidential issue data, such as zero-day vulnerability details. All that's required is for the attacker to instruct the chatbot to interact with a merge request (or commit, issue, or source code) by taking advantage of the fact that GitLab Duo has extensive access to the platform. "By embedding hidden instructions in seemingly harmless project content, we were able to manipulate Duo's behavior, exfiltrate private source code, and demonstrate how AI responses can be leveraged for unintended and harmful outcomes," Legit Security said. One variation of the attack involved hiding a malicious instruction in an otherwise legitimate piece of source code, while another exploited Duo's parsing of markdown responses in real-time asynchronously. An attacker could leverage this behavior – that Duo begins rendering the output line by line rather than waiting until the entire response is generated and sending it all at once – to introduce malicious HTML code that can access sensitive data and exfiltrate the information to a remote server. The issues have been patched by GitLab following responsible disclosure. ‎️‍🔥 Trending CVEs Software vulnerabilities remain one of the simplest—and most effective—entry points for attackers. Each week uncovers new flaws, and even small delays in patching can escalate into serious security incidents. Staying ahead means acting fast. Below is this week's list of high-risk vulnerabilities that demand attention. Review them carefully, apply updates without delay, and close the doors before they're forced open. This week's list includes — CVE-2025-34025, CVE-2025-34026, CVE-2025-34027 (Versa Concerto), CVE-2025-30911 (RomethemeKit For Elementor WordPress plugin), CVE-2024-57273, CVE-2024-54780, and CVE-2024-54779 (pfSense), CVE-2025-41229 (VMware Cloud Foundation), CVE-2025-4322 (Motors WordPress theme), CVE-2025-47934 (OpenPGP.js), CVE-2025-30193 (PowerDNS), CVE-2025-0993 (GitLab), CVE-2025-36535 (AutomationDirect MB-Gateway), CVE-2025-47949 (Samlify), CVE-2025-40775 (BIND DNS), CVE-2025-20152 (Cisco Identity Services Engine), CVE-2025-4123 (Grafana), CVE-2025-5063 (Google Chrome), CVE-2025-37899 (Linux Kernel), CVE-2025-26817 (Netwrix Password Secure), CVE-2025-47947 (ModSecurity), CVE-2025-3078, CVE-2025-3079 (Canon Printers), and CVE-2025-4978 (NETGEAR). 📰 Around the Cyber World Sandworm Drops New Wiper in Ukraine — The Russia-aligned Sandworm group intensified destructive operations against Ukrainian energy companies, deploying a new wiper named ZEROLOT. "The infamous Sandworm group concentrated heavily on compromising Ukrainian energy infrastructure. In recent cases, it deployed the ZEROLOT wiper in Ukraine. For this, the attackers abused Active Directory Group Policy in the affected organizations," ESET Director of Threat Research, Jean-Ian Boutin, said. Another Russian hacking group, Gamaredon, remained the most prolific actor targeting the East European nation, enhancing malware obfuscation and introducing PteroBox, a file stealer leveraging Dropbox. Signal Says No to Recall — Signal has released a new version of its messaging app for Windows that, by default, blocks the ability of Windows to use Recall to periodically take screenshots of the app. "Although Microsoft made several adjustments over the past twelve months in response to critical feedback, the revamped version of Recall still places any content that's displayed within privacy-preserving apps like Signal at risk," Signal said. "As a result, we are enabling an extra layer of protection by default on Windows 11 in order to help maintain the security of Signal Desktop on that platform even though it introduces some usability trade-offs. Microsoft has simply given us no other option." Microsoft began officially rolling out Recall last month. Russia Introduces New Law to Track Foreigners Using Their Smartphones — The Russian government has introduced a new law that makes installing a tracking app mandatory for all foreign nationals in the Moscow region. This includes gathering their real-time locations, fingerprint, face photograph, and residential information. "The adopted mechanism will allow, using modern technologies, to strengthen control in the field of migration and will also contribute to reducing the number of violations and crimes in this area," Vyacheslav Volodin, chairman of the State Duma, said. "If migrants change their actual place of residence, they will be required to inform the Ministry of Internal Affairs (MVD) within three working days." A proposed four-year trial period begins on September 1, 2025, and runs until September 1, 2029. Dutch Government Passes Law to Criminalize Cyber Espionage — The Dutch government has approved a law criminalizing a wide range of espionage activities, including digital espionage, in an effort to protect national security, critical infrastructure, and high-quality technologies. Under the amended law, leaking sensitive information that is not classified as a state secret or engaging in activities on behalf of a foreign government that harm Dutch interests can also result in criminal charges. "Foreign governments are also interested in non-state-secret, sensitive information about a particular economic sector or about political decision-making," the government said. "Such information can be used to influence political processes, weaken the Dutch economy or play allies against each other. Espionage can also involve actions other than sharing information." Microsoft Announces Availability of Quantum-Resistant Algorithms to SymCrypt — Microsoft has revealed that it's making post-quantum cryptography (PQC) capabilities, including ML-KEM and ML-DSA, available for Windows Insiders, Canary Channel Build 27852 and higher, and Linux, SymCrypt-OpenSSL version 1.9.0. "This advancement will enable customers to commence their exploration and experimentation of PQC within their operational environments," Microsoft said. "By obtaining early access to PQC capabilities, organizations can proactively assess the compatibility, performance, and integration of these novel algorithms alongside their existing security infrastructure." New Malware DOUBLELOADER Uses ALCATRAZ for Obfuscation — The open-source obfuscator ALCATRAZ has been seen within a new generic loader dubbed DOUBLELOADER, which has been deployed alongside Rhadamanthys Stealer infections starting December 2024. The malware collects host information, requests an updated version of itself, and starts beaconing to a hardcoded IP address (185.147.125[.]81) stored within the binary. "Obfuscators such as ALCATRAZ end up increasing the complexity when triaging malware," Elastic Security Labs said. "Its main goal is to hinder binary analysis tools and increase the time of the reverse engineering process through different techniques; such as hiding the control flow or making decompilation hard to follow." New Formjacking Campaign Targets WooCommerce Sites — Cybersecurity researchers have detected a sophisticated formjacking campaign targeting WooCommerce sites. The malware, per Wordfence, injects a fake but professional-looking payment form into legitimate checkout processes and exfiltrates sensitive customer data to an external server. Further analysis has revealed that the infection likely originated from a compromised WordPress admin account, which was used to inject malicious JavaScript via a Simple Custom CSS and JS plugin (or something similar) that allows administrators to add custom code. "Unlike traditional card skimmers that simply overlay existing forms, this variant carefully integrates with the WooCommerce site's design and payment workflow, making it particularly difficult for site owners and users to detect," the WordPress security company said. "The malware author repurposed the browser's localStorage mechanism – typically used by websites to remember user preferences – to silently store stolen data and maintain access even after page reloads or when navigating away from the checkout page." E.U. Sanctions Stark Industries — The European Union (E.U.) has announced sanctions against 21 individuals and six entities in Russia over its "destabilising actions" in the region. One of the sanctioned entities is Stark Industries, a bulletproof hosting provider that has been accused of acting as "enablers of various Russian state-sponsored and affiliated actors to conduct destabilising activities including, information manipulation interference and cyber attacks against the Union and third countries." The sanctions also target its CEO Iurie Neculiti and owner Ivan Neculiti. Stark Industries was previously spotlighted by independent cybersecurity journalist Brian Krebs, detailing its use in DDoS attacks in Ukraine and across Europe. In August 2024, Team Cymru said it discovered 25 Stark-assigned IP addresses used to host domains associated with FIN7 activities and that it had been working with Stark Industries for several months to identify and reduce abuse of their systems. The sanctions have also targeted Kremlin-backed manufacturers of drones and radio communication equipment used by the Russian military, as well as those involved in GPS signal jamming in Baltic states and disrupting civil aviation. The Mask APT Unmasked as Tied to the Spanish Government — The mysterious threat actor known as The Mask (aka Careto) has been identified as run by the Spanish government, according to a report published by TechCrunch, citing people who worked at Kaspersky at the time and had knowledge of the investigation. The Russian cybersecurity company first exposed the hacking group in 2014, linking it to highly sophisticated attacks since at least 2007 targeting high-profile organizations, such as governments, diplomatic entities, and research institutions. A majority of the group's attacks have targeted Cuba, followed by hundreds of victims in Brazil, Morocco, Spain, and Gibraltar. While Kaspersky has not publicly attributed it to a specific country, the latest revelation makes The Mask one of the few Western government hacking groups that has ever been discussed in public. This includes the Equation Group, the Lamberts (the U.S.), and Animal Farm (France). Social Engineering Scams Target Coinbase Users — Earlier this month, cryptocurrency exchange Coinbase revealed that it was the victim of a malicious attack perpetrated by unknown threat actors to breach its systems by bribing customer support agents in India and siphon funds from nearly 70,000 customers. According to Blockchain security firm SlowMist, Coinbase users have been the target of social engineering scams since the start of the year, bombarding with SMS messages claiming to be fake withdrawal requests and seeking their confirmation as part of a "sustained and organized scam campaign." The goal is to induce a false sense of urgency and trick them into calling a number, eventually convincing them to transfer the funds to a secure wallet with a seed phrase pre-generated by the attackers and ultimately drain the assets. It's assessed that the activities are primarily carried out by two groups: low-level skid attackers from the Com community and organized cybercrime groups based in India. "Using spoofed PBX phone systems, scammers impersonate Coinbase support and claim there's been 'unauthorized access' or 'suspicious withdrawals' on the user's account," SlowMist said. "They create a sense of urgency, then follow up with phishing emails or texts containing fake ticket numbers or 'recovery links.'" Delta Can Sue CrowdStrike Over July 2024 Mega Outage — Delta Air Lines, which had its systems crippled and almost 7,000 flights canceled in the wake of a massive outage caused by a faulty update issued by CrowdStrike in mid-July 2024, has been given the green light to pursue to its lawsuit against the cybersecurity company. A judge in the U.S. state of Georgia stating Delta can try to prove that CrowdStrike was grossly negligent by pushing a defective update to its Falcon software to customers. The update crashed 8.5 million Windows devices across the world. Crowdstrike previously claimed that the airline had rejected technical support offers both from itself and Microsoft. In a statement shared with Reuters, lawyers representing CrowdStrike said they were "confident the judge will find Delta's case has no merit, or will limit damages to the 'single-digit millions of dollars' under Georgia law." The development comes months after MGM Resorts International agreed to pay $45 million to settle multiple class-action lawsuits related to a data breach in 2019 and a ransomware attack the company experienced in 2023. Storm-1516 Uses AI-Generated Media to Spread Disinformation — The Russian influence operation known as Storm-1516 (aka CopyCop) sought to spread narratives that undermined the European support for Ukraine by amplifying fabricated stories on X about European leaders using drugs while traveling by train to Kyiv for peace talks. One of the posts was subsequently shared by Russian state media and Maria Zakharova, a senior official in Russia's foreign ministry, as part of what has been described as a coordinated disinformation campaign by EclecticIQ. The activity is also notable for the use of synthetic content depicting French President Emmanuel Macron, U.K. Labour Party leader Keir Starmer, and German chancellor Friedrich Merz of drug possession during their return from Ukraine. "By attacking the reputation of these leaders, the campaign likely aimed to turn their own voters against them, using influence operations (IO) to reduce public support for Ukraine by discrediting the politicians who back it," the Dutch threat intelligence firm said. Turkish Users Targeted by DBatLoader — AhnLab has disclosed details of a malware campaign that's distributing a malware loader called DBatLoader (aka ModiLoader) via banking-themed banking emails, which then acts as a conduit to deliver SnakeKeylogger, an information stealer developed in .NET. "The DBatLoader malware distributed through phishing emails has the cunning behavior of exploiting normal processes (easinvoker.exe, loader.exe) through techniques such as DLL side-loading and injection for most of its behaviors, and it also utilizes normal processes (cmd.exe, powershell.exe, esentutl.exe, extrac32.exe) for behaviors such as file copying and changing policies," the company said. SEC SIM-Swapper Sentenced to 14 Months for SEC X Account Hack — A 26-year-old Alabama man, Eric Council Jr., has been sentenced to 14 months in prison and three years of supervised release for using SIM swapping attacks to breach the U.S. Securities and Exchange Commission's (SEC) official X account in January 2024 and falsely announced that the SEC approved Bitcoin (BTC) Exchange Traded Funds (ETFs). Council Jr. (aka Ronin, Agiantschnauzer, and @EasyMunny) was arrested in October 2024 and pleaded guilty to the crime earlier this February. He has also been ordered to forfeit $50,000. According to court documents, Council used his personal computer to search incriminating phrases such as "SECGOV hack," "telegram sim swap," "how can I know for sure if I am being investigated by the FBI," "What are the signs that you are under investigation by law enforcement or the FBI even if you have not been contacted by them," "what are some signs that the FBI is after you," "Verizon store list," "federal identity theft statute," and "how long does it take to delete telegram account." FBI Warns of Malicious Campaign Impersonating Government Officials — The U.S. Federal Bureau of Investigation (FBI) is warning of a new campaign that involves malicious actors impersonating senior U.S. federal or state government officials and their contacts to target individuals since April 2025. "The malicious actors have sent text messages and AI-generated voice messages — techniques known as smishing and vishing, respectively — that claim to come from a senior US official in an effort to establish rapport before gaining access to personal accounts," the FBI said. "One way the actors gain such access is by sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform." From there, the actor may present malware or introduce hyperlinks that lead intended targets to an actor-controlled site that steals login information. DICOM Flaw Enables Attackers to Embed Malicious Code Within Medical Image Files — Praetorian has released a proof-of-concept (PoC) for a high-severity security flaw in Digital Imaging and Communications in Medicine (DICOM), predominant file format for medical images, that enables attackers to embed malicious code within legitimate medical image files. CVE-2019-11687 (CVSS score: 7.8), originally disclosed in 2019 by Markel Picado Ortiz, stems from a design decision that allows arbitrary content at the start of the file, otherwise called the Preamble, which enables the creation of malicious polyglots. Codenamed ELFDICOM, the PoC extends the attack surface to Linux environments, making it a much more potent threat. As mitigations, it's advised to implement a DICOM preamble whitelist. "DICOM's file structure inherently allows arbitrary bytes at the beginning of the file, where Linux and most operating systems will look for magic bytes," Praetorian researcher Ryan Hennessee said. "[The whitelist] would check a DICOM file's preamble before it is imported into the system. This would allow known good patterns, such as 'TIFF' magic bytes, or '\x00' null bytes, while files with the ELF magic bytes would be blocked." Cookie-Bite Attack Uses Chrome Extension to Steal Session Tokens — Cybersecurity researchers have demonstrated a new attack technique called Cookie-Bite that employs custom-made malicious browser extensions to steal "ESTAUTH" and "ESTSAUTHPERSISTNT" cookies in Microsoft Azure Entra ID and bypass multi-factor authentication (MFA). The attack has multiple moving parts to it: A custom Chrome extension that monitors authentication events and captures cookies; a PowerShell script that automates the extension deployment and ensures persistence; an exfiltration mechanism to send the cookies to a remote collection point; and a complementary extension to inject the captured cookies into the attacker's browser. "Threat actors often use infostealers to extract authentication tokens directly from a victim's machine or buy them directly through darkness markets, allowing adversaries to hijack active cloud sessions without triggering MFA," Varonis said. "By injecting these cookies while mimicking the victim's OS, browser, and network, attackers can evade Conditional Access Policies (CAPs) and maintain persistent access." Authentication cookies can also be stolen using adversary-in-the-middle (AitM) phishing kits in real-time, or using rogue browser extensions that request excessive permissions to interact with web sessions, modify page content, and extract stored authentication data. Once installed, the extension can access the browser's storage API, intercept network requests, or inject malicious JavaScript into active sessions to harvest real-time session cookies. "By leveraging stolen session cookies, an adversary can bypass authentication mechanisms, gaining seamless entry into cloud environments without requiring user credentials," Varonis said. "Beyond initial access, session hijacking can facilitate lateral movement across the tenant, allowing attackers to explore additional resources, access sensitive data, and escalate privileges by abusing existing permissions or misconfigured roles." 🎥 Cybersecurity Webinars Non-Human Identities: The AI Backdoor You're Not Watching → AI agents rely on Non-Human Identities (like service accounts and API keys) to function—but these are often left untracked and unsecured. As attackers shift focus to this hidden layer, the risk is growing fast. In this session, you'll learn how to find, secure, and monitor these identities before they're exploited. Join the webinar to understand the real risks behind AI adoption—and how to stay ahead. Inside the LOTS Playbook: How Hackers Stay Undetected → Attackers are using trusted sites to stay hidden. In this webinar, Zscaler experts share how they detect these stealthy LOTS attacks using insights from the world's largest security cloud. Join to learn how to spot hidden threats and improve your defense. 🔧 Cybersecurity Tools ScriptSentry → It is a free tool that scans your environment for dangerous logon script misconfigurations—like plaintext credentials, insecure file/share permissions, and references to non-existent servers. These overlooked issues can enable lateral movement, privilege escalation, or even credential theft. ScriptSentry helps you quickly identify and fix them across large Active Directory environments. Aftermath → It is a Swift-based, open-source tool for macOS incident response. It collects forensic data—like logs, browser activity, and process info—from compromised systems, then analyzes it to build timelines and track infection paths. Deploy via MDM or run manually. Fast, lightweight, and ideal for post-incident investigation. AI Red Teaming Playground Labs → It is an open-source training suite with hands-on challenges designed to teach security professionals how to red team AI systems. Originally developed for Black Hat USA 2024, the labs cover prompt injections, safety bypasses, indirect attacks, and Responsible AI failures. Built on Chat Copilot and deployable via Docker, it's a practical resource for testing and understanding real-world AI vulnerabilities. 🔒 Tip of the Week Review and Revoke Old OAuth App Permissions — They're Silent Backdoor → You've likely logged into apps using "Continue with Google," "Sign in with Microsoft," or GitHub/Twitter/Facebook logins. That's OAuth. But did you know many of those apps still have access to your data long after you stop using them? Why it matters: Even if you delete the app or forget it existed, it might still have ongoing access to your calendar, email, cloud files, or contact list — no password needed. If that third-party gets breached, your data is at risk. What to do: Go through your connected apps here: Google: myaccount.google.com/permissions Microsoft: account.live.com/consent/Manage GitHub: github.com/settings/applications Facebook: facebook.com/settings?tab=applications Revoke anything you don't actively use. It's a fast, silent cleanup — and it closes doors you didn't know were open. Conclusion Looking ahead, it's not just about tracking threats—it's about understanding what they reveal. Every tactic used, every system tested, points to deeper issues in how trust, access, and visibility are managed. As attackers adapt quickly, defenders need sharper awareness and faster response loops. The takeaways from this week aren't just technical—they speak to how teams prioritize risk, design safeguards, and make choices under pressure. Use these insights not just to react, but to rethink what "secure" really needs to mean in today's environment. Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.
    0 Σχόλια 0 Μοιράστηκε
  • Trump administration detonates expansion of rural broadband access

    As Trump axes the Digital Equity Act, other digital divide initiatives remain at risk.
    Credit: Kathleen Flynn / The Washington Post via Getty Images

    The Trump administration continues with its cost-slashing, anti-DEI agenda, and its coming for nationwide efforts to close the digital divide next.On May 8, President Donald Trump posted to Truth Social that he was directing the end of the Biden-Harris era Digital Equity Act. Trump called the program — which allocated billion to digital inclusion programs — "racist" and "illegal." Last week, the National Telecommunications and Information Administrationabruptly terminated grants for 20 different state projects under the act, including digital access in K-12 schools, veteran and senior programs, and rural connectivity efforts. The State Educational Technology Directors Associationcalled the decision a "significant setback" to universal access goals. "SETDA stands with our state members and partner organizations who have been diligently building inclusive broadband and digital access plans rooted in community need, engagement, and systemic transformation. Equitable access to technology is not a partisan issue–it is a public good."

    You May Also Like

    The decision points to an uncertain future for existing broadband and digital connectivity efforts managed or funded by the federal government. Since most serve specific communities and demographics which are at the highest risk of being technologically disconnected or left behind, they have entered the crosshairs of the administration's "anti-woke" crusade. Indigenous connectivity advocates, for example, warned that a Trump presidency would have an immediate impact on rural broadband projects that were in the process of breaking ground, as the president simultaneously promised to shake up the FCC and whittle down the federal government's spending.

    Mashable Light Speed

    Want more out-of-this world tech, space and science stories?
    Sign up for Mashable's weekly Light Speed newsletter.

    By clicking Sign Me Up, you confirm you are 16+ and agree to our Terms of Use and Privacy Policy.

    Thanks for signing up!

    “Ongoing efforts to bridge the digital divide in the U.S. face significant challenges with the recent termination of the Digital Equity Act, and potential drastic changes coming to the Broadband Equity Access and Deploymentprogram," said Sharayah Lane, senior advisor of community connectivity for the global nonprofit the Internet Society and member of the Lummi Nation. "This will critically impact the future of affordable, reliable, high-speed Internet access in underserved areas, further limiting essential education, healthcare, and economic opportunities."The Biden administration, which pledged billions of federal dollars to building out the nation's high speed broadband and fiber optic network, had made closing the digital divide a central component to its massive federal spending package, including launching the Affordable Connectivity Program, the Tribal Broadband Connectivity Program, and the BEAD initiative. BEAD funds, in particular, were split up between state broadband infrastructure projects, including 19 grants over billion. But now the funds are being pulled out from under them. Commerce Secretary Howard Lutnick has had the billion BEAD budget under review since Trump took office, and has falsely claimed that the program "has not connected a single person to the internet," but is rather a "woke mandate" under the previous presidency.

    Related Stories

    Meanwhile, Trump has pushed to open up an auction of highly sought after spectrum bands to serve WiFi, 5G, and 6G projects under his "One Big Beautiful Bill" — a move that may sideline rural connectivity projects focused on building reliable, physical connections to high speed internet. Advocates have long fought for federal investment in "missing middle miles" of fiber optic cables and broadband, rather than unstable satellite connections, such as those promised by Elon Musk's Starlink. "We need to prioritize investments in sustainable infrastructure through programs like BEAD and the Digital Equity Act to ensure long-term, affordable Internet access for all Americans, strengthen the economy, and bolster the nation’s overall digital resilience," said Lane.

    Chase DiBenedetto
    Social Good Reporter

    Chase joined Mashable's Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also captures how these conversations manifest in politics, popular culture, and fandom. Sometimes she's very funny.
    #trump #administration #detonates #expansion #rural
    Trump administration detonates expansion of rural broadband access
    As Trump axes the Digital Equity Act, other digital divide initiatives remain at risk. Credit: Kathleen Flynn / The Washington Post via Getty Images The Trump administration continues with its cost-slashing, anti-DEI agenda, and its coming for nationwide efforts to close the digital divide next.On May 8, President Donald Trump posted to Truth Social that he was directing the end of the Biden-Harris era Digital Equity Act. Trump called the program — which allocated billion to digital inclusion programs — "racist" and "illegal." Last week, the National Telecommunications and Information Administrationabruptly terminated grants for 20 different state projects under the act, including digital access in K-12 schools, veteran and senior programs, and rural connectivity efforts. The State Educational Technology Directors Associationcalled the decision a "significant setback" to universal access goals. "SETDA stands with our state members and partner organizations who have been diligently building inclusive broadband and digital access plans rooted in community need, engagement, and systemic transformation. Equitable access to technology is not a partisan issue–it is a public good." You May Also Like The decision points to an uncertain future for existing broadband and digital connectivity efforts managed or funded by the federal government. Since most serve specific communities and demographics which are at the highest risk of being technologically disconnected or left behind, they have entered the crosshairs of the administration's "anti-woke" crusade. Indigenous connectivity advocates, for example, warned that a Trump presidency would have an immediate impact on rural broadband projects that were in the process of breaking ground, as the president simultaneously promised to shake up the FCC and whittle down the federal government's spending. Mashable Light Speed Want more out-of-this world tech, space and science stories? Sign up for Mashable's weekly Light Speed newsletter. By clicking Sign Me Up, you confirm you are 16+ and agree to our Terms of Use and Privacy Policy. Thanks for signing up! “Ongoing efforts to bridge the digital divide in the U.S. face significant challenges with the recent termination of the Digital Equity Act, and potential drastic changes coming to the Broadband Equity Access and Deploymentprogram," said Sharayah Lane, senior advisor of community connectivity for the global nonprofit the Internet Society and member of the Lummi Nation. "This will critically impact the future of affordable, reliable, high-speed Internet access in underserved areas, further limiting essential education, healthcare, and economic opportunities."The Biden administration, which pledged billions of federal dollars to building out the nation's high speed broadband and fiber optic network, had made closing the digital divide a central component to its massive federal spending package, including launching the Affordable Connectivity Program, the Tribal Broadband Connectivity Program, and the BEAD initiative. BEAD funds, in particular, were split up between state broadband infrastructure projects, including 19 grants over billion. But now the funds are being pulled out from under them. Commerce Secretary Howard Lutnick has had the billion BEAD budget under review since Trump took office, and has falsely claimed that the program "has not connected a single person to the internet," but is rather a "woke mandate" under the previous presidency. Related Stories Meanwhile, Trump has pushed to open up an auction of highly sought after spectrum bands to serve WiFi, 5G, and 6G projects under his "One Big Beautiful Bill" — a move that may sideline rural connectivity projects focused on building reliable, physical connections to high speed internet. Advocates have long fought for federal investment in "missing middle miles" of fiber optic cables and broadband, rather than unstable satellite connections, such as those promised by Elon Musk's Starlink. "We need to prioritize investments in sustainable infrastructure through programs like BEAD and the Digital Equity Act to ensure long-term, affordable Internet access for all Americans, strengthen the economy, and bolster the nation’s overall digital resilience," said Lane. Chase DiBenedetto Social Good Reporter Chase joined Mashable's Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also captures how these conversations manifest in politics, popular culture, and fandom. Sometimes she's very funny. #trump #administration #detonates #expansion #rural
    MASHABLE.COM
    Trump administration detonates expansion of rural broadband access
    As Trump axes the Digital Equity Act, other digital divide initiatives remain at risk. Credit: Kathleen Flynn / The Washington Post via Getty Images The Trump administration continues with its cost-slashing, anti-DEI agenda, and its coming for nationwide efforts to close the digital divide next.On May 8, President Donald Trump posted to Truth Social that he was directing the end of the Biden-Harris era Digital Equity Act. Trump called the program — which allocated $2.75 billion to digital inclusion programs — "racist" and "illegal." Last week, the National Telecommunications and Information Administration (NTIA) abruptly terminated grants for 20 different state projects under the act, including digital access in K-12 schools, veteran and senior programs, and rural connectivity efforts. The State Educational Technology Directors Association (SETDA) called the decision a "significant setback" to universal access goals. "SETDA stands with our state members and partner organizations who have been diligently building inclusive broadband and digital access plans rooted in community need, engagement, and systemic transformation. Equitable access to technology is not a partisan issue–it is a public good." You May Also Like The decision points to an uncertain future for existing broadband and digital connectivity efforts managed or funded by the federal government. Since most serve specific communities and demographics which are at the highest risk of being technologically disconnected or left behind, they have entered the crosshairs of the administration's "anti-woke" crusade. Indigenous connectivity advocates, for example, warned that a Trump presidency would have an immediate impact on rural broadband projects that were in the process of breaking ground, as the president simultaneously promised to shake up the FCC and whittle down the federal government's spending. Mashable Light Speed Want more out-of-this world tech, space and science stories? Sign up for Mashable's weekly Light Speed newsletter. By clicking Sign Me Up, you confirm you are 16+ and agree to our Terms of Use and Privacy Policy. Thanks for signing up! “Ongoing efforts to bridge the digital divide in the U.S. face significant challenges with the recent termination of the Digital Equity Act, and potential drastic changes coming to the Broadband Equity Access and Deployment (BEAD) program," said Sharayah Lane, senior advisor of community connectivity for the global nonprofit the Internet Society and member of the Lummi Nation. "This will critically impact the future of affordable, reliable, high-speed Internet access in underserved areas, further limiting essential education, healthcare, and economic opportunities."The Biden administration, which pledged billions of federal dollars to building out the nation's high speed broadband and fiber optic network, had made closing the digital divide a central component to its massive federal spending package, including launching the Affordable Connectivity Program, the Tribal Broadband Connectivity Program, and the BEAD initiative. BEAD funds, in particular, were split up between state broadband infrastructure projects, including 19 grants over $1 billion. But now the funds are being pulled out from under them. Commerce Secretary Howard Lutnick has had the $42 billion BEAD budget under review since Trump took office, and has falsely claimed that the program "has not connected a single person to the internet," but is rather a "woke mandate" under the previous presidency. Related Stories Meanwhile, Trump has pushed to open up an auction of highly sought after spectrum bands to serve WiFi, 5G, and 6G projects under his "One Big Beautiful Bill" — a move that may sideline rural connectivity projects focused on building reliable, physical connections to high speed internet. Advocates have long fought for federal investment in "missing middle miles" of fiber optic cables and broadband, rather than unstable satellite connections, such as those promised by Elon Musk's Starlink. "We need to prioritize investments in sustainable infrastructure through programs like BEAD and the Digital Equity Act to ensure long-term, affordable Internet access for all Americans, strengthen the economy, and bolster the nation’s overall digital resilience," said Lane. Chase DiBenedetto Social Good Reporter Chase joined Mashable's Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also captures how these conversations manifest in politics, popular culture, and fandom. Sometimes she's very funny.
    0 Σχόλια 0 Μοιράστηκε
Αναζήτηση αποτελεσμάτων