• New Court Order in Stratasys v. Bambu Lab Lawsuit

    There has been a new update to the ongoing Stratasys v. Bambu Lab patent infringement lawsuit. 
    Both parties have agreed to consolidate the lead and member cases (2:24-CV-00644-JRG and 2:24-CV-00645-JRG) into a single case under Case No. 2:25-cv-00465-JRG. 
    Industrial 3D printing OEM Stratasys filed the request late last month. According to an official court document, Shenzhen-based Bambu Lab did not oppose the motion. Stratasys argued that this non-opposition amounted to the defendants waiving their right to challenge the request under 35 U.S.C. § 299(a), the U.S. patent statute governing joinder of parties.
    On June 2, the U.S. District Court for the Eastern District of Texas, Marshall Division, ordered Bambu Lab to confirm in writing whether it agreed to the proposed case consolidation. The court took this step out of an “abundance of caution” to ensure both parties consented to the procedure before moving forward.
    Bambu Lab submitted its response on June 12, agreeing to the consolidation. The company, along with co-defendants Shenzhen Tuozhu Technology Co., Ltd., Shanghai Lunkuo Technology Co., Ltd., and Tuozhu Technology Limited, waived its rights under 35 U.S.C. § 299(a). The court will now decide whether to merge the cases.
    This followed U.S. District Judge Rodney Gilstrap’s decision last month to deny Bambu Lab’s motion to dismiss the lawsuits. 
    The Chinese desktop 3D printer manufacturer filed the motion in February 2025, arguing the cases were invalid because its US-based subsidiary, Bambu Lab USA, was not named in the original litigation. However, it agreed that the lawsuit could continue in the Austin division of the Western District of Texas, where a parallel case was filed last year. 
    Judge Gilstrap denied the motion, ruling that the cases properly target the named defendants. He concluded that Bambu Lab USA isn’t essential to the dispute, and that any misnaming should be addressed in summary judgment, not dismissal.       
    A Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing Industry.
    Another twist in the Stratasys v. Bambu Lab lawsuit 
    Stratasys filed the two lawsuits against Bambu Lab in the Eastern District of Texas, Marshall Division, in August 2024. The company claims that Bambu Lab’s X1C, X1E, P1S, P1P, A1, and A1 mini 3D printers violate ten of its patents. These patents cover common 3D printing features, including purge towers, heated build plates, tool head force detection, and networking capabilities.
    Stratasys has requested a jury trial. It is seeking a ruling that Bambu Lab infringed its patents, along with financial damages and an injunction to stop Bambu from selling the allegedly infringing 3D printers.
    Last October, Stratasys dropped charges against two of the originally named defendants in the dispute. Court documents showed that Beijing Tiertime Technology Co., Ltd. and Beijing Yinhua Laser Rapid Prototyping and Mould Technology Co., Ltd were removed. Both defendants represent the company Tiertime, China’s first 3D printer manufacturer. The District Court accepted the dismissal, with all claims dropped without prejudice.
    It’s unclear why Stratasys named Beijing-based Tiertime as a defendant in the first place, given the lack of an obvious connection to Bambu Lab. 
    Tiertime and Stratasys have a history of legal disputes over patent issues. In 2013, Stratasys sued Afinia, Tiertime’s U.S. distributor and partner, for patent infringement. Afinia responded by suing uCRobotics, the Chinese distributor of MakerBot 3D printers, also alleging patent violations. Stratasys acquired MakerBot in June 2013. The company later merged with Ultimaker in 2022.
    In February 2025, Bambu Lab filed a motion to dismiss the original lawsuits. The company argued that Stratasys’ claims, focused on the sale, importation, and distribution of 3D printers in the United States, do not apply to the Shenzhen-based parent company. Bambu Lab contended that the allegations concern its American subsidiary, Bambu Lab USA, which was not named in the complaint filed in the Eastern District of Texas.
    Bambu Lab filed a motion to dismiss, claiming the case is invalid under Federal Rule of Civil Procedure 19. It argued that any party considered a “primary participant” in the allegations must be included as a defendant.   
    The court denied the motion on May 29, 2025. In the ruling, Judge Gilstrap explained that Stratasys’ allegations focus on the actions of the named defendants, not Bambu Lab USA. As a result, the official court document called Bambu Lab’s argument “unavailing.” Additionally, the Judge stated that, since Bambu Lab USA and Bambu Lab are both owned by Shenzhen Tuozhu, “the interest of these two entities align,” meaning the original cases are valid.  
    In the official court document, Judge Gilstrap emphasized that Stratasys can win or lose the lawsuits based solely on the actions of the current defendants, regardless of Bambu Lab USA’s involvement. He added that any potential risk to Bambu Lab USA’s business is too vague or hypothetical to justify making it a required party.
    Finally, the court noted that even if Stratasys named the wrong defendant, this does not justify dismissal under Rule 12(b)(7). Instead, the judge stated it would be more appropriate for the defendants to raise that argument in a motion for summary judgment.
    The Bambu Lab X1C 3D printer. Image via Bambu Lab.
    3D printing patent battles 
    The 3D printing industry has seen its fair share of patent infringement disputes over recent months. In May 2025, 3D printer hotend developer Slice Engineering reached an agreement with Creality over a patent non-infringement lawsuit. 
    The Chinese 3D printer OEM filed the lawsuit in July 2024 in the U.S. District Court for the Northern District of Florida, Gainesville Division. The company claimed that Slice Engineering had falsely accused it of infringing two hotend patents, U.S. Patent Nos. 10,875,244 and 11,660,810. These cover mechanical and thermal features of Slice’s Mosquito 3D printer hotend. Creality requested a jury trial and sought a ruling confirming it had not infringed either patent.
    Court documents show that Slice Engineering filed a countersuit in December 2024. The Gainesville-based company maintained that Creality “has infringed and continues to infringe” on both patents. In the filing, the company also denied allegations that it had harassed Creality’s partners, distributors, and customers, and claimed that Creality had refused to negotiate a resolution.
    The Creality v. Slice Engineering lawsuit has since been dropped following a mutual resolution. Court documents show that both parties have permanently dismissed all claims and counterclaims, agreeing to cover their own legal fees and costs. 
    In other news, large-format resin 3D printer manufacturer Intrepid Automation sued 3D Systems over alleged patent infringement. The lawsuit, filed in February 2025, accused 3D Systems of using patented technology in its PSLA 270 industrial resin 3D printer. The filing called the PSLA 270 a “blatant knock off” of Intrepid’s DLP multi-projection “Range” 3D printer.  
    San Diego-based Intrepid Automation called this alleged infringement the “latest chapter of 3DS’s brazen, anticompetitive scheme to drive a smaller competitor with more advanced technology out of the marketplace.” The lawsuit also accused 3D Systems of corporate espionage, claiming one of its employees stole confidential trade secrets that were later used to develop the PSLA 270 printer.
    3D Systems denied the allegations and filed a motion to dismiss the case. The company called the lawsuit “a desperate attempt” by Intrepid to distract from its own alleged theft of 3D Systems’ trade secrets.
    Who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news. You can also follow us on LinkedIn and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content. Featured image shows a Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing Industry.
    3DPRINTINGINDUSTRY.COM
  • How to delete your 23andMe data

    DNA testing service 23andMe has undergone serious upheaval in recent months, creating concerns for the 15 million customers who entrusted the company with their personal biological information. After filing for Chapter 11 bankruptcy protection in March, the company became the center of a bidding war that ended Friday when co-founder Anne Wojcicki said she’d successfully reacquired control through her nonprofit TTAM Research Institute for $305 million.
    The bankruptcy proceedings had sent shockwaves through the genetic testing industry and among privacy advocates, with security experts and lawmakers urging customers to take immediate action to safeguard their data. The company’s interim CEO revealed this week that 1.9 million people, around 15% of 23andMe’s customer base, have already requested their genetic data be deleted from the company’s servers.
    The situation became even more complex last week after more than two dozen states filed lawsuits challenging the sale of customers’ private data, arguing that 23andMe must obtain explicit consent before transferring or selling personal information to any new entity.
    While the company’s policies mean you cannot delete all traces of your genetic data — particularly information that may have already been shared with research partners or stored in backup systems — if you’re one of the 15 million people who shared their DNA with 23andMe, there are still meaningful steps you can take to protect yourself and minimize your exposure.
    How to delete your 23andMe data
    To delete your data from 23andMe, you need to log in to your account and then follow these steps:

    Navigate to the Settings section of your profile.
    Scroll down to the section labeled 23andMe Data. 
    Click the View option and scroll to the Delete Data section.
    Select the Permanently Delete Data button.

    You will then receive an email from 23andMe with a link that will allow you to confirm your deletion request. 
    You can choose to download a copy of your data before deleting it.
    There is an important caveat, as 23andMe’s privacy policy states that the company and its labs “will retain your Genetic Information, date of birth, and sex as required for compliance with applicable legal obligations.”
    The policy continues: “23andMe will also retain limited information related to your account and data deletion request, including but not limited to, your email address, account deletion request identifier, communications related to inquiries or complaints and legal agreements for a limited period of time as required by law, contractual obligations, and/or as necessary for the establishment, exercise or defense of legal claims and for audit and compliance purposes.”
    This essentially means that 23andMe may keep some of your information for an unspecified amount of time. 
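    To make the effect of that retention policy concrete, here is a minimal Python sketch of the deletion flow described above. This is purely illustrative: 23andMe exposes no public API for account deletion, and every name and field below is hypothetical, modeled only on the steps and the retained fields quoted from the privacy policy.

```python
from dataclasses import dataclass, field

# Fields the quoted privacy policy says are retained after deletion
# (genetic information, date of birth, sex, plus the account email
# tied to the deletion request). Illustrative names only.
RETAINED_FIELDS = {"email", "date_of_birth", "sex", "genetic_information"}

@dataclass
class Account:
    """Hypothetical model of a 23andMe account's deletion lifecycle."""
    data: dict = field(default_factory=dict)
    deletion_requested: bool = False
    deletion_confirmed: bool = False

    def request_deletion(self) -> None:
        # Corresponds to the four UI steps, which end with a
        # confirmation email being sent to the account holder.
        self.deletion_requested = True

    def confirm_deletion(self) -> dict:
        # Corresponds to clicking the link in the confirmation email.
        # Most data is removed, but the policy-mandated fields remain.
        if not self.deletion_requested:
            raise RuntimeError("No pending deletion request to confirm")
        self.data = {k: v for k, v in self.data.items() if k in RETAINED_FIELDS}
        self.deletion_confirmed = True
        return self.data

account = Account(data={
    "email": "user@example.com",
    "date_of_birth": "1990-01-01",
    "sex": "F",
    "genetic_information": "<raw genotype>",
    "ancestry_reports": "<reports>",
    "health_reports": "<reports>",
})
account.request_deletion()
retained = account.confirm_deletion()
```

The point of the sketch is the asymmetry: the reports you see in the app can be deleted, but the core identifying fields persist for an unspecified compliance period.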
    How to destroy your 23andMe test sample and revoke permission for your data to be used for research
    If you previously opted to have your saliva sample and DNA stored by 23andMe, you can change this setting.
    To revoke your permission, go into your 23andMe account settings page and then navigate to Preferences. 
    In addition, if you previously agreed to 23andMe and third-party researchers using your genetic data and sample for research, you can withdraw consent from the Research and Product Consents section in your account settings. 
    While you can reverse that consent, there’s no way for you to delete that information.
    Check in with your family members
    Once you have requested the deletion of your data, it’s important to check in with your family members and encourage them to do the same because it’s not just their DNA that’s at risk of sale — it also affects people they are related to. 
    And while you’re at it, it’s worth checking in with your friends to ensure that all of your loved ones are taking steps to protect their data. 
    This story was originally published on March 25 and updated June 11 with new information.
    How to delete your 23andMe data
    DNA testing service 23andMe has undergone serious upheaval in recent months, creating concerns for the 15 million customers who entrusted the company with their personal biological information. After filing for Chapter 11 bankruptcy protection in March, the company became the center of a bidding war that ended Friday when co-founder Anne Wojcicki said she’d successfully reacquired control through her nonprofit TTAM Research Institute for million. The bankruptcy proceedings had sent shockwaves through the genetic testing industry and among privacy advocates, with security experts and lawmakers urging customers to take immediate action to safeguard their data. The company’s interim CEO revealed this week that 1.9 million people, around 15% of 23andMe’s customer base, have already requested their genetic data be deleted from the company’s servers. The situation became even more complex last week after more than two dozen states filed lawsuits challenging the sale of customers’ private data, arguing that 23andMe must obtain explicit consent before transferring or selling personal information to any new entity. While the company’s policies mean you cannot delete all traces of your genetic data — particularly information that may have already been shared with research partners or stored in backup systems — if you’re one of the 15 million people who shared their DNA with 23andMe, there are still meaningful steps you can take to protect yourself and minimize your exposure. How to delete your 23andMe data To delete your data from 23andMe, you need to log in to your account and then follow these steps: Navigate to the Settings section of your profile. Scroll down to the selection labeled 23andMe Data.  Click the View option and scroll to the Delete Data section. Select the Permanently Delete Data button. You will then receive an email from 23andMe with a link that will allow you to confirm your deletion request.  
You can choose to download a copy of your data before deleting it. There is an important caveat, as 23andMe’s privacy policy states that the company and its labs “will retain your Genetic Information, date of birth, and sex as required for compliance with applicable legal obligations.” The policy continues: “23andMe will also retain limited information related to your account and data deletion request, including but not limited to, your email address, account deletion request identifier, communications related to inquiries or complaints and legal agreements for a limited period of time as required by law, contractual obligations, and/or as necessary for the establishment, exercise or defense of legal claims and for audit and compliance purposes.” This essentially means that 23andMe may keep some of your information for an unspecified amount of time.  How to destroy your 23andMe test sample and revoke permission for your data to be used for research If you previously opted to have your saliva sample and DNA stored by 23andMe, you can change this setting. To revoke your permission, go into your 23andMe account settings page and then navigate to Preferences.  In addition, if you previously agreed to 23andMe and third-party researchers using your genetic data and sample for research, you can withdraw consent from the Research and Product Consents section in your account settings.  While you can reverse that consent, there’s no way for you to delete that information. Check in with your family members Once you have requested the deletion of your data, it’s important to check in with your family members and encourage them to do the same because it’s not just their DNA that’s at risk of sale — it also affects people they are related to.  And while you’re at it, it’s worth checking in with your friends to ensure that all of your loved ones are taking steps to protect their data.  This story originally published on March 25 and was updated June 11 with new information. 
    TECHCRUNCH.COM
    How to delete your 23andMe data
  • How addresses are collected and put on people finder sites

    Published June 14, 2025 10:00am EDT
    Your home address might be easier to find online than you think. A quick search of your name could turn up past and current locations, all thanks to people finder sites. These data broker sites quietly collect and publish personal details without your consent, making your privacy vulnerable with just a few clicks.
    How your address gets exposed online and who’s using it
    If you’ve ever searched for your name and found personal details, like your address, on unfamiliar websites, you’re not alone. People finder platforms collect this information from public records and third-party data brokers, then publish and share it widely. They often link your address to other details such as phone numbers, email addresses and even relatives.
    While this data may already be public in various places, these sites make it far easier to access and monetize at scale. In one recent breach, more than 183 million login credentials were exposed through an unsecured database. Many of these records were linked to physical addresses, raising concerns about how multiple sources of personal data can be combined and exploited.
    Although people finder sites claim to help reconnect friends or locate lost contacts, they also make sensitive personal information available to anyone willing to pay. This includes scammers, spammers and identity thieves who use it for fraud, harassment and targeted scams.
    How do people search sites get your home address?
    People search sites draw on two kinds of sources, public and private databases, to build your detailed profile, including your home address.
    They run an automated search on these databases with key information about you and pull your home address from the results.
    1. Public sources
    Your home address can appear in:
    Property deeds: When you buy or sell a home, your name and address become part of the public record.
    Voter registration: You need to list your address when you register to vote.
    Court documents: Addresses appear in legal filings or lawsuits.
    Marriage and divorce records: These often include current or past addresses.
    Business licenses and professional registrations: If you own a business or hold a license, your address can be listed.
    These records are legal to access, and people finder sites collect and repackage them into detailed personal profiles.
    2. Private sources
    Other sites buy your data from companies you’ve interacted with:
    Online purchases: When you buy something online, your address is recorded and can be sold to marketing companies.
    Subscriptions and memberships: Magazines, clubs and loyalty programs often share your information.
    Social media platforms: Your location or address details can be gathered indirectly from posts, photos or shared information.
    Mobile apps and websites: Some apps track your location.
    People finder sites buy this data from other data brokers and combine it with public records to build complete profiles that include address information.
    What are the risks of having your address on people finder sites?
    The Federal Trade Commission advises people to request the removal of their private data, including home addresses, from people search sites because of the associated risks of stalking, scamming and other crimes.
    People search sites are a goldmine for cybercriminals looking to target and profile potential victims as well as plan comprehensive cyberattacks. Losses due to targeted phishing attacks increased by 33% in 2024, according to the FBI.
    Having your home address publicly accessible can therefore lead to several risks:
    Stalking and harassment: Criminals can easily find your home address and threaten you.
    Identity theft: Scammers can use your address and other personal information to impersonate you or fraudulently open accounts.
    Unwanted contact: Marketers and scammers can use your address for junk mail, phishing or brushing scams.
    Increased financial risks: Insurance companies or lenders can use publicly available address information to unfairly decide your rates or eligibility.
    Burglary and home invasion: Criminals can use your location to target your home when you’re away or vulnerable.
    How to protect your home address
    The good news is that you can take steps to reduce the risks and keep your address private. However, keep in mind that data brokers and people search sites can re-list your information after some time, so you might need to request data removal periodically. I recommend a few ways to delete your private information, including your home address, from such websites.
    1. Use personal data removal services: Data brokers can sell your home address and other personal data to multiple businesses and individuals, so the key is to act fast. A data removal service can do the heavy lifting for you, automatically requesting data removal from brokers and tracking compliance. While no service can guarantee the complete removal of your data from the internet, these services actively monitor and systematically erase your personal information from hundreds of websites. They aren’t cheap, but neither is your privacy, and this has proven to be the most effective way to erase your personal data from the internet.
    By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
    2. Opt out manually: Use a free scanner provided by a data removal service to check which people search sites list your address. Then visit each of these websites and look for an opt-out procedure or form; keywords like “opt out” and “delete my information” point the way. Follow each site’s opt-out process carefully, and confirm they’ve removed all your personal info; otherwise, it may get relisted.
    3. Monitor your digital footprint: Regularly search online for your name to see if your location is publicly available. If only your social media profile pops up, there’s no need to worry. However, people finder sites tend to relist your private information, including your home address, after some time.
    4. Limit sharing your address online: Be careful about sharing your home address on social media, online forms and apps. Review privacy settings regularly, and only provide your address when absolutely necessary.
    Also, adjust your phone settings so that apps don’t track your location.
    Kurt’s key takeaways
    Your home address is more vulnerable than you think. People finder sites aggregate data from public records and private sources to display your address online, often without your knowledge or consent. This can lead to serious privacy and safety risks. Taking proactive steps to protect your home address is essential: do it manually, or use a data removal tool for an easier process. By understanding how your location is collected and taking measures to remove your address from online sites, you can reclaim control over your personal data.
    Copyright 2025 CyberGuy.com. All rights reserved. Kurt “CyberGuy” Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better, with his contributions for Fox News & FOX Business beginning mornings on “FOX & Friends.”
    WWW.FOXNEWS.COM
    How addresses are collected and put on people finder sites
  • Cloud Security Best Practices Protecting Business Data in a Multi-Cloud World

    The cloud has changed everything. It’s faster, cheaper, and easier to scale than traditional infrastructure. Initially, most companies chose a single cloud provider. That’s no longer enough. Now, nearly 86% of businesses use more than one cloud.
    This approach—called multi-cloud—lets teams choose the best features from each provider. But it also opens the door to new security risks. When apps, data, and tools are scattered across platforms, managing security gets harder. And in today's world of constant cyber threats, ignoring cloud security is not an option.
    Let’s walk through real-world challenges and the best ways to protect business data in a multi-cloud environment.

    1. Know What You’re Working With
    Start with visibility. Make a full inventory of the cloud platforms, apps, and storage your business uses. Ask every department—marketing, finance, HR—what tools they’ve signed up for. Many use services without informing IT. This is shadow IT, and it’s risky.
    Once you have the list, figure out what data lives where. Some workloads are low-risk. Others involve customer records, credit card data, or legal files. Prioritize those.
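To make the inventory actionable, a team might tag each asset with the kinds of data it holds and surface the sensitive ones first. The sketch below is a minimal illustration in Python; the providers, departments, and data classes are hypothetical examples, not a prescribed schema:

```python
# Minimal sketch of a cloud asset inventory with risk tiers.
# All asset names, providers, and data classes are hypothetical.
from dataclasses import dataclass

@dataclass
class CloudAsset:
    name: str
    provider: str        # e.g. "aws", "azure", "gcp"
    owner: str           # department that signed up for it
    data_classes: list   # what kind of data lives there

# Data classes that make a workload high-priority for security review.
SENSITIVE = {"customer_records", "payment_cards", "legal_files"}

inventory = [
    CloudAsset("marketing-site", "aws", "marketing", ["public_content"]),
    CloudAsset("billing-db", "azure", "finance", ["customer_records", "payment_cards"]),
    CloudAsset("hr-drive", "gcp", "hr", ["legal_files"]),
]

def high_risk(assets):
    """Return assets holding at least one sensitive data class."""
    return [a for a in assets if SENSITIVE & set(a.data_classes)]

for asset in high_risk(inventory):
    print(f"PRIORITIZE: {asset.name} ({asset.provider}, owner: {asset.owner})")
```

Even a flat list like this makes shadow IT visible: any tool a department mentions that isn't in the inventory is an immediate follow-up.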

    2. Build a Unified Security Strategy
    One of the biggest mistakes companies make is treating each cloud provider as a separate system. Every provider has its own rules, tools, and settings. If your security strategy is broken up, gaps will appear.
    Instead, aim for a single, connected approach. Use the same access rules, encryption standards, and monitoring tools across all clouds. You don’t want different policies on AWS and Azure—it just invites trouble.
    Tools like centralized dashboards, SIEM (Security Information and Event Management), and SOAR (Security Orchestration, Automation, and Response) help you keep everything in one place.

    3. Enforce Strict Access Controls
    In a multi-cloud world, identity and access control is one of the hardest things to get right. Every platform has its own login system. Without proper integration, mistakes happen. Someone might get more access than they need, or never lose access when they leave the company.
    Stick to these practices:

    Use role-based access control.
    Limit permissions to the bare minimum.
    Turn on multi-factor authentication.
    Link logins across platforms using identity federation.

    The more consistent your access rules are, the easier it is to control who gets in and what they can do.
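    As a concrete sketch of the first two practices, here is a deny-by-default, role-based check. The roles and permission names are invented for illustration; the point is that an unknown or deprovisioned identity gets nothing.

```python
# Minimal role-based access control with least-privilege defaults.
ROLE_PERMISSIONS = {
    "viewer":  {"read"},
    "analyst": {"read", "export"},
    "admin":   {"read", "export", "write", "manage-keys"},
}

def is_allowed(role: str, action: str) -> bool:
    # Unknown or deprovisioned roles fail closed: no entry means no access.
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "export"))     # True
print(is_allowed("viewer", "write"))       # False
print(is_allowed("ex-employee", "read"))   # False — access ends with the role
```

    Once logins are federated, the same role table can back the access decision on every platform, which is exactly the consistency the list above is after.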

    4. Use the Zero Trust Model
    Zero Trust means never assume anything is safe. Every user, device, and app must prove itself—every time. Even if a user is on your network, don’t trust them by default.
    This model reduces risk. It checks each request. It verifies users. And it looks for signs of abnormal behavior, like someone logging in from a new device or country.
    Zero Trust works well with automation and real-time monitoring. It also forces teams to rethink how data is shared and accessed.
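    The per-request checks described above can be sketched as a small policy function. The device list, country list, and the three outcomes ("allow", "challenge", "deny") are illustrative assumptions, not a standard.

```python
KNOWN_DEVICES = {"laptop-042"}
USUAL_COUNTRIES = {"US"}

def evaluate_request(user_verified: bool, device_id: str, country: str) -> str:
    """Evaluate every request; being on the network buys no trust."""
    if not user_verified:
        return "deny"
    # Abnormal signals (new device, new country) trigger step-up auth, not silent trust.
    if device_id not in KNOWN_DEVICES or country not in USUAL_COUNTRIES:
        return "challenge"   # e.g. require MFA again
    return "allow"

print(evaluate_request(True, "laptop-042", "US"))   # allow
print(evaluate_request(True, "new-phone", "US"))    # challenge
print(evaluate_request(False, "laptop-042", "US"))  # deny
```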

    5. Encrypt Data—Always
    Encryption is a basic but powerful layer of defense. It protects data whether it’s sitting in storage or moving between systems. If attackers get in, encrypted data is useless without the keys.
    Most cloud platforms offer built-in encryption. But don’t rely only on that. You can manage your own keys with tools like AWS KMS or Azure Key Vault. That gives you more control.
    To stay safe:

    Encrypt both at rest and in transit.
    Avoid default settings.
    Rotate encryption keys regularly.
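    The key-rotation item in the list above can be sketched as follows. The 90-day period is an illustrative policy choice, and in practice the keys would live in a managed service such as AWS KMS or Azure Key Vault rather than being generated locally.

```python
from datetime import date, timedelta
import secrets

ROTATION_PERIOD = timedelta(days=90)   # illustrative policy, not a mandated value

def needs_rotation(created: date, today: date) -> bool:
    return today - created >= ROTATION_PERIOD

def rotate_key() -> str:
    # Fresh 256-bit key, hex-encoded; a KMS would do this server-side.
    return secrets.token_hex(32)

print(needs_rotation(date(2025, 1, 1), date(2025, 6, 1)))   # True — over 90 days old
print(needs_rotation(date(2025, 5, 1), date(2025, 6, 1)))   # False
```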

    6. Monitor in Real Time
    Security is not a one-time task. You need to watch your systems around the clock. Set alerts for things like large file downloads, unusual logins, or traffic spikes.
    Centralized monitoring helps a lot. It pulls logs from all your platforms and tools into one place. That way, your security team isn’t flipping between dashboards when something goes wrong.
    Also, use automation to filter out noise and surface real threats faster.
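    As a toy example of the alerting described above, the heuristics below flag large downloads and logins from unusual locations. The threshold and event fields are invented for illustration; in a real deployment these rules would run over logs aggregated by a SIEM.

```python
DOWNLOAD_LIMIT_MB = 500   # illustrative threshold

def alerts_for(event: dict) -> list:
    """Return the alert names triggered by a single log event."""
    found = []
    if event.get("download_mb", 0) > DOWNLOAD_LIMIT_MB:
        found.append("large-download")
    if event.get("country") not in event.get("usual_countries", []):
        found.append("unusual-login-location")
    return found

event = {"user": "alice", "download_mb": 2048,
         "country": "BR", "usual_countries": ["US", "CA"]}
print(alerts_for(event))   # ['large-download', 'unusual-login-location']
```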

    7. Set Up Regular Audits and Compliance Checks
    Multi-cloud setups are great for flexibility, but complex when it comes to compliance. Each platform has its own set of controls and certifications. Managing them all can be overwhelming.
    That’s why audits matter.
    Run security checks on a regular schedule—monthly, quarterly, or after every major change. Look for misconfigured permissions, missing patches, or unsecured data. And document everything.
    Also, make sure your tools help meet regulations like GDPR, HIPAA, or PCI DSS. Automated compliance scans can help stay on top of this.
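    A scheduled audit pass can be as simple as iterating over the inventory and flagging known-bad configurations. The resource shape below is hypothetical; a real scan would query each provider's APIs for the actual settings.

```python
def audit(resources):
    """Return (resource, issue) findings for common misconfigurations."""
    findings = []
    for r in resources:
        if r.get("public") and "customer-pii" in r.get("data_classes", []):
            findings.append((r["name"], "public resource holds PII"))
        if not r.get("encrypted", False):
            findings.append((r["name"], "encryption at rest disabled"))
    return findings

resources = [
    {"name": "reports-bucket", "public": True,
     "data_classes": ["customer-pii"], "encrypted": True},
    {"name": "logs-store", "public": False, "data_classes": [], "encrypted": False},
]
for name, issue in audit(resources):
    print(f"{name}: {issue}")
```

    Documenting each run is then just a matter of persisting the findings list with a timestamp.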

    8. Prevent Data Loss with Smart Policies
    Sensitive data is always at risk. Employees might share it by mistake. Attackers might try to steal it. That’s where Data Loss Prevention (DLP) comes in.
    DLP tools block unauthorized sharing of personal data, financial records, or internal files. You can create rules like “Don’t send customer SSNs over email” or “Block uploads of credit card data to personal drives.”
    DLP also supports compliance and helps avoid lawsuits or fines when accidents happen.
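    Rules like “Don’t send customer SSNs over email” boil down to pattern checks on outbound content. The sketch below uses deliberately simplified regexes; production DLP tools use validated detectors (for example, Luhn checks on card numbers) rather than bare patterns.

```python
import re

RULES = {
    "ssn":  re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # simplified SSN shape
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),   # simplified 16-digit card shape
}

def dlp_verdict(text: str) -> list:
    """Return the names of every rule the text violates (empty means allow)."""
    return [name for name, pat in RULES.items() if pat.search(text)]

print(dlp_verdict("My SSN is 123-45-6789"))        # ['ssn']
print(dlp_verdict("Quarterly numbers attached"))   # []
```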

    9. Automate Where You Can
    Manual work slows things down, and mistakes happen. That’s why automation is key in cloud security.
    Automate things like:

    Patch management
    Access reviews
    Backup schedules
    Security alerts

    Automation speeds up your response time. It also frees your security team to focus on serious issues, not routine tasks.

    10. Centralized Security Control
    One major downside of multi-cloud is a lack of visibility. If you’re jumping between different tools for each cloud, you miss things.
    Instead, use a centralized security management system. It collects data from all clouds, shows risk levels, flags issues, and helps you fix them from one place.
    This unified view makes a huge difference. It helps you react faster and stay ahead of threats.

    Final Thought
    Cloud providers have made data storage and computing easier than ever. But with great power comes risk. Using multiple clouds gives more choice, but also more responsibility.
    Most businesses today are not ready. Only 15% have a mature multi-cloud security plan, according to the 2023 Cisco Cybersecurity Readiness Index. That means many are exposed.
    The good news? You can fix this. Start with simple steps. Know what you use. Lock it down. Watch it closely. Keep improving. And above all, treat cloud security not as a technical box to check, but as something critical to your business.
    Because in today’s world, a single breach can shut you down. And that’s too big a risk to ignore.
    JUSTTOTALTECH.COM
  • Can AI Mistakes Lead to Real Legal Exposure?

    Posted on: June 5, 2025

    By

    Tech World Times

    AI 


    Artificial intelligence tools now touch nearly every corner of modern business, from customer service and marketing to supply chain management and HR. These powerful technologies promise speed, accuracy, and insight, but their missteps can cause more than temporary inconvenience. A single AI-driven error can result in regulatory investigations, civil lawsuits, or public scandals that threaten the foundation of a business. Understanding how legal exposure arises from AI mistakes—and how a skilled attorney protects your interests—is no longer an option, but a requirement for any forward-thinking business owner.
    What Types of AI Errors Create Legal Liability?
    AI does not think or reason like a human; it follows code and statistical patterns, sometimes with unintended results. These missteps can create a trail of legal liability for any business owner. For example, an online retailer’s AI recommends discriminatory pricing, sparking allegations of unfair trade practices. An HR department automates hiring decisions with AI, only to face lawsuits for violating anti-discrimination laws. Even an AI-driven chatbot, when programmed without proper safeguards, can inadvertently give health advice or misrepresent product claims—exposing the company to regulatory penalties. Cases like these are regularly reported in legal news as businesses discover the high cost of digital shortcuts.
    When Is a Business Owner Liable for AI Mistakes?
    Liability rarely rests with the software developer or the tool itself. Courts and regulators expect the business to monitor, supervise, and, when needed, override AI decisions. Suppose a financial advisor uses AI to recommend investments, but the algorithm suggests securities that violate state regulations. Even if the AI was “just following instructions,” the advisor remains responsible for client losses. Similarly, a marketing team cannot escape liability if their AI generates misleading advertising. The bottom line: outsourcing work to AI does not outsource legal responsibility.
    How Do AI Errors Harm Your Reputation and Operations?
    AI mistakes can leave lasting marks on a business’s reputation, finances, and operations. A logistics firm’s route-optimization tool creates data leaks that breach customer privacy and trigger costly notifications. An online business suffers public backlash after an AI-powered customer service tool sends offensive responses to clients. Such incidents erode public trust, drive customers to competitors, and divert resources into damage control rather than growth. Worse, compliance failures can result in penalties or shutdown orders, putting the entire enterprise at risk.
    What Steps Reduce Legal Risk From AI Deployments?
    Careful planning and continuous oversight keep AI tools working for your business—not against it. Compliance is not a “set it and forget it” matter. Proactive risk management transforms artificial intelligence from a liability into a valuable asset.
    Routine audits, staff training, and transparent policies form the backbone of safe, effective AI use in any organization.
    You should review these AI risk mitigation strategies below.

    Implement Manual Review of Sensitive Outputs: Require human approval for high-risk tasks, such as legal filings, financial transactions, or customer communications. A payroll company’s manual audits prevented the accidental overpayment of employees by catching AI-generated errors before disbursement.
    Update AI Systems for Regulatory Changes: Stay ahead of new laws and standards by regularly reviewing AI algorithms and outputs. An insurance brokerage avoided regulatory fines by updating their risk assessment models as privacy laws evolved.
    Document Every Incident and Remediation Step: Keep records of AI errors, investigations, and corrections. A healthcare provider’s transparency during a patient data mix-up helped avoid litigation and regulatory penalties.
    Limit AI Access to Personal and Sensitive Data: Restrict the scope and permissions of AI tools to reduce the chance of data misuse. A SaaS provider used data minimization techniques, lowering the risk of exposure in case of a system breach.
    Consult With Attorneys for Custom Policies and Protocols: Collaborate with experienced attorneys to design, review, and update AI compliance frameworks.
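    The first strategy in the list above, manual review of sensitive outputs, amounts to routing high-risk AI outputs to a human approval queue instead of auto-executing them. The risk categories below are invented for illustration.

```python
# Categories that must never be auto-executed; illustrative, not exhaustive.
HIGH_RISK = {"legal-filing", "financial-transaction", "customer-communication"}

review_queue = []

def dispatch(output: dict) -> str:
    """Queue high-risk outputs for human approval; pass the rest through."""
    if output["category"] in HIGH_RISK:
        review_queue.append(output)   # held until a person signs off
        return "queued-for-review"
    return "auto-approved"

print(dispatch({"category": "financial-transaction", "text": "Pay invoice"}))  # queued-for-review
print(dispatch({"category": "draft-blog-post", "text": "..."}))                # auto-approved
```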

    How Do Attorneys Shield Your Business From AI Legal Risks?
    Attorneys provide a critical safety net as AI integrates deeper into business operations. They draft tailored contracts, establish protocols for monitoring and escalation, and assess risks unique to your industry. In the event of an AI-driven incident, legal counsel investigates the facts, manages communication with regulators, and builds a robust defense. By providing training, ongoing guidance, and crisis management support, attorneys ensure that innovation doesn’t lead to exposure—or disaster. With the right legal partner, businesses can harness AI’s power while staying firmly on the right side of the law.
    Tech World Times (TWT) is a global collective focusing on the latest tech news and trends in blockchain, Fintech, Development & Testing, AI and Startups. For guest post inquiries, contact techworldtimes@gmail.com.
  • Meta and Yandex Spying on Android Users Through Localhost Ports: The Dying State of Online Privacy


    Published: June 4, 2025

    Key Takeaways

    Meta and Yandex have been caught secretly listening on localhost ports and using them to transfer sensitive data from Android devices.
    The corporations use Meta Pixel and Yandex Metrica scripts to transfer cookies from browsers to local apps. Using incognito mode or a VPN can’t fully protect users against it.
    A Meta spokesperson has called this a ‘miscommunication,’ which seems to be an attempt to underplay the situation.

    Wake up, Android folks! A new privacy scandal has hit your area of town. According to a new report led by researchers at Radboud University, Meta and Yandex have been listening on localhost ports to link your web browsing data with your identity and collect personal information without your consent.
    The companies use Meta Pixel and the Yandex Metrica scripts, which are embedded on 5.8 million and 3 million websites, respectively, to connect with their native apps on Android devices through localhost sockets.
    This creates a communication path between the cookies in your browser and the local apps, establishing a channel for transferring personal information from your device.
    Also, you are mistaken if you think using your browser’s incognito mode or a VPN can protect you. Zuckerberg’s latest method of data harvesting can’t be overcome by tweaking any privacy or cookie settings or by using a VPN or incognito mode.
    How Does It Work?
    Here’s the method used by Meta to spy on Android devices:

    As many as 22% of the top 1 million websites contain Meta Pixel – a tracking code that helps website owners measure ad performance and track user behaviour.
    When Meta Pixel loads, it creates a special cookie called _fbp, which is supposed to be a first-party cookie. This means no other third party, including Meta apps themselves, should have access to this cookie. The _fbp cookie identifies your browser whenever you visit a website, meaning it can identify which person is accessing which websites.
    However, Meta, being Meta, found a loophole around this. Whenever you run Facebook or Instagram on your Android device, they can open listening ports, specifically a TCP port (12387 or 12388) and a UDP port (the first unoccupied port in the 12580-12585 range), on your phone in the background. 
    Whenever you load a website on your browser, the Meta Pixel uses WebRTC with SDP Munging, which essentially hides the _fbp cookie value inside the SDP message before being transmitted to your phone’s localhost. 
    Since Facebook and Instagram are already listening on these ports, they receive the _fbp cookie value and can easily tie your identity to the website you're visiting. Remember, Facebook and Instagram already have your identification details since you're always logged in on these platforms.

    The report also says that Meta can link all the _fbp cookies received from various websites to your ID. Simply put, Meta knows which person is viewing which set of websites.
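    The core trick is plumbing, not cryptography: any app holding a localhost listening socket can receive data from any page the browser loads on the same device. Here is a minimal Python sketch of that channel. The cookie value is hypothetical, an ephemeral port stands in for the fixed ports the report cites, and a raw TCP send stands in for the actual WebRTC/SDP-munging hop:

    ```python
    import socket
    import threading

    def app_listener(ready, result):
        """Stands in for a native app that quietly opens a localhost
        listening socket in the background."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", 0))            # ephemeral port for the demo
        srv.listen(1)
        result["port"] = srv.getsockname()[1]
        ready.set()                           # listener is up; page may connect
        conn, _ = srv.accept()
        result["cookie"] = conn.recv(1024).decode()
        conn.close()
        srv.close()

    def page_script(port, fbp_value):
        """Stands in for in-page tracking code forwarding a browser cookie
        to the localhost port."""
        cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        cli.connect(("127.0.0.1", port))
        cli.sendall(fbp_value.encode())
        cli.close()

    result, ready = {}, threading.Event()
    t = threading.Thread(target=app_listener, args=(ready, result))
    t.start()
    ready.wait()
    page_script(result["port"], "_fbp=fb.1.1700000000.123456789")  # hypothetical value
    t.join()
    print(result["cookie"])  # the app now holds the browser-side identifier
    ```

    This is also why incognito mode and VPNs are powerless here: the transfer never leaves the device, so there is no network hop for them to intercept.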
    Yandex also uses a similar method to harvest your personal data.

    Whenever you open a Yandex app, such as Yandex Maps, Yandex Browser, Yandex Search, or Navigator, it opens up ports like 29009, 30102, 29010, and 30103 on your phone. 
    When you visit a website that contains the Yandex Metrica Script, Yandex’s version of Meta Pixel, the script sends requests to Yandex servers containing obfuscated parameters. 
    These parameters are then relayed to localhost via HTTP and HTTPS requests addressed either to the IP address 127.0.0.1 directly or to the yandexmetrica.com domain, which quietly resolves to 127.0.0.1.
    Now, the Yandex Metrica SDK in the Yandex apps receives these parameters and sends device identifiers, such as an Android Advertising ID, UUIDs, or device fingerprints. This entire message is encrypted to hide what it contains.
    The Yandex Metrica Script receives this info and sends it back to the Yandex servers. Just like Meta, Yandex can also tie your website activity to the device information shared by the SDK.

    Meta’s Infamous History with Privacy Norms
    This is not something new or unthinkable that Meta has done. The Mark Zuckerberg-led social media giant has a history of such privacy violations. 
    For instance, in 2024, the company was accused of collecting biometric data from Texas users without their express consent. The company settled the lawsuit by paying $1.4B. 
    One of the most famous cases was the Cambridge Analytica scandal in 2018, where a political consulting firm accessed the private data of 87 million Facebook users without consent. The FTC fined Meta $5B for privacy violations, alongside a $100M settlement with the US Securities and Exchange Commission. 
    Meta Pixel has also come under scrutiny before, when it was accused of collecting sensitive health information from hospital websites. In another case dating back to 2012, Meta was accused of tracking users even after they logged out of their Facebook accounts. In that case, Meta paid $90M and promised to delete the collected data. 
    In 2024, South Korea also fined Meta $15M for inappropriately collecting personal data, such as sexual orientation and political beliefs, of 980K users.
    In September 2024, Meta was fined $101.6M by the Irish Data Protection Commission for inadvertently storing user passwords in plain text in such a way that employees could search for them. The passwords were not encrypted and were essentially exposed internally.
    So, the latest scandal isn’t entirely out of character for Meta. It has been finding ways to collect your data ever since its incorporation, and it seems like it will continue to do so, regardless of the regulations and safeguards in place.
    That said, Meta’s recent tracking method is insanely dangerous because there’s no safeguard around it. Even if you visit websites in incognito mode or use a VPN, Meta Pixel can still track your activities. 
    The past lawsuits also show a clear pattern: Meta doesn't fight a lawsuit to the end to try to win it. It either accepts the fine or settles with monetary compensation. This goes to show that it passively accepts, and even 'owns', the illegitimate tracking methods it has been using for decades. It's quite possible that top management views these fines and penalties as a cost of collecting data.
    Meta’s Timid Response
    Meta’s response claims that there’s some ‘miscommunication’ regarding Google policies. However, the method used in the aforementioned tracking scandal isn’t something that can simply happen due to ‘faulty design’ or miscommunication. 

    We are in discussions with Google to address a potential miscommunication regarding the application of their policies – Meta Spokesperson

    This kind of unethical tracking method has to be deliberately designed by engineers for it to work at such a large scale. While Meta is still trying to underplay the situation, it has paused the 'feature' (yep, that's what they are calling it) as of now. The report also claims that as of June 3, Facebook and Instagram are no longer actively listening on the new ports.
    Here’s what will possibly happen next:

    A lawsuit may be filed based on the report.
    An investigating committee might be formed to question the matter.
    The company will come up with lame excuses, such as misinterpretation or miscommunication of policy guidelines.
    Meta will eventually settle the lawsuit or bear the fine with pride, like it has always done. 

    The regulatory authorities are apparently chasing a rat that finds new holes to hide every day. Companies like Meta and Yandex seem to be one step ahead of these regulations and have mastered the art of finding loopholes.
    More than legislative technicalities, it's the companies' ethics that incidents like this lay bare. The intent of these regulations is to protect personal information, and the fact that Meta and Yandex blatantly circumvent them in spirit shows the horrific state of corporate capitalism these companies embody.

    Krishi is a seasoned tech journalist with over four years of experience writing about PC hardware, consumer technology, and artificial intelligence.  Clarity and accessibility are at the core of Krishi’s writing style.
    He believes technology writing should empower readers—not confuse them—and he’s committed to ensuring his content is always easy to understand without sacrificing accuracy or depth.
    Over the years, Krishi has contributed to some of the most reputable names in the industry, including Techopedia, TechRadar, and Tom’s Guide. A man of many talents, Krishi has also proven his mettle as a crypto writer, tackling complex topics with both ease and zeal. His work spans various formats—from in-depth explainers and news coverage to feature pieces and buying guides. 
    Behind the scenes, Krishi operates from a dual-monitor setup (including a 29-inch LG UltraWide) that's always buzzing with news feeds, technical documentation, and research notes, as well as the occasional gaming sessions that keep him fresh. 
    Krishi thrives on staying current, always ready to dive into the latest announcements, industry shifts, and their far-reaching impacts.  When he's not deep into research on the latest PC hardware news, Krishi would love to chat with you about day trading and the financial markets—oh! And cricket, as well.

    View all articles by Krishi Chowdhary

    Our editorial process

    The Tech Report editorial policy is centered on providing helpful, accurate content that offers real value to our readers. We only work with experienced writers who have specific knowledge in the topics they cover, including latest developments in technology, online privacy, cryptocurrencies, software, and more. Our editorial policy ensures that each topic is researched and curated by our in-house editors. We maintain rigorous journalistic standards, and every article is 100% written by real authors.

  • The Legal Accountability of AI-Generated Deepfakes in Election Misinformation

    How Deepfakes Are Created

    Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video or audio of a target person. The two predominant AI architectures are generative adversarial networks (GANs) and autoencoders. A GAN consists of a generator network that produces synthetic images and a discriminator network that tries to distinguish fakes from real data. Through iterative training, the generator learns to produce outputs that increasingly fool the discriminator¹. Autoencoder-based tools similarly learn to encode a target face and then decode it onto a source video. In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping (one estimate suggests DeepFaceLab was used for over 95% of known deepfake videos)². Voice-cloning tools (often built on similar AI principles) can mimic a person’s speech from minutes of audio. Commercial platforms like Synthesia allow text-to-video avatars (turning typed scripts into lifelike “spokespeople”), which have already been misused in disinformation campaigns³. Even mobile apps (e.g. FaceApp, Zao) let users do basic face swaps in minutes⁴. In short, advances in GANs and related models make deepfakes cheaper and easier to generate than ever.

    Diagram of a generative adversarial network (GAN): A generator network creates fake images from random input and a discriminator network distinguishes fakes from real examples. Over time the generator improves until its outputs “fool” the discriminator⁵
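    The generator–discriminator loop described above can be sketched in a few lines. The toy below is a hypothetical one-dimensional “GAN” with hand-derived gradients, not a production deepfake pipeline (real systems use deep convolutional networks in frameworks like PyTorch): a one-parameter generator learns to place its fakes where a logistic discriminator can no longer tell them from real samples.

```python
import math
import random

# Toy 1-D adversarial training loop -- a minimal sketch of the GAN idea,
# with hand-derived gradients instead of a deep-learning framework.
random.seed(0)

REAL_MEAN = 4.0   # "real data": samples drawn near 4.0
theta = 0.0       # generator parameter: fakes are theta + noise
a, b = 1.0, 0.0   # discriminator: D(x) = sigmoid(a * (x - b))
lr = 0.05

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def discriminate(x):
    """Probability the discriminator assigns to x being real."""
    return sigmoid(a * (x - b))

for _ in range(3000):
    x_real = random.gauss(REAL_MEAN, 0.5)
    x_fake = theta + random.gauss(0.0, 0.5)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = discriminate(x_real)
    d_fake = discriminate(x_fake)
    # Gradients of the binary cross-entropy loss w.r.t. a and b.
    grad_a = -(1 - d_real) * (x_real - b) + d_fake * (x_fake - b)
    grad_b = (1 - d_real) * a - d_fake * a
    a -= lr * grad_a
    b -= lr * grad_b

    # Generator step: move fakes toward where D currently says "real"
    # (the non-saturating generator update).
    d_fake = discriminate(theta + random.gauss(0.0, 0.5))
    theta += lr * (1 - d_fake) * a

print(round(theta, 2))  # typically drifts toward REAL_MEAN (run-dependent)
```

    The same adversarial dynamic, scaled up to image-generating networks, is what makes deepfake outputs progressively harder to distinguish from real footage.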

    During creation, a deepfake algorithm is typically trained on a large dataset of real images or audio from the target. The more varied and high-quality the training data, the more realistic the deepfake. The output often then undergoes post-processing (color adjustments, lip-syncing refinements) to enhance believability¹. Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot inconsistencies (blinking irregularities, audio artifacts or metadata mismatches) that betray a synthetic origin⁵. Authentication embeds markers before dissemination – for example, invisible watermarks or cryptographically signed metadata indicating authenticity⁶. The EU AI Act will soon mandate that major AI content providers embed machine-readable “watermark” signals in synthetic media⁷. However, as GAO notes, detection is an arms race – even a marked deepfake can sometimes evade notice – and labels alone don’t stop false narratives from spreading⁸⁹.
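    The authentication side is straightforward to prototype with standard cryptographic primitives. The sketch below is a hypothetical illustration (not any provider’s actual scheme, and the key name is invented): it binds a hash of the media bytes and its provenance metadata under an HMAC, so a verifier holding the shared key can detect any later edit to either.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # hypothetical shared signing key

def sign_media(media_bytes: bytes, metadata: dict) -> dict:
    """Attach provenance metadata plus an HMAC over content + metadata."""
    payload = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "metadata": metadata,
    }
    serialized = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, serialized,
                                    hashlib.sha256).hexdigest()
    return payload

def verify_media(media_bytes: bytes, signed: dict) -> bool:
    """Re-derive the signature; any edit to content or metadata breaks it."""
    claim = {"sha256": signed["sha256"], "metadata": signed["metadata"]}
    serialized = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, serialized, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, signed["signature"])
            and hashlib.sha256(media_bytes).hexdigest() == signed["sha256"])

video = b"\x00\x01raw-video-bytes"
record = sign_media(video, {"source": "Campaign HQ", "ai_generated": False})
assert verify_media(video, record)             # untouched media verifies
assert not verify_media(video + b"x", record)  # any alteration is detected
```

    Production provenance schemes (such as cryptographically signed metadata standards) use public-key signatures rather than a shared secret, but the tamper-evidence principle is the same.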

    Deepfakes in Recent Elections: Examples

    Deepfakes and AI-generated imagery have already made headlines in election cycles around the world. In the 2024 U.S. primary season, a digitally altered audio robocall mimicked President Biden’s voice urging Democrats not to vote in the New Hampshire primary. The caller (“Susan Anderson”) was later fined $6 million by the FCC and indicted under existing telemarketing laws¹⁰¹¹. (Importantly, FCC rules on robocalls applied regardless of AI: the perpetrator could have used a voice actor or recording instead.) Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift endorsed his campaign, using AI-generated images of Swift in “Swifties for Trump” shirts¹². The posts sparked media uproar, though analysts noted the same effect could have been achieved without AI (e.g., by photoshopping text on real images)¹². Similarly, Elon Musk’s X platform carried AI-generated clips, including a parody “ad” depicting Vice-President Harris’s voice via an AI clone¹³.

    Beyond the U.S., deepfake-like content has appeared globally. In Indonesia’s 2024 presidential election, a video surfaced on social media in which a convincingly generated image of the late President Suharto appeared to endorse the candidate of the Golkar Party. Days later, the endorsed candidate (Suharto’s son-in-law) won the presidency¹⁴. In Bangladesh, a viral deepfake video superimposed the face of opposition leader Rumeen Farhana onto a bikini-clad body – an incendiary fabrication designed to discredit her in the conservative Muslim-majority society¹⁵. Moldova’s pro-Western President Maia Sandu has been repeatedly targeted by AI-driven disinformation; one deepfake video falsely showed her resigning and endorsing a Russian-friendly party, apparently to sow distrust in the electoral process¹⁶. Even in Taiwan (amid tensions with China), a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements – stoking confusion ahead of Taiwanese elections¹⁷. In Slovakia’s recent campaign, AI-generated audio mimicking the liberal party leader suggested he plotted vote-rigging and beer-price hikes – instantly spreading on social media just days before the election¹⁸. These examples show that deepfakes have touched diverse polities (from Bangladesh and Indonesia to Moldova, Slovakia, India and beyond), often aiming to undermine candidates or confuse voters¹⁵¹⁸.

    Notably, many of the most viral “deepfakes” in 2024 were actually circulated as obvious memes or claims, rather than subtle deceptions. Experts observed that outright undetectable AI deepfakes were relatively rare; more common were AI-generated memes plainly shared by partisans, or cheaply doctored “cheapfakes” made with basic editing tools¹³¹⁹. For instance, social media was awash with memes of Kamala Harris in Soviet garb or of Black Americans holding Trump signs¹³, but these were typically used satirically, not meant to be secretly believed. Nonetheless, even unsophisticated fakes can sway opinion: a U.S. study found that false presidential ads (not necessarily AI-made) did change voter attitudes in swing states. In sum, deepfakes are a real and growing phenomenon in election campaigns²⁰²¹ worldwide – a trend taken seriously by voters and regulators alike.

    U.S. Legal Framework and Accountability

    In the U.S., deepfake creators and distributors of election misinformation face a patchwork of legal tools, but no single comprehensive federal “deepfake law.” Existing laws relevant to disinformation include statutes against impersonating government officials, electioneering rules (such as the Bipartisan Campaign Reform Act, which requires disclaimers on political ads), and targeted statutes like criminal electioneering communications. In some cases ordinary laws have been stretched: the New Hampshire robocall was pursued under the Telephone Consumer Protection Act and mail/telemarketing fraud provisions, resulting in the $6 million fine and a criminal charge. Similarly, voice impostors can potentially violate laws against “false advertising” or “unlawful corporate communications.” However, these laws were enacted before AI, and litigators have warned they often do not fit neatly. For example, deceptive deepfake claims not tied to a specific victim do not easily fit into defamation or privacy torts. Voter intimidation laws (prohibiting threats or coercion) also leave a gap for non-threatening falsehoods about voting logistics or endorsements.

    Recognizing these gaps, some courts and agencies are invoking other theories. The U.S. Department of Justice has recently charged individuals under broad fraud statutes (e.g. for a plot to impersonate an aide to swing votes in 2020), and state attorneys general have considered treating deepfake misinformation as interference with voting rights. Notably, the Federal Election Commission (FEC) is preparing to enforce new rules: in April 2024 it issued an advisory opinion limiting “non-candidate electioneering communications” that use falsified media, effectively requiring that political ads use only real images of the candidate. If finalized, that would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never did. Similarly, the Federal Trade Commission (FTC) and Department of Justice (DOJ) have signaled that purely commercial deepfakes could violate consumer protection or election laws (for example, liability for mass false impersonation or for foreign-funded electioneering).

    U.S. Legislation and Proposals

    Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Act (H.R. 5586 in the 118th Congress) would, among other things, impose a disclosure requirement: political ads featuring a manipulated media likeness would need clear disclaimers identifying the content as synthetic. It would also increase penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule for all federal and state campaigns. The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categories (e.g. false claims about the time, place or manner of voting) while carving out parody and news coverage.

    At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters (though Florida’s law exempts parody). Some states (like Texas) define “deepfake” in statute and allow candidates to sue or revoke candidacies of violators. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints (e.g. Minnesota’s 2023 law was challenged for threatening injunctions against anyone “reasonably believed” to have violated it). Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored. Already, Texas and Virginia statutes are under legal review, and Elon Musk’s X has sued to have California’s law (which requires platforms to label or block deepfakes) declared unconstitutional. In practice, most lawsuits have so far centered on defamation or intellectual property (for instance, a celebrity suing over a botched celebrity-deepfake video), rather than election-focused statutes.

    Policy Recommendations: Balancing Integrity and Speech

    Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. This could be a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” the use of AI, and it alerts audiences to treat the content with skepticism.

    Outright bans on all deepfakes would likely violate free speech, but targeted bans on specific harms (e.g. automated phone calls impersonating voters, or videos claiming false polling information) may be defensible. Indeed, Florida already penalizes misuse of recordings in voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. federal proposals and EU law generally condition fines on the “appearance of fraud” or deception.

    Technical solutions can complement laws. Watermarking original media (as encouraged by the EU AI Act) could deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection – some supported by government research grants – should be deployed by fact-checkers and social platforms. Making detection datasets publicly available (e.g. the MIT OpenDATATEST) helps improve AI models that spot fakes. International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have both recently committed to fighting election interference via AI, which may lead to joint norms or rapid-response teams.
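    As a toy illustration of the watermarking idea (a generic least-significant-bit scheme for teaching purposes, not the EU Act’s mandated mechanism or any production tool, and the “ACME-2024” identifier is invented), a short provenance tag can be hidden in the low bits of raw image bytes and recovered later without visibly changing the image:

```python
def embed_watermark(pixels: bytes, mark: bytes) -> bytes:
    """Hide `mark` in the least significant bit of successive bytes."""
    # Expand the mark into individual bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read `length` watermark bytes back out of the low bits."""
    mark = bytearray()
    for i in range(length):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (pixels[8 * i + bit_index] & 1)
        mark.append(byte)
    return bytes(mark)

raw = bytes(range(256))  # stand-in for raw pixel data
stamped = embed_watermark(raw, b"ACME-2024")
assert extract_watermark(stamped, 9) == b"ACME-2024"
```

    Naive LSB marks like this are easily destroyed by re-encoding or cropping, which is why robust schemes spread the signal across frequency-domain coefficients and pair it with the signed metadata discussed earlier.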

    Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns to teach voters to question sensational media, and a robust independent press to debunk falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As Georgia Tech’s Sean Parker quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Thus policies should aim to deter malicious use without unduly chilling innovation or satire.

    References:

    https://www.security.org/resources/deepfake-statistics/

    https://www.wired.com/story/synthesia-ai-deepfakes-it-control-riparbelli/

    https://www.gao.gov/products/gao-24-107292

    https://technologyquotient.freshfields.com/post/102jb19/eu-ai-act-unpacked-8-new-rules-on-deepfakes

    https://knightcolumbia.org/blog/we-looked-at-78-election-deepfakes-political-misinformation-is-not-an-ai-problem

    https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections

    https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd

    https://www.lawfaremedia.org/article/new-and-old-tools-to-tackle-deepfakes-and-election-lies-in-2024

    https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena

    https://firstamendment.mtsu.edu/article/political-deepfakes-and-elections/

    https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation

    https://law.unh.edu/sites/default/files/media/2022/06/nagumotu_pp113-157.pdf

    /.

    /.

    .

    The post The Legal Accountability of AI-Generated Deepfakes in Election Misinformation appeared first on MarkTechPost.
    #legal #accountability #aigenerated #deepfakes #election
    The Legal Accountability of AI-Generated Deepfakes in Election Misinformation
    How Deepfakes Are Created Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video or audio of a target person. The two predominant AI architectures are generative adversarial networksand autoencoders. A GAN consists of a generator network that produces synthetic images and a discriminator network that tries to distinguish fakes from real data. Through iterative training, the generator learns to produce outputs that increasingly fool the discriminator¹. Autoencoder-based tools similarly learn to encode a target face and then decode it onto a source video. In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping². Voice-cloning toolscan mimic a person’s speech from minutes of audio. Commercial platforms like Synthesia allow text-to-video avatars, which have already been misused in disinformation campaigns³. Even mobile appslet users do basic face swaps in minutes⁴. In short, advances in GANs and related models make deepfakes cheaper and easier to generate than ever. Diagram of a generative adversarial network: A generator network creates fake images from random input and a discriminator network distinguishes fakes from real examples. Over time the generator improves until its outputs “fool” the discriminator⁵ During creation, a deepfake algorithm is typically trained on a large dataset of real images or audio from the target. The more varied and high-quality the training data, the more realistic the deepfake. The output often then undergoes post-processingto enhance believability¹. Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot inconsistenciesthat betray a synthetic origin⁵. Authentication embeds markers before dissemination – for example, invisible watermarks or cryptographically signed metadata indicating authenticity⁶. 
The EU AI Act will soon mandate that major AI content providers embed machine-readable “watermark” signals in synthetic media⁷. However, as GAO notes, detection is an arms race – even a marked deepfake can sometimes evade notice – and labels alone don’t stop false narratives from spreading⁸⁹. Deepfakes in Recent Elections: Examples Deepfakes and AI-generated imagery already have made headlines in election cycles around the world. In the 2024 U.S. primary season, a digitally-altered audio robocall mimicked President Biden’s voice urging Democrats not to vote in the New Hampshire primary. The callerwas later fined million by the FCC and indicted under existing telemarketing laws¹⁰¹¹.Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift endorsed his campaign, using AI-generated images of Swift in “Swifties for Trump” shirts¹². The posts sparked media uproar, though analysts noted the same effect could have been achieved without AI¹². Similarly, Elon Musk’s X platform carried AI-generated clips, including a parody “Ad” depicting Vice-President Harris’s voice via an AI clone¹³. Beyond the U.S., deepfake-like content has appeared globally. In Indonesia’s 2024 presidential election, a video surfaced on social media in which a convincingly generated image of the late President Suharto appeared to endorse the candidate of the Golkar Party. Days later, the endorsed candidatewon the presidency¹⁴. In Bangladesh, a viral deepfake video superimposed the face of opposition leader Rumeen Farhana onto a bikini-clad body – an incendiary fabrication designed to discredit her in the conservative Muslim-majority society¹⁵. Moldova’s pro-Western President Maia Sandu has been repeatedly targeted by AI-driven disinformation; one deepfake video falsely showed her resigning and endorsing a Russian-friendly party, apparently to sow distrust in the electoral process¹⁶. 
Even in Taiwan, a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements – stoking confusion ahead of Taiwanese elections¹⁷. In Slovakia’s recent campaign, AI-generated audio mimicking the liberal party leader suggested he plotted vote-rigging and beer-price hikes – instantly spreading on social media just days before the election¹⁸. These examples show that deepfakes have touched diverse polities, often aiming to undermine candidates or confuse voters¹⁵¹⁸. Notably, many of the most viral “deepfakes” in 2024 were actually circulated as obvious memes or claims, rather than subtle deceptions. Experts observed that outright undetectable AI deepfakes were relatively rare; more common were AI-generated memes plainly shared by partisans, or cheaply doctored “cheapfakes” made with basic editing tools¹³¹⁹. For instance, social media was awash with memes of Kamala Harris in Soviet garb or of Black Americans holding Trump signs¹³, but these were typically used satirically, not meant to be secretly believed. Nonetheless, even unsophisticated fakes can sway opinion: a U.S. study found that false presidential adsdid change voter attitudes in swing states. In sum, deepfakes are a real and growing phenomenon in election campaigns²⁰²¹ worldwide – a trend taken seriously by voters and regulators alike. U.S. Legal Framework and Accountability In the U.S., deepfake creators and distributors of election misinformation face a patchwork of tools, but no single comprehensive federal “deepfake law.” Existing laws relevant to disinformation include statutes against impersonating government officials, electioneering, and targeted statutes like criminal electioneering communications. In some cases ordinary laws have been stretched: the NH robocall used the Telephone Consumer Protection Act and mail/telemarketing fraud provisions, resulting in the M fine and a criminal charge. 
Similarly, voice impostors can potentially violate laws against “false advertising” or “unlawful corporate communications.” However, these laws were enacted before AI, and litigators have warned they often do not fit neatly. For example, deceptive deepfake claims not tied to a specific victim do not easily fit into defamation or privacy torts. Voter intimidation lawsalso leave a gap for non-threatening falsehoods about voting logistics or endorsements. Recognizing these gaps, some courts and agencies are invoking other theories. The U.S. Department of Justice has recently charged individuals under broad fraud statutes, and state attorneys general have considered deepfake misinformation as interference with voting rights. Notably, the Federal Election Commissionis preparing to enforce new rules: in April 2024 it issued an advisory opinion limiting “non-candidate electioneering communications” that use falsified media, effectively requiring that political ads use only real images of the candidate. If finalized, that would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never did. Similarly, the Federal Trade Commissionand Department of Justicehave signaled that purely commercial deepfakes could violate consumer protection or election laws. U.S. Legislation and Proposals Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Actwould, among other things, impose a disclosure requirement: political ads featuring a manipulated media likeness would need clear disclaimers identifying the content as synthetic. It also increases penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule for all federal and state campaigns. 
The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categorieswhile carving out parody and news coverage. At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters. Some statesdefine “deepfake” in statutes and allow candidates to sue or revoke candidacies of violators. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints. Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored. Already, Texas and Virginia statutes are under legal review, and Elon Musk’s company has sued under California’s lawas unconstitutional. In practice, most lawsuits have so far centered on defamation or intellectual property, rather than election-focused statutes. Policy Recommendations: Balancing Integrity and Speech Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. This could be a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” the use of AI, and it alerts audiences to treat the content with skepticism. Outright bans on all deepfakes would likely violate free speech, but targeted bans on specific harmsmay be defensible. Indeed, Florida already penalizes misuse of recordings in voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. 
federal proposals and EU law generally condition fines on the “appearance of fraud” or deception. Technical solutions can complement laws. Watermarking original mediacould deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection – some supported by government research grants – should be deployed by fact-checkers and social platforms. Making detection datasets publicly availablehelps improve AI models to spot fakes. International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have all recently committed to fighting election interference via AI, which may lead to joint norms or rapid response teams. Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns to teach voters to question sensational media, and a robust independent press to debunk falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As Georgia Tech’s Sean Parker quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Thus policies should aim to deter malicious use without unduly chilling innovation or satire. References: /. /. . . . . . . . /. . . /. /. . The post The Legal Accountability of AI-Generated Deepfakes in Election Misinformation appeared first on MarkTechPost. #legal #accountability #aigenerated #deepfakes #election
    WWW.MARKTECHPOST.COM
    The Legal Accountability of AI-Generated Deepfakes in Election Misinformation
    How Deepfakes Are Created Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video or audio of a target person. The two predominant AI architectures are generative adversarial networks (GANs) and autoencoders. A GAN consists of a generator network that produces synthetic images and a discriminator network that tries to distinguish fakes from real data. Through iterative training, the generator learns to produce outputs that increasingly fool the discriminator¹. Autoencoder-based tools similarly learn to encode a target face and then decode it onto a source video. In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping (one estimate suggests DeepFaceLab was used for over 95% of known deepfake videos)². Voice-cloning tools (often built on similar AI principles) can mimic a person’s speech from minutes of audio. Commercial platforms like Synthesia allow text-to-video avatars (turning typed scripts into lifelike “spokespeople”), which have already been misused in disinformation campaigns³. Even mobile apps (e.g. FaceApp, Zao) let users do basic face swaps in minutes⁴. In short, advances in GANs and related models make deepfakes cheaper and easier to generate than ever. Diagram of a generative adversarial network (GAN): A generator network creates fake images from random input and a discriminator network distinguishes fakes from real examples. Over time the generator improves until its outputs “fool” the discriminator⁵ During creation, a deepfake algorithm is typically trained on a large dataset of real images or audio from the target. The more varied and high-quality the training data, the more realistic the deepfake. The output often then undergoes post-processing (color adjustments, lip-syncing refinements) to enhance believability¹. 
Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot inconsistencies (blinking irregularities, audio artifacts or metadata mismatches) that betray a synthetic origin⁵. Authentication embeds markers before dissemination – for example, invisible watermarks or cryptographically signed metadata indicating authenticity⁶. The EU AI Act will soon mandate that major AI content providers embed machine-readable “watermark” signals in synthetic media⁷. However, as GAO notes, detection is an arms race – even a marked deepfake can sometimes evade notice – and labels alone don’t stop false narratives from spreading⁸⁹. Deepfakes in Recent Elections: Examples Deepfakes and AI-generated imagery already have made headlines in election cycles around the world. In the 2024 U.S. primary season, a digitally-altered audio robocall mimicked President Biden’s voice urging Democrats not to vote in the New Hampshire primary. The caller (“Susan Anderson”) was later fined $6 million by the FCC and indicted under existing telemarketing laws¹⁰¹¹. (Importantly, FCC rules on robocalls applied regardless of AI: the perpetrator could have used a voice actor or recording instead.) Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift endorsed his campaign, using AI-generated images of Swift in “Swifties for Trump” shirts¹². The posts sparked media uproar, though analysts noted the same effect could have been achieved without AI (e.g., by photoshopping text on real images)¹². Similarly, Elon Musk’s X platform carried AI-generated clips, including a parody “Ad” depicting Vice-President Harris’s voice via an AI clone¹³. Beyond the U.S., deepfake-like content has appeared globally. In Indonesia’s 2024 presidential election, a video surfaced on social media in which a convincingly generated image of the late President Suharto appeared to endorse the candidate of the Golkar Party. 
Days later, the endorsed candidate (who is Suharto’s son-in-law) won the presidency¹⁴. In Bangladesh, a viral deepfake video superimposed the face of opposition leader Rumeen Farhana onto a bikini-clad body – an incendiary fabrication designed to discredit her in the conservative Muslim-majority society¹⁵. Moldova’s pro-Western President Maia Sandu has been repeatedly targeted by AI-driven disinformation; one deepfake video falsely showed her resigning and endorsing a Russian-friendly party, apparently to sow distrust in the electoral process¹⁶. Even in Taiwan (amidst tensions with China), a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements – stoking confusion ahead of Taiwanese elections¹⁷. In Slovakia’s recent campaign, AI-generated audio mimicking the liberal party leader suggested he plotted vote-rigging and beer-price hikes – instantly spreading on social media just days before the election¹⁸. These examples show that deepfakes have touched diverse polities (from Bangladesh and Indonesia to Moldova, Slovakia, India and beyond), often aiming to undermine candidates or confuse voters¹⁵¹⁸. Notably, many of the most viral “deepfakes” in 2024 were actually circulated as obvious memes or claims, rather than subtle deceptions. Experts observed that outright undetectable AI deepfakes were relatively rare; more common were AI-generated memes plainly shared by partisans, or cheaply doctored “cheapfakes” made with basic editing tools¹³¹⁹. For instance, social media was awash with memes of Kamala Harris in Soviet garb or of Black Americans holding Trump signs¹³, but these were typically used satirically, not meant to be secretly believed. Nonetheless, even unsophisticated fakes can sway opinion: a U.S. study found that false presidential ads (not necessarily AI-made) did change voter attitudes in swing states. 
In sum, deepfakes are a real and growing phenomenon in election campaigns²⁰²¹ worldwide – a trend taken seriously by voters and regulators alike. U.S. Legal Framework and Accountability In the U.S., deepfake creators and distributors of election misinformation face a patchwork of tools, but no single comprehensive federal “deepfake law.” Existing laws relevant to disinformation include statutes against impersonating government officials, electioneering (such as the Bipartisan Campaign Reform Act, which requires disclaimers on political ads), and targeted statutes like criminal electioneering communications. In some cases ordinary laws have been stretched: the NH robocall used the Telephone Consumer Protection Act and mail/telemarketing fraud provisions, resulting in the $6M fine and a criminal charge. Similarly, voice impostors can potentially violate laws against “false advertising” or “unlawful corporate communications.” However, these laws were enacted before AI, and litigators have warned they often do not fit neatly. For example, deceptive deepfake claims not tied to a specific victim do not easily fit into defamation or privacy torts. Voter intimidation laws (prohibiting threats or coercion) also leave a gap for non-threatening falsehoods about voting logistics or endorsements. Recognizing these gaps, some courts and agencies are invoking other theories. The U.S. Department of Justice has recently charged individuals under broad fraud statutes (e.g. for a plot to impersonate an aide to swing votes in 2020), and state attorneys general have considered deepfake misinformation as interference with voting rights. Notably, the Federal Election Commission (FEC) is preparing to enforce new rules: in April 2024 it issued an advisory opinion limiting “non-candidate electioneering communications” that use falsified media, effectively requiring that political ads use only real images of the candidate. 
If finalized, that would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never did. Similarly, the Federal Trade Commission (FTC) and Department of Justice (DOJ) have signaled that purely commercial deepfakes could violate consumer protection or election laws (for example, liability for mass false impersonation or for foreign-funded electioneering). U.S. Legislation and Proposals Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Act (H.R.5586 in the 118th Congress) would, among other things, impose a disclosure requirement: political ads featuring a manipulated media likeness would need clear disclaimers identifying the content as synthetic. It also increases penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule for all federal and state campaigns. The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categories (e.g. false claims about time/place/manner of voting) while carving out parody and news coverage. At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters (though Florida’s law exempts parody). Some states (like Texas) define “deepfake” in statutes and allow candidates to sue or revoke candidacies of violators. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints (e.g. Minnesota’s 2023 law was challenged for threatening injunctions against anyone “reasonably believed” to violate it). Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored. 
Already, Texas and Virginia statutes are under legal review, and Elon Musk’s X has sued to challenge California’s law (which requires platforms to label or block deepfakes) as unconstitutional. In practice, most lawsuits so far have centered on defamation or intellectual property (for instance, a celebrity suing over a botched celebrity-deepfake video) rather than on election-focused statutes.

Policy Recommendations: Balancing Integrity and Speech

Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label, whether a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” their use of AI, and it alerts audiences to treat the content with skepticism. Outright bans on all deepfakes would likely violate free speech protections, but targeted bans on specific harms (e.g., automated phone calls impersonating candidates, or videos claiming false polling information) may be defensible. Indeed, Florida already penalizes the misuse of recordings for voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. federal proposals and EU law generally condition fines on the “appearance of fraud” or deception.

Technical solutions can complement laws. Watermarking original media (as encouraged by the EU AI Act) could deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection – some supported by government research grants – should be deployed by fact-checkers and social platforms. Making detection datasets publicly available (e.g. the MIT OpenDATATEST) helps improve the AI models that spot fakes.
International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have both recently committed to fighting AI-enabled election interference, which may lead to joint norms or rapid-response teams. Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns that teach voters to question sensational media, and a robust independent press that debunks falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As Georgia Tech’s Sean Parker quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Policies should therefore aim to deter malicious use without unduly chilling innovation or satire.

References:
https://www.security.org/resources/deepfake-statistics/
https://www.wired.com/story/synthesia-ai-deepfakes-it-control-riparbelli/
https://www.gao.gov/products/gao-24-107292
https://technologyquotient.freshfields.com/post/102jb19/eu-ai-act-unpacked-8-new-rules-on-deepfakes
https://knightcolumbia.org/blog/we-looked-at-78-election-deepfakes-political-misinformation-is-not-an-ai-problem
https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections
https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd
https://www.lawfaremedia.org/article/new-and-old-tools-to-tackle-deepfakes-and-election-lies-in-2024
https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
https://firstamendment.mtsu.edu/article/political-deepfakes-and-elections/
https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation
https://law.unh.edu/sites/default/files/media/2022/06/nagumotu_pp113-157.pdf
https://dfrlab.org/2024/10/02/brazil-election-ai-research/
https://dfrlab.org/2024/11/26/brazil-election-ai-deepfakes/
https://freedomhouse.org/article/eu-digital-services-act-win-transparency

The post The Legal Accountability of AI-Generated Deepfakes in Election Misinformation appeared first on MarkTechPost.
  • Why do lawyers keep using ChatGPT?

    Every few weeks, it seems like there’s a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, “bogus AI-generated research.” The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help them with legal research (or worse, writing), the LLM hallucinates cases that don’t exist, and the lawyer is none the wiser until the judge or opposing counsel points out their mistake. In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven’t they stopped?

    The answer mostly comes down to time crunches, and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw have AI integrations now. For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren’t necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don’t understand exactly what LLMs are or how they work. One attorney who was sanctioned in 2023 said he thought ChatGPT was a “super search engine.” It took submitting a filing with fake citations to reveal that it’s more like a random-phrase generator — one that could give you either correct information or convincingly phrased nonsense.

    Andrew Perlman, the dean of Suffolk University Law School, argues many lawyers are using AI tools without incident, and the ones who get caught with fake citations are outliers. “I think that what we’re seeing now — although these problems of hallucination are real, and lawyers have to take it very seriously and be careful about it — doesn’t mean that these tools don’t have enormous possible benefits and use cases for the delivery of legal services,” Perlman said. Legal databases and research systems like Westlaw are incorporating AI services.

    In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they’ve used AI in the past, and 12 percent said they use it regularly. Respondents said they use AI to write summaries of case law and to research “case law, statutes, forms or sample language for orders.” The attorneys surveyed by Thomson Reuters see it as a time-saving tool, and half of those surveyed said “exploring the potential for implementing AI” at work is their highest priority. “The role of a good lawyer is as a ‘trusted advisor’ not as a producer of documents,” one respondent said. But as plenty of recent examples have shown, the documents produced by AI aren’t always accurate, and in some cases aren’t real at all.

    In one recent high-profile case, lawyers for journalist Tim Burke, who was arrested for publishing unaired Fox News footage in 2024, submitted a motion to dismiss the case against him on First Amendment grounds. After discovering that the filing included “significant misrepresentations and misquotations of supposedly pertinent case law and history,” Judge Kathryn Kimball Mizelle, of Florida’s Middle District, ordered the motion to be stricken from the case record. Mizelle found nine hallucinations in the document, according to the Tampa Bay Times.

    Mizelle ultimately let Burke’s lawyers, Mark Rasch and Michael Maddux, submit a new motion. In a separate filing explaining the mistakes, Rasch wrote that he “assumes sole and exclusive responsibility for these errors.” Rasch said he used the “deep research” feature on ChatGPT Pro, which The Verge has previously tested with mixed results, as well as Westlaw’s AI feature.

    Rasch isn’t alone. Lawyers representing Anthropic recently admitted to using the company’s Claude AI to help write an expert witness declaration submitted as part of the copyright infringement lawsuit brought against Anthropic by music publishers. That filing included a citation with an “inaccurate title and inaccurate authors.” Last December, misinformation expert Jeff Hancock admitted he used ChatGPT to help organize citations in a declaration he submitted in support of a Minnesota law regulating deepfake use. Hancock’s filing included “two citation errors, popularly referred to as ‘hallucinations,’” and incorrectly listed authors for another citation.

    These documents do, in fact, matter — at least in the eyes of judges. In a recent case, a California judge presiding over a case against State Farm was initially swayed by arguments in a brief, only to find that the case law cited was completely made up. “I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist,” Judge Michael Wilner wrote.

    Perlman said there are several less risky ways lawyers use generative AI in their work, including finding information in large tranches of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or possible opposing views. “I think in almost every task, there are ways in which generative AI can be useful — not a substitute for lawyers’ judgment, not a substitute for the expertise that lawyers bring to the table, but in order to supplement what lawyers do and enable them to do their work better, faster, and cheaper,” Perlman said.

    But like anyone using AI tools, lawyers who rely on them to help with legal research and writing need to be careful to check the work they produce, Perlman said. Part of the problem is that attorneys often find themselves short on time — an issue he says existed before LLMs came into the picture. “Even before the emergence of generative AI, lawyers would file documents with citations that didn’t really address the issue that they claimed to be addressing,” Perlman said. “It was just a different kind of problem. Sometimes when lawyers are rushed, they insert citations, they don’t properly check them; they don’t really see if the case has been overturned or overruled.” (That said, the cases do at least typically exist.)

    Another, more insidious problem is the fact that attorneys — like others who use LLMs to help with research and writing — are too trusting of what AI produces. “I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be so well crafted,” Perlman said.

    Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT as a junior-level associate. He’s also used ChatGPT to help write legislation. In 2024, he included AI text in part of a bill on deepfakes, having the LLM provide the “baseline definition” of what deepfakes are and then “I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin told The Guardian at the time. Kolodin said he “may have” discussed his use of ChatGPT with the bill’s main Democratic cosponsor but otherwise wanted it to be “an Easter egg” in the bill. The bill passed into law.

    Kolodin — who was sanctioned by the Arizona State Bar in 2020 for his involvement in lawsuits challenging the result of the 2020 election — has also used ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he just checks the citations to make sure they’re real. “You don’t just typically send out a junior associate’s work product without checking the citations,” said Kolodin. “It’s not just machines that hallucinate; a junior associate could read the case wrong, it doesn’t really stand for the proposition cited anyway, whatever. You still have to cite-check it, but you have to do that with an associate anyway, unless they were pretty experienced.”

    Kolodin said he uses both ChatGPT Pro’s “deep research” tool and the LexisNexis AI tool. Like Westlaw, LexisNexis is a legal research tool primarily used by attorneys. Kolodin said that in his experience, LexisNexis has a higher hallucination rate than ChatGPT, whose rate he says has “gone down substantially over the past year.”

    AI use among lawyers has become so prevalent that in 2024, the American Bar Association issued its first guidance on attorneys’ use of LLMs and other AI tools. Lawyers who use AI tools “have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature” of generative AI, the opinion reads. The guidance advises lawyers to “acquire a general understanding of the benefits and risks of the GAI tools” they use — or, in other words, to not assume that an LLM is a “super search engine.” Attorneys should also weigh the confidentiality risks of inputting information relating to their cases into LLMs, and consider whether to tell their clients about their use of LLMs and other AI tools, it states.

    Perlman is bullish on lawyers’ use of AI. “I do think that generative AI is going to be the most impactful technology the legal profession has ever seen and that lawyers will be expected to use these tools in the future,” he said. “I think that at some point, we will stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don’t.”

    Others, including one of the judges who sanctioned lawyers for submitting a filing full of AI-generated hallucinations, are more skeptical. “Even with recent advances,” Wilner wrote, “no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material.”