• New Court Order in Stratasys v. Bambu Lab Lawsuit

    There has been a new update to the ongoing Stratasys v. Bambu Lab patent infringement lawsuit. 
    Both parties have agreed to consolidate the lead and member cases (2:24-CV-00644-JRG and 2:24-CV-00645-JRG) into a single case under Case No. 2:25-cv-00465-JRG. 
    Industrial 3D printing OEM Stratasys filed the request late last month. According to an official court document, Shenzhen-based Bambu Lab did not oppose the motion. Stratasys argued that this non-opposition amounted to the defendants waiving their right to challenge the request under 35 U.S.C. § 299(a).
    On June 2, the U.S. District Court for the Eastern District of Texas, Marshall Division, ordered Bambu Lab to confirm in writing whether it agreed to the proposed case consolidation. The court took this step out of an “abundance of caution” to ensure both parties consented to the procedure before moving forward.
    Bambu Lab submitted its response on June 12, agreeing to the consolidation. The company, along with co-defendants Shenzhen Tuozhu Technology Co., Ltd., Shanghai Lunkuo Technology Co., Ltd., and Tuozhu Technology Limited, waived its rights under 35 U.S.C. § 299(a). The court will now decide whether to merge the cases.
    This followed U.S. District Judge Rodney Gilstrap’s decision last month to deny Bambu Lab’s motion to dismiss the lawsuits. 
    The Chinese desktop 3D printer manufacturer filed the motion in February 2025, arguing the cases were invalid because its US-based subsidiary, Bambu Lab USA, was not named in the original litigation. However, it agreed that the lawsuit could continue in the Austin division of the Western District of Texas, where a parallel case was filed last year. 
    Judge Gilstrap denied the motion, ruling that the cases properly target the named defendants. He concluded that Bambu Lab USA isn’t essential to the dispute, and that any misnaming should be addressed in summary judgment, not dismissal.       
    A Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing Industry.
    Another twist in the Stratasys v. Bambu Lab lawsuit 
    Stratasys filed the two lawsuits against Bambu Lab in the Eastern District of Texas, Marshall Division, in August 2024. The company claims that Bambu Lab’s X1C, X1E, P1S, P1P, A1, and A1 mini 3D printers violate ten of its patents. These patents cover common 3D printing features, including purge towers, heated build plates, tool head force detection, and networking capabilities.
    Stratasys has requested a jury trial. It is seeking a ruling that Bambu Lab infringed its patents, along with financial damages and an injunction to stop Bambu from selling the allegedly infringing 3D printers.
    Last October, Stratasys dropped charges against two of the originally named defendants in the dispute. Court documents showed that Beijing Tiertime Technology Co., Ltd. and Beijing Yinhua Laser Rapid Prototyping and Mould Technology Co., Ltd. were removed. Both defendants represent the company Tiertime, China’s first 3D printer manufacturer. The District Court accepted the dismissal, with all claims dropped without prejudice.
    It’s unclear why Stratasys named Beijing-based Tiertime as a defendant in the first place, given the lack of an obvious connection to Bambu Lab. 
    Tiertime and Stratasys have a history of legal disputes over patent issues. In 2013, Stratasys sued Afinia, Tiertime’s U.S. distributor and partner, for patent infringement. Afinia responded by suing uCRobotics, the Chinese distributor of MakerBot 3D printers, also alleging patent violations. Stratasys acquired MakerBot in June 2013. The company later merged with Ultimaker in 2022.
    In February 2025, Bambu Lab filed a motion to dismiss the original lawsuits. The company argued that Stratasys’ claims, focused on the sale, importation, and distribution of 3D printers in the United States, do not apply to the Shenzhen-based parent company. Bambu Lab contended that the allegations concern its American subsidiary, Bambu Lab USA, which was not named in the complaint filed in the Eastern District of Texas.
    Bambu Lab filed a motion to dismiss, claiming the case is invalid under Federal Rule of Civil Procedure 19. It argued that any party considered a “primary participant” in the allegations must be included as a defendant.   
    The court denied the motion on May 29, 2025. In the ruling, Judge Gilstrap explained that Stratasys’ allegations focus on the actions of the named defendants, not Bambu Lab USA. As a result, the official court document called Bambu Lab’s argument “unavailing.” Additionally, the Judge stated that, since Bambu Lab USA and Bambu Lab are both owned by Shenzhen Tuozhu, “the interest of these two entities align,” meaning the original cases are valid.  
    In the official court document, Judge Gilstrap emphasized that Stratasys can win or lose the lawsuits based solely on the actions of the current defendants, regardless of Bambu Lab USA’s involvement. He added that any potential risk to Bambu Lab USA’s business is too vague or hypothetical to justify making it a required party.
    Finally, the court noted that even if Stratasys named the wrong defendant, this does not justify dismissal under Rule 12(b)(7). Instead, the judge stated it would be more appropriate for the defendants to raise that argument in a motion for summary judgment.
    The Bambu Lab X1C 3D printer. Image via Bambu Lab.
    3D printing patent battles 
    The 3D printing industry has seen its fair share of patent infringement disputes over recent months. In May 2025, 3D printer hotend developer Slice Engineering reached an agreement with Creality over a patent non-infringement lawsuit. 
    The Chinese 3D printer OEM filed the lawsuit in July 2024 in the U.S. District Court for the Northern District of Florida, Gainesville Division. The company claimed that Slice Engineering had falsely accused it of infringing two hotend patents, U.S. Patent Nos. 10,875,244 and 11,660,810. These cover mechanical and thermal features of Slice’s Mosquito 3D printer hotend. Creality requested a jury trial and sought a ruling confirming it had not infringed either patent.
    Court documents show that Slice Engineering filed a countersuit in December 2024. The Gainesville-based company maintained that Creality “has infringed and continues to infringe” on both patents. In the filing, the company also denied allegations that it had harassed Creality’s partners, distributors, and customers, and claimed that Creality had refused to negotiate a resolution.
    The Creality v. Slice Engineering lawsuit has since been dropped following a mutual resolution. Court documents show that both parties have permanently dismissed all claims and counterclaims, agreeing to cover their own legal fees and costs. 
    In other news, large-format resin 3D printer manufacturer Intrepid Automation sued 3D Systems over alleged patent infringement. The lawsuit, filed in February 2025, accused 3D Systems of using patented technology in its PSLA 270 industrial resin 3D printer. The filing called the PSLA 270 a “blatant knock off” of Intrepid’s DLP multi-projection “Range” 3D printer.  
    San Diego-based Intrepid Automation called this alleged infringement the “latest chapter of 3DS’s brazen, anticompetitive scheme to drive a smaller competitor with more advanced technology out of the marketplace.” The lawsuit also accused 3D Systems of corporate espionage, claiming one of its employees stole confidential trade secrets that were later used to develop the PSLA 270 printer.
    3D Systems denied the allegations and filed a motion to dismiss the case. The company called the lawsuit “a desperate attempt” by Intrepid to distract from its own alleged theft of 3D Systems’ trade secrets.
    Who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news. You can also follow us on LinkedIn and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content. Featured image shows a Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing Industry.
  • Can AI Mistakes Lead to Real Legal Exposure?

    Posted on: June 5, 2025, by Tech World Times | AI


    Artificial intelligence tools now touch nearly every corner of modern business, from customer service and marketing to supply chain management and HR. These powerful technologies promise speed, accuracy, and insight, but their missteps can cause more than temporary inconvenience. A single AI-driven error can result in regulatory investigations, civil lawsuits, or public scandals that threaten the foundation of a business. Understanding how legal exposure arises from AI mistakes—and how a skilled attorney protects your interests—is no longer an option, but a requirement for any forward-thinking business owner.
    What Types of AI Errors Create Legal Liability?
    AI does not think or reason like a human; it follows code and statistical patterns, sometimes with unintended results. These missteps can create a trail of legal liability for any business owner. For example, an online retailer’s AI recommends discriminatory pricing, sparking allegations of unfair trade practices. An HR department automates hiring decisions with AI, only to face lawsuits for violating anti-discrimination laws. Even an AI-driven chatbot, when programmed without proper safeguards, can inadvertently give health advice or misrepresent product claims, exposing the company to regulatory penalties. Cases like these are regularly reported in legal news as businesses discover the high cost of digital shortcuts.
    When Is a Business Owner Liable for AI Mistakes?
    Liability rarely rests with the software developer or the tool itself. Courts and regulators expect the business to monitor, supervise, and, when needed, override AI decisions. Suppose a financial advisor uses AI to recommend investments, but the algorithm suggests securities that violate state regulations. Even if the AI was “just following instructions,” the advisor remains responsible for client losses. Similarly, a marketing team cannot escape liability if their AI generates misleading advertising. The bottom line: outsourcing work to AI does not outsource legal responsibility.
    How Do AI Errors Harm Your Reputation and Operations?
    AI mistakes can leave lasting marks on a business’s reputation, finances, and operations. A logistics firm’s route-optimization tool creates data leaks that breach customer privacy and trigger costly notifications. An online business suffers public backlash after an AI-powered customer service tool sends offensive responses to clients. Such incidents erode public trust, drive customers to competitors, and divert resources into damage control rather than growth. Worse, compliance failures can result in penalties or shutdown orders, putting the entire enterprise at risk.
    What Steps Reduce Legal Risk From AI Deployments?
    Careful planning and continuous oversight keep AI tools working for your business—not against it. Compliance is not a “set it and forget it” matter. Proactive risk management transforms artificial intelligence from a liability into a valuable asset.
    Routine audits, staff training, and transparent policies form the backbone of safe, effective AI use in any organization.
    You should review these AI risk mitigation strategies below.

    Implement Manual Review of Sensitive Outputs: Require human approval for high-risk tasks, such as legal filings, financial transactions, or customer communications. A payroll company’s manual audits prevented the accidental overpayment of employees by catching AI-generated errors before disbursement.
    Update AI Systems for Regulatory Changes: Stay ahead of new laws and standards by regularly reviewing AI algorithms and outputs. An insurance brokerage avoided regulatory fines by updating their risk assessment models as privacy laws evolved.
    Document Every Incident and Remediation Step: Keep records of AI errors, investigations, and corrections. A healthcare provider’s transparency during a patient data mix-up helped avoid litigation and regulatory penalties.
    Limit AI Access to Personal and Sensitive Data: Restrict the scope and permissions of AI tools to reduce the chance of data misuse. A SaaS provider used data minimization techniques, lowering the risk of exposure in case of a system breach.
    Consult With Attorneys for Custom Policies and Protocols: Collaborate with experienced Attorneys to design, review, and update AI compliance frameworks.
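    The manual-review strategy above can be sketched as a simple approval gate: AI-generated drafts in high-risk categories are held until a human explicitly releases them. This is an illustrative sketch only; the names (ReviewQueue, HIGH_RISK_CATEGORIES) are hypothetical and not drawn from any product mentioned in this article.

```python
from dataclasses import dataclass, field

# Hypothetical categories a business might treat as high-risk.
HIGH_RISK_CATEGORIES = {"legal_filing", "financial_transaction", "customer_communication"}

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)
    released: list = field(default_factory=list)

    def submit(self, category: str, content: str) -> str:
        """Hold high-risk drafts for human review; release low-risk ones."""
        if category in HIGH_RISK_CATEGORIES:
            self.pending.append((category, content))
            return "held_for_review"
        self.released.append(content)
        return "released"

    def approve(self, index: int) -> str:
        """A human reviewer explicitly releases a held draft."""
        category, content = self.pending.pop(index)
        self.released.append(content)
        return content

queue = ReviewQueue()
print(queue.submit("marketing_copy", "Spring sale announcement"))  # released
print(queue.submit("legal_filing", "Draft motion text"))           # held_for_review
queue.approve(0)
print(len(queue.released))                                         # 2
```

    The point of the design is that nothing in a high-risk category reaches the outside world on the AI’s say-so alone; a named human action is always the last step before release.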

    How Do Attorneys Shield Your Business From AI Legal Risks?
    Attorneys provide a critical safety net as AI integrates deeper into business operations. They draft tailored contracts, establish protocols for monitoring and escalation, and assess risks unique to your industry. In the event of an AI-driven incident, legal counsel investigates the facts, manages communication with regulators, and builds a robust defense. By providing training, ongoing guidance, and crisis management support, attorneys ensure that innovation doesn’t lead to exposure—or disaster. With the right legal partner, businesses can harness AI’s power while staying firmly on the right side of the law.
    Tech World Times is a global collective focusing on the latest tech news and trends in blockchain, Fintech, Development & Testing, AI, and Startups. If you are looking for a guest post, contact techworldtimes@gmail.com.
    #can #mistakes #lead #real #legal
    Can AI Mistakes Lead to Real Legal Exposure?
    Posted on June 5, 2025 by Tech World Times

    Artificial intelligence tools now touch nearly every corner of modern business, from customer service and marketing to supply chain management and HR. These powerful technologies promise speed, accuracy, and insight, but their missteps can cause more than temporary inconvenience. A single AI-driven error can result in regulatory investigations, civil lawsuits, or public scandals that threaten the foundation of a business. Understanding how legal exposure arises from AI mistakes—and how a skilled attorney protects your interests—is no longer optional; it is a requirement for any forward-thinking business owner.

    What Types of AI Errors Create Legal Liability?

    AI does not think or reason like a human; it follows code and statistical patterns, sometimes with unintended results. These missteps can create a trail of legal liability for any business owner. For example, an online retailer’s AI recommends discriminatory pricing, sparking allegations of unfair trade practices. An HR department automates hiring decisions with AI, only to face lawsuits for violating anti-discrimination laws. Even an AI-driven chatbot, when programmed without proper safeguards, can inadvertently give health advice or misrepresent product claims, exposing the company to regulatory penalties. Cases like these are regularly reported in legal news as businesses discover the high cost of digital shortcuts.

    When Is a Business Owner Liable for AI Mistakes?

    Liability rarely rests with the software developer or the tool itself. Courts and regulators expect the business to monitor, supervise, and, when needed, override AI decisions. Suppose a financial advisor uses AI to recommend investments, but the algorithm suggests securities that violate state regulations. Even if the AI was “just following instructions,” the advisor remains responsible for client losses. Similarly, a marketing team cannot escape liability if its AI generates misleading advertising. The bottom line: outsourcing work to AI does not outsource legal responsibility.

    How Do AI Errors Harm Your Reputation and Operations?

    AI mistakes can leave lasting marks on a business’s reputation, finances, and operations. A logistics firm’s route-optimization tool creates data leaks that breach customer privacy and trigger costly notifications. An online business suffers public backlash after an AI-powered customer service tool sends offensive responses to clients. Such incidents erode public trust, drive customers to competitors, and divert resources into damage control rather than growth. Worse, compliance failures can result in penalties or shutdown orders, putting the entire enterprise at risk.

    What Steps Reduce Legal Risk From AI Deployments?

    Careful planning and continuous oversight keep AI tools working for your business, not against it. Compliance is not a “set it and forget it” matter. Proactive risk management transforms artificial intelligence from a liability into a valuable asset. Routine audits, staff training, and transparent policies form the backbone of safe, effective AI use in any organization. Consider the risk mitigation strategies below.

    - Implement manual review of sensitive outputs: Require human approval for high-risk tasks, such as legal filings, financial transactions, or customer communications. A payroll company’s manual audits prevented the accidental overpayment of employees by catching AI-generated errors before disbursement.
    - Update AI systems for regulatory changes: Stay ahead of new laws and standards by regularly reviewing AI algorithms and outputs. An insurance brokerage avoided regulatory fines by updating its risk assessment models as privacy laws evolved.
    - Document every incident and remediation step: Keep records of AI errors, investigations, and corrections. A healthcare provider’s transparency during a patient data mix-up helped avoid litigation and regulatory penalties.
    - Limit AI access to personal and sensitive data: Restrict the scope and permissions of AI tools to reduce the chance of data misuse. A SaaS provider used data minimization techniques, lowering the risk of exposure in case of a system breach.
    - Consult with attorneys for custom policies and protocols: Collaborate with experienced attorneys to design, review, and update AI compliance frameworks.

    How Do Attorneys Shield Your Business From AI Legal Risks?

    Attorneys provide a critical safety net as AI integrates deeper into business operations. They draft tailored contracts, establish protocols for monitoring and escalation, and assess risks unique to your industry. In the event of an AI-driven incident, legal counsel investigates the facts, manages communication with regulators, and builds a robust defense. By providing training, ongoing guidance, and crisis management support, attorneys ensure that innovation doesn’t lead to exposure—or disaster. With the right legal partner, businesses can harness AI’s power while staying firmly on the right side of the law.

    Tech World Times (TWT) is a global collective focusing on the latest tech news and trends in blockchain, Fintech, Development & Testing, AI, and Startups. For guest post inquiries, contact techworldtimes@gmail.com.
  • A federal court’s novel proposal to rein in Trump’s power grab

    Federal civil servants are supposed to enjoy robust protections against being fired or demoted for political reasons. But President Donald Trump has effectively stripped them of these protections by neutralizing the federal agencies that implement these safeguards.

    An agency known as the Merit Systems Protection Board (MSPB) hears civil servants’ claims that a “government employer discriminated against them, retaliated against them for whistleblowing, violated protections for veterans, or otherwise subjected them to an unlawful adverse employment action or prohibited personnel practice,” as a federal appeals court explained in an opinion on Tuesday. But the three-member board currently lacks the quorum it needs to operate because Trump fired two of its members.

    Trump also fired Hampton Dellinger, who until recently served as the special counsel of the United States, a role that investigates alleged violations of federal civil service protections and brings related cases to the MSPB. Trump recently nominated Paul Ingrassia, a far-right podcaster and recent law school graduate, to replace Dellinger.

    The upshot of these firings is that no one in the government is able to enforce laws and regulations protecting civil servants. As Dellinger noted in an interview, the morning before a federal appeals court determined that Trump could fire him, he’d “been able to get 6,000 newly hired federal employees back on the job,” and was working to get “all probationary employees put back on the job [after] their unlawful firing” by the Department of Government Efficiency and other Trump administration efforts to cull the federal workforce. These and other efforts to reinstate illegally fired federal workers are on hold, and may not resume until Trump leaves office.

    Which brings us to the US Court of Appeals for the Fourth Circuit’s decision in National Association of Immigration Judges v. Owen, which proposes an innovative solution to this problem.

    As the Owen opinion notes, the Supreme Court has held that the MSPB process is the only process a federal worker can use if they believe they’ve been fired in violation of federal civil service laws. So if that process is shut down, the worker is out of luck. But the Fourth Circuit’s Owen opinion argues that this “conclusion can only be true…when the statute functions as Congress intended.” That is, if the MSPB and the special counsel are unable to “fulfill their roles prescribed by” federal law, then the courts should pick up the slack and start hearing cases brought by illegally fired civil servants.

    For procedural reasons, the Fourth Circuit’s decision will not take effect right away — the court sent the case back down to a trial judge to “conduct a factual inquiry” into whether the MSPB continues to function. And, even after that inquiry is complete, the Trump administration is likely to appeal the Fourth Circuit’s decision to the Supreme Court if it wants to keep civil service protections on ice.

    If the justices agree with the circuit court, however, that will close a legal loophole that has left federal civil servants unprotected by laws that are still very much on the books. And it will cure a problem that the Supreme Court bears much of the blame for creating.

    The “unitary executive,” or why the Supreme Court is to blame for the loss of civil service protections

    Federal law provides that Dellinger could “be removed by the President only for inefficiency, neglect of duty, or malfeasance in office,” and members of the MSPB enjoy similar protections against being fired. Trump’s decision to fire these officials was illegal under these laws. But a federal appeals court nonetheless permitted Trump to fire Dellinger, and the Supreme Court recently backed Trump’s decision to fire the MSPB members as well.

    The reason is a legal theory known as the “unitary executive,” which is popular among Republican legal scholars, and especially among the six Republicans who control the Supreme Court. If you want to know all the details of this theory, I can point you to three different explainers I’ve written on the unitary executive. The short explanation is that the unitary executive theory claims that the president must have the power to fire top political appointees charged with executing federal laws – including officials who execute laws protecting civil servants from illegal firings.

    But the Supreme Court has never claimed that the unitary executive theory permits the president to fire any federal worker regardless of whether Congress has protected them. In a seminal opinion laying out the unitary executive theory, for example, Justice Antonin Scalia argued that the president must have the power to remove “principal officers” — high-ranking officials like Dellinger who must be nominated by the president and confirmed by the Senate. Under Scalia’s approach, lower-ranking government workers may still be given some protection.

    The Fourth Circuit cannot override the Supreme Court’s decision to embrace the unitary executive theory. But the Owen opinion essentially tries to police the line drawn by Scalia. The Supreme Court has given Trump the power to fire some high-ranking officials, but he shouldn’t be able to use that power as a back door to eliminate job protections for all civil servants.

    The Fourth Circuit suggests that the federal law which simultaneously gave the MSPB exclusive authority over civil service disputes, while also protecting MSPB members from being fired for political reasons, must be read as a package. Congress, this argument goes, would not have agreed to shunt all civil service disputes to the MSPB if it had known that the Supreme Court would strip the MSPB of its independence. And so, if the MSPB loses its independence, it must also lose its exclusive authority over civil service disputes — and federal courts must regain the power to hear those cases.

    It remains to be seen whether this argument persuades a Republican Supreme Court — all three of the Fourth Circuit judges who decided the Owen case are Democrats, and two are Biden appointees. But the Fourth Circuit’s reasoning closely resembles the kind of inquiry that courts frequently engage in when a federal law is struck down. When a court declares a provision of federal law unconstitutional, it often needs to ask whether other parts of the law should fall along with the unconstitutional provision, an inquiry known as “severability.” Often, this severability analysis asks which hypothetical law Congress would have enacted if it had known that the one provision is invalid.

    The Fourth Circuit’s decision in Owen is essentially a severability opinion. It takes as a given the Supreme Court’s conclusion that laws protecting Dellinger and the MSPB members from being fired are unconstitutional, then asks which law Congress would have enacted if it had known that it could not protect MSPB members from political reprisal. The Fourth Circuit’s conclusion is that, if Congress had known that MSPB members cannot be politically independent, then it would not have given them exclusive authority over civil service disputes.

    If the Supreme Court permits Trump to neutralize the MSPB, that would fundamentally change how the government functions

    The idea that civil servants should be hired based on merit and insulated from political pressure is hardly new. The first law protecting civil servants, the Pendleton Civil Service Reform Act, was signed into law by President Chester A. Arthur in 1883. Laws like the Pendleton Act do more than protect civil servants who, say, resist pressure to deny government services to the president’s enemies. They also make it possible for top government officials to actually do their jobs.

    Before the Pendleton Act, federal jobs were typically awarded as patronage — so when a Democratic administration took office, the Republicans who occupied most federal jobs would be fired and replaced by Democrats. This was obviously quite disruptive, and it made it difficult for the government to hire highly specialized workers. Why would someone go to the trouble of earning an economics degree and becoming an expert on federal monetary policy, if they knew that their job in the Treasury Department would disappear the minute their party lost an election?

    Meanwhile, the task of filling all of these patronage jobs overwhelmed new presidents. As Candice Millard wrote in a 2011 biography of President James A. Garfield, the last president elected before the Pendleton Act, when Garfield took office, a line of job seekers began to form outside the White House “before he even sat down to breakfast.” By the time Garfield had eaten, this line “snaked down the front walk, out the gate, and onto Pennsylvania Avenue.” Garfield was assassinated by a disgruntled job seeker, a fact that likely helped build political support for the Pendleton Act.

    By neutralizing the MSPB, Trump is effectively undoing nearly 150 years’ worth of civil service reforms, and returning the federal government to a much more primitive state. At the very least, the Fourth Circuit’s decision in Owen is likely to force the Supreme Court to ask if it really wants a century and a half of work to unravel.
  • Meta Apps Have Been Covertly Tracking Android Users' Web Activity for Months

    I don't expect Meta to respect my data or my privacy, but the company continues to surprise me with how low it's willing to go in the name of data collection. The latest such story comes to us from a report titled "Disclosure: Covert Web-to-App Tracking via Localhost on Android." In short, Meta and Yandex (a Russian technology company) have been tracking potentially billions of Android users by abusing a security loophole in Android. That loophole allows the companies to access identifying browsing data from your web browser as long as you have their Android apps installed.

    How does this tracking work? As the report explains, Android allows any installed app with internet permissions to access the "loopback address," or localhost, an address a device uses to communicate with itself. As it happens, your web browser also has access to the localhost, which allows JavaScript snippets embedded on certain websites to connect to Android apps and share browsing data and identifiers.

    What are those scripts, you might ask? In this case, they're Meta Pixel and Yandex Metrica, scripts that let companies track users on their sites. Trackers are an unfortunate part of the modern internet, but Meta Pixel is only supposed to be able to follow you while you browse the web. This loophole lets Meta Pixel scripts send your browsing data, cookies, and identifiers back to installed Meta apps like Facebook and Instagram. The same goes for Yandex with its apps like Maps and Browser.

    You certainly didn't sign up for that when you installed Instagram on your Android device. But once you logged in, the next time you visited a website that embedded Meta Pixel, the script beamed your information back to the app. All of a sudden, Meta had identifying browsing data from your web activity, not via the browsing itself, but from the "unrelated" Instagram app. Chrome, Firefox, and Edge were all affected in these findings. DuckDuckGo blocked some but not all of the domains involved, so it was "minimally affected." Brave does block requests to the localhost unless you consent to them, so it successfully protected users from this tracking.

    Researchers say Yandex has been doing this since February of 2017 on HTTP sites, and May of 2018 on HTTPS sites. Meta Pixel, on the other hand, hasn't been tracking this way for long: It only started in September of 2024 over HTTP, and ended that practice in October. It then switched to WebSocket and WebRTC STUN in November, and WebRTC TURN in May. Website owners apparently complained to Meta starting in September, asking why Meta Pixel communicates with the localhost. As far as researchers could find, Meta never responded.

    Researchers make it clear that this type of tracking is possible on iOS, as developers can establish localhost connections and apps can "listen in" too. However, they found no evidence of this tracking on iOS devices, and hypothesize that it has to do with how iOS restricts native apps running in the background.

    Meta has officially stopped this tracking

    The good news is, as of June 3, researchers say they have not observed Meta Pixel communicating with the localhost. They didn't say the same for Yandex Metrica, though Yandex told Ars Technica it was "discontinuing the practice." Ars Technica also reports that Google has opened an investigation into these actions, which "blatantly violate our security and privacy principles."

    However, even if Meta has stopped this tracking following the report, the damage could be widespread. As highlighted in the report, estimates put Meta Pixel adoption anywhere from 2.4 million to 5.8 million sites. From there, researchers found that just over 17,000 Meta Pixel sites in the U.S. attempt to connect to the localhost, and over 78% of those do so without any user consent needed, including sites like AP News, Buzzfeed, and The Verge. That's a lot of websites that could have been sending your data back to your Facebook and Instagram apps. The report features a tool that you can use to look for affected sites, but notes the list is not exhaustive, and absence doesn't mean a site is safe.

    Meta sent me the following statement in response to my request for comment: “We are in discussions with Google to address a potential miscommunication regarding the application of their policies. Upon becoming aware of the concerns, we decided to pause the feature while we work with Google to resolve the issue.”
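    To make the mechanism concrete, here is a minimal, hypothetical Python sketch of the loophole the report describes: one thread plays the role of an installed app listening on the device's localhost, and a plain HTTP request stands in for the tracking script running in the browser. The real Meta Pixel reportedly used WebSocket and WebRTC channels rather than a simple GET, and the names here (`AppHandler`, `run_demo`, the `cookie_id` parameter) are illustrative, not taken from the report.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class AppHandler(BaseHTTPRequestHandler):
    """Stands in for a native app that quietly listens on localhost."""
    received = []

    def do_GET(self):
        # The "app" records whatever identifiers the web script smuggles
        # into the request path (here, a cookie ID in the query string).
        AppHandler.received.append(self.path)
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet


def run_demo():
    AppHandler.received = []
    # Port 0 asks the OS for any free port; a real app would use fixed ports.
    server = HTTPServer(("127.0.0.1", 0), AppHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # This request plays the role of the in-browser tracking script: it runs
    # on the same device, so it can reach the app's localhost listener.
    urllib.request.urlopen(f"http://127.0.0.1:{port}/track?cookie_id=abc123")
    server.shutdown()
    return AppHandler.received


print(run_demo())  # ['/track?cookie_id=abc123']
```

    The point of the sketch is that nothing in this exchange ever leaves the device, so no network-level monitoring would flag it; the browser and the app simply meet at 127.0.0.1, which is why the researchers call it web-to-app tracking.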
    LIFEHACKER.COM
  • Robinhood Acquires Bitstamp for $200 Million, Adds Over 50 Licences to Network

    US-based brokerage firm Robinhood has officially acquired Luxembourg-headquartered crypto exchange Bitstamp. In an announcement posted on June 2, Robinhood said that it paid $200 million (roughly Rs. 1,709 crore) in cash to complete this acquisition. With this, Robinhood has now added over 50 licences held by Bitstamp to its own network. Johann Kerbrat, the general manager of Robinhood Crypto, had first spoken about the plan to acquire Bitstamp last year during an interview with the Wall Street Journal.

    Bitstamp was founded in 2011 and is touted as the longest-running crypto exchange in the world. Its offices are located in Singapore, Slovenia, the UK, as well as the US. Robinhood plans to use Bitstamp's resources to bring service offerings to institutional investors, Vlad Tenev, the CEO and co-founder of Robinhood, indicated on X.

    "Bitstamp is now part of Robinhood, adding a globally-scaled crypto exchange and our first-ever institutional crypto business. Our work is just beginning," Tenev posted.

    Following the announcement, Bitstamp changed its name to "Bitstamp by Robinhood" on various online platforms, including X. In an official blog post, Bitstamp said that it "has been trusted for 14 years by institutions for its reliable trade execution, deep order books and industry-leading API connectivity and offerings like crypto-as-a-service, institutional lending, and staking. Robinhood is entering the space with an active and highly trusted business with established relationships."

    Key Highlights on the Acquisition

    Robinhood published some important details about Bitstamp and its acquisition for its investor community on June 2. The American firm disclosed that Bitstamp was catering to over 500,000 retail and around 5,000 funded institutional customers as of April 30, 2025. Bitstamp's yearly revenue up to April 30 was clocked at $95 million (roughly Rs. 811 crore).

    "In 2025, for the remaining seven months of the year post close, Robinhood expects to record approximately $65 million (roughly Rs. 555 crore) of Bitstamp-related costs. These costs are nearly all Adjusted Operating Expenses and are primarily driven by business operations, along with some anticipated integration and deal-related costs," Robinhood's announcement post added.

    Robinhood will now integrate Bitstamp's infrastructure into its own services and offerings. "Bringing Bitstamp's platform and expertise into Robinhood's ecosystem will give users an enhanced trading experience with a continuing commitment to compliance, security, and customer-centricity," JB Graftieaux, CEO of Bitstamp, said, commenting on the development.

    Robinhood's crypto trading service had faced legal challenges with the US SEC last year over allegations of having violated US securities laws. However, following Donald Trump's return to the White House as the 47th US President, the SEC closed its investigation into Robinhood and did not take any action.
    WWW.GADGETS360.COM
  • JPMorgan Chase CEO Jamie Dimon says he wouldn't count on China folding under Trump's tariffs: 'They're not scared, folks.'

    JPMorgan Chase CEO Jamie Dimon spoke at the 2025 Reagan National Economic Forum on Friday.

    Noam Galai/Getty Images

    Published: June 1, 2025

    Jamie Dimon spoke at the 2025 Reagan National Economic Forum on Friday.
    Dimon said he hoped the US could "get our own act together" amid the US-China trade war.
    Trump said China "violated" its trade agreement with the US this week.

    JPMorgan Chase CEO Jamie Dimon said the United States needs to get its act together on trade — quickly. Dimon discussed the ongoing tension between the United States and China on Friday at the 2025 Reagan National Economic Forum, where he led a fireside chat. When asked what his biggest worry was right now, Dimon pointed to the shifting global geopolitical and economic landscape, including trade.

    "We have problems and we've got to deal with them," Dimon said before referring to "the enemy within." Addressing the "enemy within," he said, includes fixing how the United States approaches permitting, regulation, taxation, immigration, education, and the healthcare system. It also means maintaining important military alliances, he said.

    "China is a potential adversary. They're doing a lot of things well. They have a lot of problems," Dimon said. "What I'm really worried about is us. Can we get our own act together? Our own values, our own capabilities, our own management."

    Dimon said that if the United States is not the "preeminent military and preeminent economy in 40 years, we will not be the reserve currency. That's a fact." Although Dimon believes the United States is usually resilient, he said things are different this time around. "We have to get our act together, and we have to do it very quickly," he said.

    During the conversation, Dimon spoke about trade deals and encouraged US leaders to engage with China. "I just got back from China last week," Dimon said. "They're not scared, folks. This notion that they're going to come bow to America, I wouldn't count on that."

    Treasury Secretary Scott Bessent disagreed with Dimon during a Sunday appearance on CBS's "Face the Nation." "Jamie is a great banker. I know him well, but I would vociferously disagree with that assessment," Bessent said. "That the laws of economics and gravity apply to the Chinese economy and the Chinese system, just like everyone else."

    Trump's decision to impose tariffs on numerous countries, including steep tariffs on China, rattled global markets earlier this year. Markets recovered after many countries, including China, began negotiating. But the possibility that tariffs could increase again at any time has investors and economists on edge.

    On Friday, for instance, in a Truth Social post, Trump accused China of violating the two countries' trade agreement. That same day, Trump said he planned to increase tariffs on steel imports from 25% to 50%. "We're going to bring it from 25% to 50%, the tariffs on steel into the United States of America, which will even further secure the steel industry in the United States. Nobody's going to get around that," Trump said during a rally near Pittsburgh.

    Representatives for JPMorgan Chase declined to comment.
    WWW.BUSINESSINSIDER.COM
  • Facebook sees rise in violent content and harassment after policy changes

    Meta has published the first of its quarterly integrity reports since Mark Zuckerberg walked back the company's hate speech policies and changed its approach to content moderation earlier this year. According to the reports, Facebook saw an uptick in violent content, bullying and harassment despite an overall decrease in the amount of content taken down by Meta.
    The reports are the first time Meta has shared data about how Zuckerberg's decision to upend Meta's policies has played out on the platform used by billions of people. Notably, the company is spinning the changes as a victory, saying that it reduced its mistakes by half while the overall prevalence of content breaking its rules "largely remained unchanged for most problem areas."
    There are two notable exceptions, however. Violent and graphic content increased from 0.06%-0.07% at the end of 2024 to 0.09% in the first quarter of 2025. Meta attributed the uptick to "an increase in sharing of violating content" as well as its own attempts to "reduce enforcement mistakes." Meta also saw a noted increase in the prevalence of bullying and harassment on Facebook, which rose from 0.06%-0.07% at the end of 2024 to 0.07%-0.08% at the start of 2025. Meta says this was due to an unspecified "spike" in violations in March. (Notably, this is a separate category from the company's hate speech policies, which were re-written to allow posts targeting immigrants and LGBTQ people.)

    Those may sound like relatively tiny percentages, but even small increases can be noticeable for a platform like Facebook that sees billions of posts every day. (Meta describes its prevalence metric as an estimate of how often rule-breaking content appears on its platform.)

    The report also underscores just how much less content Meta is taking down overall since it moved away from proactive enforcement of all but its most serious policies, like child exploitation and terrorist content. Meta's report shows a significant decrease in the number of Facebook posts removed for hateful content, for example, with just 3.4 million pieces of content "actioned" under the policy, the company's lowest figure since 2018. Spam removals also dropped precipitously, from 730 million at the end of 2024 to just 366 million at the start of 2025. The number of fake accounts removed on Facebook also declined notably, from 1.4 billion to 1 billion. (Meta doesn't provide stats around fake account removals on Instagram.)

    At the same time, Meta claims it's making far fewer content moderation mistakes, which was one of Zuckerberg's main justifications for his decision to end proactive moderation. "We saw a roughly 50% reduction in enforcement mistakes on our platforms in the United States from Q4 2024 to Q1 2025," the company wrote in an update to its January post announcing its policy changes. Meta didn't explain how it calculated that figure, but said future reports would "include metrics on our mistakes so that people can track our progress."
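    A quick back-of-the-envelope calculation shows why a shift of a few hundredths of a percentage point matters at Facebook's scale. The daily view count below is an assumed round figure for illustration, not a number Meta reports:

    ```python
    # Prevalence is reported as the share of views containing violating
    # content; converting it to absolute volume shows the scale involved.
    DAILY_VIEWS = 10_000_000_000  # assumed round figure, not Meta's own

    def violating_views(prevalence_pct: float) -> float:
        """Estimated daily views of violating content at a given prevalence."""
        return DAILY_VIEWS * prevalence_pct / 100

    before = violating_views(0.07)  # upper end of the late-2024 range
    after = violating_views(0.09)   # Q1 2025 figure for violent content
    print(round(before), round(after), round(after - before))
    ```

    Under these assumed numbers, a move from 0.07% to 0.09% adds roughly two million daily views of violating content.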
    Meta is acknowledging, however, that there is at least one group where some proactive moderation is still necessary: teens. "At the same time, we remain committed to ensuring teens on our platforms are having the safest experience possible," the company wrote. "That’s why, for teens, we’ll also continue to proactively hide other types of harmful content, like bullying." Meta has been rolling out "teen accounts" for the last several months, which should make it easier to filter content specifically for younger users.
    The company also offered an update on how it's using large language models to aid in its content moderation efforts. "Upon further testing, we are beginning to see LLMs operating beyond that of human performance for select policy areas," Meta writes. "We’re also using LLMs to remove content from review queues in certain circumstances when we’re highly confident it does not violate our policies."
    The other major component of Zuckerberg's policy changes was an end to Meta's fact-checking partnerships in the United States. The company began rolling out its own version of Community Notes to Facebook, Instagram and Threads earlier this year, and has since expanded the effort to Reels and Threads replies. Meta didn't offer any insight into how effective its new crowd-sourced approach to fact-checking might be, or how often notes are appearing on its platform, though it promised updates in the coming months. This article originally appeared on Engadget at https://www.engadget.com/social-media/facebook-sees-rise-in-violent-content-and-harassment-after-policy-changes-182651544.html
  • The Legal Accountability of AI-Generated Deepfakes in Election Misinformation

    How Deepfakes Are Created

    Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video or audio of a target person. The two predominant AI architectures are generative adversarial networks (GANs) and autoencoders. A GAN consists of a generator network that produces synthetic images and a discriminator network that tries to distinguish fakes from real data. Through iterative training, the generator learns to produce outputs that increasingly fool the discriminator¹. Autoencoder-based tools similarly learn to encode a target face and then decode it onto a source video. In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping². Voice-cloning tools can mimic a person’s speech from minutes of audio. Commercial platforms like Synthesia allow text-to-video avatars, which have already been misused in disinformation campaigns³. Even mobile apps let users do basic face swaps in minutes⁴. In short, advances in GANs and related models make deepfakes cheaper and easier to generate than ever.

    Diagram of a generative adversarial network: A generator network creates fake images from random input and a discriminator network distinguishes fakes from real examples. Over time the generator improves until its outputs “fool” the discriminator⁵.
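    The adversarial loop in the diagram can be sketched end to end with a toy one-dimensional example. Everything here is illustrative: the "generator" is a linear map and the "discriminator" a logistic unit with hand-derived gradients, whereas real deepfake systems use deep convolutional networks. The structure of the loop, however, is the same: the discriminator is trained to separate real from fake, and the generator is trained to fool it.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy GAN: generator G(z) = a*z + b tries to mimic samples from N(3, 1);
    # discriminator D(x) = sigmoid(w*x + c) tries to tell real from fake.
    a, b = 1.0, 0.0   # generator parameters
    w, c = 0.1, 0.0   # discriminator parameters
    lr = 0.01

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    for step in range(2000):
        z = rng.normal(0.0, 1.0, 64)       # generator input noise
        x_real = rng.normal(3.0, 1.0, 64)  # real data samples
        x_fake = a * z + b                 # generated (fake) samples

        # Discriminator step: ascend log D(real) + log(1 - D(fake))
        d_real = sigmoid(w * x_real + c)
        d_fake = sigmoid(w * x_fake + c)
        w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
        c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

        # Generator step: ascend log D(fake) (non-saturating loss),
        # back-propagating through the discriminator into x_fake.
        d_fake = sigmoid(w * (a * z + b) + c)
        dx = (1 - d_fake) * w
        a += lr * np.mean(dx * z)
        b += lr * np.mean(dx)

    print(round(b, 2))  # generator offset drifts toward the real mean
    ```

    Even in this toy setting the key dynamic appears: the discriminator's feedback is the only training signal the generator receives, and the generator's output distribution drifts toward the real one until the discriminator can no longer tell them apart.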

    During creation, a deepfake algorithm is typically trained on a large dataset of real images or audio from the target. The more varied and high-quality the training data, the more realistic the deepfake. The output often then undergoes post-processing to enhance believability¹. Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot inconsistencies that betray a synthetic origin⁵. Authentication embeds markers before dissemination – for example, invisible watermarks or cryptographically signed metadata indicating authenticity⁶. The EU AI Act will soon mandate that major AI content providers embed machine-readable “watermark” signals in synthetic media⁷. However, as GAO notes, detection is an arms race – even a marked deepfake can sometimes evade notice – and labels alone don’t stop false narratives from spreading⁸⁹.
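    The "cryptographically signed metadata" idea can be illustrated with a minimal sketch. The key, field names, and helper functions below are hypothetical, and real provenance schemes (such as C2PA-style signed manifests) use public-key certificates rather than a shared secret, but the principle is the same: any change to the media bytes or the metadata invalidates the signature.

    ```python
    import hashlib
    import hmac
    import json

    SECRET_KEY = b"publisher-signing-key"  # placeholder shared secret

    def sign_media(media_bytes: bytes, metadata: dict) -> str:
        """Sign a hash of the media plus its provenance metadata."""
        payload = hashlib.sha256(media_bytes).hexdigest()
        payload += json.dumps(metadata, sort_keys=True)
        return hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()

    def verify_media(media_bytes: bytes, metadata: dict, signature: str) -> bool:
        """Recompute the signature and compare in constant time."""
        return hmac.compare_digest(sign_media(media_bytes, metadata), signature)

    video = b"\x00\x01fake-video-bytes"
    meta = {"source": "Campaign HQ", "created": "2024-10-01", "ai_generated": False}
    sig = sign_media(video, meta)

    print(verify_media(video, meta, sig))                # True
    print(verify_media(video + b"tampered", meta, sig))  # False
    ```

    As the GAO caveat above suggests, a scheme like this only authenticates media that was signed in the first place; it cannot flag unsigned fakes, which is why detection and authentication are treated as complementary defenses.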

    Deepfakes in Recent Elections: Examples

    Deepfakes and AI-generated imagery have already made headlines in election cycles around the world. In the 2024 U.S. primary season, a digitally altered audio robocall mimicked President Biden’s voice urging Democrats not to vote in the New Hampshire primary. The caller was later fined $6 million by the FCC and indicted under existing telemarketing laws¹⁰ ¹¹. Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift endorsed his campaign, using AI-generated images of Swift in “Swifties for Trump” shirts¹². The posts sparked media uproar, though analysts noted the same effect could have been achieved without AI¹². Similarly, Elon Musk’s X platform carried AI-generated clips, including a parody “ad” depicting Vice President Harris’s voice via an AI clone¹³.

    Beyond the U.S., deepfake-like content has appeared globally. In Indonesia’s 2024 presidential election, a video surfaced on social media in which a convincingly generated image of the late President Suharto appeared to endorse the candidate of the Golkar Party. Days later, the endorsed candidate won the presidency¹⁴. In Bangladesh, a viral deepfake video superimposed the face of opposition leader Rumeen Farhana onto a bikini-clad body – an incendiary fabrication designed to discredit her in the conservative Muslim-majority society¹⁵. Moldova’s pro-Western President Maia Sandu has been repeatedly targeted by AI-driven disinformation; one deepfake video falsely showed her resigning and endorsing a Russian-friendly party, apparently to sow distrust in the electoral process¹⁶. Even in Taiwan, a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements – stoking confusion ahead of Taiwanese elections¹⁷. In Slovakia’s recent campaign, AI-generated audio mimicking the liberal party leader suggested he plotted vote-rigging and beer-price hikes – instantly spreading on social media just days before the election¹⁸. These examples show that deepfakes have touched diverse polities, often aiming to undermine candidates or confuse voters¹⁵ ¹⁸.

    Notably, many of the most viral “deepfakes” in 2024 were actually circulated as obvious memes or claims, rather than subtle deceptions. Experts observed that outright undetectable AI deepfakes were relatively rare; more common were AI-generated memes plainly shared by partisans, or cheaply doctored “cheapfakes” made with basic editing tools¹³ ¹⁹. For instance, social media was awash with memes of Kamala Harris in Soviet garb or of Black Americans holding Trump signs¹³, but these were typically used satirically, not meant to be secretly believed. Nonetheless, even unsophisticated fakes can sway opinion: a U.S. study found that false presidential ads did change voter attitudes in swing states. In sum, deepfakes are a real and growing phenomenon in election campaigns worldwide²⁰ ²¹ – a trend taken seriously by voters and regulators alike.

    U.S. Legal Framework and Accountability

    In the U.S., deepfake creators and distributors of election misinformation face a patchwork of tools, but no single comprehensive federal “deepfake law.” Existing laws relevant to disinformation include statutes against impersonating government officials and targeted provisions like criminal electioneering communications. In some cases ordinary laws have been stretched: the New Hampshire robocall case relied on the Telephone Consumer Protection Act and mail/telemarketing fraud provisions, resulting in the $6 million fine and a criminal charge. Similarly, voice impostors can potentially violate laws against “false advertising” or “unlawful corporate communications.” However, these laws were enacted before AI, and litigators have warned they often do not fit neatly. For example, deceptive deepfake claims not tied to a specific victim do not easily fit into defamation or privacy torts. Voter intimidation laws also leave a gap for non-threatening falsehoods about voting logistics or endorsements.

    Recognizing these gaps, some courts and agencies are invoking other theories. The U.S. Department of Justice has recently charged individuals under broad fraud statutes, and state attorneys general have considered deepfake misinformation as interference with voting rights. Notably, the Federal Election Commission is preparing to enforce new rules: in April 2024 it issued an advisory opinion limiting “non-candidate electioneering communications” that use falsified media, effectively requiring that political ads use only real images of the candidate. If finalized, that would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never did. Similarly, the Federal Trade Commission and Department of Justice have signaled that purely commercial deepfakes could violate consumer protection or election laws.

    U.S. Legislation and Proposals

    Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Act would, among other things, impose a disclosure requirement: political ads featuring a manipulated media likeness would need clear disclaimers identifying the content as synthetic. It also increases penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule for all federal and state campaigns. The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categories while carving out parody and news coverage.

    At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters. Some states define “deepfake” in statute and allow candidates to sue violators or seek to disqualify them. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints. Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored. Already, Texas and Virginia statutes are under legal review, and Elon Musk’s company has sued to challenge California’s law as unconstitutional. In practice, most lawsuits have so far centered on defamation or intellectual property, rather than election-focused statutes.

    Policy Recommendations: Balancing Integrity and Speech

    Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. This could be a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” the use of AI, and it alerts audiences to treat the content with skepticism.

    Outright bans on all deepfakes would likely violate free speech, but targeted bans on specific harms may be defensible. Indeed, Florida already penalizes misuse of recordings in voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. federal proposals and EU law generally condition fines on the “appearance of fraud” or deception.

    Technical solutions can complement laws. Watermarking original media could deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection – some supported by government research grants – should be deployed by fact-checkers and social platforms. Making detection datasets publicly available helps improve AI models that spot fakes. International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have both recently committed to fighting election interference via AI, which may lead to joint norms or rapid response teams.

    Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns to teach voters to question sensational media, and a robust independent press to debunk falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As Georgia Tech’s Sean Parker quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Thus policies should aim to deter malicious use without unduly chilling innovation or satire.


    The post The Legal Accountability of AI-Generated Deepfakes in Election Misinformation appeared first on MarkTechPost.
    WWW.MARKTECHPOST.COM
    The Legal Accountability of AI-Generated Deepfakes in Election Misinformation
How Deepfakes Are Created

Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video or audio of a target person. The two predominant AI architectures are generative adversarial networks (GANs) and autoencoders. A GAN consists of a generator network that produces synthetic images and a discriminator network that tries to distinguish fakes from real data. Through iterative training, the generator learns to produce outputs that increasingly fool the discriminator¹. Autoencoder-based tools similarly learn to encode a target face and then decode it onto a source video.

In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping (one estimate suggests DeepFaceLab was used for over 95% of known deepfake videos)². Voice-cloning tools (often built on similar AI principles) can mimic a person’s speech from minutes of audio. Commercial platforms like Synthesia allow text-to-video avatars (turning typed scripts into lifelike “spokespeople”), which have already been misused in disinformation campaigns³. Even mobile apps (e.g. FaceApp, Zao) let users do basic face swaps in minutes⁴. In short, advances in GANs and related models make deepfakes cheaper and easier to generate than ever.

Diagram of a generative adversarial network (GAN): a generator network creates fake images from random input and a discriminator network distinguishes fakes from real examples. Over time the generator improves until its outputs “fool” the discriminator⁵.

During creation, a deepfake algorithm is typically trained on a large dataset of real images or audio from the target. The more varied and high-quality the training data, the more realistic the deepfake. The output often then undergoes post-processing (color adjustments, lip-syncing refinements) to enhance believability¹.
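The generator/discriminator interplay described above can be shown in a minimal toy example. The following sketch is my own illustration, not code from any deepfake tool: a one-dimensional "generator" with two parameters learns to imitate a Gaussian distribution, and the only training signal it receives is the discriminator's feedback. Gradients are derived by hand; real systems train deep convolutional networks with frameworks such as PyTorch.

```python
import numpy as np

# Toy GAN on 1-D data (illustrative sketch only).
# Generator G(z) = a*z + c must learn to imitate samples from N(3, 1);
# discriminator D(x) = sigmoid(w*x + b) learns to tell real from fake.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, c = 1.0, 0.0   # generator parameters
w, b = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    x_real = rng.normal(3.0, 1.0)   # one real sample
    z = rng.normal()                # noise input to the generator
    x_fake = a * z + c              # one generated sample

    # Discriminator step: minimize -log D(real) - log(1 - D(fake))
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w -= lr * (-(1.0 - d_real) * x_real + d_fake * x_fake)
    b -= lr * (-(1.0 - d_real) + d_fake)

    # Generator step: minimize -log D(fake), i.e. try to fool D
    d_fake = sigmoid(w * x_fake + b)
    a -= lr * (-(1.0 - d_fake) * w * z)
    c -= lr * (-(1.0 - d_fake) * w)

samples = a * rng.normal(size=1000) + c
print(f"generator output mean after training: {samples.mean():.2f}")
```

Even at this scale the adversarial dynamic is visible: the generator never sees the real data directly, only the gradient flowing back through the discriminator's judgment.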
Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot inconsistencies (blinking irregularities, audio artifacts or metadata mismatches) that betray a synthetic origin⁵. Authentication embeds markers before dissemination – for example, invisible watermarks or cryptographically signed metadata indicating authenticity⁶. The EU AI Act will soon mandate that major AI content providers embed machine-readable “watermark” signals in synthetic media⁷. However, as GAO notes, detection is an arms race – even a marked deepfake can sometimes evade notice – and labels alone don’t stop false narratives from spreading⁸⁹.

Deepfakes in Recent Elections: Examples

Deepfakes and AI-generated imagery have already made headlines in election cycles around the world. In the 2024 U.S. primary season, a digitally altered audio robocall mimicked President Biden’s voice urging Democrats not to vote in the New Hampshire primary. The caller (“Susan Anderson”) was later fined $6 million by the FCC and indicted under existing telemarketing laws¹⁰¹¹. (Importantly, FCC rules on robocalls applied regardless of AI: the perpetrator could have used a voice actor or recording instead.) Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift endorsed his campaign, using AI-generated images of Swift in “Swifties for Trump” shirts¹². The posts sparked media uproar, though analysts noted the same effect could have been achieved without AI (e.g., by photoshopping text onto real images)¹². Similarly, Elon Musk’s X platform carried AI-generated clips, including a parody “ad” depicting Vice President Harris’s voice via an AI clone¹³.

Beyond the U.S., deepfake-like content has appeared globally. In Indonesia’s 2024 presidential election, a video surfaced on social media in which a convincingly generated image of the late President Suharto appeared to endorse the candidate of the Golkar Party.
Days later, the endorsed candidate (who is Suharto’s son-in-law) won the presidency¹⁴. In Bangladesh, a viral deepfake video superimposed the face of opposition leader Rumeen Farhana onto a bikini-clad body – an incendiary fabrication designed to discredit her in the conservative Muslim-majority society¹⁵. Moldova’s pro-Western President Maia Sandu has been repeatedly targeted by AI-driven disinformation; one deepfake video falsely showed her resigning and endorsing a Russian-friendly party, apparently to sow distrust in the electoral process¹⁶. Even in Taiwan (amidst tensions with China), a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements – stoking confusion ahead of Taiwanese elections¹⁷. In Slovakia’s recent campaign, AI-generated audio mimicking the liberal party leader suggested he plotted vote-rigging and beer-price hikes – instantly spreading on social media just days before the election¹⁸.

These examples show that deepfakes have touched diverse polities (from Bangladesh and Indonesia to Moldova, Slovakia, India and beyond), often aiming to undermine candidates or confuse voters¹⁵¹⁸. Notably, many of the most viral “deepfakes” in 2024 were actually circulated as obvious memes or claims, rather than subtle deceptions. Experts observed that outright undetectable AI deepfakes were relatively rare; more common were AI-generated memes plainly shared by partisans, or cheaply doctored “cheapfakes” made with basic editing tools¹³¹⁹. For instance, social media was awash with memes of Kamala Harris in Soviet garb or of Black Americans holding Trump signs¹³, but these were typically used satirically, not meant to be secretly believed. Nonetheless, even unsophisticated fakes can sway opinion: a U.S. study found that false presidential ads (not necessarily AI-made) did change voter attitudes in swing states.
In sum, deepfakes are a real and growing phenomenon in election campaigns worldwide²⁰²¹ – a trend taken seriously by voters and regulators alike.

U.S. Legal Framework and Accountability

In the U.S., deepfake creators and distributors of election misinformation face a patchwork of tools, but no single comprehensive federal “deepfake law.” Existing laws relevant to disinformation include statutes against impersonating government officials, electioneering rules (such as the Bipartisan Campaign Reform Act, which requires disclaimers on political ads), and targeted statutes like criminal electioneering communications. In some cases ordinary laws have been stretched: the NH robocall prosecution used the Telephone Consumer Protection Act and mail/telemarketing fraud provisions, resulting in the $6M fine and a criminal charge. Similarly, voice impostors can potentially violate laws against “false advertising” or “unlawful corporate communications.” However, these laws were enacted before AI, and litigators have warned they often do not fit neatly. For example, deceptive deepfake claims not tied to a specific victim do not easily fit into defamation or privacy torts. Voter intimidation laws (prohibiting threats or coercion) also leave a gap for non-threatening falsehoods about voting logistics or endorsements.

Recognizing these gaps, some courts and agencies are invoking other theories. The U.S. Department of Justice has recently charged individuals under broad fraud statutes (e.g. for a plot to impersonate an aide to swing votes in 2020), and state attorneys general have considered treating deepfake misinformation as interference with voting rights. Notably, the Federal Election Commission (FEC) is preparing to enforce new rules: in April 2024 it issued an advisory opinion limiting “non-candidate electioneering communications” that use falsified media, effectively requiring that political ads use only real images of the candidate.
If finalized, that would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never did. Similarly, the Federal Trade Commission (FTC) and Department of Justice (DOJ) have signaled that purely commercial deepfakes could violate consumer protection or election laws (for example, liability for mass false impersonation or for foreign-funded electioneering).

U.S. Legislation and Proposals

Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Act (H.R. 5586 in the 118th Congress) would, among other things, impose a disclosure requirement: political ads featuring a manipulated media likeness would need clear disclaimers identifying the content as synthetic. It would also increase penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule for all federal and state campaigns. The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categories (e.g. false claims about the time, place or manner of voting) while carving out parody and news coverage.

At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters (though Florida’s law exempts parody). Some states (like Texas) define “deepfake” in statutes and allow candidates to sue or revoke the candidacies of violators. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints (e.g. Minnesota’s 2023 law was challenged for threatening injunctions against anyone “reasonably believed” to violate it). Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored.
Already, Texas and Virginia statutes are under legal review, and Elon Musk’s company has sued to challenge California’s law (which requires platforms to label or block deepfakes) as unconstitutional. In practice, most lawsuits have so far centered on defamation or intellectual property (for instance, a celebrity suing over a botched celebrity-deepfake video), rather than election-focused statutes.

Policy Recommendations: Balancing Integrity and Speech

Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. This could be a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” the use of AI, and it alerts audiences to treat the content with skepticism. Outright bans on all deepfakes would likely violate free speech, but targeted bans on specific harms (e.g. automated phone calls impersonating voters, or videos claiming false polling information) may be defensible. Indeed, Florida already penalizes misuse of recordings in voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. federal proposals and EU law generally condition fines on the “appearance of fraud” or deception.

Technical solutions can complement laws. Watermarking original media (as encouraged by the EU AI Act) could deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection – some supported by government research grants – should be deployed by fact-checkers and social platforms. Making detection datasets publicly available (e.g. the MIT OpenDATATEST) helps improve AI models to spot fakes.
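The signed-provenance idea behind these technical solutions can be sketched in a few lines. The example below is an illustrative assumption on my part, using a shared-secret HMAC for brevity; production provenance schemes (e.g. C2PA-style manifests) use public-key signatures so that anyone can verify without holding a secret.

```python
import hashlib
import hmac
import json

# Sketch: a publisher computes a keyed digest over the media bytes plus
# provenance metadata; a verifier holding the key can confirm that
# neither the pixels nor the metadata were altered after signing.

SECRET_KEY = b"publisher-signing-key"  # placeholder, not a real key

def sign_media(media: bytes, metadata: dict) -> str:
    # Canonical JSON (sorted keys) so the same metadata always signs the same
    payload = media + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_media(media: bytes, metadata: dict, tag: str) -> bool:
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(sign_media(media, metadata), tag)

clip = b"\x00\x01example-video-bytes"
meta = {"source": "Example Newsroom", "captured": "2024-01-15"}
tag = sign_media(clip, meta)

print(verify_media(clip, meta, tag))         # untouched media verifies
print(verify_media(clip + b"x", meta, tag))  # any edit breaks the tag
```

Binding the metadata into the digest means a forged caption or timestamp is caught the same way as a pixel-level edit, which is the property election-integrity proposals rely on.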
International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have both recently committed to fighting election interference via AI, which may lead to joint norms or rapid-response teams. Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns to teach voters to question sensational media, and a robust independent press to debunk falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As Georgia Tech’s Sean Parker quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Thus policies should aim to deter malicious use without unduly chilling innovation or satire.

References:
https://www.security.org/resources/deepfake-statistics/
https://www.wired.com/story/synthesia-ai-deepfakes-it-control-riparbelli/
https://www.gao.gov/products/gao-24-107292
https://technologyquotient.freshfields.com/post/102jb19/eu-ai-act-unpacked-8-new-rules-on-deepfakes
https://knightcolumbia.org/blog/we-looked-at-78-election-deepfakes-political-misinformation-is-not-an-ai-problem
https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections
https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd
https://www.lawfaremedia.org/article/new-and-old-tools-to-tackle-deepfakes-and-election-lies-in-2024
https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
https://firstamendment.mtsu.edu/article/political-deepfakes-and-elections/
https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation
https://law.unh.edu/sites/default/files/media/2022/06/nagumotu_pp113-157.pdf
https://dfrlab.org/2024/10/02/brazil-election-ai-research/
https://dfrlab.org/2024/11/26/brazil-election-ai-deepfakes/
https://freedomhouse.org/article/eu-digital-services-act-win-transparency

The post The Legal Accountability of AI-Generated Deepfakes in Election Misinformation appeared first on MarkTechPost.
  • President Trump lashes out at China for violating new trade agreement

    Just hours after courts allowed the "liberation day" tariffs to continue pending appeal, President Donald Trump is claiming that China has violated a preliminary trade agreement with the US.

    Apple's stock has been hit hard by the Trump tariff battle with China

    In a post on Truth Social, the President claims that the 90-day tariff pause resulted in already-closed factories, and "civil unrest" in the country — neither of which appear to have happened. The post on Friday also says that China has violated whatever agreement was in place for the 90-day tariff pause currently in effect. Continue Reading on AppleInsider | Discuss on our Forums