• New Court Order in Stratasys v. Bambu Lab Lawsuit

    There has been a new update in the ongoing Stratasys v. Bambu Lab patent infringement lawsuit.
    Both parties have agreed to consolidate the lead and member cases (2:24-CV-00644-JRG and 2:24-CV-00645-JRG) into a single case under Case No. 2:25-cv-00465-JRG.
    Industrial 3D printing OEM Stratasys filed the request late last month. According to an official court document, Shenzhen-based Bambu Lab did not oppose the motion. Stratasys argued that this non-opposition amounted to the defendants waiving their right to challenge the request under 35 U.S.C. § 299(a), the U.S. patent statute governing joinder of accused infringers.
    On June 2, the U.S. District Court for the Eastern District of Texas, Marshall Division, ordered Bambu Lab to confirm in writing whether it agreed to the proposed case consolidation. The court took this step out of an “abundance of caution” to ensure both parties consented to the procedure before moving forward.
    Bambu Lab submitted its response on June 12, agreeing to the consolidation. The company, along with co-defendants Shenzhen Tuozhu Technology Co., Ltd., Shanghai Lunkuo Technology Co., Ltd., and Tuozhu Technology Limited, waived its rights under 35 U.S.C. § 299(a). The court will now decide whether to merge the cases.
    This followed U.S. District Judge Rodney Gilstrap’s decision last month to deny Bambu Lab’s motion to dismiss the lawsuits. 
    The Chinese desktop 3D printer manufacturer filed the motion in February 2025, arguing the cases were invalid because its US-based subsidiary, Bambu Lab USA, was not named in the original litigation. However, it agreed that the lawsuit could continue in the Austin division of the Western District of Texas, where a parallel case was filed last year. 
    Judge Gilstrap denied the motion, ruling that the cases properly target the named defendants. He concluded that Bambu Lab USA isn’t essential to the dispute, and that any misnaming should be addressed in summary judgment, not dismissal.       
    A Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing Industry.
    Another twist in the Stratasys v. Bambu Lab lawsuit 
    Stratasys filed the two lawsuits against Bambu Lab in the Eastern District of Texas, Marshall Division, in August 2024. The company claims that Bambu Lab’s X1C, X1E, P1S, P1P, A1, and A1 mini 3D printers violate ten of its patents. These patents cover common 3D printing features, including purge towers, heated build plates, tool head force detection, and networking capabilities.
    Stratasys has requested a jury trial. It is seeking a ruling that Bambu Lab infringed its patents, along with financial damages and an injunction to stop Bambu from selling the allegedly infringing 3D printers.
    Last October, Stratasys dropped charges against two of the originally named defendants in the dispute. Court documents showed that Beijing Tiertime Technology Co., Ltd. and Beijing Yinhua Laser Rapid Prototyping and Mould Technology Co., Ltd. were removed. Both defendants represent the company Tiertime, China’s first 3D printer manufacturer. The District Court accepted the dismissal, with all claims dropped without prejudice.
    It’s unclear why Stratasys named Beijing-based Tiertime as a defendant in the first place, given the lack of an obvious connection to Bambu Lab. 
    Tiertime and Stratasys have a history of legal disputes over patent issues. In 2013, Stratasys sued Afinia, Tiertime’s U.S. distributor and partner, for patent infringement. Afinia responded by suing uCRobotics, the Chinese distributor of MakerBot 3D printers, also alleging patent violations. Stratasys acquired MakerBot in June 2013. The company later merged with Ultimaker in 2022.
    In February 2025, Bambu Lab filed a motion to dismiss the original lawsuits. The company argued that Stratasys’ claims, focused on the sale, importation, and distribution of 3D printers in the United States, do not apply to the Shenzhen-based parent company. Bambu Lab contended that the allegations concern its American subsidiary, Bambu Lab USA, which was not named in the complaint filed in the Eastern District of Texas.
    The motion argued that the case was invalid under Federal Rule of Civil Procedure 19, contending that any party considered a “primary participant” in the allegations must be included as a defendant.
    The court denied the motion on May 29, 2025. In the ruling, Judge Gilstrap explained that Stratasys’ allegations focus on the actions of the named defendants, not Bambu Lab USA. As a result, the official court document called Bambu Lab’s argument “unavailing.” Additionally, the Judge stated that, since Bambu Lab USA and Bambu Lab are both owned by Shenzhen Tuozhu, “the interest of these two entities align,” meaning the original cases are valid.  
    In the official court document, Judge Gilstrap emphasized that Stratasys can win or lose the lawsuits based solely on the actions of the current defendants, regardless of Bambu Lab USA’s involvement. He added that any potential risk to Bambu Lab USA’s business is too vague or hypothetical to justify making it a required party.
    Finally, the court noted that even if Stratasys named the wrong defendant, this does not justify dismissal under Rule 12(b)(7). Instead, the judge stated it would be more appropriate for the defendants to raise that argument in a motion for summary judgment.
    The Bambu Lab X1C 3D printer. Image via Bambu Lab.
    3D printing patent battles 
    The 3D printing industry has seen its fair share of patent infringement disputes over recent months. In May 2025, 3D printer hotend developer Slice Engineering reached an agreement with Creality over a patent non-infringement lawsuit. 
    The Chinese 3D printer OEM filed the lawsuit in July 2024 in the U.S. District Court for the Northern District of Florida, Gainesville Division. The company claimed that Slice Engineering had falsely accused it of infringing two hotend patents, U.S. Patent Nos. 10,875,244 and 11,660,810. These cover mechanical and thermal features of Slice’s Mosquito 3D printer hotend. Creality requested a jury trial and sought a ruling confirming it had not infringed either patent.
    Court documents show that Slice Engineering filed a countersuit in December 2024. The Gainesville-based company maintained that Creality “has infringed and continues to infringe” on both patents. In the filing, the company also denied allegations that it had harassed Creality’s partners, distributors, and customers, and claimed that Creality had refused to negotiate a resolution.
    The Creality v. Slice Engineering lawsuit has since been dropped following a mutual resolution. Court documents show that both parties have permanently dismissed all claims and counterclaims, agreeing to cover their own legal fees and costs. 
    In other news, large-format resin 3D printer manufacturer Intrepid Automation sued 3D Systems over alleged patent infringement. The lawsuit, filed in February 2025, accused 3D Systems of using patented technology in its PSLA 270 industrial resin 3D printer. The filing called the PSLA 270 a “blatant knock off” of Intrepid’s DLP multi-projection “Range” 3D printer.  
    San Diego-based Intrepid Automation called this alleged infringement the “latest chapter of 3DS’s brazen, anticompetitive scheme to drive a smaller competitor with more advanced technology out of the marketplace.” The lawsuit also accused 3D Systems of corporate espionage, claiming one of its employees stole confidential trade secrets that were later used to develop the PSLA 270 printer.
    3D Systems denied the allegations and filed a motion to dismiss the case. The company called the lawsuit “a desperate attempt” by Intrepid to distract from its own alleged theft of 3D Systems’ trade secrets.
    Who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news. You can also follow us on LinkedIn and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content. Featured image shows a Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing Industry.
  • Fox News AI Newsletter: Hollywood studios sue 'bottomless pit of plagiarism'

    The Minions pose during the world premiere of the film "Despicable Me 4" in New York City, June 9, 2024. (REUTERS/Kena Betancur)
    Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.
    IN TODAY’S NEWSLETTER:
    - Major Hollywood studios sue AI company over copyright infringement in landmark move
    - Meta's Zuckerberg aiming to dominate AI race with recruiting push for new ‘superintelligence’ team: report
    - OpenAI says this state will play central role in artificial intelligence development
    The website of Midjourney, an artificial intelligence (AI) capable of creating AI art, is seen on a smartphone on April 3, 2023, in Berlin, Germany. (Thomas Trutschel/Photothek via Getty Images)
    'PIRACY IS PIRACY': Two major Hollywood studios are suing Midjourney, a popular AI image generator, over its use and distribution of intellectual property.
    AI RACE: Meta CEO Mark Zuckerberg is reportedly building a team of experts to develop artificial general intelligence (AGI) that can meet or exceed human capabilities.
    TECH HUB: New York is poised to play a central role in the development of artificial intelligence (AI), OpenAI executives told key business and civic leaders on Tuesday.
    Attendees watch a presentation during an event on the Apple campus in Cupertino, Calif., Monday, June 9, 2025. (AP Photo/Jeff Chiu)
    APPLE FALLING BEHIND: Apple’s annual Worldwide Developers Conference (WWDC) kicked off on Monday and runs through Friday. But the Cupertino-based company is not making us wait until the end. The major announcements have already been made, and there are quite a few. The headliners are new software versions for Macs, iPhones, iPads and Vision.
    FROM COAL TO CODE: This week, Amazon announced a billion-dollar investment in artificial intelligence infrastructure in the form of new data centers, the largest in the commonwealth's history, according to the eCommerce giant.
    DIGITAL DEFENSE: A growing number of fire departments across the country are turning to artificial intelligence to help detect and respond to wildfires more quickly.
    Rep. Darin LaHood, R-Ill., leaves the House Republican Conference meeting at the Capitol Hill Club in Washington on Tuesday, May 17, 2022.
    SHIELD FROM BEIJING: Rep. Darin LaHood, R-Ill., is introducing a new bill Thursday imploring the National Security Administration to develop an "AI security playbook" to stay ahead of threats from China and other foreign adversaries.
    ROBOT RALLY PARTNER: Finding a reliable tennis partner who matches your energy and skill level can be a challenge. Now, with Tenniix, an artificial intelligence-powered tennis robot from T-Apex, players of all abilities have a new way to practice and improve.
    DIGITAL DANGER ZONE: Scam ads on Facebook have evolved beyond the days of misspelled headlines and sketchy product photos. Today, many are powered by artificial intelligence, fueled by deepfake technology and distributed at scale through Facebook’s own ad system.
    Chipotle Mexican Grill logo on a brick building in Fairfield, Ohio. Chipotle is a chain of fast casual restaurants in the United States and Canada that specializes in burritos and tacos.
    'EXPONENTIAL RATE': Artificial intelligence is helping Chipotle rapidly grow its footprint, according to CEO Scott Boatwright.
    AI TAKEOVER THREAT: The hottest topic nowadays revolves around artificial intelligence and its potential to rapidly and imminently transform the world we live in — economically, socially, politically and even defensively. Regardless of whether you believe that the technology will be able to develop superintelligence and lead a metamorphosis of everything, the possibility that may come to fruition is a catalyst for more far-leftist control.
    Stay up to date on the latest AI technology advancements and learn about the challenges and opportunities AI presents now and for the future with Fox News. This article was written by Fox News staff.
    #fox #news #newsletter #hollywood #studios
    Fox News AI Newsletter: Hollywood studios sue 'bottomless pit of plagiarism'
    WWW.FOXNEWS.COM
    Fox News AI Newsletter: Hollywood studios sue 'bottomless pit of plagiarism'
    The Minions pose during the world premiere of the film "Despicable Me 4" in New York City, June 9, 2024. (REUTERS/Kena Betancur)
    Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.
    IN TODAY’S NEWSLETTER:
    - Major Hollywood studios sue AI company over copyright infringement in landmark move
    - Meta's Zuckerberg aiming to dominate AI race with recruiting push for new ‘superintelligence’ team: report
    - OpenAI says this state will play central role in artificial intelligence development
    The website of Midjourney, an artificial intelligence (AI) capable of creating AI art, is seen on a smartphone on April 3, 2023, in Berlin, Germany. (Thomas Trutschel/Photothek via Getty Images)
    'PIRACY IS PIRACY': Two major Hollywood studios are suing Midjourney, a popular AI image generator, over its use and distribution of intellectual property.
    AI RACE: Meta CEO Mark Zuckerberg is reportedly building a team of experts to develop artificial general intelligence (AGI) that can meet or exceed human capabilities.
    TECH HUB: New York is poised to play a central role in the development of artificial intelligence (AI), OpenAI executives told key business and civic leaders on Tuesday.
    Attendees watch a presentation during an event on the Apple campus in Cupertino, Calif., Monday, June 9, 2025. (AP Photo/Jeff Chiu)
    APPLE FALLING BEHIND: Apple’s annual Worldwide Developers Conference (WWDC) kicked off on Monday and runs through Friday. But the Cupertino-based company is not making us wait until the end. The major announcements have already been made, and there are quite a few. The headliners are new software versions for Macs, iPhones, iPads and Vision.
    FROM COAL TO CODE: This week, Amazon announced a $20 billion investment in artificial intelligence infrastructure in the form of new data centers, the largest in the commonwealth's history, according to the eCommerce giant.
    DIGITAL DEFENSE: A growing number of fire departments across the country are turning to artificial intelligence to help detect and respond to wildfires more quickly.
    Rep. Darin LaHood, R-Ill., leaves the House Republican Conference meeting at the Capitol Hill Club in Washington on Tuesday, May 17, 2022. (Bill Clark/CQ-Roll Call, Inc via Getty Images)
    SHIELD FROM BEIJING: Rep. Darin LaHood, R-Ill., is introducing a new bill Thursday imploring the National Security Agency (NSA) to develop an "AI security playbook" to stay ahead of threats from China and other foreign adversaries.
    ROBOT RALLY PARTNER: Finding a reliable tennis partner who matches your energy and skill level can be a challenge. Now, with Tenniix, an artificial intelligence-powered tennis robot from T-Apex, players of all abilities have a new way to practice and improve.
    DIGITAL DANGER ZONE: Scam ads on Facebook have evolved beyond the days of misspelled headlines and sketchy product photos. Today, many are powered by artificial intelligence, fueled by deepfake technology and distributed at scale through Facebook’s own ad system.
    Chipotle Mexican Grill logo on a brick building in Fairfield, Ohio. (iStock)
    'EXPONENTIAL RATE': Artificial intelligence is helping Chipotle rapidly grow its footprint, according to CEO Scott Boatwright.
    AI TAKEOVER THREAT: The hottest topic nowadays revolves around Artificial Intelligence (AI) and its potential to rapidly and imminently transform the world we live in — economically, socially, politically and even defensively.
    Regardless of whether you believe that the technology will be able to develop superintelligence and lead a metamorphosis of everything, the possibility that it may come to fruition is a catalyst for more far-leftist control.
    Stay up to date on the latest AI technology advancements and learn about the challenges and opportunities AI presents now and for the future with Fox News. This article was written by Fox News staff.
  • As AI faces court challenges from Disney and Universal, legal battles are shaping the industry's future | Opinion

    Silicon advances and design innovations do still push us forward – but the future landscape of the industry is also being sculpted in courtrooms and parliaments

    Image credit: Disney / Epic Games

    Opinion

    by Rob Fahey
    Contributing Editor

    Published on June 13, 2025

    In some regards, the past couple of weeks have felt rather reassuring.
    We've just seen a hugely successful launch for a new Nintendo console, replete with long queues for midnight sales events. Over the past few days, the various summer events and showcases that have sprouted amongst the scattered bones of E3 have generated waves of interest and hype for a host of new games.
    It all feels like old times. It's enough to make you imagine that while change is the only constant, at least we're facing change that's fairly well understood: change in the form of faster, cheaper silicon, or bigger, more ambitious games.
    If only the winds that blow through this industry all came from such well-defined points on the compass. Nestled in amongst the week's headlines, though, was something that's likely to have profound but much harder to understand impacts on this industry and many others over the coming years – a lawsuit being brought by Disney and NBC Universal against Midjourney, operators of the eponymous generative AI image creation tool.
    In some regards, the lawsuit looks fairly straightforward; the arguments made and considered in reaching its outcome, though, may have a profound impact on both the ability of creatives and media companies (including game studios and publishers) to protect their IP rights from a very new kind of threat, and the ways in which a promising but highly controversial and risky new set of development and creative tools can be used commercially.
    A more likely tack on Midjourney's side will be the argument that they are not responsible for what their customers create with the tool
    I say the lawsuit looks straightforward from some angles, but honestly overall it looks fairly open and shut – the media giants accuse Midjourney of replicating their copyrighted characters and material, and of essentially building a machine for churning out limitless copyright violations.
    The evidence submitted includes screenshot after screenshot of Midjourney generating pages of images of famous copyrighted and trademarked characters ranging from Yoda to Homer Simpson, so "no we didn't" isn't going to be much of a defence strategy here.
    A more likely tack on Midjourney's side will be the argument that they are not responsible for what their customers create with the tool – you don't sue the manufacturers of oil paints or canvases when artists use them to paint something copyright-infringing, nor does Microsoft get sued when someone writes something libellous in Word, and Midjourney may try to argue that their software belongs in that tool category, with users alone being ultimately responsible for how they use them.

    If that argument prevails and survives appeals and challenges, it would be a major triumph for the nascent generative AI industry and a hugely damaging blow to IP holders and creatives, since it would seriously undermine their argument that AI companies shouldn't be able to incorporate copyrighted material into training data sets without licensing or compensation.
    The reason Disney and NBCU are going after Midjourney specifically seems to be partially down to Midjourney being especially reticent to negotiate with them about licensing fees and prompt restrictions; other generative AI firms have started talking, at least, about paying for content licenses for training data, and have imposed various limitations on their software to prevent the most egregious and obvious forms of copyright violation (at least for famous characters belonging to rich companies; if you're an individual or a smaller company, it's entirely the Wild West out there as regards your IP rights).
    In the process, though, they're essentially risking a court showdown over a set of not-quite-clear legal questions at the heart of this dispute, and if Midjourney were to prevail in that argument, other AI companies would likely back off from engaging with IP holders on this topic.
    To be clear, though, it seems highly unlikely that Midjourney will win that argument, at least not in the medium to long term. Yet depending on how this case moves forward, losing the argument could have equally dramatic consequences – especially if the courts find themselves compelled to consider the question of how, exactly, a generative AI system reproduces a copyrighted character with such precision without storing copyright-infringing data in some manner.
    The 2020s are turning out to be the decade in which many key regulatory issues come to a head all at once
    AI advocates have been trying to handwave around this notion from the outset, but at some point a court is going to have to sit down and confront the fact that the precision with which these systems can replicate copyrighted characters, scenes, and other materials requires that they must have stored that infringing material in some form.
    That it's stored as a scattered mesh of probabilities across the vertices of a high-dimensional vector array, rather than a straightforward, monolithic media file, is clearly important but may ultimately be considered moot. If the data is in the system and can be replicated on request, how that differs from Napster or The Pirate Bay is arguably just a matter of technical obfuscation.
    Not having to defend that technical argument in court thus far has been a huge boon to the generative AI field; if it is knocked over in that venue, it will have knock-on effects on every company in the sector and on every business that uses their products.
    Nobody can be quite sure which of the various rocks and pebbles being kicked on this slope is going to set off the landslide, but there seems to be an increasing consensus that a legal and regulatory reckoning is coming for generative AI.
    Consequently, a lot of what's happening in that market right now has the feel of companies desperately trying to establish products and lock in revenue streams before that happens, because it'll be harder to regulate a technology that's genuinely integrated into the world's economic systems than it is to impose limits on one that's currently only clocking up relatively paltry sales and revenues.

    Keeping an eye on this is crucial for any industry that's started experimenting with AI in its workflows – none more than a creative industry like video games, where various forms of AI usage have been posited, although the enthusiasm and buzz so far massively outweighs any tangible benefits from the technology.
    Regardless of what happens in legal and regulatory contexts, AI is already a double-edged sword for any creative industry.
    Used judiciously, it might help to speed up development processes and reduce overheads. Applied in a slapdash or thoughtless manner, it can and will end up wreaking havoc on development timelines, filling up storefronts with endless waves of vaguely copyright-infringing slop, and potentially making creative firms, from the industry's biggest companies to its smallest indie developers, into victims of impossibly large-scale copyright infringement rather than beneficiaries of a new wave of technology-fuelled productivity.
    The legal threat now hanging over the sector isn't new, merely amplified. We've known for a long time that AI-generated artwork, code, and text has significant problems from the perspective of intellectual property rights (you can infringe someone else's copyright with it, but generally can't impose your own copyright on its creations – opening careless companies up to a risk of having key assets in their game being technically public domain and impossible to protect).
    Even if you're not using AI yourself, however – even if you're vehemently opposed to it on moral and ethical grounds (which is entirely valid given the highly dubious land-grab these companies have done for their training data) – the Midjourney judgement and its fallout may well impact the creative work you produce and how it ends up being used and abused by these products in future.
    This all has huge ramifications for the games business and will shape everything from how games are created to how IP can be protected for many years to come – a wind of change that's very different and vastly more unpredictable than those we're accustomed to. It's a reminder of just how much of the industry's future is currently being shaped not in development studios and semiconductor labs, but rather in courtrooms and parliamentary committees.
    The ways in which generative AI can be used and how copyright can persist in the face of it will be fundamentally shaped in courts and parliaments, but it's far from the only crucially important topic being hashed out in those venues.
    The ongoing legal turmoil over the opening up of mobile app ecosystems, too, will have huge impacts on the games industry. Meanwhile, the debates over loot boxes, gambling, and various consumer protection aspects related to free-to-play models continue to rumble on in the background.
    Because the industry moves fast while governments move slow, it's easy to forget that this is still an active topic as far as governments are concerned, and hammers may come down at any time.
    Regulation by governments, whether through the passage of new legislation or the interpretation of existing laws in the courts, has always loomed in the background of any major industry, especially one with strong cultural relevance. The games industry is no stranger to that being part of the background heartbeat of the business.
    The 2020s, however, are turning out to be the decade in which many key regulatory issues come to a head all at once, whether it's AI and copyright, app stores and walled gardens, or loot boxes and IAP-based business models.
    Rulings on those topics in various different global markets will create a complex new landscape that will shape the winds that blow through the business, and how things look in the 2030s and beyond will be fundamentally impacted by those decisions.
    WWW.GAMESINDUSTRY.BIZ
    As AI faces court challenges from Disney and Universal, legal battles are shaping the industry's future | Opinion
    As AI faces court challenges from Disney and Universal, legal battles are shaping the industry's future | Opinion Silicon advances and design innovations do still push us forward – but the future landscape of the industry is also being sculpted in courtrooms and parliaments Image credit: Disney / Epic Games Opinion by Rob Fahey Contributing Editor Published on June 13, 2025 In some regards, the past couple of weeks have felt rather reassuring. We've just seen a hugely successful launch for a new Nintendo console, replete with long queues for midnight sales events. Over the next few days, the various summer events and showcases that have sprouted amongst the scattered bones of E3 generated waves of interest and hype for a host of new games. It all feels like old times. It's enough to make you imagine that while change is the only constant, at least it's we're facing change that's fairly well understood, change in the form of faster, cheaper silicon, or bigger, more ambitious games. If only the winds that blow through this industry all came from such well-defined points on the compass. Nestled in amongst the week's headlines, though, was something that's likely to have profound but much harder to understand impacts on this industry and many others over the coming years – a lawsuit being brought by Disney and NBC Universal against Midjourney, operators of the eponymous generative AI image creation tool. In some regards, the lawsuit looks fairly straightforward; the arguments made and considered in reaching its outcome, though, may have a profound impact on both the ability of creatives and media companies (including game studios and publishers) to protect their IP rights from a very new kind of threat, and the ways in which a promising but highly controversial and risky new set of development and creative tools can be used commercially. 
A more likely tack on Midjourney's side will be the argument that they are not responsible for what their customers create with the tool I say the lawsuit looks straightforward from some angles, but honestly overall it looks fairly open and shut – the media giants accuse Midjourney of replicating their copyrighted characters and material, and of essentially building a machine for churning out limitless copyright violations. The evidence submitted includes screenshot after screenshot of Midjourney generating pages of images of famous copyrighted and trademarked characters ranging from Yoda to Homer Simpson, so "no we didn't" isn't going to be much of a defence strategy here. A more likely tack on Midjourney's side will be the argument that they are not responsible for what their customers create with the tool – you don't sue the manufacturers of oil paints or canvases when artists use them to paint something copyright-infringing, nor does Microsoft get sued when someone writes something libellous in Word, and Midjourney may try to argue that their software belongs in that tool category, with users alone being ultimately responsible for how they use them. If that argument prevails and survives appeals and challenges, it would be a major triumph for the nascent generative AI industry and a hugely damaging blow to IP holders and creatives, since it would seriously undermine their argument that AI companies shouldn't be able to include copyrighted material into training data sets without licensing or compensation. 
The reason Disney and NBCU are going after Midjourney specifically seems to be partially down to Midjourney being especially reticent to negotiate with them about licensing fees and prompt restrictions; other generative AI firms have started talking, at least, about paying for content licenses for training data, and have imposed various limitations on their software to prevent the most egregious and obvious forms of copyright violation (at least for famous characters belonging to rich companies; if you're an individual or a smaller company, it's entirely the Wild West out there as regards your IP rights). In the process, though, they're essentially risking a court showdown over a set of not-quite-clear legal questions at the heart of this dispute, and if Midjourney were to prevail in that argument, other AI companies would likely back off from engaging with IP holders on this topic. To be clear, though, it seems highly unlikely that Midjourney will win that argument, at least not in the medium to long term. Yet depending on how this case moves forward, losing the argument could have equally dramatic consequences – especially if the courts find themselves compelled to consider the question of how, exactly, a generative AI system reproduces a copyrighted character with such precision without storing copyright-infringing data in some manner. The 2020s are turning out to be the decade in which many key regulatory issues come to a head all at once AI advocates have been trying to handwave around this notion from the outset, but at some point a court is going to have to sit down and confront the fact that the precision with which these systems can replicate copyrighted characters, scenes, and other materials requires that they must have stored that infringing material in some form. 
That it's stored as a scattered mesh of probabilities across the vertices of a high-dimensional vector array, rather than a straightforward, monolithic media file, is clearly important but may ultimately be considered moot. If the data is in the system and can be replicated on request, how that differs from Napster or The Pirate Bay is arguably just a matter of technical obfuscation. Not having to defend that technical argument in court thus far has been a huge boon to the generative AI field; if it is knocked over in that venue, it will have knock-on effects on every company in the sector and on every business that uses their products. Nobody can be quite sure which of the various rocks and pebbles being kicked on this slope is going to set off the landslide, but there seems to be an increasing consensus that a legal and regulatory reckoning is coming for generative AI. Consequently, a lot of what's happening in that market right now has the feel of companies desperately trying to establish products and lock in revenue streams before that happens, because it'll be harder to regulate a technology that's genuinely integrated into the world's economic systems than it is to impose limits on one that's currently only clocking up relatively paltry sales and revenues. Keeping an eye on this is crucial for any industry that's started experimenting with AI in its workflows – none more than a creative industry like video games, where various forms of AI usage have been posited, although the enthusiasm and buzz so far massively outweighs any tangible benefits from the technology. Regardless of what happens in legal and regulatory contexts, AI is already a double-edged sword for any creative industry. Used judiciously, it might help to speed up development processes and reduce overheads. 
Applied in a slapdash or thoughtless manner, it can and will end up wreaking havoc on development timelines, filling up storefronts with endless waves of vaguely-copyright-infringing slop, and potentially making creative firms, from the industry's biggest companies to its smallest indie developers, into victims of impossibly large-scale copyright infringement rather than beneficiaries of a new wave of technology-fuelled productivity. The legal threat now hanging over the sector isn't new, merely amplified. We've known for a long time that AI-generated artwork, code, and text has significant problems from the perspective of intellectual property rights (you can infringe someone else's copyright with it, but generally can't impose your own copyright on its creations – opening careless companies up to a risk of having key assets in their game being technically public domain and impossible to protect). Even if you're not using AI yourself, however – even if you're vehemently opposed to it on moral and ethical grounds (which is entirely valid given the highly dubious land-grab these companies have done for their training data) – the Midjourney judgement and its fallout may well impact the creative work you produce yourself and how it ends up being used and abused by these products in future. This all has huge ramifications for the games business and will shape everything from how games are created to how IP can be protected for many years to come – a wind of change that's very different and vastly more unpredictable than those we're accustomed to. It's a reminder of just how much of the industry's future is currently being shaped not in development studios and semiconductor labs, but rather in courtrooms and parliamentary committees. The ways in which generative AI can be used and how copyright can persist in the face of it will be fundamentally shaped in courts and parliaments, but that's far from the only crucially important topic being hashed out in those venues. 
The ongoing legal turmoil over the opening up of mobile app ecosystems, too, will have huge impacts on the games industry. Meanwhile, the debates over loot boxes, gambling, and various consumer protection aspects related to free-to-play models continue to rumble on in the background. Because the industry moves fast while governments move slow, it's easy to forget that that's still an active topic as far as governments are concerned, and hammers may come down at any time. Regulation by governments, whether through the passage of new legislation or the interpretation of existing laws in the courts, has always loomed in the background of any major industry, especially one with strong cultural relevance. The games industry is no stranger to that being part of the background heartbeat of the business. The 2020s, however, are turning out to be the decade in which many key regulatory issues come to a head all at once, whether it's AI and copyright, app stores and walled gardens, or loot boxes and IAP-based business models. Rulings on those topics in various different global markets will create a complex new landscape that will shape the winds that blow through the business, and how things look in the 2030s and beyond will be fundamentally impacted by those decisions.
  • Why do lawyers keep using ChatGPT?

    Every few weeks, it seems like there’s a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, “bogus AI-generated research.” The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help them with legal research (or worse, writing), the LLM hallucinates cases that don’t exist, and the lawyer is none the wiser until the judge or opposing counsel points out their mistake. In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven’t they stopped? The answer mostly comes down to time crunches, and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw have AI integrations now. For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren’t necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don’t understand exactly what LLMs are or how they work. One attorney who was sanctioned in 2023 said he thought ChatGPT was a “super search engine.” It took submitting a filing with fake citations to reveal that it’s more like a random-phrase generator — one that could give you either correct information or convincingly phrased nonsense. Andrew Perlman, the dean of Suffolk University Law School, argues many lawyers are using AI tools without incident, and the ones who get caught with fake citations are outliers. “I think that what we’re seeing now — although these problems of hallucination are real, and lawyers have to take it very seriously and be careful about it — doesn’t mean that these tools don’t have enormous possible benefits and use cases for the delivery of legal services,” Perlman said. 
Legal databases and research systems like Westlaw are incorporating AI services. In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they’ve used AI in the past, and 12 percent said they use it regularly. Respondents said they use AI to write summaries of case law and to research “case law, statutes, forms or sample language for orders.” The attorneys surveyed by Thomson Reuters see it as a time-saving tool, and half of those surveyed said “exploring the potential for implementing AI” at work is their highest priority. “The role of a good lawyer is as a ‘trusted advisor’ not as a producer of documents,” one respondent said. But as plenty of recent examples have shown, the documents produced by AI aren’t always accurate, and in some cases aren’t real at all. In one recent high-profile case, lawyers for journalist Tim Burke, who was arrested for publishing unaired Fox News footage in 2024, submitted a motion to dismiss the case against him on First Amendment grounds. After discovering that the filing included “significant misrepresentations and misquotations of supposedly pertinent case law and history,” Judge Kathryn Kimball Mizelle, of Florida’s middle district, ordered the motion to be stricken from the case record. Mizelle found nine hallucinations in the document, according to the Tampa Bay Times. Mizelle ultimately let Burke’s lawyers, Mark Rasch and Michael Maddux, submit a new motion. In a separate filing explaining the mistakes, Rasch wrote that he “assumes sole and exclusive responsibility for these errors.” Rasch said he used the “deep research” feature on ChatGPT pro, which The Verge has previously tested with mixed results, as well as Westlaw’s AI feature. Rasch isn’t alone. Lawyers representing Anthropic recently admitted to using the company’s Claude AI to help write an expert witness declaration submitted as part of the copyright infringement lawsuit brought against Anthropic by music publishers. 
That filing included a citation with an “inaccurate title and inaccurate authors.” Last December, misinformation expert Jeff Hancock admitted he used ChatGPT to help organize citations in a declaration he submitted in support of a Minnesota law regulating deepfake use. Hancock’s filing included “two citation errors, popularly referred to as ‘hallucinations,’” and incorrectly listed authors for another citation. These documents do, in fact, matter — at least in the eyes of judges. In a recent case, a California judge presiding over a case against State Farm was initially swayed by arguments in a brief, only to find that the case law cited was completely made up. “I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist,” Judge Michael Wilner wrote. Perlman said there are several less risky ways lawyers use generative AI in their work, including finding information in large tranches of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or possible opposing views. “I think in almost every task, there are ways in which generative AI can be useful — not a substitute for lawyers’ judgment, not a substitute for the expertise that lawyers bring to the table, but in order to supplement what lawyers do and enable them to do their work better, faster, and cheaper,” Perlman said. But like anyone using AI tools, lawyers who rely on them to help with legal research and writing need to be careful to check the work they produce, Perlman said. Part of the problem is that attorneys often find themselves short on time — an issue he says existed before LLMs came into the picture. “Even before the emergence of generative AI, lawyers would file documents with citations that didn’t really address the issue that they claimed to be addressing,” Perlman said. “It was just a different kind of problem. 
Sometimes when lawyers are rushed, they insert citations, they don’t properly check them; they don’t really see if the case has been overturned or overruled.” (That said, the cases do at least typically exist.) Another, more insidious problem is the fact that attorneys — like others who use LLMs to help with research and writing — are too trusting of what AI produces. “I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be so well crafted,” Perlman said. Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT as a junior-level associate. He’s also used ChatGPT to help write legislation. In 2024, he included AI text in part of a bill on deepfakes, having the LLM provide the “baseline definition” of what deepfakes are and then “I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin told The Guardian at the time. Kolodin said he “may have” discussed his use of ChatGPT with the bill’s main Democratic cosponsor but otherwise wanted it to be “an Easter egg” in the bill. The bill passed into law. Kolodin — who was sanctioned by the Arizona State Bar in 2020 for his involvement in lawsuits challenging the result of the 2020 election — has also used ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he just checks the citations to make sure they’re real. “You don’t just typically send out a junior associate’s work product without checking the citations,” said Kolodin. “It’s not just machines that hallucinate; a junior associate could read the case wrong, it doesn’t really stand for the proposition cited anyway, whatever. 
You still have to cite-check it, but you have to do that with an associate anyway, unless they were pretty experienced.” Kolodin said he uses both ChatGPT’s pro “deep research” tool and the LexisNexis AI tool. Like Westlaw, LexisNexis is a legal research tool primarily used by attorneys. Kolodin said that in his experience, it has a higher hallucination rate than ChatGPT, which he says has “gone down substantially over the past year.” AI use among lawyers has become so prevalent that in 2024, the American Bar Association issued its first guidance on attorneys’ use of LLMs and other AI tools. Lawyers who use AI tools “have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature” of generative AI, the opinion reads. The guidance advises lawyers to “acquire a general understanding of the benefits and risks of the GAI tools” they use — or, in other words, to not assume that an LLM is a “super search engine.” Attorneys should also weigh the confidentiality risks of inputting information relating to their cases into LLMs and consider whether to tell their clients about their use of LLMs and other AI tools, it states. Perlman is bullish on lawyers’ use of AI. “I do think that generative AI is going to be the most impactful technology the legal profession has ever seen and that lawyers will be expected to use these tools in the future,” he said. “I think that at some point, we will stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don’t.” Others, including one of the judges who sanctioned lawyers for submitting a filing full of AI-generated hallucinations, are more skeptical. “Even with recent advances,” Wilner wrote, “no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material.”
  • Amazon and The New York Times Announce an A.I. Licensing Deal

    In 2023, The Times sued OpenAI and Microsoft for copyright infringement. Now its editorial content will appear across Amazon platforms.
  • Telegram and xAI forge Grok AI deal

    Telegram has forged a deal with Elon Musk’s xAI to weave Grok AI into the fabric of the encrypted messaging platform.

    This isn’t just a friendly collaboration; xAI is putting serious money on the table – a cool $300 million, a mix of hard cash and equity. And for Telegram, they’ll pocket 50% of any subscription money Grok pulls in through their app.

    This leap into the world of AI couldn’t come at a more interesting time for Telegram. While CEO Pavel Durov is wrestling with some pretty serious legal headaches, and governments in certain corners of the globe are giving the platform the side-eye, the company’s bank balance is looking healthy.

    In fact, Telegram is gearing up to raise at least $1.5 billion by issuing five-year bonds. With a rather tempting 9% yield, these bonds are also designed to help buy back some of the debt from their 2021 bond issue. It seems big-name investors like BlackRock, Mubadala, and Citadel are still keen, suggesting they see a bright future for the messaging service.

    And the numbers do tell a story of a significant comeback. Cast your mind back to 2023, and Telegram was nursing a $173 million loss. Fast forward to 2024, and they’d flipped that on its head, banking a $540 million profit from $1.4 billion in revenue. They’re not stopping there either, with optimistic forecasts for 2025 pointing to profits north of $700 million from a $2 billion revenue pot.

    So, what will Grok actually do for Telegram users? The hope is that xAI’s conversational AI will bring a whole new layer of smarts to the platform. This includes supercharged information searching, help with drafting messages, and all sorts of automated tricks. It’s a play that could help Telegram unlock fresh monetisation opportunities and compete with Meta bringing Llama-powered smarts to WhatsApp.

    However, Telegram’s integration of AI is all happening against a pretty dramatic backdrop. Pavel Durov, the man at the company’s helm, has found himself in hot water.

    Back in August 2024, Durov was arrested in France and later indicted on a dozen charges. These aren’t minor infringements either; they include serious accusations like complicity in spreading child exploitation material and drug trafficking, all linked to claims that Telegram wasn’t doing enough to police its content.

    Durov was initially stuck in France, but by March 2025 he was given the nod to leave the country, at least for a while. What happens next with these legal battles is anyone’s guess, but it’s a massive cloud hanging over the company.

    And it’s not just personal legal woes for Durov. Entire governments are starting to lose patience. Vietnam, for instance, has had its Ministry of Science and Technology order internet providers to pull the plug on Telegram. Their reasoning? They say the platform has become a hotbed for crime. Vietnamese officials reckon 68% of Telegram channels and groups in the country are up to no good, involved in everything from fraud to drug deals. Telegram, for its part, said it was taken aback by the move, insisting it had always tried to play ball with legal requests from Vietnam.

    Back to the xAI partnership: it’s a clear signal of Telegram looking to the future and seeing AI as a core pillar of it. The money involved and the promise of shared revenues show just how much potential both sides see in getting Grok into the hands of Telegram’s millions of users.

    The next twelve months will be a real test for Telegram. Can the company innovate its way forward while also showing it can be a responsible player on the global stage?