What Types of Legal Liabilities Are Emerging From AI?
www.informationweek.com
Richard Pallardy, Freelance Writer
February 7, 2025
(Image: Tithi Luadthong via Alamy Stock)

Artificial intelligence technology is pervasive in the third decade of the 21st century. It manifests in nearly every product or service used in the Western world, and it will only become more entangled in our daily lives. As such, it has the potential to create extensive liability. Both the design of AI, which may -- intentionally or not -- be trained on private data and protected intellectual property, and its implementation, which may result in the provision of false or inaccurate information, can lead to claims against AI companies and the customers who use their technology as part of their operations.

Legislation specific to AI is scant, very new, and untested in the courts. Most current cases rely on common law -- contractual violations and breaches of intellectual property rights. Some may even resort to torts. The vast majority of these cases are still in progress, either in the early stages or on appeal.

They will likely be incredibly expensive for defendants, but costs are difficult to discern. Firms are unlikely to publicly disclose their rates, and judgments against defendants are too rare to allow for any generalizations. As new European legislation comes into force and additional legislation follows in the United States, the landscape will almost certainly change. For now, AI liability exists in a state of limbo.

Jorden Rutledge, an associate attorney with the Artificial Intelligence Industry Group at law firm Locke Lord, recently spoke with InformationWeek. Rutledge has represented tech companies and advised them on their deployment of AI tools. He discusses what is happening in the courts right now and how AI litigation will likely play out in the coming years.

Where are the US and the EU on AI liability in terms of legislation?

The EU is further along than the US.
The US has some proposals -- the NO FAKES Act [introduced in July 2024] -- but nothing has really gotten off the ground. The EU is slightly ahead, but there isn't anything really there yet. There has also been some discussion about revenge porn. States have started to get involved. Ultimately, it's going to have to be a federal issue. Hopefully the new administration can get to it.

Is AI liability largely a civil issue at this point? Have there been any cases of criminal liability?

It's been addressed civilly in terms of trade secret protections, copyright, and trademarks. Criminally, I haven't seen any cases yet. In the very near future, AI-generated porn and people cyberbullying through AI are going to be hot buttons for prosecutors. Prosecutors will have to take those cases. There are some barriers to creating those things with a lot of the AI out right now. Once those barriers are removed, I think those prosecutions will come into play. It would be helpful to have actual laws on this topic, as opposed to applying the common law to these novel scenarios.

What kind of laws are coming into play?

There's a lot of liability. If you ask plaintiff attorneys, there's a whole lot more liability than if you ask me. The laws relied on now are largely trade secret laws and copyright laws. Getty Images filed suit against Stability AI, alleging copyright violations. Common law and the right of publicity are going to come into play. The ubiquity of AI will create scenarios of liability in ways we can't imagine yet.

Where are litigators finding holes in those protections?

Right now, it's largely in the copyright context. The main fight there is going to be fair use. That gets into a complex tangle of what's transformative use and what's not. I suspect there are multiple cases going on right now, either dancing around the topic or directly addressing it. I expect that'll be decided on appeal.
Then probably, if there's a circuit split, the Supreme Court will have to sort it out.

The fair use argument is an AI company's strongest argument. As a practical matter, the people who have had their art used or scraped have a persuasive argument. Their stuff got taken. It was used. That just seems off to a lot of people.

Have we seen any cases involving the improper use of people's private data? How would that be proven?

I've heard rumblings of it. The problem will be the scraping of documents. The scraping used by AI companies in building their models has been a black box. They will fight to preserve that black box. Their argument will be, "You don't know what we scraped. We don't even know what we scraped."

How does improper data use even get discovered?

It's one of those things that is nearly impossible to find. If you're a plaintiff asking for discovery, you're going to get very frustrated, very fast. Imagine, for example, that I wrote a book. Someone wrote a summary of my book. If the AI company scrapes the summary and not my book, do I really have a copyright claim at that point? You can't know unless you know exactly what was ingested. When it's billions or trillions of pages of documents, I don't think you'll ever fully be able to determine that. It's going to be a discovery morass.

Does the AI black box -- the difficulty of tracing the actions of an AI program -- make it harder or easier to defend against liability claims?

It makes it easier to defend. They can say, "We can't tell you how it does what it does." Try to explain neural networks to a judge -- good luck to you.

How far is liability being traced back? Are companies that deploy AI technology from other providers indemnified by their contracts?

Some companies have indemnified their users in certain ways. It depends on the circumstances.
If someone created a defamatory picture of a public figure, that person could sue the creator and then also sue OpenAI for letting them do it. The argument is better against the individual. In part, it depends on how aggressive the plaintiff wants to be. There's always a strong chance that the owner of the AI -- the owner of the generative model, the black box -- can be liable as well. Plaintiffs will always want to get the owner involved in the case.

Have there been any notable tort claims in regard to AI technology?

Not that I've seen. I looked a little bit a few months ago and didn't see anything. Once AI starts getting meshed into apps and used more, I think that'll happen. I think the plaintiffs' bar will try to jump on that. I can imagine a lot of personal injury cases involving technology where the plaintiffs are going to want to know how things were created and whether they were done by an AI. That would probably help their cases.

How should companies go about structuring their contracts to limit liability?

Employment agreements can outline how to use AI. I would recommend that companies using AI to help workflows strongly consider how to protect them as trade secrets. As for using AI that would hurt someone else -- as in the electric vehicle context -- I don't think there's much you can do to limit your liability contractually.

Are we seeing any trends as to who is prevailing in AI liability cases?

No. It isn't really one way or the other. I think that trend will be found once we go up to appeals. That's going to take a little time. There are trial balloons. The courts have said some things on various motions. But the primary cases are being very heavily litigated. When things get heavily litigated, it takes a while. They have some of the best attorneys in the world helping them out.

Are there any cases that you're keeping an eye on?
Are there any trends you're paying attention to?

I'm keeping an eye on a few of the federal cases that have been filed against OpenAI. They're largely about trade secrets and copyright -- the ingestion portion of it. What we're waiting on is the output portion of litigation. What do we do with that? There's no national trend, and there's certainly no national precedent about how we're going to treat it. Hopefully within the next five years we'll have a much clearer view of the path ahead.

What are law firms charging to defend these liability cases?

They're all good firms. I'm sure they're working the cases very hard. I'm sure they're working long hours. There are a lot of filings in these cases.

Has there been any regulatory action regarding AI liability in the US?

Not that I've seen yet. That's in part because it's such a new technology. People don't know where those things fall -- whose jurisdiction it is.

How long do you think it will take for legislation to catch up to these issues?

I think the legal avenues will crystallize in around five years. I'm less optimistic about the legislative fix, but hopeful.

About the Author

Richard Pallardy is a freelance writer based in Chicago. He has written for such publications as Vice, Discover, Science Magazine, and the Encyclopedia Britannica.