
AI making up cases can get lawyers fired, scandalized law firm warns
Nauseatingly frightening: Law firm condemns careless AI use in court.

Ashley Belanger | Feb 19, 2025 1:06 pm

A Morgan and Morgan injury law firm ad is seen on a bus in New York City. Credit: NurPhoto / Contributor | NurPhoto

Morgan & Morgan, which bills itself as "America's largest injury law firm" that fights "for the people," learned the hard way this month that even one lawyer blindly citing AI-hallucinated case law can risk sullying the reputation of an entire nationwide firm.

In a letter shared in a court filing, Morgan & Morgan's chief transformation officer, Yath Ithayakumar, warned the firm's more than 1,000 attorneys that citing fake AI-generated cases in court filings could be cause for disciplinary action, including "termination."

"This is a serious issue," Ithayakumar wrote. "The integrity of your legal work and reputation depend on it."

Morgan & Morgan's AI troubles were sparked by a lawsuit claiming that Walmart was involved in designing a supposedly defective hoverboard toy that allegedly caused a family's house fire. Despite being an experienced litigator, Rudwin Ayala, the firm's lead attorney on the case, cited eight cases in a court filing that Walmart's lawyers could not find anywhere except on ChatGPT.

These "cited cases seemingly do not exist anywhere other than in the world of Artificial Intelligence," Walmart's lawyers said, urging the court to consider sanctions.

So far, the court has not ruled on possible sanctions. But Ayala was immediately dropped from the case and replaced by his direct supervisor, T. Michael Morgan, Esq. Expressing "great embarrassment" over Ayala's fake citations, which wasted the court's time, Morgan struck a deal with Walmart's attorneys to pay all fees and expenses associated with replying to the errant court filing, which Morgan told the court should serve as a "cautionary tale" for both his firm and "all firms."

Reuters found that lawyers improperly citing AI-hallucinated cases have scrambled litigation in at least seven cases in the past two years. Some lawyers have been sanctioned, including lawyers in an early case last June who were fined $5,000 for citing chatbot "gibberish" in filings. And in at least one case in Texas, Reuters reported, a lawyer was fined $2,000 and required to attend a course on responsible use of generative AI in legal applications. But in another high-profile incident, Michael Cohen, Donald Trump's former lawyer, avoided sanctions after accidentally giving his own attorney three fake case citations to help his defense in his criminal tax and campaign finance litigation.

In a court filing, Morgan explained that Ayala was solely responsible for the AI citations in the Walmart case.
No one else involved "had any knowledge or even notice" that the errant court filing "contained any AI-generated content, let alone hallucinated content," Morgan said, insisting that had he known, he would have required Ayala to independently verify all citations.

"The risk that a Court could rely upon and incorporate invented cases into our body of common law is a nauseatingly frightening thought," Morgan said, "deeply" apologizing to the court while acknowledging that AI can be "dangerous when used carelessly."

Further, Morgan said, it's clear that his firm must work harder to train attorneys on the AI tools the firm has been using since November 2024, which were intended to support, not replace, lawyers as they researched cases. Despite the firm supposedly warning lawyers that AI can hallucinate or fabricate information, Ayala shockingly claimed that he "mistakenly" believed the firm's "internal AI support" was "fully capable" of not just researching but also drafting briefs.

"This deeply regrettable filing serves as a hard lesson for me and our firm as we enter a world in which artificial intelligence becomes more intertwined with everyday practice," Morgan told the court. "While artificial intelligence is a powerful tool, it is a tool which must be used carefully. There are no shortcuts in law."

Andrew Perlman, dean of Suffolk University's law school, advocates for responsible AI use in court and told Reuters that lawyers citing ChatGPT or other AI tools without verifying outputs is "incompetence, just pure and simple."

Morgan & Morgan declined Ars' request to comment.

Law firm makes changes to prevent AI citations

Morgan & Morgan wants to ensure that no one else at the firm makes the same mistakes that Ayala did. In the letter sent to all attorneys, Ithayakumar reiterated that AI alone cannot be relied on to dependably research cases or draft briefs, as "AI can generate plausible responses that may be entirely fabricated information."

"As all lawyers know (or should know), it has been documented that AI sometimes invents case law, complete with fabricated citations, holdings, and even direct quotes," his letter said. "As we previously instructed you, if you use AI to identify cases for citation, every case must be independently verified."

While Harry Surden, a law professor who studies AI legal issues, told Reuters that "lawyers have always made mistakes," he also suggested that the increasing reliance on AI tools in the legal field requires lawyers to build AI literacy so they fully understand "the strengths and weaknesses of the tools." (A July 2024 Reuters survey found that 63 percent of lawyers have used AI and 12 percent use it regularly, after experts signaled an AI-fueled paradigm shift in the legal field in 2023.)

At Morgan & Morgan, it has become clear in 2025 that better AI training is needed across its nationwide firm.
Morgan told the court that the firm's technology team and risk management members have met to "discuss and implement further policies to prevent another occurrence in the future." Additionally, a checkbox acknowledging AI's potential for hallucinations was added, and it must be clicked before any attorney at the firm can access the internal AI platform.

"Further, safeguards and training are being discussed to protect against any errant uses of artificial intelligence," Morgan told the court.

Whether these efforts will help Morgan & Morgan avoid sanctions is unclear, but Ithayakumar suggested that the reputational damage to the firm's or any individual lawyer's credibility could be on par with sanctions.

"Blind reliance on AI is equivalent to citing an unverified case," Ithayakumar told lawyers, saying that it is their "responsibility and ethical obligation" to verify AI outputs. "Failure to comply with AI verification requirements may result in court sanctions, professional discipline, discipline by the firm (up to and including termination), and reputational harm. Every lawyer must stay informed of the specific AI-related rules and orders in the jurisdictions where they practice and strictly adhere to these obligations."

Ashley Belanger, Senior Policy Reporter
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.