  • How AI Is Revolutionizing College Selection for Students

    ## Introduction

    Ah, the age-old quest for the perfect college! A journey filled with stress, confusion, and more than a few tears. With college counselors so overworked they might as well be juggling flaming swords while blindfolded, students are left to fend for themselves in a jungle of brochures, rankings, and endl...
  • Can AI Mistakes Lead to Real Legal Exposure?

    Posted on: June 5, 2025 | By Tech World Times | AI
    Artificial intelligence tools now touch nearly every corner of modern business, from customer service and marketing to supply chain management and HR. These powerful technologies promise speed, accuracy, and insight, but their missteps can cause more than temporary inconvenience. A single AI-driven error can result in regulatory investigations, civil lawsuits, or public scandals that threaten the foundation of a business. Understanding how legal exposure arises from AI mistakes—and how a skilled attorney protects your interests—is no longer optional but a requirement for any forward-thinking business owner.
    What Types of AI Errors Create Legal Liability?
    AI does not think or reason like a human; it follows code and statistical patterns, sometimes with unintended results. These missteps can create a trail of legal liability for any business owner. For example, an online retailer’s AI recommends discriminatory pricing, sparking allegations of unfair trade practices. An HR department automates hiring decisions with AI, only to face lawsuits for violating anti-discrimination laws. Even an AI-driven chatbot, when programmed without proper safeguards, can inadvertently give health advice or misrepresent product claims—exposing the company to regulatory penalties. Cases like these are regularly reported in legal news as businesses discover the high cost of digital shortcuts.
    When Is a Business Owner Liable for AI Mistakes?
    Liability rarely rests with the software developer or the tool itself. Courts and regulators expect the business to monitor, supervise, and, when needed, override AI decisions. Suppose a financial advisor uses AI to recommend investments, but the algorithm suggests securities that violate state regulations. Even if the AI was “just following instructions,” the advisor remains responsible for client losses. Similarly, a marketing team cannot escape liability if their AI generates misleading advertising. The bottom line: outsourcing work to AI does not outsource legal responsibility.
    How Do AI Errors Harm Your Reputation and Operations?
    AI mistakes can leave lasting marks on a business’s reputation, finances, and operations. A logistics firm’s route-optimization tool creates data leaks that breach customer privacy and trigger costly notifications. An online business suffers public backlash after an AI-powered customer service tool sends offensive responses to clients. Such incidents erode public trust, drive customers to competitors, and divert resources into damage control rather than growth. Worse, compliance failures can result in penalties or shutdown orders, putting the entire enterprise at risk.
    What Steps Reduce Legal Risk From AI Deployments?
    Careful planning and continuous oversight keep AI tools working for your business—not against it. Compliance is not a “set it and forget it” matter. Proactive risk management transforms artificial intelligence from a liability into a valuable asset.
    Routine audits, staff training, and transparent policies form the backbone of safe, effective AI use in any organization.
    Review the AI risk mitigation strategies below; two of them are illustrated with short code sketches after the list.

    Implement Manual Review of Sensitive Outputs: Require human approval for high-risk tasks, such as legal filings, financial transactions, or customer communications. A payroll company’s manual audits prevented the accidental overpayment of employees by catching AI-generated errors before disbursement.
    Update AI Systems for Regulatory Changes: Stay ahead of new laws and standards by regularly reviewing AI algorithms and outputs. An insurance brokerage avoided regulatory fines by updating their risk assessment models as privacy laws evolved.
    Document Every Incident and Remediation Step: Keep records of AI errors, investigations, and corrections. A healthcare provider’s transparency during a patient data mix-up helped avoid litigation and regulatory penalties.
    Limit AI Access to Personal and Sensitive Data: Restrict the scope and permissions of AI tools to reduce the chance of data misuse. A SaaS provider used data minimization techniques, lowering the risk of exposure in case of a system breach.
    Consult With Attorneys for Custom Policies and Protocols: Collaborate with experienced attorneys to design, review, and update AI compliance frameworks.
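
    As a concrete illustration of the first item above, here is a minimal Python sketch of a human-in-the-loop gate: outputs for high-risk task types wait in a review queue until a person signs off, while low-risk outputs pass straight through. Every name here (HIGH_RISK_TASKS, review_queue, release) is hypothetical, not drawn from any real product.

```python
from dataclasses import dataclass
from queue import Queue

# Task types that must never go out on AI say-so alone (illustrative list).
HIGH_RISK_TASKS = {"legal_filing", "financial_transaction", "customer_communication"}

@dataclass
class AIOutput:
    task_type: str
    content: str
    approved: bool = False

review_queue: Queue = Queue()  # holds outputs awaiting human sign-off

def dispatch(output: AIOutput) -> None:
    """Route an AI-generated output: hold high-risk work for manual review."""
    if output.task_type in HIGH_RISK_TASKS:
        review_queue.put(output)
    else:
        release(output)

def approve_next(reviewer: str) -> None:
    """Called from a review tool once a human has checked the queued output."""
    output = review_queue.get()
    output.approved = True
    print(f"{reviewer} approved a {output.task_type}")
    release(output)

def release(output: AIOutput) -> None:
    """Final send step (email, filing, transfer); stubbed out for the sketch."""
    print(f"released: {output.content[:60]}")

# Example: a drafted customer email waits for approval; an internal digest does not.
dispatch(AIOutput("customer_communication", "Dear Ms. Lee, your refund..."))
dispatch(AIOutput("internal_summary", "Weekly error-rate digest..."))
approve_next("compliance_officer")
```

    In production the queue would live in a database or ticketing workflow rather than in memory, but the control point is the same: nothing tagged high-risk goes out without a named human approval.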

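    The data-minimization item can likewise start small: scrub obvious identifiers before a prompt ever reaches an external AI service. The sketch below assumes that approach; its regexes are deliberately simplistic placeholders, not production-grade PII detection.

```python
import re

# Crude, illustrative PII patterns; a real deployment would use a vetted
# detection library and cover many more identifier types.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def minimize(text: str) -> str:
    """Replace recognizable identifiers with typed placeholders before the
    text is handed to any external AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Customer jane.doe@example.com (SSN 123-45-6789) disputes invoice 42."
print(minimize(prompt))
# Customer [EMAIL REDACTED] (SSN [SSN REDACTED]) disputes invoice 42.
```

    Even a crude filter like this narrows what a breached or misbehaving AI system could expose, which is the point of restricting scope and permissions.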
    How Do Attorneys Shield Your Business From AI Legal Risks?
    Attorneys provide a critical safety net as AI integrates deeper into business operations. They draft tailored contracts, establish protocols for monitoring and escalation, and assess risks unique to your industry. In the event of an AI-driven incident, legal counsel investigates the facts, manages communication with regulators, and builds a robust defense. By providing training, ongoing guidance, and crisis management support, attorneys ensure that innovation doesn’t lead to exposure—or disaster. With the right legal partner, businesses can harness AI’s power while staying firmly on the right side of the law.
  • A federal court’s novel proposal to rein in Trump’s power grab

    Federal civil servants are supposed to enjoy robust protections against being fired or demoted for political reasons. But President Donald Trump has effectively stripped them of these protections by neutralizing the federal agencies that implement these safeguards.
    An agency known as the Merit Systems Protection Board (MSPB) hears civil servants’ claims that a “government employer discriminated against them, retaliated against them for whistleblowing, violated protections for veterans, or otherwise subjected them to an unlawful adverse employment action or prohibited personnel practice,” as a federal appeals court explained in an opinion on Tuesday. But the three-member board currently lacks the quorum it needs to operate because Trump fired two of the members.
    Trump also fired Hampton Dellinger, who until recently served as the special counsel of the United States, a role that investigates alleged violations of federal civil service protections and brings related cases to the MSPB. Trump recently nominated Paul Ingrassia, a far-right podcaster and recent law school graduate, to replace Dellinger.
    The upshot of these firings is that no one in the government is able to enforce laws and regulations protecting civil servants. As Dellinger noted in an interview, the morning before a federal appeals court determined that Trump could fire him, he’d “been able to get 6,000 newly hired federal employees back on the job,” and was working to get “all probationary employees put back on the job [after] their unlawful firing” by the Department of Government Efficiency and other Trump administration efforts to cull the federal workforce. These and other efforts to reinstate illegally fired federal workers are on hold, and may not resume until Trump leaves office.
    Which brings us to the US Court of Appeals for the Fourth Circuit’s decision in National Association of Immigration Judges v. Owen, which proposes an innovative solution to this problem.
    As the Owen opinion notes, the Supreme Court has held that the MSPB process is the only process a federal worker can use if they believe they’ve been fired in violation of federal civil service laws. So if that process is shut down, the worker is out of luck.
    But the Fourth Circuit’s Owen opinion argues that this “conclusion can only be true…when the statute functions as Congress intended.” That is, if the MSPB and the special counsel are unable to “fulfill their roles prescribed by” federal law, then the courts should pick up the slack and start hearing cases brought by illegally fired civil servants.
    For procedural reasons, the Fourth Circuit’s decision will not take effect right away — the court sent the case back down to a trial judge to “conduct a factual inquiry” into whether the MSPB continues to function. And, even after that inquiry is complete, the Trump administration is likely to appeal the Fourth Circuit’s decision to the Supreme Court if it wants to keep civil service protections on ice.
    If the justices agree with the circuit court, however, that will close a legal loophole that has left federal civil servants unprotected by laws that are still very much on the books. And it will cure a problem that the Supreme Court bears much of the blame for creating.
    The “unitary executive,” or why the Supreme Court is to blame for the loss of civil service protections
    Federal law provides that Dellinger could “be removed by the President only for inefficiency, neglect of duty, or malfeasance in office,” and members of the MSPB enjoy similar protections against being fired. Trump’s decision to fire these officials was illegal under these laws.
    But a federal appeals court nonetheless permitted Trump to fire Dellinger, and the Supreme Court recently backed Trump’s decision to fire the MSPB members as well. The reason is a legal theory known as the “unitary executive,” which is popular among Republican legal scholars, and especially among the six Republicans who control the Supreme Court.
    If you want to know all the details of this theory, I can point you to three different explainers I’ve written on the unitary executive. The short explanation is that the unitary executive theory claims that the president must have the power to fire top political appointees charged with executing federal laws – including officials who execute laws protecting civil servants from illegal firings.
    But the Supreme Court has never claimed that the unitary executive permits the president to fire any federal worker regardless of whether Congress has protected them or not. In a seminal opinion laying out the unitary executive theory, for example, Justice Antonin Scalia argued that the president must have the power to remove “principal officers” — high-ranking officials like Dellinger who must be nominated by the president and confirmed by the Senate. Under Scalia’s approach, lower-ranking government workers may still be given some protection.
    The Fourth Circuit cannot override the Supreme Court’s decision to embrace the unitary executive theory. But the Owen opinion essentially tries to police the line drawn by Scalia. The Supreme Court has given Trump the power to fire some high-ranking officials, but he shouldn’t be able to use that power as a back door to eliminate job protections for all civil servants.
    The Fourth Circuit suggests that the federal law which simultaneously gave the MSPB exclusive authority over civil service disputes, while also protecting MSPB members from being fired for political reasons, must be read as a package. Congress, this argument goes, would not have agreed to shunt all civil service disputes to the MSPB if it had known that the Supreme Court would strip the MSPB of its independence. And so, if the MSPB loses its independence, it must also lose its exclusive authority over civil service disputes — and federal courts must regain the power to hear those cases.
    It remains to be seen whether this argument persuades a Republican Supreme Court — all three of the Fourth Circuit judges who decided the Owen case are Democrats, and two are Biden appointees. But the Fourth Circuit’s reasoning closely resembles the kind of inquiry that courts frequently engage in when a federal law is struck down.
    When a court declares a provision of federal law unconstitutional, it often needs to ask whether other parts of the law should fall along with the unconstitutional provision, an inquiry known as “severability.” Often, this severability analysis asks which hypothetical law Congress would have enacted if it had known that the one provision is invalid.
    The Fourth Circuit’s decision in Owen is essentially a severability opinion. It takes as a given the Supreme Court’s conclusion that laws protecting Dellinger and the MSPB members from being fired are unconstitutional, then asks which law Congress would have enacted if it had known that it could not protect MSPB members from political reprisal. The Fourth Circuit’s conclusion is that, if Congress had known that MSPB members cannot be politically independent, then it would not have given them exclusive authority over civil service disputes.
    If the Supreme Court permits Trump to neutralize the MSPB, that would fundamentally change how the government functions
    The idea that civil servants should be hired based on merit and insulated from political pressure is hardly new. The first law protecting civil servants was the Pendleton Civil Service Reform Act, which President Chester A. Arthur signed into law in 1883.
    Laws like the Pendleton Act do more than protect civil servants who, say, resist pressure to deny government services to the president’s enemies. They also make it possible for top government officials to actually do their jobs.
    Before the Pendleton Act, federal jobs were typically awarded as patronage — so when a Democratic administration took office, the Republicans who occupied most federal jobs would be fired and replaced by Democrats. This was obviously quite disruptive, and it made it difficult for the government to hire highly specialized workers. Why would someone go to the trouble of earning an economics degree and becoming an expert on federal monetary policy, if they knew that their job in the Treasury Department would disappear the minute their party lost an election?
    Meanwhile, the task of filling all of these patronage jobs overwhelmed new presidents. As Candice Millard wrote in a 2011 biography of President James A. Garfield, the last president elected before the Pendleton Act, when Garfield took office, a line of job seekers began to form outside the White House “before he even sat down to breakfast.” By the time Garfield had eaten, this line “snaked down the front walk, out the gate, and onto Pennsylvania Avenue.” Garfield was assassinated by a disgruntled job seeker, a fact that likely helped build political support for the Pendleton Act.
    By neutralizing the MSPB, Trump is effectively undoing nearly 150 years’ worth of civil service reforms, and returning the federal government to a much more primitive state. At the very least, the Fourth Circuit’s decision in Owen is likely to force the Supreme Court to ask if it really wants a century and a half of work to unravel.
  • US lawyer sanctioned after being caught using ChatGPT for court brief | Richard Bednar apologized after Utah appeals court discovered false citations, including one to a nonexistent case.

    The Utah court of appeals has sanctioned a lawyer after he was discovered to have used ChatGPT for a filing he made in which he referenced a nonexistent court case.
    Earlier this week, the Utah court of appeals made the decision to sanction Richard Bednar over claims that he filed a brief which included false citations.
    According to court documents reviewed by ABC4, Bednar and Douglas Durbano, another Utah-based lawyer who was serving as the petitioner’s counsel, filed a “timely petition for interlocutory appeal”.
    Upon reviewing the brief, which was written by a law clerk, the respondent’s counsel found several false citations of cases.
    “It appears that at least some portions of the Petition may be AI-generated, including citations and even quotations to at least one case that does not appear to exist in any legal database (and could only be found in ChatGPT), and references to cases that are wholly unrelated to the referenced subject matter,” the respondent’s counsel said in documents reviewed by ABC4.
    The outlet reports that the brief referenced a case titled “Royer v Nelson”, which did not exist in any legal database.
    Following the discovery of the false citations, Bednar “acknowledged ‘the errors contained in the petition’ and apologized”, according to a document from the Utah court of appeals, ABC4 reports. It went on to add that during a hearing in April, Bednar and his attorney “acknowledged that the petition contained fabricated legal authority, which was obtained from ChatGPT, and they accepted responsibility for the contents of the petition”.
    According to Bednar and his attorney, an “unlicensed law clerk” wrote up the brief and Bednar did not “independently check the accuracy” before he made the filing. ABC4 further reports that Durbano was not involved in the creation of the petition and the law clerk responsible for the filing was a law school graduate who was terminated from the law firm.
    The outlet added that Bednar offered to pay any related attorney fees to “make amends”.
    In a statement reported by ABC4, the Utah court of appeals said: “We agree that the use of AI in the preparation of pleadings is a legal research tool that will continue to evolve with advances in technology. However, we emphasize that every attorney has an ongoing duty to review and ensure the accuracy of their court filings. In the present case, petitioner’s counsel fell short of their gatekeeping responsibilities as members of the Utah State Bar when they submitted a petition that contained fake precedent generated by ChatGPT.”
    As a result of the false citations, ABC4 reports that Bednar was ordered to pay the respondent’s attorney fees for the petition and hearing, refund fees to their client for the time used to prepare the filing and attend the hearing, as well as donate $1,000 to the Utah-based legal non-profit And Justice for All.
  • What Medical Guidelines (Finally) Say About Pain Management for IUD Insertion

    Intrauterine devices, or IUDs, are an extremely effective and convenient form of birth control for many people—but getting one inserted can also be very painful. Current medical guidelines say that your doctor should be discussing pain management with you, and they also give advice to doctors on what methods tend to work best for most people.
    The newest set of guidelines is from ACOG, the American College of Obstetricians and Gynecologists. These guidelines actually cover a variety of procedures, including endometrial and cervical biopsies, but today I'll be talking about the IUD insertion portions. And in 2024, the Centers for Disease Control and Prevention released new contraceptive recommendations that include a section on how and why providers should help you with pain relief.
    Before we get into the new recommendations and what they say, it’s important to keep in mind that not everybody feels severe pain with insertion—the estimate is that insertion is severely painful for 50% of people who haven't given birth, and only 10% of people who have, according to Rachel Flink, the OB-GYN I spoke with for my article on what to expect when you get an IUD. I’m making sure to point this out because I’ve met people who are terrified at the thought of getting an IUD, because they think that severe pain is guaranteed and that doctors are lying if they say otherwise. In reality, there’s a whole spectrum of possible experiences, and both you and your provider should be informed and prepared for anything along that spectrum.
    Your provider should discuss pain management with you
    The biggest thing in both sets of guidelines is not just the pain management options they discuss, but the guideline that says there is a place for this discussion and that it is important! You’ve always been able to ask about pain management, but providers are now expected to know that they need to discuss this with their patients.
    The ACOG guidelines say: "Options to manage pain should be discussed with and offered to all patients seeking in-office gynecologic procedures."
    And the CDC says: "Before IUD placement, all patients should be counseled on potential pain during placement as well as the risks, benefits, and alternatives of different options for pain management. A person-centered plan for IUD placement and pain management should be made based on patient preference."
    “Person-centered” means that the plan should take into account what you want and need, not just what the provider is used to doing or thinks will be easiest. The CDC guidelines also say: “When considering patient pain, it is important to recognize that the experience of pain is individualized and might be influenced by previous experiences including trauma and mental health conditions, such as depression or anxiety.” The ACOG guidelines, similarly, say that talking over the procedure and what to expect can help make the procedure more tolerable, regardless of how physically painful it ends up being.
    Lidocaine paracervical blocks may relieve pain
    There’s good news and bad news about the recommended pain medications. The good news is that there are recommendations. The bad news is that none of them are guaranteed to work for everyone, and it’s not clear if they work very well at all. The CDC says that a paracervical block “might” reduce pain with insertion. Three studies showed that the injections worked to reduce pain, while three others found they did not. The CDC rates the certainty of evidence as “low” for pain and for satisfaction with the procedure.
    The ACOG guidelines also mention local anesthetics, including lidocaine paracervical blocks, as one of the best options for pain management. Dr. Flink told me that while some of her patients appreciate this option, it’s often impossible to numb all of the nerves in the cervix, and the injection itself can be painful—so in many cases, patients decide it’s not worth it. Still, it’s worth discussing with your provider if this sounds like something you would like to try.
    Topical lidocaine may also help
    Lidocaine, the same numbing medication, can also be applied to the cervix as a cream, spray, or gel. Again, evidence is mixed, with six trials finding that it helped, and seven finding that it did not. The ACOG guidelines note that sometimes topical lidocaine has worked better than the injected kind. Unfortunately, they also say that it can be hard for doctors to find an appropriate spray-on product that can be used on the cervix. The CDC judged the certainty of evidence to be a bit better here compared to the injection—moderate for reducing pain, and high for improving placement success.
    Other methods aren’t well supported by the evidence
    For the other pain management methods that the CDC group studied, there wasn’t enough evidence to say whether they work. These included analgesics like ibuprofen, and smooth-muscle-relaxing medications. The ACOG guidelines say that taking NSAIDs before insertion doesn't seem to help with insertion pain, even though that's commonly recommended. That approach does seem to work for some other procedures, though, and may help with pain that occurs after an IUD insertion. So it may not be a bad idea to take those four Advil if that's what your doc recommends, but it shouldn't be your only option. Or as the ACOG paper puts it: "Although recommending preprocedural NSAIDs is a benign, low-risk intervention unlikely to cause harm, relying on NSAIDs alone for pain management during IUD insertion is ineffective and does not provide the immediate pain control patients need at the time of the procedure."
    Both sets of guidelines also don't recommend misoprostol, which is sometimes used to soften and open the cervix before inserting an IUD. The ACOG guidelines describe the evidence as mixed, and the CDC guidelines specifically recommend against it. Moderate certainty evidence says that misoprostol doesn’t help with pain, and low certainty evidence says that it may increase the risk of adverse events like cramping and vomiting.
    What this means for you
    The publication of these guidelines won’t change anything overnight at your local OB-GYN office, but it’s a good sign that discussions about pain management with IUD placement are happening more openly. The new guidelines also don’t necessarily take any options off the table. Even misoprostol, which the CDC now says not to use for routine insertions, “might be useful in selected circumstances,” it writes.
    Don’t be afraid to ask about pain management before your appointment; as we discussed before, some medications and procedures require that you and your provider plan ahead. And definitely don’t accept a dismissive reply about how taking a few Advil should be enough; it may help for some people, but that shouldn't be the end of the discussion. You deserve to have your provider take your concerns seriously.
    #what #medical #guidelines #finally #say
    What Medical Guidelines (Finally) Say About Pain Management for IUD Insertion
    Intrauterine devices, or IUDs, are an extremely effective and convenient form of birth control for many people—but it can also very painful to get one inserted. Current medical guidelines say that your doctor should be discussing pain management with you, and they also give advice to doctors on what methods tend to work best for most people. The newest set of guidelines is from ACOG, the American College of Obstetricians and Gynecologists. These guidelines actually cover a variety of procedures, including endometrial and cervical biopsies, but today I'll be talking about the IUD insertion portions. And in 2024, the Centers for Disease Control and Prevention's released new contraceptive recommendations that include a section on how and why providers should help you with pain relief. Before we get into the new recommendations and what they say, it’s important to keep in mind that that not everybody feels severe pain with insertion—the estimate is that insertion is severely painful for 50% of people who haven't given birth, and only 10% of people who have, according to Rachel Flink, the OB-GYN I spoke with for my article on what to expect when you get an IUD.  I’m making sure to point this out because I’ve met people who are terrified at the thought of getting an IUD, because they think that severe pain is guaranteed and that doctors are lying if they say otherwise. In reality, there’s a whole spectrum of possible experiences, and both you and your provider should be informed and prepared for anything along that spectrum.Your provider should discuss pain management with youThe biggest thing in both sets of guidelines is not just the pain management options they discuss, but the guideline that says there is a place for this discussion and that it is important! You’ve always been able to ask about pain management, but providers are now expected to know that they need to discuss this with their patients. The ACOG guidelines say: "Options to manage pain should be discussed with and offered to all patients seeking in-office gynecologic procedures." And the CDC says: Before IUD placement, all patients should be counseled on potential pain during placement as well as the risks, benefits, and alternatives of different options for pain management. A person-centered plan for IUD placement and pain management should be made based on patient preference.“Person-centered” means that the plan should take into account what you want and need, not just what the provider is used to doing or thinks will be easiest. The CDC guidelines also say: “When considering patient pain, it is important to recognize that the experience of pain is individualized and might be influenced by previous experiences including trauma and mental health conditions, such as depression or anxiety.” The ACOG guidelines, similarly, say that talking over the procedure and what to expect can help make the procedure more tolerable, regardless of how physically painful it ends up being.Lidocaine paracervical blocks may relieve painThere’s good news and bad news about the recommended pain medications. The good news is that there are recommendations. The bad news is that none of them are guaranteed to work for everyone, and it’s not clear if they work very well at all. The CDC says that a paracervical block“might” reduce pain with insertion. Three studies showed that the injections worked to reduce pain, while three others found they did not. The CDC rates the certainty of evidence as “low” for pain and for satisfaction with the procedure. 
The ACOG guidelines also mention local anesthetics, including lidocaine paracervical blocks, as one of the best options for pain management. Dr. Flink told me that while some of her patients appreciate this option, it’s often impossible to numb all of the nerves in the cervix, and the injection itself can be painful—so in many cases, patients decide it’s not worth it. Still, it’s worth discussing with your provider if this sounds like something you would like to try.Topical lidocaine may also helpLidocaine, the same numbing medication, can also be applied to the cervix as a cream, spray, or gel. Again, evidence is mixed, with six trials finding that it helped, and seven finding that it did not. The ACOG guidelines note that sometimes topical lidocaine has worked better than the injected kind. Unfortunately, they also say that it can be hard for doctors to find an appropriate spray-on product that can be used on the cervix.The CDC judged the certainty of to be a bit better here compared to the injection—moderate for reducing pain, and high for improving placement success. Other methods aren’t well supported by the evidenceFor the other pain management methods that the CDC group studied, there wasn’t enough evidence to say whether they work. These included analgesics like ibuprofen, and smooth-muscle-relaxing medications. The ACOG guidelines say that taking NSAIDSbefore insertion doesn't seem to help with insertion pain, even though that's commonly recommended. That approach does seem to work for some other procedures, though, and may help with pain that occurs after an IUD insertion. So it may not be a bad idea to take those four Advil if that's what your doc recommends, but it shouldn't be your only option. Or as the ACOG paper puts it: "Although recommending preprocedural NSAIDs is a benign, low-risk intervention unlikely to cause harm, relying on NSAIDs alone for pain management during IUD insertion is ineffective and does not provide the immediate pain control patients need at the time of the procedure." Both sets of guidelines also don't recommend misoprostol, which is sometimes used to soften and open the cervix before inserting an IUD. The ACOG guidelines describe the evidence as mixed, and the CDC guidelines specifically recommend against it. Moderate certainty evidence says that misoprostol doesn’t help with pain, and low certainty evidence says that it may increase the risk of adverse events like cramping and vomiting. What this means for youThe publication of these guidelines won’t change anything overnight at your local OB-GYN office, but it’s a good sign that discussions about pain management with IUD placement are happening more openly. The new guidelines also don’t necessarily take any options off the table. Even misoprostol, which the CDC now says not to use for routine insertions, “might be useful in selected circumstances,” it writes.Don’t be afraid to ask about pain management before your appointment; as we discussed before, some medications and procedures require that you and your provider plan ahead. And definitely don’t accept a dismissive reply about how taking a few Advil should be enough; it may help for some people, but that shouldn't be the end of the discussion. You deserve to have your provider take your concerns seriously. #what #medical #guidelines #finally #say
    LIFEHACKER.COM
    What Medical Guidelines (Finally) Say About Pain Management for IUD Insertion
Intrauterine devices, or IUDs, are an extremely effective and convenient form of birth control for many people—but it can also be very painful to get one inserted. Current medical guidelines say that your doctor should be discussing pain management with you, and they also give advice to doctors on what methods tend to work best for most people. The newest set of guidelines is from ACOG, the American College of Obstetricians and Gynecologists. These guidelines actually cover a variety of procedures, including endometrial and cervical biopsies, but today I'll be talking about the IUD insertion portions. And in 2024, the Centers for Disease Control and Prevention released new contraceptive recommendations that include a section on how and why providers should help you with pain relief.

Before we get into the new recommendations and what they say, it's important to keep in mind that not everybody feels severe pain with insertion—the estimate is that insertion is severely painful for 50% of people who haven't given birth, and only 10% of people who have, according to Rachel Flink, the OB-GYN I spoke with for my article on what to expect when you get an IUD. (She also gave me a great rundown of pain management options and their pros and cons, which I included in the article.) I'm making sure to point this out because I've met people who are terrified at the thought of getting an IUD, because they think that severe pain is guaranteed and that doctors are lying if they say otherwise. In reality, there's a whole spectrum of possible experiences, and both you and your provider should be informed and prepared for anything along that spectrum.

Your provider should discuss pain management with you

The biggest thing in both sets of guidelines is not just the pain management options they discuss, but the guideline that says there is a place for this discussion and that it is important! You've always been able to ask about pain management, but providers are now expected to know that they need to discuss this with their patients. The ACOG guidelines say: "Options to manage pain should be discussed with and offered to all patients seeking in-office gynecologic procedures." And the CDC says: "Before IUD placement, all patients should be counseled on potential pain during placement as well as the risks, benefits, and alternatives of different options for pain management. A person-centered plan for IUD placement and pain management should be made based on patient preference."

"Person-centered" means that the plan should take into account what you want and need, not just what the provider is used to doing or thinks will be easiest. (This has sometimes been called "patient-centered" care, but "person-centered" is meant to convey that you and your provider understand that they are treating a whole person, with concerns outside of just their health, and you're not only a patient who exists in a medical context.) The CDC guidelines also say: "When considering patient pain, it is important to recognize that the experience of pain is individualized and might be influenced by previous experiences including trauma and mental health conditions, such as depression or anxiety." The ACOG guidelines, similarly, say that talking over the procedure and what to expect can help make the procedure more tolerable, regardless of how physically painful it ends up being. (Dr. Flink told me that anti-anxiety medications during insertion are helpful for some of her patients, and that she'll discuss them alongside options for physical pain relief.)

Lidocaine paracervical blocks may relieve pain

There's good news and bad news about the recommended pain medications. The good news is that there are recommendations. The bad news is that none of them are guaranteed to work for everyone, and it's not clear if they work very well at all. The CDC says that a paracervical block (done by injection, similar to the numbing injections used for dental work) "might" reduce pain with insertion. Three studies showed that the injections worked to reduce pain, while three others found they did not. The CDC rates the certainty of evidence as "low" for pain and for satisfaction with the procedure.

The ACOG guidelines also mention local anesthetics, including lidocaine paracervical blocks, as one of the best options for pain management. Dr. Flink told me that while some of her patients appreciate this option, it's often impossible to numb all of the nerves in the cervix, and the injection itself can be painful—so in many cases, patients decide it's not worth it. Still, it's worth discussing with your provider if this sounds like something you would like to try.

Topical lidocaine may also help

Lidocaine, the same numbing medication, can also be applied to the cervix as a cream, spray, or gel. Again, evidence is mixed, with six trials finding that it helped, and seven finding that it did not. The ACOG guidelines note that sometimes topical lidocaine has worked better than the injected kind. Unfortunately, they also say that it can be hard for doctors to find an appropriate spray-on product that can be used on the cervix. The CDC judged the certainty of evidence to be a bit better here compared to the injection—moderate for reducing pain, and high for improving placement success (meaning that the provider was able to get the IUD inserted properly).

Other methods aren't well supported by the evidence (yet?)

For the other pain management methods that the CDC group studied, there wasn't enough evidence to say whether they work. These included analgesics like ibuprofen, and smooth-muscle-relaxing medications. The ACOG guidelines say that taking NSAIDs (like ibuprofen) before insertion doesn't seem to help with insertion pain, even though that's commonly recommended. That approach does seem to work for some other procedures, though, and may help with pain that occurs after an IUD insertion. So it may not be a bad idea to take those four Advil if that's what your doc recommends, but it shouldn't be your only option. Or as the ACOG paper puts it: "Although recommending preprocedural NSAIDs is a benign, low-risk intervention unlikely to cause harm, relying on NSAIDs alone for pain management during IUD insertion is ineffective and does not provide the immediate pain control patients need at the time of the procedure."

Both sets of guidelines also don't recommend misoprostol, which is sometimes used to soften and open the cervix before inserting an IUD. The ACOG guidelines describe the evidence as mixed, and the CDC guidelines specifically recommend against it. Moderate-certainty evidence says that misoprostol doesn't help with pain, and low-certainty evidence says that it may increase the risk of adverse events like cramping and vomiting.

What this means for you

The publication of these guidelines won't change anything overnight at your local OB-GYN office, but it's a good sign that discussions about pain management with IUD placement are happening more openly. The new guidelines also don't necessarily take any options off the table. Even misoprostol, which the CDC now says not to use for routine insertions, "might be useful in selected circumstances (e.g., in patients with a recent failed placement)," it writes.

Don't be afraid to ask about pain management before your appointment; as we discussed before, some medications and procedures require that you and your provider plan ahead. And definitely don't accept a dismissive reply about how taking a few Advil should be enough; it may help for some people, but that shouldn't be the end of the discussion. You deserve to have your provider take your concerns seriously.
  • Why do lawyers keep using ChatGPT?

    WWW.THEVERGE.COM
Every few weeks, it seems like there's a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, "bogus AI-generated research." The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help them with legal research (or worse, writing), the LLM hallucinates cases that don't exist, and the lawyer is none the wiser until the judge or opposing counsel points out their mistake. In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven't they stopped?

The answer mostly comes down to time crunches, and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw have AI integrations now. For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren't necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don't understand exactly what LLMs are or how they work. One attorney who was sanctioned in 2023 said he thought ChatGPT was a "super search engine." It took submitting a filing with fake citations to reveal that it's more like a random-phrase generator — one that could give you either correct information or convincingly phrased nonsense.

Andrew Perlman, the dean of Suffolk University Law School, argues many lawyers are using AI tools without incident, and the ones who get caught with fake citations are outliers. "I think that what we're seeing now — although these problems of hallucination are real, and lawyers have to take it very seriously and be careful about it — doesn't mean that these tools don't have enormous possible benefits and use cases for the delivery of legal services," Perlman said.

In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they've used AI in the past, and 12 percent said they use it regularly. Respondents said they use AI to write summaries of case law and to research "case law, statutes, forms or sample language for orders." The attorneys surveyed by Thomson Reuters see it as a time-saving tool, and half of those surveyed said "exploring the potential for implementing AI" at work is their highest priority. "The role of a good lawyer is as a 'trusted advisor' not as a producer of documents," one respondent said. But as plenty of recent examples have shown, the documents produced by AI aren't always accurate, and in some cases aren't real at all.

In one recent high-profile case, lawyers for journalist Tim Burke, who was arrested for publishing unaired Fox News footage in 2024, submitted a motion to dismiss the case against him on First Amendment grounds. After discovering that the filing included "significant misrepresentations and misquotations of supposedly pertinent case law and history," Judge Kathryn Kimball Mizelle, of Florida's middle district, ordered the motion to be stricken from the case record. Mizelle found nine hallucinations in the document, according to the Tampa Bay Times.

Mizelle ultimately let Burke's lawyers, Mark Rasch and Michael Maddux, submit a new motion. In a separate filing explaining the mistakes, Rasch wrote that he "assumes sole and exclusive responsibility for these errors." Rasch said he used the "deep research" feature on ChatGPT Pro, which The Verge has previously tested with mixed results, as well as Westlaw's AI feature.

Rasch isn't alone. Lawyers representing Anthropic recently admitted to using the company's Claude AI to help write an expert witness declaration submitted as part of the copyright infringement lawsuit brought against Anthropic by music publishers. That filing included a citation with an "inaccurate title and inaccurate authors." Last December, misinformation expert Jeff Hancock admitted he used ChatGPT to help organize citations in a declaration he submitted in support of a Minnesota law regulating deepfake use. Hancock's filing included "two citation errors, popularly referred to as 'hallucinations,'" and incorrectly listed authors for another citation.

These documents do, in fact, matter — at least in the eyes of judges. In a recent case, a California judge presiding over a case against State Farm was initially swayed by arguments in a brief, only to find that the case law cited was completely made up. "I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn't exist," Judge Michael Wilner wrote.

Perlman said there are several less risky ways lawyers use generative AI in their work, including finding information in large tranches of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or possible opposing views. "I think in almost every task, there are ways in which generative AI can be useful — not a substitute for lawyers' judgment, not a substitute for the expertise that lawyers bring to the table, but in order to supplement what lawyers do and enable them to do their work better, faster, and cheaper," Perlman said.

But like anyone using AI tools, lawyers who rely on them to help with legal research and writing need to be careful to check the work they produce, Perlman said. Part of the problem is that attorneys often find themselves short on time — an issue he says existed before LLMs came into the picture. "Even before the emergence of generative AI, lawyers would file documents with citations that didn't really address the issue that they claimed to be addressing," Perlman said. "It was just a different kind of problem. Sometimes when lawyers are rushed, they insert citations, they don't properly check them; they don't really see if the case has been overturned or overruled." (That said, the cases do at least typically exist.)

Another, more insidious problem is the fact that attorneys — like others who use LLMs to help with research and writing — are too trusting of what AI produces. "I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be so well crafted," Perlman said.

Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT as a junior-level associate. He's also used ChatGPT to help write legislation. In 2024, he included AI text in part of a bill on deepfakes, having the LLM provide the "baseline definition" of what deepfakes are, and then adding the rest himself: "I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff," Kolodin told The Guardian at the time. Kolodin said he "may have" discussed his use of ChatGPT with the bill's main Democratic cosponsor but otherwise wanted it to be "an Easter egg" in the bill. The bill passed into law.

Kolodin — who was sanctioned by the Arizona State Bar in 2020 for his involvement in lawsuits challenging the result of the 2020 election — has also used ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he just checks the citations to make sure they're real.

"You don't just typically send out a junior associate's work product without checking the citations," said Kolodin. "It's not just machines that hallucinate; a junior associate could read the case wrong, it doesn't really stand for the proposition cited anyway, whatever. You still have to cite-check it, but you have to do that with an associate anyway, unless they were pretty experienced."

Kolodin said he uses both ChatGPT Pro's "deep research" tool and the LexisNexis AI tool. Like Westlaw, LexisNexis is a legal research tool primarily used by attorneys. Kolodin said that in his experience, LexisNexis has a higher hallucination rate than ChatGPT, which he says has "gone down substantially over the past year."

AI use among lawyers has become so prevalent that in 2024, the American Bar Association issued its first guidance on attorneys' use of LLMs and other AI tools. Lawyers who use AI tools "have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature" of generative AI, the opinion reads. The guidance advises lawyers to "acquire a general understanding of the benefits and risks of the GAI tools" they use — or, in other words, to not assume that an LLM is a "super search engine." Attorneys should also weigh the confidentiality risks of inputting information relating to their cases into LLMs and consider whether to tell their clients about their use of LLMs and other AI tools, it states.

Perlman is bullish on lawyers' use of AI. "I do think that generative AI is going to be the most impactful technology the legal profession has ever seen and that lawyers will be expected to use these tools in the future," he said. "I think that at some point, we will stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don't."

Others, including one of the judges who sanctioned lawyers for submitting a filing full of AI-generated hallucinations, are more skeptical. "Even with recent advances," Wilner wrote, "no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material."