• Major data broker hack impacts 364,000 individuals’ data

    Published June 5, 2025 10:00am EDT
    Americans’ personal data is now spread across more digital platforms than ever. From online shopping habits to fitness-tracking logs, personal information ends up in hundreds of company databases. While most people worry about social media leaks or email hacks, a far less visible threat comes from data brokers.

    I still find it hard to believe that companies like this are allowed to operate with so little legal scrutiny. These firms trade in personal information without our knowledge or consent. What baffles me even more is that they aren’t serious about protecting the one thing that is central to their business model: data. Just last year, we saw a massive data breach at a data broker called National Public Data, which exposed 2.7 billion records. Now another data broker, LexisNexis, a major name in the industry, has reported a significant breach that exposed sensitive information from more than 364,000 people.

    LexisNexis breach went undetected for months after holiday hack

    LexisNexis filed a notice with the Maine attorney general revealing that a hacker accessed consumer data through a third-party software development platform. The breach happened on Dec. 25, 2024, but the company only discovered it months later. LexisNexis was alerted on April 1, 2025, by an unnamed individual who claimed to have found sensitive files. It remains unclear whether this person was responsible for the breach or merely came across the exposed data.

    A spokesperson for LexisNexis confirmed that the hacker gained access to the company’s GitHub account, a platform commonly used by developers to store and collaborate on code.
    Security guidelines repeatedly warn against storing sensitive information in such repositories, yet mistakes such as exposed access tokens and personal data files continue to occur. The stolen data varies from person to person but includes full names, birthdates, phone numbers, mailing and email addresses, Social Security numbers and driver's license numbers. LexisNexis has not confirmed whether it received a ransom demand or had further contact with the attacker.

    Why the LexisNexis hack is a bigger threat than you realize

    LexisNexis isn’t a household name for most people, but it plays a major role in how personal data is harvested and used behind the scenes. The company pulls information from a wide range of sources, compiling detailed profiles that help other businesses assess risk and detect fraud. Its clients include banks, insurance companies and government agencies.

    In 2023, the New York Times reported that several car manufacturers had been sharing driving data with LexisNexis without notifying vehicle owners. That information was then sold to insurance companies, which used it to adjust premiums based on individual driving behavior. The story made one thing clear: LexisNexis has access to a staggering amount of personal detail, even from people who have never willingly engaged with the company.

    Law enforcement also uses LexisNexis tools to dig up information on suspects. These systems offer access to phone records, home addresses and other historical data. While such tools might assist in investigations, they also highlight a serious issue: when this much sensitive information is concentrated in one place, it becomes a single point of failure. And as the recent breach shows, that failure is no longer hypothetical.
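    The exposed-access-token failure mode mentioned above is common enough that many development teams run automated secret scanners over their repositories before code is pushed. As a rough illustration only (this is not LexisNexis's tooling, and real scanners such as gitleaks or truffleHog ship rule sets covering hundreds of credential formats), a minimal pattern-based scan might look like this in Python:

```python
import re

# Illustrative patterns only; real secret scanners (e.g. gitleaks,
# truffleHog) cover far more credential formats than these three.
SECRET_PATTERNS = {
    # Classic GitHub personal access tokens start with "ghp_".
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    # AWS access key IDs start with "AKIA".
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_secret) pairs found in `text`."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group()))
    return hits

if __name__ == "__main__":
    leaked = 'TOKEN = "ghp_' + "a" * 36 + '"  # accidentally committed'
    for rule, value in scan_text(leaked):
        print(f"possible secret ({rule}): {value[:12]}...")
```

A scan like this is typically wired into a pre-commit hook or CI job so a flagged credential blocks the commit before it ever reaches a shared repository.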
    7 expert tips to protect your personal data after a data broker breach

    Keeping your personal data safe online can feel overwhelming, but a few practical steps can make a big difference in protecting your privacy and reducing your digital footprint. Here are seven effective ways to take control of your information and keep it out of the wrong hands:

    1. Remove your data from the internet: The most effective way to take control of your data and keep data brokers from selling it is to use a data removal service. No service promises to remove all your data from the internet, but a removal service is worthwhile if you want to continuously monitor and automate the process of removing your information from hundreds of sites over a longer period of time. Check out my top picks for data removal services here. Get a free scan to find out if your personal information is already out on the web.

    2. Review privacy settings: Take a few minutes to explore the privacy and security settings on the services you use. For example, limit who can see your social media posts, disable unnecessary location-sharing on your phone and consider turning off ad personalization on accounts like Google and Facebook. Most browsers let you block third-party cookies or clear tracking data. The FTC suggests comparing the privacy notices of different sites and apps and choosing ones that let you opt out of sharing when possible.

    3. Use privacy-friendly tools: Install browser extensions or plugins that block ads and trackers. You might switch to a more private search engine that doesn’t log your queries. Consider using a browser’s "incognito" or private mode when you don’t want your history saved, and regularly clear your cookies and cache. Even small habits, like logging out of accounts when not in use or using a password manager, make you less trackable.

    4. Beware of phishing links and use strong antivirus software: Scammers may try to get access to your financial details and other important data using phishing links. The best way to safeguard yourself from malicious links is to have antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices.

    5. Be cautious with personal data: Think twice before sharing extra details. Don’t fill out online surveys or quizzes that ask for personal or financial information unless you trust the source. Create separate email addresses for sign-ups. Only download apps from official stores, and check app permissions.

    6. Opt out of data broker lists: Many data brokers offer ways to opt out or delete your information, though it can be a tedious process. Sites like Privacy Rights Clearinghouse and the Whitepages opt-out page list popular brokers and their opt-out procedures. The FTC’s consumer guide, "Your Guide to Protecting Your Privacy Online," includes tips on opting out of targeted ads and removing yourself from people-search databases. Keep in mind you may have to repeat this every few months.

    7. Be wary of mailbox communications: Bad actors may also try to scam you through snail mail, since the data leak gives them access to your address. They may impersonate people or brands you know and use themes that require urgent attention, such as missed deliveries, account suspensions and security alerts.

    Kurt’s key takeaway

    For many, the LexisNexis breach may be the first time they realize just how much of their data is in circulation. Unlike a social media platform or a bank, there is no clear customer relationship with a data broker, and that makes it harder to demand transparency.
    This incident should prompt serious discussion about what kind of oversight is necessary in industries that operate in the shadows. A more informed public and stronger regulation may be the only things standing between personal data and permanent exposure.

    Should companies be allowed to sell your personal information without your consent? Let us know by writing us at Cyberguy.com/Contact. For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.

    Copyright 2025 CyberGuy.com. All rights reserved. Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better, with contributions for Fox News and FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.
  • Why do lawyers keep using ChatGPT?

    Every few weeks, it seems like there’s a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, “bogus AI-generated research.” The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help with legal research, the LLM hallucinates cases that don’t exist, and the lawyer is none the wiser until the judge or opposing counsel points out the mistake. In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven’t they stopped?

    The answer mostly comes down to time crunches and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw have AI integrations now. For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren’t necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don’t understand exactly what LLMs are or how they work. One attorney who was sanctioned in 2023 said he thought ChatGPT was a “super search engine.” It took submitting a filing with fake citations to reveal that it’s more like a random-phrase generator, one that could give you either correct information or convincingly phrased nonsense.

    Andrew Perlman, the dean of Suffolk University Law School, argues that many lawyers are using AI tools without incident, and that the ones who get caught with fake citations are outliers. “I think that what we’re seeing now — although these problems of hallucination are real, and lawyers have to take it very seriously and be careful about it — doesn’t mean that these tools don’t have enormous possible benefits and use cases for the delivery of legal services,” Perlman said.
    Legal databases and research systems like Westlaw are incorporating AI services. In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they’ve used AI in the past, and 12 percent said they use it regularly. Respondents said they use AI to write summaries of case law and to research “case law, statutes, forms or sample language for orders.” The attorneys surveyed by Thomson Reuters see it as a time-saving tool, and half of those surveyed said “exploring the potential for implementing AI” at work is their highest priority. “The role of a good lawyer is as a ‘trusted advisor’ not as a producer of documents,” one respondent said. But as plenty of recent examples have shown, the documents produced by AI aren’t always accurate, and in some cases aren’t real at all.

    In one recent high-profile case, lawyers for journalist Tim Burke, who was arrested for publishing unaired Fox News footage in 2024, submitted a motion to dismiss the case against him on First Amendment grounds. After discovering that the filing included “significant misrepresentations and misquotations of supposedly pertinent case law and history,” Judge Kathryn Kimball Mizelle, of Florida’s middle district, ordered the motion stricken from the case record. Mizelle found nine hallucinations in the document, according to the Tampa Bay Times.

    Mizelle ultimately let Burke’s lawyers, Mark Rasch and Michael Maddux, submit a new motion. In a separate filing explaining the mistakes, Rasch wrote that he “assumes sole and exclusive responsibility for these errors.” Rasch said he used the “deep research” feature on ChatGPT Pro, which The Verge has previously tested with mixed results, as well as Westlaw’s AI feature.

    Rasch isn’t alone. Lawyers representing Anthropic recently admitted to using the company’s Claude AI to help write an expert witness declaration submitted as part of the copyright infringement lawsuit brought against Anthropic by music publishers.
    That filing included a citation with an “inaccurate title and inaccurate authors.” Last December, misinformation expert Jeff Hancock admitted he used ChatGPT to help organize citations in a declaration he submitted in support of a Minnesota law regulating deepfake use. Hancock’s filing included “two citation errors, popularly referred to as ‘hallucinations,’” and incorrectly listed authors for another citation.

    These documents do, in fact, matter, at least in the eyes of judges. In a recent case, a California judge presiding over a case against State Farm was initially swayed by arguments in a brief, only to find that the case law cited was completely made up. “I read their brief, was persuaded by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist,” Judge Michael Wilner wrote.

    Perlman said there are several less risky ways lawyers use generative AI in their work, including finding information in large tranches of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or opposing views. “I think in almost every task, there are ways in which generative AI can be useful — not a substitute for lawyers’ judgment, not a substitute for the expertise that lawyers bring to the table, but in order to supplement what lawyers do and enable them to do their work better, faster, and cheaper,” Perlman said.

    But like anyone using AI tools, lawyers who rely on them for legal research and writing need to check the work those tools produce, Perlman said. Part of the problem is that attorneys often find themselves short on time, an issue he says existed before LLMs came into the picture. “Even before the emergence of generative AI, lawyers would file documents with citations that didn’t really address the issue that they claimed to be addressing,” Perlman said. “It was just a different kind of problem.
Sometimes when lawyers are rushed, they insert citations, they don’t properly check them; they don’t really see if the case has been overturned or overruled.”Another, more insidious problem is the fact that attorneys — like others who use LLMs to help with research and writing — are too trusting of what AI produces. “I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be so well crafted,” Perlman said.Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT as a junior-level associate. He’s also used ChatGPT to help write legislation. In 2024, he included AI text in part of a bill on deepfakes, having the LLM provide the “baseline definition” of what deepfakes are and then “I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin told The Guardian at the time. Kolodin said he “may have” discussed his use of ChatGPT with the bill’s main Democratic cosponsor but otherwise wanted it to be “an Easter egg” in the bill. The bill passed into law. Kolodin — who was sanctioned by the Arizona State Bar in 2020 for his involvement in lawsuits challenging the result of the 2020 election — has also used ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he just checks the citations to make sure they’re real.“You don’t just typically send out a junior associate’s work product without checking the citations,” said Kolodin. “It’s not just machines that hallucinate; a junior associate could read the case wrong, it doesn’t really stand for the proposition cited anyway, whatever. 
You still have to cite-check it, but you have to do that with an associate anyway, unless they were pretty experienced.”Kolodin said he uses both ChatGPT’s pro “deep research” tool and the LexisNexis AI tool. Like Westlaw, LexisNexis is a legal research tool primarily used by attorneys. Kolodin said that in his experience, it has a higher hallucination rate than ChatGPT, which he says has “gone down substantially over the past year.” AI use among lawyers has become so prevalent that in 2024, the American Bar Association issued its first guidance on attorneys’ use of LLMs and other AI tools. Lawyers who use AI tools “have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature” of generative AI, the opinion reads. The guidance advises lawyers to “acquire a general understanding of the benefits and risks of the GAI tools” they use — or, in other words, to not assume that an LLM is a “super search engine.” Attorneys should also weigh the confidentiality risks of inputting information relating to their cases into LLMs and consider whether to tell their clients about their use of LLMs and other AI tools, it states.Perlman is bullish on lawyers’ use of AI. “I do think that generative AI is going to be the most impactful technology the legal profession has ever seen and that lawyers will be expected to use these tools in the future,” he said. “I think that at some point, we will stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don’t.”Others, including one of the judges who sanctioned lawyers for submitting a filing full of AI-generated hallucinations, are more skeptical. “Even with recent advances,” Wilner wrote, “no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material.”See More:
    #why #lawyers #keep #using #chatgpt
    Why do lawyers keep using ChatGPT?
    WWW.THEVERGE.COM
Every few weeks, it seems like there’s a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, “bogus AI-generated research.” The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help them with legal research (or worse, writing), the LLM hallucinates cases that don’t exist, and the lawyer is none the wiser until the judge or opposing counsel points out their mistake. In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven’t they stopped?

The answer mostly comes down to time crunches, and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw have AI integrations now. For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren’t necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don’t understand exactly what LLMs are or how they work. One attorney who was sanctioned in 2023 said he thought ChatGPT was a “super search engine.” It took submitting a filing with fake citations to reveal that it’s more like a random-phrase generator — one that could give you either correct information or convincingly phrased nonsense.

Andrew Perlman, the dean of Suffolk University Law School, argues that many lawyers are using AI tools without incident, and that the ones who get caught with fake citations are outliers. “I think that what we’re seeing now — although these problems of hallucination are real, and lawyers have to take it very seriously and be careful about it — doesn’t mean that these tools don’t have enormous possible benefits and use cases for the delivery of legal services,” Perlman said.
Legal databases and research systems like Westlaw are incorporating AI services. In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they’ve used AI in the past, and 12 percent said they use it regularly. Respondents said they use AI to write summaries of case law and to research “case law, statutes, forms or sample language for orders.” The attorneys surveyed by Thomson Reuters see it as a time-saving tool, and half of those surveyed said “exploring the potential for implementing AI” at work is their highest priority. “The role of a good lawyer is as a ‘trusted advisor’ not as a producer of documents,” one respondent said. But as plenty of recent examples have shown, the documents produced by AI aren’t always accurate, and in some cases aren’t real at all.

In one recent high-profile case, lawyers for journalist Tim Burke, who was arrested for publishing unaired Fox News footage in 2024, submitted a motion to dismiss the case against him on First Amendment grounds. After discovering that the filing included “significant misrepresentations and misquotations of supposedly pertinent case law and history,” Judge Kathryn Kimball Mizelle, of Florida’s Middle District, ordered the motion to be stricken from the case record. Mizelle found nine hallucinations in the document, according to the Tampa Bay Times.

Mizelle ultimately let Burke’s lawyers, Mark Rasch and Michael Maddux, submit a new motion. In a separate filing explaining the mistakes, Rasch wrote that he “assumes sole and exclusive responsibility for these errors.” Rasch said he used the “deep research” feature on ChatGPT Pro, which The Verge has previously tested with mixed results, as well as Westlaw’s AI feature.

Rasch isn’t alone. Lawyers representing Anthropic recently admitted to using the company’s Claude AI to help write an expert witness declaration submitted as part of the copyright infringement lawsuit brought against Anthropic by music publishers.
That filing included a citation with an “inaccurate title and inaccurate authors.” Last December, misinformation expert Jeff Hancock admitted he used ChatGPT to help organize citations in a declaration he submitted in support of a Minnesota law regulating deepfake use. Hancock’s filing included “two citation errors, popularly referred to as ‘hallucinations,’” and incorrectly listed authors for another citation.

These documents do, in fact, matter — at least in the eyes of judges. In a recent case, a California judge presiding over a case against State Farm was initially swayed by arguments in a brief, only to find that the case law cited was completely made up. “I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist,” Judge Michael Wilner wrote.

Perlman said there are several less risky ways lawyers use generative AI in their work, including finding information in large tranches of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or possible opposing views. “I think in almost every task, there are ways in which generative AI can be useful — not a substitute for lawyers’ judgment, not a substitute for the expertise that lawyers bring to the table, but in order to supplement what lawyers do and enable them to do their work better, faster, and cheaper,” Perlman said.

But like anyone using AI tools, lawyers who rely on them to help with legal research and writing need to be careful to check the work they produce, Perlman said. Part of the problem is that attorneys often find themselves short on time — an issue he says existed before LLMs came into the picture. “Even before the emergence of generative AI, lawyers would file documents with citations that didn’t really address the issue that they claimed to be addressing,” Perlman said. “It was just a different kind of problem.
Sometimes when lawyers are rushed, they insert citations, they don’t properly check them; they don’t really see if the case has been overturned or overruled.” (That said, the cases do at least typically exist.)

Another, more insidious problem is that attorneys — like others who use LLMs to help with research and writing — are too trusting of what AI produces. “I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be so well crafted,” Perlman said.

Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT as a junior-level associate. He’s also used ChatGPT to help write legislation. In 2024, he included AI text in part of a bill on deepfakes, having the LLM provide the “baseline definition” of what deepfakes are; then, “I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin told The Guardian at the time. Kolodin said he “may have” discussed his use of ChatGPT with the bill’s main Democratic cosponsor but otherwise wanted it to be “an Easter egg” in the bill. The bill passed into law.

Kolodin — who was sanctioned by the Arizona State Bar in 2020 for his involvement in lawsuits challenging the result of the 2020 election — has also used ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he just checks the citations to make sure they’re real.

“You don’t just typically send out a junior associate’s work product without checking the citations,” said Kolodin. “It’s not just machines that hallucinate; a junior associate could read the case wrong, it doesn’t really stand for the proposition cited anyway, whatever.
You still have to cite-check it, but you have to do that with an associate anyway, unless they were pretty experienced.”

Kolodin said he uses both ChatGPT Pro’s “deep research” tool and the LexisNexis AI tool. Like Westlaw, LexisNexis is a legal research tool primarily used by attorneys. Kolodin said that in his experience, the LexisNexis tool has a higher hallucination rate than ChatGPT, whose rate he says has “gone down substantially over the past year.”

AI use among lawyers has become so prevalent that in 2024, the American Bar Association issued its first guidance on attorneys’ use of LLMs and other AI tools. Lawyers who use AI tools “have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature” of generative AI, the opinion reads. The guidance advises lawyers to “acquire a general understanding of the benefits and risks of the GAI tools” they use — or, in other words, not to assume that an LLM is a “super search engine.” Attorneys should also weigh the confidentiality risks of inputting information relating to their cases into LLMs and consider whether to tell their clients about their use of LLMs and other AI tools, it states.

Perlman is bullish on lawyers’ use of AI. “I do think that generative AI is going to be the most impactful technology the legal profession has ever seen and that lawyers will be expected to use these tools in the future,” he said. “I think that at some point, we will stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don’t.”

Others, including one of the judges who sanctioned lawyers for submitting a filing full of AI-generated hallucinations, are more skeptical. “Even with recent advances,” Wilner wrote, “no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material.”
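The cite-check step Kolodin describes can be partially mechanized. As a minimal, hypothetical sketch (the regex below covers only a few common U.S. reporter formats and is not the API of any real cite-checking tool), a script can pull reporter citations out of a draft so that each one can be looked up by hand before filing:

```python
import re

# Rough pattern for common U.S. reporter citations, e.g. "410 U.S. 113"
# or "678 F. Supp. 3d 443". Illustrative only: real cite-checking also
# needs parallel cites, short forms, and subsequent-history lookups.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                              # volume
    r"(?:U\.S\.|S\.\s?Ct\.|F\.\s?(?:2d|3d|4th)|F\.\s?Supp\.(?:\s?[23]d)?)"
    r"\s+\d{1,4}\b"                                              # first page
)

def extract_citations(draft: str) -> list[str]:
    """Return each reporter citation found in a draft, for manual lookup."""
    return CITATION_RE.findall(draft)

draft = (
    "Plaintiff relies on Roe v. Wade, 410 U.S. 113 (1973), and "
    "Mata v. Avianca, 678 F. Supp. 3d 443 (S.D.N.Y. 2023)."
)
for cite in extract_citations(draft):
    print(cite)  # each citation still has to be verified in a real reporter
```

Note what this does and does not do: extraction confirms only that a citation is well-formed, which is exactly the failure mode of LLM hallucinations — fake cases come formatted perfectly. The human lookup step is the part that cannot be skipped.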
  • That LexisNexis Data Breach Was So Bad, It Might Lead to a Class-Action Lawsuit

    LIFEHACKER.COM
Data broker LexisNexis Risk Solutions (LNRS) has just disclosed a data breach that occurred at the end of last year, and while it doesn't affect as many individuals as other recent high-profile incidents—such as the DISA hack that included 3.3 million people's information—it underscores the ever-present concerns with companies collecting (and profiting off of) user data. As TechCrunch reports, LexisNexis Risk Solutions uses consumers' personal and financial information to help corporations conduct risk assessments on prospective customers and detect fraudulent transactions. For example, LexisNexis sold data on vehicle driving habits collected by car manufacturers to insurance companies to set premiums, while law enforcement agencies pull data from LexisNexis about suspects. (LexisNexis Risk Solutions is a subsidiary of the same corporation that owns data analytics and research firm LexisNexis.)

The LexisNexis hack compromised data collected on 364,333 individuals, and there's a potential class action lawsuit brewing over the incident. Here's what you need to know.

What happened with LexisNexis?

According to the company's filing with the Maine attorney general's office, a data breach took place on December 25, 2024 but wasn't discovered until May 14, 2025. A third-party platform used by LexisNexis was hacked, compromising information that may include the following:

- Name
- Phone number
- Mailing address
- Email address
- Social Security number
- Driver's license number
- Date of birth

In a letter to affected individuals, LexisNexis states that no financial or credit card information was included in the breach, nor has any data been obviously misused (so far). Few additional details about the incident have been disclosed, other than that none of the company's own networks or systems were hacked.

What consumers need to do

LexisNexis sent a notice dated May 24 to consumers whose data may have been compromised, so if you receive a letter from LexisNexis Risk Solutions, don't throw it out.
The company is offering 24 months of identity protection and credit monitoring services through Experian IdentityWorks, and you must enroll online by August 31, 2025, using the activation code provided in your notice. Affected individuals can also indicate their interest in joining a class action lawsuit against LexisNexis through Oklahoma-based firm Abington Cole + Ellery. If you want to volunteer to be considered as a class representative, fill out the online form with your name, contact information, and connection to the breach.

Finally, even if you don't plan to join the class action suit, you should keep an eye out for signs of identity theft. Check your credit report—which you can request for free on a weekly basis—and monitor your accounts for any unauthorized activity. You can also freeze your credit, place a fraud alert, and take other steps to secure your Social Security number so no one can open accounts or take out debt in your name.