• How to delete your 23andMe data

    DNA testing service 23andMe has undergone serious upheaval in recent months, creating concerns for the 15 million customers who entrusted the company with their personal biological information. After filing for Chapter 11 bankruptcy protection in March, the company became the center of a bidding war that ended Friday when co-founder Anne Wojcicki said she’d successfully reacquired control through her nonprofit TTAM Research Institute for $305 million.
    The bankruptcy proceedings had sent shockwaves through the genetic testing industry and among privacy advocates, with security experts and lawmakers urging customers to take immediate action to safeguard their data. The company’s interim CEO revealed this week that 1.9 million people, around 15% of 23andMe’s customer base, have already requested their genetic data be deleted from the company’s servers.
    The situation became even more complex last week after more than two dozen states filed lawsuits challenging the sale of customers’ private data, arguing that 23andMe must obtain explicit consent before transferring or selling personal information to any new entity.
    While the company’s policies mean you cannot delete all traces of your genetic data — particularly information that may have already been shared with research partners or stored in backup systems — if you’re one of the 15 million people who shared their DNA with 23andMe, there are still meaningful steps you can take to protect yourself and minimize your exposure.
    How to delete your 23andMe data
    To delete your data from 23andMe, you need to log in to your account and then follow these steps:

    Navigate to the Settings section of your profile.
    Scroll down to the section labeled 23andMe Data.
    Click the View option and scroll to the Delete Data section.
    Select the Permanently Delete Data button.

    You will then receive an email from 23andMe with a link that will allow you to confirm your deletion request. 
    You can choose to download a copy of your data before deleting it.
    There is an important caveat, as 23andMe’s privacy policy states that the company and its labs “will retain your Genetic Information, date of birth, and sex as required for compliance with applicable legal obligations.”
    The policy continues: “23andMe will also retain limited information related to your account and data deletion request, including but not limited to, your email address, account deletion request identifier, communications related to inquiries or complaints and legal agreements for a limited period of time as required by law, contractual obligations, and/or as necessary for the establishment, exercise or defense of legal claims and for audit and compliance purposes.”
    This essentially means that 23andMe may keep some of your information for an unspecified amount of time. 
    How to destroy your 23andMe test sample and revoke permission for your data to be used for research
    If you previously opted to have your saliva sample and DNA stored by 23andMe, you can change this setting.
    To revoke your permission, go into your 23andMe account settings page and then navigate to Preferences. 
    In addition, if you previously agreed to 23andMe and third-party researchers using your genetic data and sample for research, you can withdraw consent from the Research and Product Consents section in your account settings. 
    While you can reverse that consent, there’s no way for you to delete that information.
    Check in with your family members
    Once you have requested the deletion of your data, it’s important to check in with your family members and encourage them to do the same because it’s not just their DNA that’s at risk of sale — it also affects people they are related to. 
    And while you’re at it, it’s worth checking in with your friends to ensure that all of your loved ones are taking steps to protect their data. 
    This story originally published on March 25 and was updated June 11 with new information.
  • Anthropic launches new Claude service for military and intelligence use

    Anthropic on Thursday announced Claude Gov, its product designed specifically for U.S. defense and intelligence agencies. The AI models have looser guardrails for government use and are trained to better analyze classified information. The company said the models it’s announcing “are already deployed by agencies at the highest level of U.S. national security,” and that access to those models will be limited to government agencies handling classified information. The company did not confirm how long they had been in use.
    Claude Gov models are specifically designed to uniquely handle government needs, like threat assessment and intelligence analysis, per Anthropic’s blog post. And although the company said they “underwent the same rigorous safety testing as all of our Claude models,” the models have certain specifications for national security work. For example, they “refuse less when engaging with classified information” that’s fed into them, something consumer-facing Claude is trained to flag and avoid. Claude Gov’s models also have greater understanding of documents and context within defense and intelligence, according to Anthropic, and better proficiency in languages and dialects relevant to national security.
    Use of AI by government agencies has long been scrutinized because of its potential harms and ripple effects for minorities and vulnerable communities. There’s been a long list of wrongful arrests across multiple U.S. states due to police use of facial recognition, documented evidence of bias in predictive policing, and discrimination in government algorithms that assess welfare aid. For years, there’s also been an industry-wide controversy over large tech companies like Microsoft, Google and Amazon allowing the military — particularly in Israel — to use their AI products, with campaigns and public protests under the No Tech for Apartheid movement.
    Anthropic’s usage policy specifically dictates that any user must “Not Create or Facilitate the Exchange of Illegal or Highly Regulated Weapons or Goods,” including using Anthropic’s products or services to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.” At least eleven months ago, the company said it created a set of contractual exceptions to its usage policy that are “carefully calibrated to enable beneficial uses by carefully selected government agencies.” Certain restrictions — such as disinformation campaigns, the design or use of weapons, the construction of censorship systems, and malicious cyber operations — would remain prohibited. But Anthropic can decide to “tailor use restrictions to the mission and legal authorities of a government entity,” although it will aim to “balance enabling beneficial uses of our products and services with mitigating potential harms.”
    Claude Gov is Anthropic’s answer to ChatGPT Gov, OpenAI’s product for U.S. government agencies, which it launched in January. It’s also part of a broader trend of AI giants and startups alike looking to bolster their businesses with government agencies, especially in an uncertain regulatory landscape. When OpenAI announced ChatGPT Gov, the company said that within the past year, more than 90,000 employees of federal, state, and local governments had used its technology to translate documents, generate summaries, draft policy memos, write code, build applications, and more.
    Anthropic declined to share numbers or use cases of the same sort, but the company is part of Palantir’s FedStart program, a SaaS offering for companies who want to deploy federal government-facing software. Scale AI, the AI giant that provides training data to industry leaders like OpenAI, Google, Microsoft, and Meta, signed a deal with the Department of Defense in March for a first-of-its-kind AI agent program for U.S. military planning. And since then, it’s expanded its business to world governments, recently inking a five-year deal with Qatar to provide automation tools for civil service, healthcare, transportation, and more.
  • What VMware’s licensing crackdown reveals about control and risk 

    Over the past few weeks, VMware customers holding onto their perpetual licenses, which are often unsupported and in limbo, have reportedly begun receiving formal cease-and-desist letters from Broadcom. The message is as blunt as it is unsettling: your support contract has expired, and you are to immediately uninstall any updates, patches, or enhancements released since that expiration date. Not only that, but audits could follow, with the possibility of “enhanced damages” for breach of contract.
    This is a sharp escalation in an effort to push perpetual license holders toward VMware’s new subscription-only model. For many, it signals the end of an era where critical infrastructure software could be owned, maintained, and supported on long-term, stable terms.
    Now, even those who bought VMware licenses outright are being told that support access is off the table unless they sign on to the new subscription regime. As a result, enterprises are being forced to make tough decisions about how they manage and support one of the most foundational layers of their IT environments.

    VMware isn’t just another piece of enterprise software. It’s the plumbing. The foundation. The layer everything else runs on top of, which is precisely why many CIOs flinch at the idea of running unsupported. The potential risk is too great. A vulnerability or failure in your virtual infrastructure isn’t the same as a bug in a CRM. It’s a systemic weakness. It touches everything.
    This technical risk is, without question, the biggest barrier to any organization considering support options outside of VMware’s official offering. And it’s a valid concern.  But technical risk isn’t black and white. It varies widely depending on version, deployment model, network architecture, and operational maturity. A tightly managed and stable VMware environment running a mature release with minimal exposure doesn’t carry the same risk profile as an open, multi-tenant deployment on a newer build.

    The prevailing assumption is that support equals security—and that operating unsupported equals exposure. But this relationship is more complex than it appears. In most enterprise environments, security is not determined by whether a patch is available. It’s determined by how well the environment is configured, managed, and monitored.
    Patches are not applied instantly. Risk assessments, integration testing, and change control processes introduce natural delays. And in many cases, security gaps arise not from missing patches but from misconfigurations: exposed management interfaces, weak credentials, overly permissive access. An unpatched environment, properly maintained and reviewed, can be significantly more secure than a patched one with poor hygiene. Support models that focus on proactive security—through vulnerability analysis, environment-specific impact assessments, and mitigation strategies—offer a different but equally valid form of protection. They don’t rely on patch delivery alone. They consider how a vulnerability behaves in the attack chain, whether it’s exploitable, and what compensating controls are available. 
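    To make that kind of environment-specific assessment concrete, here is a minimal, hypothetical sketch (in Python) of triaging a vulnerability by exploitability and compensating controls rather than by patch availability alone. The field names and weightings are illustrative assumptions, not any vendor’s or support provider’s actual methodology.

        # Hypothetical triage sketch: rank vulnerabilities by how exploitable they
        # are in a specific environment, not just by whether a patch exists.
        # All field names and weightings are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class Vulnerability:
            cve_id: str
            cvss_base: float              # published severity, 0-10
            reachable_from_network: bool  # is the vulnerable service exposed at all?
            exploit_public: bool          # is exploit code known to circulate?
            compensating_controls: int    # e.g. isolated mgmt VLAN, MFA, strict ACLs

        def triage_score(v: Vulnerability) -> float:
            """Higher score = act sooner (patch, mitigate or escalate)."""
            score = v.cvss_base
            if not v.reachable_from_network:
                score *= 0.2              # unreachable flaws rarely matter in practice
            if v.exploit_public:
                score *= 1.5
            score *= max(0.3, 1.0 - 0.2 * v.compensating_controls)
            return round(score, 2)

        findings = [
            Vulnerability("CVE-XXXX-0001", 9.8, reachable_from_network=False,
                          exploit_public=True, compensating_controls=2),
            Vulnerability("CVE-XXXX-0002", 7.5, reachable_from_network=True,
                          exploit_public=True, compensating_controls=0),
        ]
        for v in sorted(findings, key=triage_score, reverse=True):
            print(v.cve_id, triage_score(v))

    In this toy model, the lower-severity but exposed and uncontrolled flaw outranks the critical one sitting behind an isolated management network, which is exactly the point about configuration and hygiene mattering as much as patch availability.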

    Read more about VMware security

    Hacking contest exposes VMware security: In what has been described as a historical first, hackers in Berlin have been able to demo successful attacks on the ESXi hypervisor.
    No workaround leads to more pain for VMware users: There are patches for the latest batch of security alerts from Broadcom, but VMware users on perpetual licences may not have access.

    This kind of tailored risk management is especially important now, as vendor support for older VMware versions diminishes. Many reported vulnerabilities relate to newer product components or bundled services, not the core virtualization stack. The perception of rising security risk needs to be balanced against the stability and maturity of the versions in question. In other words, not all unsupported deployments are created equal.

    Some VMware environments—particularly older versions like vSphere 5.x or 6.x—are already beyond the range of vendor patching. In these cases, the transition to unsupported status may be more symbolic than substantive. The risk profile has not meaningfully changed.  Others, particularly organisations operating vSphere 7 or 8 without an active support contract, face a more complex challenge. Some critical security patches remain accessible, depending on severity and version, but the margin of certainty is shrinking.  
    These are the cases where enterprises are increasingly turning to alternative support models to bridge the gap—ensuring continuity, maintaining compliance, and retaining access to skilled technical expertise.

    Third-party support is sometimes seen as a temporary fix—a way to buy time while organizations figure out their long-term plans. And it can serve that purpose well. But increasingly, it’s also being recognized as a strategic choice in its own right: a long-term solution for enterprises that want to maintain operational stability with a reliable support partner while retaining control over their virtualization roadmap.
    What distinguishes third-party support in this context isn’t just cost control; it’s methodology.
    Risk is assessed holistically, identifying which vulnerabilities truly matter, what can be addressed through configuration, and when escalation is genuinely required. This approach recognises that most enterprises aren’t chasing bleeding-edge features. They want to run stable, well-understood environments that don’t change unpredictably. Third-party support helps them do exactly that, without being forced into a rapid, costly migration or a subscription contract that may not align with their business needs. 
    Crucially, it enables organisations to move on their own timeline.
    Much of the conversation around unsupported VMware environments focuses on technical risk. But the longer-term threat may be strategic. The end of perpetual licensing, the sharp rise in subscription pricing, and now the legal enforcement of support boundaries all point to a much bigger problem: a loss of control over infrastructure strategy.
    Vendor-imposed timelines, licensing models, and audit policies are increasingly dictating how organizations use the very software they once owned outright. Third-party support doesn’t eliminate risk—nothing can. But it redistributes and controls it. It gives enterprises more agency over when and how they migrate, how they manage updates, and where they invest. In a landscape shaped by vendor agendas, that independence is increasingly critical. 
    Broadcom’s cease-and-desist letters represent a new phase in the relationship between software vendors and customers—one defined not by collaboration, but by contractual enforcement. And for VMware customers still clinging to the idea of “owning” their infrastructure, it’s a rude awakening: support is no longer optional, and perpetual is no longer forever. Organizations now face three paths: accept the subscription model, attempt a rapid migration to an alternative platform, or find a support model that gives them the stability to decide their future on their own terms. 
    For many, the third option is the only one that balances operational security with strategic flexibility. 
    The question now isn’t whether unsupported infrastructure is risky. The question is whether the greater risk is allowing someone else to dictate what happens next. 
  • Essex Police discloses ‘incoherent’ facial recognition assessment

    Essex Police has not properly considered the potentially discriminatory impacts of its live facial recognition (LFR) use, according to documents obtained by Big Brother Watch and shared with Computer Weekly.
    While the force claims in an equality impact assessment (EIA) that “Essex Police has carefully considered issues regarding bias and algorithmic injustice”, privacy campaign group Big Brother Watch said the document – obtained under Freedom of Information rules – shows it has likely failed to fulfil its public sector equality duty (PSED) to consider how its policies and practices could be discriminatory.
    The campaigners highlighted how the force is relying on false comparisons to other algorithms and “parroting misleading claims” from the supplier about the LFR system’s lack of bias.
    For example, Essex Police said that when deploying LFR, it will set the system threshold “at 0.6 or above, as this is the level whereby equitability of the rate of false positive identification across all demographics is achieved”.
    However, this figure is based on the National Physical Laboratory’s (NPL) testing of NEC’s Neoface V4 LFR algorithm deployed by the Metropolitan Police and South Wales Police, which Essex Police does not use.
    Instead, Essex Police has opted to use an algorithm developed by Israeli biometrics firm Corsight, whose chief privacy officer, Tony Porter, was formerly the UK’s surveillance camera commissioner until January 2021.
    Highlighting testing of the Corsight_003 algorithm conducted in June 2022 by the US National Institute of Standards and Technology (NIST), the EIA also claims it has “a bias differential FMR of 0.0006 overall, the lowest of any tested within NIST at the time of writing, according to the supplier”.
    However, looking at the NIST website, where all of the testing data is publicly shared, there is no information to support the figure cited by Corsight, or its claim to essentially have the least biased algorithm available.
    A separate FoI response to Big Brother Watch confirmed that, as of 16 January 2025, Essex Police had not conducted any “formal or detailed” testing of the system itself, or otherwise commissioned a third party to do so.


    “Looking at Essex Police’s EIA, we are concerned about the force’s compliance with its duties under equality law, as the reliance on shaky evidence seriously undermines the force’s claims about how the public will be protected against algorithmic bias,” said Jake Hurfurt, head of research and investigations at Big Brother Watch.
    “Essex Police’s lax approach to assessing the dangers of a controversial and dangerous new form of surveillance has put the rights of thousands at risk. This slapdash scrutiny of their intrusive facial recognition system sets a worrying precedent.
    “Facial recognition is notorious for misidentifying women and people of colour, and Essex Police’s willingness to deploy the technology without testing it themselves raises serious questions about the force’s compliance with equalities law. Essex Police should immediately stop their use of facial recognition surveillance.”
    The need for UK police forces deploying facial recognition to consider how their use of the technology could be discriminatory was highlighted by a legal challenge brought against South Wales Police by Cardiff resident Ed Bridges.
    In August 2020, the UK Court of Appeal ruled that the use of LFR by the force was unlawful because the privacy violations it entailed were “not in accordance” with legally permissible restrictions on Bridges’ Article 8 privacy rights; it did not conduct an appropriate data protection impact assessment; and it did not comply with its PSED to consider how its policies and practices could be discriminatory.
    The judgment specifically found that the PSED is a “duty of process and not outcome”, and requires public bodies to take reasonable steps “to make enquiries about what may not yet be known to a public authority about the potential impact of a proposed decision or policy on people with the relevant characteristics, in particular for present purposes race and sex”.
    Big Brother Watch said equality assessments must rely on “sufficient quality evidence” to back up the claims being made and ultimately satisfy the PSED, but that the documents obtained do not demonstrate the force has had “due regard” for equalities.
    Academic Karen Yeung, an interdisciplinary professor at Birmingham Law School and School of Computer Science, told Computer Weekly that, in her view, the EIA is “clearly inadequate”.
    She also criticised the document for being “incoherent”, failing to look at the systemic equalities impacts of the technology, and relying exclusively on testing of entirely different software algorithms used by other police forces trained on different populations: “This does not, in my view, fulfil the requirements of the public sector equality duty. It is a document produced from a cut-and-paste exercise from the largely irrelevant material produced by others.”

    Computer Weekly contacted Essex Police about every aspect of the story.
    “We take our responsibility to meet our public sector equality duty very seriously, and there is a contractual requirement on our LFR partner to ensure sufficient testing has taken place to ensure the software meets the specification and performance outlined in the tender process,” said a spokesperson.
    “There have been more than 50 deployments of our LFR vans, scanning 1.7 million faces, which have led to more than 200 positive alerts, and nearly 70 arrests.
    “To date, there has been one false positive, which, when reviewed, was established to be as a result of a low-quality photo uploaded onto the watchlist and not the result of bias issues with the technology. This did not lead to an arrest or any other unlawful action because of the procedures in place to verify all alerts. This issue has been resolved to ensure it does not occur again.”
    The spokesperson added that the force is also committed to carrying out further assessment of the software and algorithms, with the evaluation of deployments and results being subject to an independent academic review.
    “As part of this, we have carried out, and continue to do so, testing and evaluation activity in conjunction with the University of Cambridge. The NPL have recently agreed to carry out further independent testing, which will take place over the summer. The company have also achieved an ISO 42001 certification,” said the spokesperson. “We are also liaising with other technical specialists regarding further testing and evaluation activity.”
    However, the force did not comment on why it was relying on the testing of a completely different algorithm in its EIA, or why it had not conducted or otherwise commissioned its own testing before operationally deploying the technology in the field.
    Computer Weekly followed up Essex Police for clarification on when the testing with Cambridge began, as this is not mentioned in the EIA, but received no response by time of publication.

    Although Essex Police and Corsight claim the facial recognition algorithm in use has “a bias differential FMR of 0.0006 overall, the lowest of any tested within NIST at the time of writing”, there is no publicly available data on NIST’s website to support this claim.
    Drilling down into the demographic split of false positive rates shows, for example, around 100 times more false positives for West African women than for Eastern European men.
    While this is an improvement on the previous two algorithms submitted for testing by Corsight, other publicly available data held by NIST undermines Essex Police’s claim in the EIA that the “algorithm is identified by NIST as having the lowest bias variance between demographics”.
    Another metric held by NIST – FMR Max/Min, the ratio between the demographic groups that give the most and the fewest false positives – essentially represents how inequitable the error rates are across different age groups, sexes and ethnicities.
    In this instance, smaller values represent better performance, with the ratio being an estimate of how many times more false positives can be expected in one group over another.
    According to the NIST webpage for “demographic effects” in facial recognition algorithms, the Corsight algorithm has an FMR Max/Min of 113, meaning there are at least 21 algorithms that display less bias. For comparison, the least biased algorithm according to NIST results belongs to a firm called Idemia, which has an FMR Max/Min of 5.
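    As a rough illustration of the arithmetic behind that ratio, the short sketch below computes FMR Max/Min from a set of per-demographic false match rates. The rates are invented for illustration and are not NIST measurements for Corsight, Idemia or any other vendor.

        # Illustrative sketch: computing an FMR Max/Min ratio from per-demographic
        # false match rates. The figures below are invented for illustration only;
        # they are not NIST results for any real algorithm.

        def fmr_max_min(fmr_by_group: dict[str, float]) -> float:
            """Ratio of the highest to the lowest false match rate across groups.

            A value of 1.0 would mean every demographic group experiences false
            positives at the same rate; larger values mean less equitable errors.
            """
            rates = fmr_by_group.values()
            return max(rates) / min(rates)

        # Hypothetical per-group false match rates at a fixed decision threshold.
        example_rates = {
            "eastern_european_men": 0.000002,
            "west_african_women":   0.000200,   # 100x more false positives
            "east_asian_men":       0.000020,
        }

        print(f"FMR Max/Min: {fmr_max_min(example_rates):.0f}")  # -> 100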
    However, like Corsight, the highest false match rate for Idemia’s algorithm was for older West African women. Computer Weekly understands this is a common problem with many of the facial recognition algorithms NIST tests because this group is not typically well-represented in the underlying training data of most firms.
    Computer Weekly also confirmed with NIST that the FMR metric cited by Corsight relates to one-to-one verification, rather than the one-to-many situation police forces would be using it in.
    This is a key distinction, because if 1,000 people are enrolled in a facial recognition system that was built on one-to-one verification, then the false positive rate will be 1,000 times larger than the metrics held by NIST for FMR testing.
    “If a developer implements 1:N (one-to-many) search as N 1:1 comparisons, then the likelihood of a false positive from a search is expected to be proportional to the false match for the 1:1 comparison algorithm,” said NIST scientist Patrick Grother. “Some developers do not implement 1:N search that way.”
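    To make that scaling concrete, the following sketch estimates how a per-comparison false match rate compounds across a watchlist when a 1:N search is implemented as N independent 1:1 comparisons, as Grother describes; the FMR and watchlist size used here are illustrative assumptions, not figures published by NIST, Corsight or Essex Police.

```python
# Back-of-the-envelope sketch of why a one-to-one false match rate (FMR)
# understates the false positive risk of a one-to-many search, assuming the
# 1:N search is implemented as N independent 1:1 comparisons (Grother notes
# not all developers implement it this way). The FMR and watchlist size are
# illustrative assumptions only.
fmr_1to1 = 0.0001        # per-comparison false match rate (illustrative)
watchlist_size = 1_000   # N identities enrolled on the watchlist

# Expected number of false matches each time one face is searched
expected_false_matches = watchlist_size * fmr_1to1

# Probability that a single search raises at least one false alert
p_at_least_one = 1 - (1 - fmr_1to1) ** watchlist_size

print(f"Expected false matches per search: {expected_false_matches:.2f}")
print(f"P(at least one false alert per search): {p_at_least_one:.1%}")
```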
    Commenting on the contrast between this testing methodology and the practical scenarios the tech will be deployed in, Birmingham Law School’s Yeung said one-to-one is for use in stable environments to provide admission to spaces with limited access, such as airport passport gates, where only one person’s biometric data is scrutinised at a time.
    “One-to-many is entirely different – it’s an entirely different process, an entirely different technical challenge, and therefore cannot typically achieve equivalent levels of accuracy,” she said.
    Computer Weekly contacted Corsight about every aspect of the story related to its algorithmic testing, including where the “0.0006” figure is drawn from and its various claims to have the “least biased” algorithm.
    “The facts presented in your article are partial, manipulated and misleading,” said a company spokesperson. “Corsight AI’s algorithms have been tested by numerous entities, including NIST, and have been proven to be the least biased in the industry in terms of gender and ethnicity. This is a major factor for our commercial and government clients.”
    However, Corsight was either unable or unwilling to specify which facts are “partial, manipulated or misleading” in response to Computer Weekly’s request for clarification.
    Computer Weekly also contacted Corsight about whether it has done any further testing by running N one-to-one comparisons, and whether it has changed the system’s threshold settings for detecting a match to suppress the false positive rate, but received no response on these points.
    While most facial recognition developers submit their algorithms to NIST for testing on an annual or bi-annual basis, Corsight last submitted an algorithm in mid-2022. Computer Weekly contacted Corsight about why this was the case, given that most algorithms in NIST testing show continuous improvement with each submission, but again received no response on this point.

    The Essex Police EIA also highlights testing of the Corsight algorithm conducted in 2022 by the Department of Homeland Security, claiming it demonstrated “Corsight’s capability to perform equally across all demographics”.
    However, Big Brother Watch’s Hurfurt highlighted that the DHS study focused on bias in the context of true positives, and did not assess the algorithm for inequality in false positives.
    This is a key distinction for the testing of LFR systems, as a false negative – where the system fails to recognise someone – is unlikely to lead to incorrect stops or other adverse effects, whereas a false positive – where the system confuses two people – could have more severe consequences for an individual.
    The DHS itself also publicly came out against Corsight’s representation of the test results, after the firm claimed in subsequent marketing materials that “no matter how you look at it, Corsight is ranked #1. #1 in overall recognition, #1 in dark skin, #1 in Asian, #1 in female”.
    Speaking with IPVM in August 2023, DHS said: “We do not know what this claim, being ‘#1’ is referring to.” The department added that the rules of the testing required companies to get their claims cleared through DHS to ensure they do not misrepresent their performance.
    In its breakdown of the test results, IPVM noted that systems from multiple other manufacturers achieved similar results to Corsight. Corsight did not respond to a request for comment about the DHS testing.
    Computer Weekly contacted Essex Police about all the issues raised around Corsight testing, but received no direct response to these points from the force.

    While Essex Police claimed in its EIA that it “also sought advice from their own independent Data and Digital Ethics Committee in relation to their use of LFR generally”, meeting minutes obtained via FoI rules show that key impacts had not been considered.
    For example, when one panel member questioned how LFR deployments could affect community events or protests, and how the force could avoid the technology having a “chilling presence”, the officer present (whose name has been redacted from the document) said “that’s a pretty good point, actually”, adding that he had “made a note” to consider this going forward.
    The EIA itself also makes no mention of community events or protests, and does not specify how different groups could be affected by these different deployment scenarios.
    Elsewhere in the EIA, Essex Police claims that the system is likely to have minimal impact across age, gender and race, citing the 0.6 threshold setting, as well as NIST and DHS testing, as ways of achieving “equitability” across different demographics. Again, this threshold setting relates to a completely different system used by the Met and South Wales Police.
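    One way to see why a threshold validated on one system cannot simply be read across to another is that the alert decision depends on where an algorithm's similarity scores fall relative to the cut-off, and different algorithms produce differently distributed scores. The hypothetical sketch below simulates two invented score distributions and applies the same 0.6 threshold to both; neither bears any relation to NEC's or Corsight's actual scoring.

```python
import random

# Hypothetical sketch: the same numeric threshold produces very different
# false positive rates for two algorithms whose similarity scores are
# distributed differently. Both "algorithms" below are simulated stand-ins
# with invented score distributions, not NEC's or Corsight's models.
random.seed(0)

def false_positive_rate(non_match_scores, threshold):
    """Share of comparisons between different people that still raise an alert."""
    return sum(score >= threshold for score in non_match_scores) / len(non_match_scores)

# Simulated similarity scores for pairs of *different* people
algo_a = [random.gauss(0.30, 0.10) for _ in range(100_000)]
algo_b = [random.gauss(0.45, 0.10) for _ in range(100_000)]

for name, scores in (("Algorithm A", algo_a), ("Algorithm B", algo_b)):
    print(f"{name} false positive rate at threshold 0.6: "
          f"{false_positive_rate(scores, 0.6):.3%}")
```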
    For each protected characteristic, the EIA has a section on “mitigating” actions that can be taken to reduce adverse impacts.
    While the “ethnicity” section again highlights the National Physical Laboratory’s testing of a completely different algorithm, most other sections note that “any watchlist created will be done so as close to the deployment as possible, therefore hoping to ensure the most accurate and up-to-date images of persons being added are uploaded”.
    However, Yeung noted that the EIA makes no mention of the specific watchlist creation criteria beyond high-level “categories of images” that can be included, and the claimed equality impacts of that process.
    For example, it does not consider how people from certain ethnic minority or religious backgrounds could be disproportionately impacted as a result of their over-representation in police databases, or the issue of unlawful custody image retention, whereby the Home Office continues to hold millions of custody images illegally in the Police National Database.
    While the ethics panel meeting minutes offer greater insight into how Essex Police is approaching watchlist creation, the custody image retention issue was also not mentioned.
    Responding to Computer Weekly’s questions about the meeting minutes and the lack of scrutiny of key issues related to UK police LFR deployments, an Essex Police spokesperson said: “Our polices and processes around the use of live facial recognition have been carefully scrutinised through a thorough ethics panel.”

    Instead, the officer present explained how watchlists and deployments are decided based on the “intelligence case”, which then has to be justified as both proportionate and necessary.
    On the “Southend intelligence case”, the officer said deploying in the town centre would be permissible because “that’s where the most footfall is, the most opportunity to locate outstanding suspects”.
    They added: “The watchlist [then] has to be justified by the key elements, the policing purpose. Everything has to be proportionate and strictly necessary to be able to deploy… If the commander in Southend said, ‘I want to put everyone that’s wanted for shoplifting across Essex on the watchlist for Southend’, the answer would be no, because is it necessary? Probably not. Is it proportionate? I don’t think it is. Would it be proportionate to have individuals who are outstanding for shoplifting from the Southend area? Yes, because it’s local.”
    However, the officer also said that, on most occasions, the systems would be deployed to catch “our most serious offenders”, as this would be easier to justify from a public perception point of view. They added that, during the summer, it would be easier to justify deployments because of the seasonal population increase in Southend.
    “We know that there is a general increase in violence during those months. So, we don’t need to go down to the weeds to specifically look at grievous bodily harm [GBH] or murder or rape, because they’re not necessarily fuelled by a spike in terms of seasonality, for example,” they said.
    “However, we know that because the general population increases significantly, the level of violence increases significantly, which would justify that I could put those serious crimes on that watchlist.”
    Commenting on the responses given to the ethics panel, Yeung said they “failed entirely to provide me with confidence that their proposed deployments will have the required legal safeguards in place”.
    According to the Court of Appeal judgment against South Wales Police in the Bridges case, the force’s facial recognition policy contained “fundamental deficiencies” in relation to the “who” and “where” question of LFR.
    “In relation to both of those questions, too much discretion is currently left to individual police officers,” it said. “It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFR [automated facial recognition] can be deployed.”
    Yeung added: “The same applies to these responses of Essex Police force, failing to adequately answer the ‘who’ and ‘where’ questions concerning their proposed facial recognition deployments.
    “Worse still, the court stated that a police force’s local policies can only satisfy the requirements that the privacy interventions arising from use of LFR are ‘prescribed by law’ if they are published. The documents were obtained by Big Brother Watch through freedom of information requests, strongly suggesting that even these basic legal safeguards are not being met.”
    Yeung added that South Wales Police’s use of the technology was found to be unlawful in the Bridges case because there was excessive discretion left in the hands of individual police officers, allowing undue opportunities for arbitrary decision-making and abuses of power.

    “Every decision – where you will deploy, whose face is placed on the watchlist and why, and the duration of deployment – must be specified in advance, documented and justified in accordance with the tests of proportionality and necessity,” she said.
    “I don’t see any of that happening. There are simply vague claims that ‘we’ll make sure we apply the legal test’, but how? They just offer unsubstantiated promises that ‘we will abide by the law’ without specifying how they will do so by meeting specific legal requirements.”
    Yeung further added that these documents indicate the police force is not looking for specific people wanted for serious crimes, but is instead setting up dragnets for a wide variety of ‘wanted’ individuals, including those wanted for non-serious crimes such as shoplifting.
    “There are many platitudes about being ethical, but there’s nothing concrete indicating how they propose to meet the legal tests of necessity and proportionality,” she said.
    “In liberal democratic societies, every single decision about an individual by the police made without their consent must be justified in accordance with law. That means that the police must be able to justify and defend the reasons why every single person whose face is uploaded to the facial recognition watchlist meets the legal test, based on their specific operational purpose.”
    Yeung concluded that, assuming they can do this, police must also consider the equality impacts of their actions, and how different groups are likely to be affected by their practical deployments: “I don’t see any of that.”
    In response to the concerns raised around watchlist creation, proportionality and necessity, an Essex Police spokesperson said: “The watchlists for each deployment are created to identify specific people wanted for specific crimes and to enforce orders. To date, we have focused on the types of offences which cause the most harm to our communities, including our hardworking businesses.
    “This includes violent crime, drugs, sexual offences and thefts from shops. As a result of our deployments, we have arrested people wanted in connection with attempted murder investigations, high-risk domestic abuse cases, GBH, sexual assault, drug supply and aggravated burglary offences. We have also been able to progress investigations and move closer to securing justice for victims.”

    Read more about police data and technology

    Metropolitan Police to deploy permanent facial recognition tech in Croydon: The Met is set to deploy permanent live facial recognition cameras on street furniture in Croydon from summer 2025, but local councillors say the decision – which has taken place with no community input – will further contribute to the over-policing of Black communities.
    UK MoJ crime prediction algorithms raise serious concerns: The Ministry of Justice is using one algorithm to predict people’s risk of reoffending and another to predict who will commit murder, but critics say the profiling in these systems raises ‘serious concerns’ over racism, classism and data inaccuracies.
    UK law enforcement data adequacy at risk: The UK government says reforms to police data protection rules will help to simplify law enforcement data processing, but critics argue the changes will lower protection to the point where the UK risks losing its European data adequacy.
    Essex Police discloses ‘incoherent’ facial recognition assessment
    Essex Police has not properly considered the potentially discriminatory impacts of its live facial recognitionuse, according to documents obtained by Big Brother Watch and shared with Computer Weekly. While the force claims in an equality impact assessmentthat “Essex Police has carefully considered issues regarding bias and algorithmic injustice”, privacy campaign group Big Brother Watch said the document – obtained under Freedom of Informationrules – shows it has likely failed to fulfil its public sector equality dutyto consider how its policies and practices could be discriminatory. The campaigners highlighted how the force is relying on false comparisons to other algorithms and “parroting misleading claims” from the supplier about the LFR system’s lack of bias. For example, Essex Police said that when deploying LFR, it will set the system threshold “at 0.6 or above, as this is the level whereby equitability of the rate of false positive identification across all demographics is achieved”. However, this figure is based on the National Physical Laboratory’stesting of NEC’s Neoface V4 LFR algorithm deployed by the Metropolitan Police and South Wales Police, which Essex Police does not use. Instead, Essex Police has opted to use an algorithm developed by Israeli biometrics firm Corsight, whose chief privacy officer, Tony Porter, was formerly the UK’s surveillance camera commissioner until January 2021. Highlighting testing of the Corsight_003 algorithm conducted in June 2022 by the US National Institute of Standards and Technology, the EIA also claims it has “a bias differential FMRof 0.0006 overall, the lowest of any tested within NIST at the time of writing, according to the supplier”. However, looking at the NIST website, where all of the testing data is publicly shared, there is no information to support the figure cited by Corsight, or its claim to essentially have the least biased algorithm available. A separate FoI response to Big Brother Watch confirmed that, as of 16 January 2025, Essex Police had not conducted any “formal or detailed” testing of the system itself, or otherwise commissioned a third party to do so. Essex Police's lax approach to assessing the dangers of a controversial and dangerous new form of surveillance has put the rights of thousands at risk Jake Hurfurt, Big Brother Watch “Looking at Essex Police’s EIA, we are concerned about the force’s compliance with its duties under equality law, as the reliance on shaky evidence seriously undermines the force’s claims about how the public will be protected against algorithmic bias,” said Jake Hurfurt, head of research and investigations at Big Brother Watch. “Essex Police’s lax approach to assessing the dangers of a controversial and dangerous new form of surveillance has put the rights of thousands at risk. This slapdash scrutiny of their intrusive facial recognition system sets a worrying precedent. “Facial recognition is notorious for misidentifying women and people of colour, and Essex Police’s willingness to deploy the technology without testing it themselves raises serious questions about the force’s compliance with equalities law. Essex Police should immediately stop their use of facial recognition surveillance.” The need for UK police forces deploying facial recognition to consider how their use of the technology could be discriminatory was highlighted by a legal challenge brought against South Wales Police by Cardiff resident Ed Bridges. 
In August 2020, the UK Court of Appeal ruled that the use of LFR by the force was unlawful because the privacy violations it entailed were “not in accordance” with legally permissible restrictions on Bridges’ Article 8 privacy rights; it did not conduct an appropriate data protection impact assessment; and it did not comply with its PSED to consider how its policies and practices could be discriminatory. The judgment specifically found that the PSED is a “duty of process and not outcome”, and requires public bodies to take reasonable steps “to make enquiries about what may not yet be known to a public authority about the potential impact of a proposed decision or policy on people with the relevant characteristics, in particular for present purposes race and sex”. Big Brother Watch said equality assessments must rely on “sufficient quality evidence” to back up the claims being made and ultimately satisfy the PSED, but that the documents obtained do not demonstrate the force has had “due regard” for equalities. Academic Karen Yeung, an interdisciplinary professor at Birmingham Law School and School of Computer Science, told Computer Weekly that, in her view, the EIA is “clearly inadequate”. She also criticised the document for being “incoherent”, failing to look at the systemic equalities impacts of the technology, and relying exclusively on testing of entirely different software algorithms used by other police forces trained on different populations: “This does not, in my view, fulfil the requirements of the public sector equality duty. It is a document produced from a cut-and-paste exercise from the largely irrelevant material produced by others.” Computer Weekly contacted Essex Police about every aspect of the story. “We take our responsibility to meet our public sector equality duty very seriously, and there is a contractual requirement on our LFR partner to ensure sufficient testing has taken place to ensure the software meets the specification and performance outlined in the tender process,” said a spokesperson. “There have been more than 50 deployments of our LFR vans, scanning 1.7 million faces, which have led to more than 200 positive alerts, and nearly 70 arrests. “To date, there has been one false positive, which, when reviewed, was established to be as a result of a low-quality photo uploaded onto the watchlist and not the result of bias issues with the technology. This did not lead to an arrest or any other unlawful action because of the procedures in place to verify all alerts. This issue has been resolved to ensure it does not occur again.” The spokesperson added that the force is also committed to carrying out further assessment of the software and algorithms, with the evaluation of deployments and results being subject to an independent academic review. “As part of this, we have carried out, and continue to do so, testing and evaluation activity in conjunction with the University of Cambridge. The NPL have recently agreed to carry out further independent testing, which will take place over the summer. The company have also achieved an ISO 42001 certification,” said the spokesperson. “We are also liaising with other technical specialists regarding further testing and evaluation activity.” However, the force did not comment on why it was relying on the testing of a completely different algorithm in its EIA, or why it had not conducted or otherwise commissioned its own testing before operationally deploying the technology in the field. 
Computer Weekly followed up Essex Police for clarification on when the testing with Cambridge began, as this is not mentioned in the EIA, but received no response by time of publication. Although Essex Police and Corsight claim the facial recognition algorithm in use has “a bias differential FMR of 0.0006 overall, the lowest of any tested within NIST at the time of writing”, there is no publicly available data on NIST’s website to support this claim. Drilling down into the demographic split of false positive rates shows, for example, that there is a factor of 100 more false positives in West African women than for Eastern European men. While this is an improvement on the previous two algorithms submitted for testing by Corsight, other publicly available data held by NIST undermines Essex Police’s claim in the EIA that the “algorithm is identified by NIST as having the lowest bias variance between demographics”. Looking at another metric held by NIST – FMR Max/Min, which refers to the ratio between demographic groups that give the most and least false positives – it essentially represents how inequitable the error rates are across different age groups, sexes and ethnicities. In this instance, smaller values represent better performance, with the ratio being an estimate of how many times more false positives can be expected in one group over another. According to the NIST webpage for “demographic effects” in facial recognition algorithms, the Corsight algorithm has an FMR Max/Min of 113, meaning there are at least 21 algorithms that display less bias. For comparison, the least biased algorithm according to NIST results belongs to a firm called Idemia, which has an FMR Max/Min of 5. However, like Corsight, the highest false match rate for Idemia’s algorithm was for older West African women. Computer Weekly understands this is a common problem with many of the facial recognition algorithms NIST tests because this group is not typically well-represented in the underlying training data of most firms. Computer Weekly also confirmed with NIST that the FMR metric cited by Corsight relates to one-to-one verification, rather than the one-to-many situation police forces would be using it in. This is a key distinction, because if 1,000 people are enrolled in a facial recognition system that was built on one-to-one verification, then the false positive rate will be 1,000 times larger than the metrics held by NIST for FMR testing. “If a developer implements 1:Nsearch as N 1:1 comparisons, then the likelihood of a false positive from a search is expected to be proportional to the false match for the 1:1 comparison algorithm,” said NIST scientist Patrick Grother. “Some developers do not implement 1:N search that way.” Commenting on the contrast between this testing methodology and the practical scenarios the tech will be deployed in, Birmingham Law School’s Yeung said one-to-one is for use in stable environments to provide admission to spaces with limited access, such as airport passport gates, where only one person’s biometric data is scrutinised at a time. “One-to-many is entirely different – it’s an entirely different process, an entirely different technical challenge, and therefore cannot typically achieve equivalent levels of accuracy,” she said. Computer Weekly contacted Corsight about every aspect of the story related to its algorithmic testing, including where the “0.0006” figure is drawn from and its various claims to have the “least biased” algorithm. 
“The facts presented in your article are partial, manipulated and misleading,” said a company spokesperson. “Corsight AI’s algorithms have been tested by numerous entities, including NIST, and have been proven to be the least biased in the industry in terms of gender and ethnicity. This is a major factor for our commercial and government clients.” However, Corsight was either unable or unwilling to specify which facts are “partial, manipulated or misleading” in response to Computer Weekly’s request for clarification. Computer Weekly also contacted Corsight about whether it has done any further testing by running N one-to-one comparisons, and whether it has changed the system’s threshold settings for detecting a match to suppress the false positive rate, but received no response on these points. While most facial recognition developers submit their algorithms to NIST for testing on an annual or bi-annual basis, Corsight last submitted an algorithm in mid-2022. Computer Weekly contacted Corsight about why this was the case, given that most algorithms in NIST testing show continuous improvement with each submission, but again received no response on this point. The Essex Police EIA also highlights testing of the Corsight algorithm conducted in 2022 by the Department of Homeland Security, claiming it demonstrated “Corsight’s capability to perform equally across all demographics”. However, Big Brother Watch’s Hurfurt highlighted that the DHS study focused on bias in the context of true positives, and did not assess the algorithm for inequality in false positives. This is a key distinction for the testing of LFR systems, as false negatives where the system fails to recognise someone will likely not lead to incorrect stops or other adverse effects, whereas a false positive where the system confuses two people could have more severe consequences for an individual. The DHS itself also publicly came out against Corsight’s representation of the test results, after the firm claimed in subsequent marketing materials that “no matter how you look at it, Corsight is ranked #1. #1 in overall recognition, #1 in dark skin, #1 in Asian, #1 in female”. Speaking with IVPM in August 2023, DHS said: “We do not know what this claim, being ‘#1’ is referring to.” The department added that the rules of the testing required companies to get their claims cleared through DHS to ensure they do not misrepresent their performance. In its breakdown of the test results, IVPM noted that systems of multiple other manufacturers achieved similar results to Corsight. The company did not respond to a request for comment about the DHS testing. Computer Weekly contacted Essex Police about all the issues raised around Corsight testing, but received no direct response to these points from the force. While Essex Police claimed in its EIA that it “also sought advice from their own independent Data and Digital Ethics Committee in relation to their use of LFR generally”, meeting minutes obtained via FoI rules show that key impacts had not been considered. For example, when one panel member questioned how LFR deployments could affect community events or protests, and how the force could avoid the technology having a “chilling presence”, the officer presentsaid “that’s a pretty good point, actually”, adding that he had “made a note” to consider this going forward. The EIA itself also makes no mention of community events or protests, and does not specify how different groups could be affected by these different deployment scenarios. 
Elsewhere in the EIA, Essex Police claims that the system is likely to have minimal impact across age, gender and race, citing the 0.6 threshold setting, as well as NIST and DHS testing, as ways of achieving “equitability” across different demographics. Again, this threshold setting relates to a completely different system used by the Met and South Wales Police. For each protected characteristic, the EIA has a section on “mitigating” actions that can be taken to reduce adverse impacts. While the “ethnicity” section again highlights the National Physical Laboratory’s testing of a completely different algorithm, most other sections note that “any watchlist created will be done so as close to the deployment as possible, therefore hoping to ensure the most accurate and up-to-date images of persons being added are uploaded”. However, Yeung noted that the EIA makes no mention of the specific watchlist creation criteria beyond high-level “categories of images” that can be included, and the claimed equality impacts of that process. For example, it does not consider how people from certain ethnic minority or religious backgrounds could be disproportionally impacted as a result of their over-representation in police databases, or the issue of unlawful custody image retention whereby the Home Office is continuing to hold millions of custody images illegally in the Police National Database. While the ethics panel meeting minutes offer greater insight into how Essex Police is approaching watchlist creation, the custody image retention issue was also not mentioned. Responding to Computer Weekly’s questions about the meeting minutes and the lack of scrutiny of key issues related to UK police LFR deployments, an Essex Police spokesperson said: “Our polices and processes around the use of live facial recognition have been carefully scrutinised through a thorough ethics panel.” Instead, the officer present explained how watchlists and deployments are decided based on the “intelligence case”, which then has to be justified as both proportionate and necessary. On the “Southend intelligence case”, the officer said deploying in the town centre would be permissible because “that’s where the most footfall is, the most opportunity to locate outstanding suspects”. They added: “The watchlisthas to be justified by the key elements, the policing purpose. Everything has to be proportionate and strictly necessary to be able to deploy… If the commander in Southend said, ‘I want to put everyone that’s wanted for shoplifting across Essex on the watchlist for Southend’, the answer would be no, because is it necessary? Probably not. Is it proportionate? I don’t think it is. Would it be proportionate to have individuals who are outstanding for shoplifting from the Southend area? Yes, because it’s local.” However, the officer also said that, on most occasions, the systems would be deployed to catch “our most serious offenders”, as this would be easier to justify from a public perception point of view. They added that, during the summer, it would be easier to justify deployments because of the seasonal population increase in Southend. “We know that there is a general increase in violence during those months. So, we don’t need to go down to the weeds to specifically look at grievous bodily harmor murder or rape, because they’re not necessarily fuelled by a spike in terms of seasonality, for example,” they said. 
“However, we know that because the general population increases significantly, the level of violence increases significantly, which would justify that I could put those serious crimes on that watchlist.” Commenting on the responses given to the ethics panel, Yeung said they “failed entirely to provide me with confidence that their proposed deployments will have the required legal safeguards in place”. According to the Court of Appeal judgment against South Wales Police in the Bridges case, the force’s facial recognition policy contained “fundamental deficiencies” in relation to the “who” and “where” question of LFR. “In relation to both of those questions, too much discretion is currently left to individual police officers,” it said. “It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFRcan be deployed.” Yeung added: “The same applies to these responses of Essex Police force, failing to adequately answer the ‘who’ and ‘where’ questions concerning their proposed facial recognition deployments. “Worse still, the court stated that a police force’s local policies can only satisfy the requirements that the privacy interventions arising from use of LFR are ‘prescribed by law’ if they are published. The documents were obtained by Big Brother Watch through freedom of information requests, strongly suggesting that these even these basic legal safeguards are not being met.” Yeung added that South Wales Police’s use of the technology was found to be unlawful in the Bridges case because there was excessive discretion left in the hands of individual police officers, allowing undue opportunities for arbitrary decision-making and abuses of power. Every decision ... must be specified in advance, documented and justified in accordance with the tests of proportionality and necessity. I don’t see any of that happening Karen Yeung, Birmingham Law School “Every decision – where you will deploy, whose face is placed on the watchlist and why, and the duration of deployment – must be specified in advance, documented and justified in accordance with the tests of proportionality and necessity,” she said. “I don’t see any of that happening. There are simply vague claims that ‘we’ll make sure we apply the legal test’, but how? They just offer unsubstantiated promises that ‘we will abide by the law’ without specifying how they will do so by meeting specific legal requirements.” Yeung further added these documents indicate that the police force is not looking for specific people wanted for serious crimes, but setting up dragnets for a wide variety of ‘wanted’ individuals, including those wanted for non-serious crimes such as shoplifting. “There are many platitudes about being ethical, but there’s nothing concrete indicating how they propose to meet the legal tests of necessity and proportionality,” she said. “In liberal democratic societies, every single decision about an individual by the police made without their consent must be justified in accordance with law. 
That means that the police must be able to justify and defend the reasons why every single person whose face is uploaded to the facial recognition watchlist meets the legal test, based on their specific operational purpose.” Yeung concluded that, assuming they can do this, police must also consider the equality impacts of their actions, and how different groups are likely to be affected by their practical deployments: “I don’t see any of that.” In response to the concerns raised around watchlist creation, proportionality and necessity, an Essex Police spokesperson said: “The watchlists for each deployment are created to identify specific people wanted for specific crimes and to enforce orders. To date, we have focused on the types of offences which cause the most harm to our communities, including our hardworking businesses. “This includes violent crime, drugs, sexual offences and thefts from shops. As a result of our deployments, we have arrested people wanted in connection with attempted murder investigations, high-risk domestic abuse cases, GBH, sexual assault, drug supply and aggravated burglary offences. We have also been able to progress investigations and move closer to securing justice for victims.” about police data and technology Metropolitan Police to deploy permanent facial recognition tech in Croydon: The Met is set to deploy permanent live facial recognition cameras on street furniture in Croydon from summer 2025, but local councillors say the decision – which has taken place with no community input – will further contribute the over-policing of Black communities. UK MoJ crime prediction algorithms raise serious concerns: The Ministry of Justice is using one algorithm to predict people’s risk of reoffending and another to predict who will commit murder, but critics say the profiling in these systems raises ‘serious concerns’ over racism, classism and data inaccuracies. UK law enforcement data adequacy at risk: The UK government says reforms to police data protection rules will help to simplify law enforcement data processing, but critics argue the changes will lower protection to the point where the UK risks losing its European data adequacy. #essex #police #discloses #incoherent #facial
    WWW.COMPUTERWEEKLY.COM
    Essex Police discloses ‘incoherent’ facial recognition assessment
    Essex Police has not properly considered the potentially discriminatory impacts of its live facial recognition (LFR) use, according to documents obtained by Big Brother Watch and shared with Computer Weekly. While the force claims in an equality impact assessment (EIA) that “Essex Police has carefully considered issues regarding bias and algorithmic injustice”, privacy campaign group Big Brother Watch said the document – obtained under Freedom of Information (FoI) rules – shows it has likely failed to fulfil its public sector equality duty (PSED) to consider how its policies and practices could be discriminatory. The campaigners highlighted how the force is relying on false comparisons to other algorithms and “parroting misleading claims” from the supplier about the LFR system’s lack of bias. For example, Essex Police said that when deploying LFR, it will set the system threshold “at 0.6 or above, as this is the level whereby equitability of the rate of false positive identification across all demographics is achieved”. However, this figure is based on the National Physical Laboratory’s (NPL) testing of NEC’s Neoface V4 LFR algorithm deployed by the Metropolitan Police and South Wales Police, which Essex Police does not use. Instead, Essex Police has opted to use an algorithm developed by Israeli biometrics firm Corsight, whose chief privacy officer, Tony Porter, was formerly the UK’s surveillance camera commissioner until January 2021. Highlighting testing of the Corsight_003 algorithm conducted in June 2022 by the US National Institute of Standards and Technology (NIST), the EIA also claims it has “a bias differential FMR [False Match Rate] of 0.0006 overall, the lowest of any tested within NIST at the time of writing, according to the supplier”. However, looking at the NIST website, where all of the testing data is publicly shared, there is no information to support the figure cited by Corsight, or its claim to essentially have the least biased algorithm available. A separate FoI response to Big Brother Watch confirmed that, as of 16 January 2025, Essex Police had not conducted any “formal or detailed” testing of the system itself, or otherwise commissioned a third party to do so. Essex Police's lax approach to assessing the dangers of a controversial and dangerous new form of surveillance has put the rights of thousands at risk Jake Hurfurt, Big Brother Watch “Looking at Essex Police’s EIA, we are concerned about the force’s compliance with its duties under equality law, as the reliance on shaky evidence seriously undermines the force’s claims about how the public will be protected against algorithmic bias,” said Jake Hurfurt, head of research and investigations at Big Brother Watch. “Essex Police’s lax approach to assessing the dangers of a controversial and dangerous new form of surveillance has put the rights of thousands at risk. This slapdash scrutiny of their intrusive facial recognition system sets a worrying precedent. “Facial recognition is notorious for misidentifying women and people of colour, and Essex Police’s willingness to deploy the technology without testing it themselves raises serious questions about the force’s compliance with equalities law. Essex Police should immediately stop their use of facial recognition surveillance.” The need for UK police forces deploying facial recognition to consider how their use of the technology could be discriminatory was highlighted by a legal challenge brought against South Wales Police by Cardiff resident Ed Bridges. 
In August 2020, the UK Court of Appeal ruled that the use of LFR by the force was unlawful because the privacy violations it entailed were “not in accordance” with legally permissible restrictions on Bridges’ Article 8 privacy rights; it did not conduct an appropriate data protection impact assessment (DPIA); and it did not comply with its PSED to consider how its policies and practices could be discriminatory. The judgment specifically found that the PSED is a “duty of process and not outcome”, and requires public bodies to take reasonable steps “to make enquiries about what may not yet be known to a public authority about the potential impact of a proposed decision or policy on people with the relevant characteristics, in particular for present purposes race and sex”. Big Brother Watch said equality assessments must rely on “sufficient quality evidence” to back up the claims being made and ultimately satisfy the PSED, but that the documents obtained do not demonstrate the force has had “due regard” for equalities. Academic Karen Yeung, an interdisciplinary professor at Birmingham Law School and School of Computer Science, told Computer Weekly that, in her view, the EIA is “clearly inadequate”. She also criticised the document for being “incoherent”, failing to look at the systemic equalities impacts of the technology, and relying exclusively on testing of entirely different software algorithms used by other police forces trained on different populations: “This does not, in my view, fulfil the requirements of the public sector equality duty. It is a document produced from a cut-and-paste exercise from the largely irrelevant material produced by others.” Computer Weekly contacted Essex Police about every aspect of the story. “We take our responsibility to meet our public sector equality duty very seriously, and there is a contractual requirement on our LFR partner to ensure sufficient testing has taken place to ensure the software meets the specification and performance outlined in the tender process,” said a spokesperson. “There have been more than 50 deployments of our LFR vans, scanning 1.7 million faces, which have led to more than 200 positive alerts, and nearly 70 arrests. “To date, there has been one false positive, which, when reviewed, was established to be as a result of a low-quality photo uploaded onto the watchlist and not the result of bias issues with the technology. This did not lead to an arrest or any other unlawful action because of the procedures in place to verify all alerts. This issue has been resolved to ensure it does not occur again.” The spokesperson added that the force is also committed to carrying out further assessment of the software and algorithms, with the evaluation of deployments and results being subject to an independent academic review. “As part of this, we have carried out, and continue to do so, testing and evaluation activity in conjunction with the University of Cambridge. The NPL have recently agreed to carry out further independent testing, which will take place over the summer. The company have also achieved an ISO 42001 certification,” said the spokesperson. “We are also liaising with other technical specialists regarding further testing and evaluation activity.” However, the force did not comment on why it was relying on the testing of a completely different algorithm in its EIA, or why it had not conducted or otherwise commissioned its own testing before operationally deploying the technology in the field. 
Computer Weekly followed up Essex Police for clarification on when the testing with Cambridge began, as this is not mentioned in the EIA, but received no response by time of publication. Although Essex Police and Corsight claim the facial recognition algorithm in use has “a bias differential FMR of 0.0006 overall, the lowest of any tested within NIST at the time of writing”, there is no publicly available data on NIST’s website to support this claim. Drilling down into the demographic split of false positive rates shows, for example, that there is a factor of 100 more false positives in West African women than for Eastern European men. While this is an improvement on the previous two algorithms submitted for testing by Corsight, other publicly available data held by NIST undermines Essex Police’s claim in the EIA that the “algorithm is identified by NIST as having the lowest bias variance between demographics”. Looking at another metric held by NIST – FMR Max/Min, which refers to the ratio between demographic groups that give the most and least false positives – it essentially represents how inequitable the error rates are across different age groups, sexes and ethnicities. In this instance, smaller values represent better performance, with the ratio being an estimate of how many times more false positives can be expected in one group over another. According to the NIST webpage for “demographic effects” in facial recognition algorithms, the Corsight algorithm has an FMR Max/Min of 113(22), meaning there are at least 21 algorithms that display less bias. For comparison, the least biased algorithm according to NIST results belongs to a firm called Idemia, which has an FMR Max/Min of 5(1). However, like Corsight, the highest false match rate for Idemia’s algorithm was for older West African women. Computer Weekly understands this is a common problem with many of the facial recognition algorithms NIST tests because this group is not typically well-represented in the underlying training data of most firms. Computer Weekly also confirmed with NIST that the FMR metric cited by Corsight relates to one-to-one verification, rather than the one-to-many situation police forces would be using it in. This is a key distinction, because if 1,000 people are enrolled in a facial recognition system that was built on one-to-one verification, then the false positive rate will be 1,000 times larger than the metrics held by NIST for FMR testing. “If a developer implements 1:N (one-to-many) search as N 1:1 comparisons, then the likelihood of a false positive from a search is expected to be proportional to the false match for the 1:1 comparison algorithm,” said NIST scientist Patrick Grother. “Some developers do not implement 1:N search that way.” Commenting on the contrast between this testing methodology and the practical scenarios the tech will be deployed in, Birmingham Law School’s Yeung said one-to-one is for use in stable environments to provide admission to spaces with limited access, such as airport passport gates, where only one person’s biometric data is scrutinised at a time. “One-to-many is entirely different – it’s an entirely different process, an entirely different technical challenge, and therefore cannot typically achieve equivalent levels of accuracy,” she said. Computer Weekly contacted Corsight about every aspect of the story related to its algorithmic testing, including where the “0.0006” figure is drawn from and its various claims to have the “least biased” algorithm. 
“The facts presented in your article are partial, manipulated and misleading,” said a company spokesperson. “Corsight AI’s algorithms have been tested by numerous entities, including NIST, and have been proven to be the least biased in the industry in terms of gender and ethnicity. This is a major factor for our commercial and government clients.” However, Corsight was either unable or unwilling to specify which facts are “partial, manipulated or misleading” in response to Computer Weekly’s request for clarification. Computer Weekly also contacted Corsight about whether it has done any further testing by running N one-to-one comparisons, and whether it has changed the system’s threshold settings for detecting a match to suppress the false positive rate, but received no response on these points. While most facial recognition developers submit their algorithms to NIST for testing on an annual or bi-annual basis, Corsight last submitted an algorithm in mid-2022. Computer Weekly contacted Corsight about why this was the case, given that most algorithms in NIST testing show continuous improvement with each submission, but again received no response on this point. The Essex Police EIA also highlights testing of the Corsight algorithm conducted in 2022 by the Department of Homeland Security (DHS), claiming it demonstrated “Corsight’s capability to perform equally across all demographics”. However, Big Brother Watch’s Hurfurt highlighted that the DHS study focused on bias in the context of true positives, and did not assess the algorithm for inequality in false positives. This is a key distinction for the testing of LFR systems, as false negatives where the system fails to recognise someone will likely not lead to incorrect stops or other adverse effects, whereas a false positive where the system confuses two people could have more severe consequences for an individual. The DHS itself also publicly came out against Corsight’s representation of the test results, after the firm claimed in subsequent marketing materials that “no matter how you look at it, Corsight is ranked #1. #1 in overall recognition, #1 in dark skin, #1 in Asian, #1 in female”. Speaking with IVPM in August 2023, DHS said: “We do not know what this claim, being ‘#1’ is referring to.” The department added that the rules of the testing required companies to get their claims cleared through DHS to ensure they do not misrepresent their performance. In its breakdown of the test results, IVPM noted that systems of multiple other manufacturers achieved similar results to Corsight. The company did not respond to a request for comment about the DHS testing. Computer Weekly contacted Essex Police about all the issues raised around Corsight testing, but received no direct response to these points from the force. While Essex Police claimed in its EIA that it “also sought advice from their own independent Data and Digital Ethics Committee in relation to their use of LFR generally”, meeting minutes obtained via FoI rules show that key impacts had not been considered. For example, when one panel member questioned how LFR deployments could affect community events or protests, and how the force could avoid the technology having a “chilling presence”, the officer present (whose name has been redacted from the document) said “that’s a pretty good point, actually”, adding that he had “made a note” to consider this going forward. 
The EIA itself also makes no mention of community events or protests, and does not specify how different groups could be affected by these different deployment scenarios. Elsewhere in the EIA, Essex Police claims that the system is likely to have minimal impact across age, gender and race, citing the 0.6 threshold setting, as well as NIST and DHS testing, as ways of achieving “equitability” across different demographics. Again, this threshold setting relates to a completely different system used by the Met and South Wales Police. For each protected characteristic, the EIA has a section on “mitigating” actions that can be taken to reduce adverse impacts. While the “ethnicity” section again highlights the National Physical Laboratory’s testing of a completely different algorithm, most other sections note that “any watchlist created will be done so as close to the deployment as possible, therefore hoping to ensure the most accurate and up-to-date images of persons being added are uploaded”. However, Yeung noted that the EIA makes no mention of the specific watchlist creation criteria beyond high-level “categories of images” that can be included, and the claimed equality impacts of that process. For example, it does not consider how people from certain ethnic minority or religious backgrounds could be disproportionally impacted as a result of their over-representation in police databases, or the issue of unlawful custody image retention whereby the Home Office is continuing to hold millions of custody images illegally in the Police National Database (PND). While the ethics panel meeting minutes offer greater insight into how Essex Police is approaching watchlist creation, the custody image retention issue was also not mentioned. Responding to Computer Weekly’s questions about the meeting minutes and the lack of scrutiny of key issues related to UK police LFR deployments, an Essex Police spokesperson said: “Our polices and processes around the use of live facial recognition have been carefully scrutinised through a thorough ethics panel.” Instead, the officer present explained how watchlists and deployments are decided based on the “intelligence case”, which then has to be justified as both proportionate and necessary. On the “Southend intelligence case”, the officer said deploying in the town centre would be permissible because “that’s where the most footfall is, the most opportunity to locate outstanding suspects”. They added: “The watchlist [then] has to be justified by the key elements, the policing purpose. Everything has to be proportionate and strictly necessary to be able to deploy… If the commander in Southend said, ‘I want to put everyone that’s wanted for shoplifting across Essex on the watchlist for Southend’, the answer would be no, because is it necessary? Probably not. Is it proportionate? I don’t think it is. Would it be proportionate to have individuals who are outstanding for shoplifting from the Southend area? Yes, because it’s local.” However, the officer also said that, on most occasions, the systems would be deployed to catch “our most serious offenders”, as this would be easier to justify from a public perception point of view. They added that, during the summer, it would be easier to justify deployments because of the seasonal population increase in Southend. “We know that there is a general increase in violence during those months. 
So, we don’t need to go down to the weeds to specifically look at grievous bodily harm [GBH] or murder or rape, because they’re not necessarily fuelled by a spike in terms of seasonality, for example,” they said. “However, we know that because the general population increases significantly, the level of violence increases significantly, which would justify that I could put those serious crimes on that watchlist.” Commenting on the responses given to the ethics panel, Yeung said they “failed entirely to provide me with confidence that their proposed deployments will have the required legal safeguards in place”. According to the Court of Appeal judgment against South Wales Police in the Bridges case, the force’s facial recognition policy contained “fundamental deficiencies” in relation to the “who” and “where” question of LFR. “In relation to both of those questions, too much discretion is currently left to individual police officers,” it said. “It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFR [automated facial recognition] can be deployed.” Yeung added: “The same applies to these responses of Essex Police force, failing to adequately answer the ‘who’ and ‘where’ questions concerning their proposed facial recognition deployments. “Worse still, the court stated that a police force’s local policies can only satisfy the requirements that the privacy interventions arising from use of LFR are ‘prescribed by law’ if they are published. The documents were obtained by Big Brother Watch through freedom of information requests, strongly suggesting that these even these basic legal safeguards are not being met.” Yeung added that South Wales Police’s use of the technology was found to be unlawful in the Bridges case because there was excessive discretion left in the hands of individual police officers, allowing undue opportunities for arbitrary decision-making and abuses of power. Every decision ... must be specified in advance, documented and justified in accordance with the tests of proportionality and necessity. I don’t see any of that happening Karen Yeung, Birmingham Law School “Every decision – where you will deploy, whose face is placed on the watchlist and why, and the duration of deployment – must be specified in advance, documented and justified in accordance with the tests of proportionality and necessity,” she said. “I don’t see any of that happening. There are simply vague claims that ‘we’ll make sure we apply the legal test’, but how? They just offer unsubstantiated promises that ‘we will abide by the law’ without specifying how they will do so by meeting specific legal requirements.” Yeung further added these documents indicate that the police force is not looking for specific people wanted for serious crimes, but setting up dragnets for a wide variety of ‘wanted’ individuals, including those wanted for non-serious crimes such as shoplifting. “There are many platitudes about being ethical, but there’s nothing concrete indicating how they propose to meet the legal tests of necessity and proportionality,” she said. “In liberal democratic societies, every single decision about an individual by the police made without their consent must be justified in accordance with law. 
“There are many platitudes about being ethical, but there’s nothing concrete indicating how they propose to meet the legal tests of necessity and proportionality,” she said. “In liberal democratic societies, every single decision about an individual by the police made without their consent must be justified in accordance with law. That means that the police must be able to justify and defend the reasons why every single person whose face is uploaded to the facial recognition watchlist meets the legal test, based on their specific operational purpose.”
Yeung concluded that, assuming they can do this, police must also consider the equality impacts of their actions, and how different groups are likely to be affected by their practical deployments: “I don’t see any of that.”
In response to the concerns raised around watchlist creation, proportionality and necessity, an Essex Police spokesperson said: “The watchlists for each deployment are created to identify specific people wanted for specific crimes and to enforce orders. To date, we have focused on the types of offences which cause the most harm to our communities, including our hardworking businesses.
“This includes violent crime, drugs, sexual offences and thefts from shops. As a result of our deployments, we have arrested people wanted in connection with attempted murder investigations, high-risk domestic abuse cases, GBH, sexual assault, drug supply and aggravated burglary offences. We have also been able to progress investigations and move closer to securing justice for victims.”
Read more about police data and technology
Metropolitan Police to deploy permanent facial recognition tech in Croydon: The Met is set to deploy permanent live facial recognition cameras on street furniture in Croydon from summer 2025, but local councillors say the decision – which has taken place with no community input – will further contribute to the over-policing of Black communities.
UK MoJ crime prediction algorithms raise serious concerns: The Ministry of Justice is using one algorithm to predict people’s risk of reoffending and another to predict who will commit murder, but critics say the profiling in these systems raises ‘serious concerns’ over racism, classism and data inaccuracies.
UK law enforcement data adequacy at risk: The UK government says reforms to police data protection rules will help to simplify law enforcement data processing, but critics argue the changes will lower protection to the point where the UK risks losing its European data adequacy.
  • Klarna's losses double as more buy now, pay later customers struggle with loans

    In brief: The danger faced by buy now, pay later companies is when customers don't adhere to the "pay later" part. It's a problem being faced by industry giant Klarna, which saw its net losses more than double in the first quarter as more customers struggled to pay back their loan installments.
    Klarna's net losses for the first quarter reached $99 million, more than double the $47 million it lost during the same period a year earlier.
    The problem is that an increasing number of customers who have taken out buy now, pay later loans are struggling to pay them back.
    Klarna offers its BNPL services to a range of merchants, letting customers purchase a wide variety of items in installments. The company makes its money by charging fees to merchants, and by charging customers who fail to pay on time.
    In its first quarter earnings report, Klarna revealed that consumer credit losses rose to $136 million, an increase of around 17% compared to a year earlier.
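    For context, a quick back-of-the-envelope check of the figures quoted above (the implied prior-year credit-loss number is my own rounding from the reported 17% rise, not a figure Klarna disclosed):

        # Figures quoted above, in US dollars.
        q1_net_loss = 99_000_000
        prior_q1_net_loss = 47_000_000
        q1_credit_losses = 136_000_000

        print(round(q1_net_loss / prior_q1_net_loss, 2))  # ~2.11, i.e. net losses more than doubled
        print(round(q1_credit_losses / 1.17 / 1e6))       # ~116 (million): implied prior-year credit losses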
    It seems there's a growing trend of BNPL customers being unable to meet their contractual obligations. Credit platform LendingTree carried out a survey last month that found 41% of users of BNPL loans said they paid late on one of them in the past year, up from 34% a year earlier. High-income borrowers were among the most likely to pay late, along with men, young people, and parents of young kids.
    The survey also showed that a quarter of BNPL users said they used the loans to buy groceries amid rising supermarket costs, marking a 14% increase compared to a year ago. It also revealed that nearly 1 in 4 BNPL users said they've had three or more active BNPL loans at one time.
    The Federal Reserve Bank of New York last week reported that US consumer debt rose by $167 billion in the first quarter to reach a record $18.2 trillion.
    Elsewhere in Klarna's earnings report, which was presented using an AI-generated avatar of its chief executive, the company said it has used artificial intelligence to help cut costs.
    The company's headcount is down 39% over the last two years, and customer service costs were down 12% year-on-year in the first quarter. Klarna is estimated to have replaced 700 employees with AI.
    The good news for humans is that Klarna has started hiring them again after its CEO recently admitted AI customer service chatbots offered a "lower quality" output.
  • Paymentology: (Implementation) Project Manager

    We’re looking for an Implementations Project Manager to drive end-to-end delivery of client implementations, ensuring seamless coordination between internal teams and external stakeholders. This role is critical in ensuring successful project delivery through strong project management, proactive communication, and detailed execution. If you thrive in high-impact environments and are passionate about delivering exceptional client experiences, this could be your next opportunity.
    What you get to do:
    Project Leadership & Coordination
    Organise and lead internal handover meetings to ensure a smooth transition from Sales to Implementation.
    Facilitate client project kick-offs to align on objectives, timelines, and expectations.
    Oversee end-to-end project execution and delivery, ensuring clear milestones and deliverables.
    Communication & Stakeholder Management
    Coordinate discussions with third-party vendors to align project scope and ensure contractual obligations are met.
    Maintain strong lines of communication with infrastructure and technical teams to ensure alignment.
    Manage daily communication channels with clients, providing updates and resolving queries.
    Documentation & Risk Management
    Verify implementation documentation, including Product Specification Forms, for accuracy and completeness.
    Monitor, track, and follow up on tasks that impact project delivery.
    Proactively identify and escalate risks or potential delays to mitigate impact.
    Cross-Functional Support
    Collaborate with Technical Implementation Managers and other teams to ensure smooth execution of implementation activities.
    Assist with internal coordination and external client interaction where required.
    What it takes to succeed:
    Proficiency with project management tools such as Microsoft Project, Jira, or Asana.
    Strong understanding of the card issuing and payment processing environment.
    Proven ability to manage complex implementations and balance multiple priorities.
    Exceptional communication and stakeholder engagement skills.
    Ability to adapt quickly in a fast-paced, cross-functional environment.
    Strong problem-solving skills and a proactive mindset.
    Fluency in English (written and verbal).
    Education & Experience:
    Bachelor’s degree in Business, Engineering, or a related field.
    Project Management certification is a plus.
    3-5 years of experience in project management, ideally in implementations or a similar client-facing role within fintech or payments.
    Strong familiarity with software delivery and infrastructure alignment.
    Knowledge of cards and the payment industry is preferred.
    What you can look forward to: At Paymentology, it’s not just about building great payment technology, it’s about building a company where people feel they belong and their work matters. You’ll be part of a diverse, global team that’s genuinely committed to making a positive impact through what we do. Whether you’re working across time zones or getting involved in initiatives that support local communities, you’ll find real purpose in your work – and the freedom to grow in a supportive, forward-thinking environment.
  • TELUS Digital: US Rater

    Looking for a freelance opportunity where you can make an impact on technology from the comfort of your home? If you are dynamic, tech-savvy, and always online to learn more, this part-time flexible project is the perfect fit for you!
    A Day in the Life of a Personalized Internet Assessor:
    In this role, you’ll be analyzing and providing feedback on texts, pages, images, and other types of information for top search engines, using an online tool. Through reviewing and rating search results for relevance and quality, you’ll be helping to improve the overall user experience for millions of search engine users, including yourself. Join our team today and start putting your skills to work for one of the world's leading search engines. The estimated hourly earnings for this role are 12 USD per hour.
    TELUS Digital AI Community
    Our global AI Community is a vibrant network of 1 million+ contributors from diverse backgrounds who help our customers collect, enhance, train, translate, and localize content to build better AI models. Become part of our growing community and make an impact supporting the machine learning models of some of the world’s largest brands.
    Qualification path
    No previous professional experience is required to apply to this role; however, working on this project will require you to pass the basic requirements and go through a standard assessment process. This is a part-time, long-term project and your work will be subject to our standard quality assurance checks during the term of this agreement.
    Basic Requirements
    Working as a freelancer with excellent communication skills in English.
    Being a resident in the United States for the last 3 consecutive years and having familiarity with current and historical business, media, sport, news, social media, and cultural affairs in the US.
    Active use of Gmail, Google+, and other forms of social media, and experience in the use of web browsers to navigate and interact with a variety of content.
    Daily access to a broadband internet connection, a smartphone (Android 5.0, iOS 14 or higher), and a personal computer to work on.
    Assessment
    In order to be hired into the program, you’ll take a language assessment and an open book qualification exam that will determine your suitability for the position, and complete ID verification. Don’t worry, our team will provide you with guidelines and learning materials before your exam. You will be required to complete the exam in a specific timeframe, but at your convenience!
    Equal Opportunity
    All qualified applicants will receive consideration for a contractual relationship without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status. At TELUS Digital AI, we are proud to offer equal opportunities and are committed to creating a diverse and inclusive community. All aspects of selection are based on applicants’ qualifications, merits, competence, and performance without regard to any characteristic related to diversity.
  • Phone Companies Failed To Warn Senators About Surveillance, Wyden Says

    Sen. Ron Wyden (D-Ore.) revealed in a new letter to Senate colleagues Wednesday that AT&T, Verizon and T-Mobile failed to create systems for notifying senators about government surveillance on Senate-issued devices -- despite a requirement to do so. From a report: Phone service providers are contractually obligated to inform senators when a law enforcement agency requests their records, thanks to protections enacted in 2020. But in an investigation, Wyden's staff found that none of the three major carriers had created a system to send those notifications.

    "My staff discovered that, alarmingly, these crucial notifications were not happening, likely in violation of the carriers' contracts with the, leaving the Senate vulnerable to surveillance," Wyden said in the letter, obtained first by POLITICO, dated May 21. Wyden said that the companies all started providing notification after his office's investigation. But one carrier told Wyden's office it had previously turned over Senate data to law enforcement without notifying lawmakers, according to the letter.

    Read more of this story at Slashdot.
  • Wyden: AT&T, T-Mobile, and Verizon weren’t notifying senators of surveillance requests

    Sen. Ron Wyden sent a letter to fellow Senators on Wednesday, revealing that three major U.S. cellphone carriers did not have provisions to notify lawmakers about government surveillance requests, despite a contractual requirement to do so. 
    In the letter, Wyden, a Democrat and longstanding member of the Senate Intelligence Committee, said that an investigation by his staff found that AT&T, T-Mobile, and Verizon were not notifying Senators of legal requests — including from the White House — to surveil their phones. The companies “have indicated that they are all now providing such notice,” according to the letter.
    Politico was first to report Wyden’s letter.
    Wyden’s letter comes in the wake of a report last year by the Inspector General, which revealed that the Trump administration in 2017 and 2018 secretly obtained logs of calls and text messages of 43 congressional staffers and two serving House lawmakers, imposing gag orders on the phone companies that received the requests. The secret surveillance requests were first revealed in 2021 to have targeted Adam Schiff, who was at the time the top Democrat on the House Intelligence Committee.
    “Executive branch surveillance poses a significant threat to the Senate’s independence and the foundational principle of separation of powers,” wrote Wyden in his letter. “If law enforcement officials, whether at the federal, state, or even local level, can secretly obtain Senators’ location data or call histories, our ability to perform our constitutional duties is severely threatened.” 
    AT&T spokesperson Alex Byers told TechCrunch in a statement that, “we are complying with our obligations to the Senate Sergeant at Arms,” and that the phone company has “received no legal demands regarding Senate offices under the current contract, which began last June.”
    When asked whether AT&T received legal demands before the new contract, Byers did not respond.

    Wyden said in the letter that one unnamed carrier “confirmed that it turned over Senate data to law enforcement without notifying the Senate.” When reached by TechCrunch, Wyden’s spokesperson Keith Chu said the carrier was not being named because “we don’t want to discourage companies from responding to Sen. Wyden’s questions.”
    Verizon and T-Mobile did not respond to a request for comment. 
    The letter also mentioned carriers Google Fi, US Mobile, and cellular startup Cape, which all have policies to notify “all customers about government demands whenever they are allowed to do so.” US Mobile and Cape adopted the policy after outreach from Wyden’s office.
    Chu told TechCrunch that the Senate “doesn’t have contracts with the smaller carriers.”
    Ahmed Khattak, a spokesperson for US Mobile, confirmed to TechCrunch that the company “did not have a formal customer notification policy regarding surveillance requests prior to Senator Wyden’s inquiry.” 
    “Our current policy is to notify customers of subpoenas or legal demands for information whenever we are legally permitted to do so and when the request is not subject to a court order, statutory gag provision, or other legal restriction on disclosure,” said Khattak. “To the best of our knowledge, US Mobile has not received any surveillance requests targeting the phones of Senators or their staff.”
    Google and Cape did not respond to a request for comment. 
    As Wyden’s letter notes, after Congress enacted protections in 2020 for Senate data held by third party companies, the Senate Sergeant at Arms updated its contracts to require phone carriers to send notifications of surveillance requests. 
    Wyden said that his staff discovered that “these crucial notifications were not happening.”
    None of these protections apply to phones that are not officially issued to the Senate, such as campaign or personal phones of Senators and their staffers. In the letter, Wyden encouraged his Senate colleagues to switch to carriers that now provide notifications.