• The Last of Us opens up a whole new perspective for its next season

The Last of Us just wrapped up its second season, and the finale ends in a way that sets up a major change for the show. It’s one that reflects a similar shift in perspective that fans of the game it’s adapting will remember well: one designed to make viewers see the story in a whole new way.

    This story contains spoilers for the first two seasons of The Last of Us, as well as both games.

    After the relative peace and respite of season 2’s penultimate episode, which tracked Joel (Pedro Pascal) and Ellie’s (Bella Ramsey) shifting relationship over the course of several years, the finale gets back to the dark stuff. Namely, Ellie’s quest to find and kill Abby (Kaitlyn Dever), who murdered Joel as revenge for his killing of her own father.

    Despite being caught in a literal warzone in Seattle, having a pregnant girlfriend who just suffered a grievous injury, and listening to a speech from Jesse (Young Mazino) about the importance of thinking about the good of their community as a whole, it seems nothing can stop Ellie in her quest for revenge. So when she learns Abby might be at a nearby aquarium, she abandons everything to go there, only to accidentally kill two of Abby’s friends, one of whom is pregnant herself.

    This moment is pivotal in that it drives home just how determined Ellie is, and how much violence she’s willing to create and endure in pursuit of revenge. But it’s also not the end of the episode. The last two minutes suddenly shift to Abby in a massive settlement inside a baseball stadium. After following Ellie’s three-day-long quest, we see Abby alongside the words “Seattle: Day One.”

    (Image: HBO)

    For those who have played The Last of Us Part II, this mirrors the structure of the game. In the first half of the game, Abby is a villain to be chased, before players take control of her to see the same period of time through her eyes, turning her into a character you can both hate and sympathize with, much as Ellie herself steadily becomes.
According to co-showrunner Craig Mazin, one of the challenges of re-creating this moment came down to the differences between games and television. The team wanted to get a similar feeling across, even knowing that it wouldn’t hit quite as hard.

    “We can’t reproduce the shock of becoming a person,” Mazin said during a press Q&A ahead of the finale. “In the games you are Joel, you are Ellie, you are Abby. When that shift happens it’s jarring because you have been someone. But here we’re watching everybody equally on a screen.” He added, “What we’re doing is honoring the notion that there’s a time period where one person experiences it one way, and another is experiencing it so wildly different, and yet they converge.”

    The suggestion is that season 3 will largely follow Abby’s side of the story, something that Neil Druckmann, co-showrunner and creative director of the games, alluded to. “Had we ended this season somewhere else, like a few moments before, I think we wouldn’t be making the right promise of what this is about,” Druckmann said. “We’re telling you that next season, one, there is just an epic nature to what’s about to happen, and two, this other story is going to be really important.”

    This doesn’t mean there won’t be Ellie in season 3 — or Joel, for that matter — as the TV show has been fairly liberal with shifting moments around to better suit its own style of storytelling. So it’s very unlikely that season 3 will be solely, or even primarily, from Abby’s perspective. But learning her side of the story will be important, particularly for how it both mirrors and diverges from Ellie’s, pushing viewers to root for or against both parties.

    “We understand that both Ellie and Abby are moving forward in trouble,” explained Mazin. “They are in moral trouble, because their certainty is beginning to fail them, and we can see it here with Ellie for sure.
Because faced with the consequences of the thing she’s done, and the people that didn’t deserve to die dying, she’s starting to feel maybe a swing of the pendulum.”

    Even still, adding a third major character to sympathize with will mark a major shift for the show going forward, much as killing off Joel was for season 2. And Mazin believes this is a core part of the structure of the series. “This show is going to be a different show every season,” he said.
    Source: WWW.THEVERGE.COM
  • Essex Police discloses ‘incoherent’ facial recognition assessment

    Essex Police has not properly considered the potentially discriminatory impacts of its live facial recognition (LFR) use, according to documents obtained by Big Brother Watch and shared with Computer Weekly.
    While the force claims in an equality impact assessment (EIA) that “Essex Police has carefully considered issues regarding bias and algorithmic injustice”, privacy campaign group Big Brother Watch said the document – obtained under Freedom of Information (FoI) rules – shows it has likely failed to fulfil its public sector equality duty (PSED) to consider how its policies and practices could be discriminatory.
    The campaigners highlighted how the force is relying on false comparisons to other algorithms and “parroting misleading claims” from the supplier about the LFR system’s lack of bias.
    For example, Essex Police said that when deploying LFR, it will set the system threshold “at 0.6 or above, as this is the level whereby equitability of the rate of false positive identification across all demographics is achieved”.
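In broad terms, such a match threshold gates which comparisons raise an alert. The sketch below is illustrative only: the entry names and similarity scores are invented, and 0.6 is simply the setting Essex Police cites.

```python
# Minimal sketch of how a confidence threshold gates LFR alerts:
# each scanned face is scored against every watchlist entry, and
# only scores at or above the threshold trigger an alert.
THRESHOLD = 0.6  # the level Essex Police says it deploys at


def alerts(scores_by_entry: dict[str, float],
           threshold: float = THRESHOLD) -> list[str]:
    """Return watchlist entries whose similarity score clears the bar."""
    return [name for name, score in scores_by_entry.items()
            if score >= threshold]


# Invented similarity scores for a single scanned face.
scores = {"entry_A": 0.82, "entry_B": 0.61, "entry_C": 0.44}
print(alerts(scores))  # only entries at or above 0.6 raise alerts
```

Raising the threshold suppresses false positives at the cost of missing genuine matches; the claim at issue is whether 0.6 equalises false positive rates across demographics for this particular algorithm.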
    However, this figure is based on the National Physical Laboratory’s (NPL) testing of NEC’s Neoface V4 LFR algorithm deployed by the Metropolitan Police and South Wales Police, which Essex Police does not use.
    Instead, Essex Police has opted to use an algorithm developed by Israeli biometrics firm Corsight, whose chief privacy officer, Tony Porter, was formerly the UK’s surveillance camera commissioner until January 2021.
    Highlighting testing of the Corsight_003 algorithm conducted in June 2022 by the US National Institute of Standards and Technology (NIST), the EIA also claims it has “a bias differential FMR (false match rate) of 0.0006 overall, the lowest of any tested within NIST at the time of writing, according to the supplier”.
    However, looking at the NIST website, where all of the testing data is publicly shared, there is no information to support the figure cited by Corsight, or its claim to essentially have the least biased algorithm available.
    A separate FoI response to Big Brother Watch confirmed that, as of 16 January 2025, Essex Police had not conducted any “formal or detailed” testing of the system itself, or otherwise commissioned a third party to do so.

    “Looking at Essex Police’s EIA, we are concerned about the force’s compliance with its duties under equality law, as the reliance on shaky evidence seriously undermines the force’s claims about how the public will be protected against algorithmic bias,” said Jake Hurfurt, head of research and investigations at Big Brother Watch.
    “Essex Police’s lax approach to assessing the dangers of a controversial and dangerous new form of surveillance has put the rights of thousands at risk. This slapdash scrutiny of their intrusive facial recognition system sets a worrying precedent.
    “Facial recognition is notorious for misidentifying women and people of colour, and Essex Police’s willingness to deploy the technology without testing it themselves raises serious questions about the force’s compliance with equalities law. Essex Police should immediately stop their use of facial recognition surveillance.”
    The need for UK police forces deploying facial recognition to consider how their use of the technology could be discriminatory was highlighted by a legal challenge brought against South Wales Police by Cardiff resident Ed Bridges.
    In August 2020, the UK Court of Appeal ruled that the use of LFR by the force was unlawful because the privacy violations it entailed were “not in accordance” with legally permissible restrictions on Bridges’ Article 8 privacy rights; it did not conduct an appropriate data protection impact assessment; and it did not comply with its PSED to consider how its policies and practices could be discriminatory.
    The judgment specifically found that the PSED is a “duty of process and not outcome”, and requires public bodies to take reasonable steps “to make enquiries about what may not yet be known to a public authority about the potential impact of a proposed decision or policy on people with the relevant characteristics, in particular for present purposes race and sex”.
    Big Brother Watch said equality assessments must rely on “sufficient quality evidence” to back up the claims being made and ultimately satisfy the PSED, but that the documents obtained do not demonstrate the force has had “due regard” for equalities.
    Academic Karen Yeung, an interdisciplinary professor at Birmingham Law School and School of Computer Science, told Computer Weekly that, in her view, the EIA is “clearly inadequate”.
    She also criticised the document for being “incoherent”, failing to look at the systemic equalities impacts of the technology, and relying exclusively on testing of entirely different software algorithms used by other police forces trained on different populations: “This does not, in my view, fulfil the requirements of the public sector equality duty. It is a document produced from a cut-and-paste exercise from the largely irrelevant material produced by others.”

    Computer Weekly contacted Essex Police about every aspect of the story.
    “We take our responsibility to meet our public sector equality duty very seriously, and there is a contractual requirement on our LFR partner to ensure sufficient testing has taken place to ensure the software meets the specification and performance outlined in the tender process,” said a spokesperson.
    “There have been more than 50 deployments of our LFR vans, scanning 1.7 million faces, which have led to more than 200 positive alerts, and nearly 70 arrests.
    “To date, there has been one false positive, which, when reviewed, was established to be as a result of a low-quality photo uploaded onto the watchlist and not the result of bias issues with the technology. This did not lead to an arrest or any other unlawful action because of the procedures in place to verify all alerts. This issue has been resolved to ensure it does not occur again.”
    The spokesperson added that the force is also committed to carrying out further assessment of the software and algorithms, with the evaluation of deployments and results being subject to an independent academic review.
    “As part of this, we have carried out, and continue to do so, testing and evaluation activity in conjunction with the University of Cambridge. The NPL have recently agreed to carry out further independent testing, which will take place over the summer. The company have also achieved an ISO 42001 certification,” said the spokesperson. “We are also liaising with other technical specialists regarding further testing and evaluation activity.”
    However, the force did not comment on why it was relying on the testing of a completely different algorithm in its EIA, or why it had not conducted or otherwise commissioned its own testing before operationally deploying the technology in the field.
    Computer Weekly followed up with Essex Police for clarification on when the testing with Cambridge began, as this is not mentioned in the EIA, but received no response by time of publication.

    Although Essex Police and Corsight claim the facial recognition algorithm in use has “a bias differential FMR of 0.0006 overall, the lowest of any tested within NIST at the time of writing”, there is no publicly available data on NIST’s website to support this claim.
    Drilling down into the demographic split of false positive rates shows, for example, that there are roughly 100 times more false positives for West African women than for Eastern European men.
    While this is an improvement on the previous two algorithms submitted for testing by Corsight, other publicly available data held by NIST undermines Essex Police’s claim in the EIA that the “algorithm is identified by NIST as having the lowest bias variance between demographics”.
    Another metric held by NIST – FMR Max/Min – refers to the ratio between the demographic groups that give the most and the fewest false positives; it essentially represents how inequitable the error rates are across different age groups, sexes and ethnicities.
    In this instance, smaller values represent better performance, with the ratio being an estimate of how many times more false positives can be expected in one group over another.
    According to the NIST webpage for “demographic effects” in facial recognition algorithms, the Corsight algorithm has an FMR Max/Min of 113, meaning there are at least 21 algorithms that display less bias. For comparison, the least biased algorithm according to NIST results belongs to a firm called Idemia, which has an FMR Max/Min of 5.
    However, like Corsight, the highest false match rate for Idemia’s algorithm was for older West African women. Computer Weekly understands this is a common problem with many of the facial recognition algorithms NIST tests because this group is not typically well-represented in the underlying training data of most firms.
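As a sketch of how NIST’s Max/Min metric works, the calculation below uses invented per-group rates; only the resulting ratio of 113 matches the figure NIST reports for Corsight.

```python
# FMR Max/Min: the ratio of the highest to the lowest false match
# rate across demographic groups -- smaller values mean more
# equitable error rates. The per-group rates here are invented
# purely to illustrate the arithmetic.
def fmr_max_min(fmr_by_group: dict[str, float]) -> float:
    rates = list(fmr_by_group.values())
    return max(rates) / min(rates)


hypothetical_rates = {
    "group A": 0.00001,   # fewest false positives
    "group B": 0.00004,
    "group C": 0.00113,   # most false positives
}
print(round(fmr_max_min(hypothetical_rates)))  # ratio of 113
```

Note that a system can post a very low overall FMR while still showing a large Max/Min ratio, which is why the two figures answer different questions about bias.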
    Computer Weekly also confirmed with NIST that the FMR metric cited by Corsight relates to one-to-one verification, rather than the one-to-many situation police forces would be using it in.
    This is a key distinction, because if 1,000 people are enrolled in a facial recognition system that was built on one-to-one verification, then the false positive rate will be 1,000 times larger than the metrics held by NIST for FMR testing.
    “If a developer implements 1:N search as N 1:1 comparisons, then the likelihood of a false positive from a search is expected to be proportional to the false match rate for the 1:1 comparison algorithm,” said NIST scientist Patrick Grother. “Some developers do not implement 1:N search that way.”
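Grother’s point can be illustrated with a back-of-the-envelope calculation, taking the 0.0006 1:1 FMR cited in the EIA and a hypothetical watchlist size:

```python
# If a 1:1 verification algorithm with a given false match rate is
# run once per watchlist entry, the chance of at least one false
# positive per search grows with the watchlist size:
# 1 - (1 - FMR)^N, which is roughly N * FMR for small FMR.
def per_search_false_positive(fmr_1to1: float, watchlist_size: int) -> float:
    return 1.0 - (1.0 - fmr_1to1) ** watchlist_size


FMR = 0.0006  # the 1:1 false match rate cited in the EIA
for n in (1, 100, 1000):
    rate = per_search_false_positive(FMR, n)
    print(f"watchlist of {n:>4}: {rate:.2%} chance of a false positive")
```

Under these assumptions, a 1,000-entry watchlist pushes the per-search false positive chance towards 45%, which is why a 1:1 figure cannot simply be read across to a 1:N deployment.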
    Commenting on the contrast between this testing methodology and the practical scenarios the tech will be deployed in, Birmingham Law School’s Yeung said one-to-one is for use in stable environments to provide admission to spaces with limited access, such as airport passport gates, where only one person’s biometric data is scrutinised at a time.
    “One-to-many is entirely different – it’s an entirely different process, an entirely different technical challenge, and therefore cannot typically achieve equivalent levels of accuracy,” she said.
    Computer Weekly contacted Corsight about every aspect of the story related to its algorithmic testing, including where the “0.0006” figure is drawn from and its various claims to have the “least biased” algorithm.
    “The facts presented in your article are partial, manipulated and misleading,” said a company spokesperson. “Corsight AI’s algorithms have been tested by numerous entities, including NIST, and have been proven to be the least biased in the industry in terms of gender and ethnicity. This is a major factor for our commercial and government clients.”
    However, Corsight was either unable or unwilling to specify which facts are “partial, manipulated or misleading” in response to Computer Weekly’s request for clarification.
    Computer Weekly also contacted Corsight about whether it has done any further testing by running N one-to-one comparisons, and whether it has changed the system’s threshold settings for detecting a match to suppress the false positive rate, but received no response on these points.
    While most facial recognition developers submit their algorithms to NIST for testing on an annual or bi-annual basis, Corsight last submitted an algorithm in mid-2022. Computer Weekly contacted Corsight about why this was the case, given that most algorithms in NIST testing show continuous improvement with each submission, but again received no response on this point.

    The Essex Police EIA also highlights testing of the Corsight algorithm conducted in 2022 by the Department of Homeland Security (DHS), claiming it demonstrated “Corsight’s capability to perform equally across all demographics”.
    However, Big Brother Watch’s Hurfurt highlighted that the DHS study focused on bias in the context of true positives, and did not assess the algorithm for inequality in false positives.
    This is a key distinction for the testing of LFR systems, as false negatives where the system fails to recognise someone will likely not lead to incorrect stops or other adverse effects, whereas a false positive where the system confuses two people could have more severe consequences for an individual.
    The DHS itself also publicly came out against Corsight’s representation of the test results, after the firm claimed in subsequent marketing materials that “no matter how you look at it, Corsight is ranked #1. #1 in overall recognition, #1 in dark skin, #1 in Asian, #1 in female”.
    Speaking with IPVM in August 2023, DHS said: “We do not know what this claim, being ‘#1’, is referring to.” The department added that the rules of the testing required companies to get their claims cleared through DHS to ensure they do not misrepresent their performance.
    In its breakdown of the test results, IPVM noted that systems from multiple other manufacturers achieved results similar to Corsight’s. Corsight did not respond to a request for comment about the DHS testing.
    Computer Weekly contacted Essex Police about all the issues raised around Corsight testing, but received no direct response to these points from the force.

    While Essex Police claimed in its EIA that it “also sought advice from their own independent Data and Digital Ethics Committee in relation to their use of LFR generally”, meeting minutes obtained via FoI rules show that key impacts had not been considered.
    For example, when one panel member questioned how LFR deployments could affect community events or protests, and how the force could avoid the technology having a “chilling presence”, the officer present said “that’s a pretty good point, actually”, adding that he had “made a note” to consider this going forward.
    The EIA itself also makes no mention of community events or protests, and does not specify how different groups could be affected by these different deployment scenarios.
    Elsewhere in the EIA, Essex Police claims that the system is likely to have minimal impact across age, gender and race, citing the 0.6 threshold setting, as well as NIST and DHS testing, as ways of achieving “equitability” across different demographics. Again, this threshold setting relates to a completely different system used by the Met and South Wales Police.
    For each protected characteristic, the EIA has a section on “mitigating” actions that can be taken to reduce adverse impacts.
    While the “ethnicity” section again highlights the National Physical Laboratory’s testing of a completely different algorithm, most other sections note that “any watchlist created will be done so as close to the deployment as possible, therefore hoping to ensure the most accurate and up-to-date images of persons being added are uploaded”.
    However, Yeung noted that the EIA makes no mention of the specific watchlist creation criteria beyond high-level “categories of images” that can be included, and the claimed equality impacts of that process.
    For example, it does not consider how people from certain ethnic minority or religious backgrounds could be disproportionately affected as a result of their over-representation in police databases, or the issue of unlawful custody image retention, whereby the Home Office continues to hold millions of custody images illegally in the Police National Database.
    While the ethics panel meeting minutes offer greater insight into how Essex Police is approaching watchlist creation, the custody image retention issue was also not mentioned.
    Responding to Computer Weekly’s questions about the meeting minutes and the lack of scrutiny of key issues related to UK police LFR deployments, an Essex Police spokesperson said: “Our policies and processes around the use of live facial recognition have been carefully scrutinised through a thorough ethics panel.”

    In the ethics panel meeting, the officer present explained how watchlists and deployments are decided based on the “intelligence case”, which then has to be justified as both proportionate and necessary.
    On the “Southend intelligence case”, the officer said deploying in the town centre would be permissible because “that’s where the most footfall is, the most opportunity to locate outstanding suspects”.
    They added: “The watchlist has to be justified by the key elements, the policing purpose. Everything has to be proportionate and strictly necessary to be able to deploy… If the commander in Southend said, ‘I want to put everyone that’s wanted for shoplifting across Essex on the watchlist for Southend’, the answer would be no, because is it necessary? Probably not. Is it proportionate? I don’t think it is. Would it be proportionate to have individuals who are outstanding for shoplifting from the Southend area? Yes, because it’s local.”
    However, the officer also said that, on most occasions, the systems would be deployed to catch “our most serious offenders”, as this would be easier to justify from a public perception point of view. They added that, during the summer, it would be easier to justify deployments because of the seasonal population increase in Southend.
    “We know that there is a general increase in violence during those months. So, we don’t need to go down to the weeds to specifically look at grievous bodily harm or murder or rape, because they’re not necessarily fuelled by a spike in terms of seasonality, for example,” they said.
    “However, we know that because the general population increases significantly, the level of violence increases significantly, which would justify that I could put those serious crimes on that watchlist.”
    Commenting on the responses given to the ethics panel, Yeung said they “failed entirely to provide me with confidence that their proposed deployments will have the required legal safeguards in place”.
    According to the Court of Appeal judgment against South Wales Police in the Bridges case, the force’s facial recognition policy contained “fundamental deficiencies” in relation to the “who” and “where” question of LFR.
    “In relation to both of those questions, too much discretion is currently left to individual police officers,” it said. “It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFR can be deployed.”
    Yeung added: “The same applies to these responses of Essex Police force, failing to adequately answer the ‘who’ and ‘where’ questions concerning their proposed facial recognition deployments.
    “Worse still, the court stated that a police force’s local policies can only satisfy the requirements that the privacy interventions arising from use of LFR are ‘prescribed by law’ if they are published. The documents were obtained by Big Brother Watch through freedom of information requests, strongly suggesting that even these basic legal safeguards are not being met.”
    Yeung added that South Wales Police’s use of the technology was found to be unlawful in the Bridges case because there was excessive discretion left in the hands of individual police officers, allowing undue opportunities for arbitrary decision-making and abuses of power.

    Every decision ... must be specified in advance, documented and justified in accordance with the tests of proportionality and necessity. I don’t see any of that happening

    Karen Yeung, Birmingham Law School

    “Every decision – where you will deploy, whose face is placed on the watchlist and why, and the duration of deployment – must be specified in advance, documented and justified in accordance with the tests of proportionality and necessity,” she said.
    “I don’t see any of that happening. There are simply vague claims that ‘we’ll make sure we apply the legal test’, but how? They just offer unsubstantiated promises that ‘we will abide by the law’ without specifying how they will do so by meeting specific legal requirements.”
    Yeung further added that these documents indicate that the police force is not looking for specific people wanted for serious crimes, but setting up dragnets for a wide variety of ‘wanted’ individuals, including those wanted for non-serious crimes such as shoplifting.
    “There are many platitudes about being ethical, but there’s nothing concrete indicating how they propose to meet the legal tests of necessity and proportionality,” she said.
    “In liberal democratic societies, every single decision about an individual by the police made without their consent must be justified in accordance with law. That means that the police must be able to justify and defend the reasons why every single person whose face is uploaded to the facial recognition watchlist meets the legal test, based on their specific operational purpose.”
    Yeung concluded that, assuming they can do this, police must also consider the equality impacts of their actions, and how different groups are likely to be affected by their practical deployments: “I don’t see any of that.”
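Yeung's requirement — every deployment decision specified in advance, documented, and justified against necessity and proportionality — amounts to an audit record. A purely illustrative sketch of what such a record might contain (field names are hypothetical, drawn only from the decisions she lists):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class DeploymentRecord:
    # Fields mirror the decisions Yeung says must be fixed in advance:
    location: str                       # where the system will be deployed
    start: datetime                     # duration of deployment...
    end: datetime                       # ...specified up front
    watchlist_reasons: dict             # person id -> why their face is included
    necessity_justification: str
    proportionality_justification: str
    authorised_by: str

    def is_auditable(self) -> bool:
        """Complete only if every decision is documented and justified."""
        return bool(
            self.location
            and self.end > self.start
            and self.watchlist_reasons
            and all(self.watchlist_reasons.values())
            and self.necessity_justification
            and self.proportionality_justification
            and self.authorised_by
        )

record = DeploymentRecord(
    location="Southend town centre",
    start=datetime(2025, 7, 1, 9, 0),
    end=datetime(2025, 7, 1, 17, 0),
    watchlist_reasons={"suspect-1": "outstanding warrant, GBH"},
    necessity_justification="seasonal rise in violent offences",
    proportionality_justification="watchlist limited to local serious offences",
    authorised_by="duty commander",
)
print(record.is_auditable())  # True
```

The point of the sketch is that each justification is a per-decision field, not a blanket "we will abide by the law" statement; a record missing any justification would fail the check.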
    In response to the concerns raised around watchlist creation, proportionality and necessity, an Essex Police spokesperson said: “The watchlists for each deployment are created to identify specific people wanted for specific crimes and to enforce orders. To date, we have focused on the types of offences which cause the most harm to our communities, including our hardworking businesses.
    “This includes violent crime, drugs, sexual offences and thefts from shops. As a result of our deployments, we have arrested people wanted in connection with attempted murder investigations, high-risk domestic abuse cases, GBH, sexual assault, drug supply and aggravated burglary offences. We have also been able to progress investigations and move closer to securing justice for victims.”

    Read more about police data and technology

    Metropolitan Police to deploy permanent facial recognition tech in Croydon: The Met is set to deploy permanent live facial recognition cameras on street furniture in Croydon from summer 2025, but local councillors say the decision – which has taken place with no community input – will further contribute to the over-policing of Black communities.
    UK MoJ crime prediction algorithms raise serious concerns: The Ministry of Justice is using one algorithm to predict people’s risk of reoffending and another to predict who will commit murder, but critics say the profiling in these systems raises ‘serious concerns’ over racism, classism and data inaccuracies.
    UK law enforcement data adequacy at risk: The UK government says reforms to police data protection rules will help to simplify law enforcement data processing, but critics argue the changes will lower protection to the point where the UK risks losing its European data adequacy.
    WWW.COMPUTERWEEKLY.COM
    Essex Police discloses ‘incoherent’ facial recognition assessment
    Essex Police has not properly considered the potentially discriminatory impacts of its live facial recognition (LFR) use, according to documents obtained by Big Brother Watch and shared with Computer Weekly. While the force claims in an equality impact assessment (EIA) that “Essex Police has carefully considered issues regarding bias and algorithmic injustice”, privacy campaign group Big Brother Watch said the document – obtained under Freedom of Information (FoI) rules – shows it has likely failed to fulfil its public sector equality duty (PSED) to consider how its policies and practices could be discriminatory. The campaigners highlighted how the force is relying on false comparisons to other algorithms and “parroting misleading claims” from the supplier about the LFR system’s lack of bias. For example, Essex Police said that when deploying LFR, it will set the system threshold “at 0.6 or above, as this is the level whereby equitability of the rate of false positive identification across all demographics is achieved”. However, this figure is based on the National Physical Laboratory’s (NPL) testing of NEC’s Neoface V4 LFR algorithm deployed by the Metropolitan Police and South Wales Police, which Essex Police does not use. Instead, Essex Police has opted to use an algorithm developed by Israeli biometrics firm Corsight, whose chief privacy officer, Tony Porter, was formerly the UK’s surveillance camera commissioner until January 2021. Highlighting testing of the Corsight_003 algorithm conducted in June 2022 by the US National Institute of Standards and Technology (NIST), the EIA also claims it has “a bias differential FMR [False Match Rate] of 0.0006 overall, the lowest of any tested within NIST at the time of writing, according to the supplier”. 
However, looking at the NIST website, where all of the testing data is publicly shared, there is no information to support the figure cited by Corsight, or its claim to essentially have the least biased algorithm available. A separate FoI response to Big Brother Watch confirmed that, as of 16 January 2025, Essex Police had not conducted any “formal or detailed” testing of the system itself, or otherwise commissioned a third party to do so. Essex Police's lax approach to assessing the dangers of a controversial and dangerous new form of surveillance has put the rights of thousands at risk Jake Hurfurt, Big Brother Watch “Looking at Essex Police’s EIA, we are concerned about the force’s compliance with its duties under equality law, as the reliance on shaky evidence seriously undermines the force’s claims about how the public will be protected against algorithmic bias,” said Jake Hurfurt, head of research and investigations at Big Brother Watch. “Essex Police’s lax approach to assessing the dangers of a controversial and dangerous new form of surveillance has put the rights of thousands at risk. This slapdash scrutiny of their intrusive facial recognition system sets a worrying precedent. “Facial recognition is notorious for misidentifying women and people of colour, and Essex Police’s willingness to deploy the technology without testing it themselves raises serious questions about the force’s compliance with equalities law. Essex Police should immediately stop their use of facial recognition surveillance.” The need for UK police forces deploying facial recognition to consider how their use of the technology could be discriminatory was highlighted by a legal challenge brought against South Wales Police by Cardiff resident Ed Bridges. 
In August 2020, the UK Court of Appeal ruled that the use of LFR by the force was unlawful because the privacy violations it entailed were “not in accordance” with legally permissible restrictions on Bridges’ Article 8 privacy rights; it did not conduct an appropriate data protection impact assessment (DPIA); and it did not comply with its PSED to consider how its policies and practices could be discriminatory. The judgment specifically found that the PSED is a “duty of process and not outcome”, and requires public bodies to take reasonable steps “to make enquiries about what may not yet be known to a public authority about the potential impact of a proposed decision or policy on people with the relevant characteristics, in particular for present purposes race and sex”. Big Brother Watch said equality assessments must rely on “sufficient quality evidence” to back up the claims being made and ultimately satisfy the PSED, but that the documents obtained do not demonstrate the force has had “due regard” for equalities. Academic Karen Yeung, an interdisciplinary professor at Birmingham Law School and School of Computer Science, told Computer Weekly that, in her view, the EIA is “clearly inadequate”. She also criticised the document for being “incoherent”, failing to look at the systemic equalities impacts of the technology, and relying exclusively on testing of entirely different software algorithms used by other police forces trained on different populations: “This does not, in my view, fulfil the requirements of the public sector equality duty. It is a document produced from a cut-and-paste exercise from the largely irrelevant material produced by others.” Computer Weekly contacted Essex Police about every aspect of the story. 
“We take our responsibility to meet our public sector equality duty very seriously, and there is a contractual requirement on our LFR partner to ensure sufficient testing has taken place to ensure the software meets the specification and performance outlined in the tender process,” said a spokesperson. “There have been more than 50 deployments of our LFR vans, scanning 1.7 million faces, which have led to more than 200 positive alerts, and nearly 70 arrests. “To date, there has been one false positive, which, when reviewed, was established to be as a result of a low-quality photo uploaded onto the watchlist and not the result of bias issues with the technology. This did not lead to an arrest or any other unlawful action because of the procedures in place to verify all alerts. This issue has been resolved to ensure it does not occur again.” The spokesperson added that the force is also committed to carrying out further assessment of the software and algorithms, with the evaluation of deployments and results being subject to an independent academic review. “As part of this, we have carried out, and continue to do so, testing and evaluation activity in conjunction with the University of Cambridge. The NPL have recently agreed to carry out further independent testing, which will take place over the summer. The company have also achieved an ISO 42001 certification,” said the spokesperson. “We are also liaising with other technical specialists regarding further testing and evaluation activity.” However, the force did not comment on why it was relying on the testing of a completely different algorithm in its EIA, or why it had not conducted or otherwise commissioned its own testing before operationally deploying the technology in the field. Computer Weekly followed up Essex Police for clarification on when the testing with Cambridge began, as this is not mentioned in the EIA, but received no response by time of publication. 
Although Essex Police and Corsight claim the facial recognition algorithm in use has “a bias differential FMR of 0.0006 overall, the lowest of any tested within NIST at the time of writing”, there is no publicly available data on NIST’s website to support this claim. Drilling down into the demographic split of false positive rates shows, for example, that there are 100 times more false positives for West African women than for Eastern European men.

While this is an improvement on the previous two algorithms submitted for testing by Corsight, other publicly available data held by NIST undermines Essex Police’s claim in the EIA that the “algorithm is identified by NIST as having the lowest bias variance between demographics”.

Another metric held by NIST – FMR Max/Min, the ratio between the demographic groups that give the most and the fewest false positives – essentially represents how inequitable the error rates are across different age groups, sexes and ethnicities. Smaller values represent better performance, with the ratio being an estimate of how many times more false positives can be expected in one group over another.

According to the NIST webpage for “demographic effects” in facial recognition algorithms, the Corsight algorithm has an FMR Max/Min of 113(22), meaning there are at least 21 algorithms that display less bias. For comparison, the least biased algorithm according to NIST results belongs to a firm called Idemia, which has an FMR Max/Min of 5(1). However, like Corsight, the highest false match rate for Idemia’s algorithm was for older West African women. Computer Weekly understands this is a common problem with many of the facial recognition algorithms NIST tests, because this group is not typically well-represented in the underlying training data of most firms.
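The arithmetic behind this metric is easy to sketch. The following Python snippet uses hypothetical false match rates (not NIST's published figures) to show how an FMR Max/Min ratio is derived and why smaller values indicate more equitable performance:

```python
# Toy illustration with hypothetical numbers (not NIST data): the
# FMR Max/Min metric is the ratio between the false match rates of the
# worst- and best-performing demographic groups, so smaller is better.

fmr_by_group = {                      # hypothetical per-group false match rates
    "eastern_european_men": 0.00001,
    "east_asian_men":       0.00005,
    "west_african_women":   0.00100,  # 100x the best group's rate
}

fmr_max_min = max(fmr_by_group.values()) / min(fmr_by_group.values())
# A ratio of 100 means roughly 100 times more false positives can be
# expected for the worst-affected group than for the best.
print(f"FMR Max/Min = {fmr_max_min:.0f}")
```

On this reading, Idemia's 5(1) implies a five-fold gap between its best- and worst-served groups, against a 113-fold gap for the Corsight algorithm.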
Computer Weekly also confirmed with NIST that the FMR metric cited by Corsight relates to one-to-one verification, rather than the one-to-many situation police forces would be using it in. This is a key distinction, because if 1,000 people are enrolled in a facial recognition system that was built on one-to-one verification, then the false positive rate will be 1,000 times larger than the metrics held by NIST for FMR testing. “If a developer implements 1:N (one-to-many) search as N 1:1 comparisons, then the likelihood of a false positive from a search is expected to be proportional to the false match for the 1:1 comparison algorithm,” said NIST scientist Patrick Grother. “Some developers do not implement 1:N search that way.” Commenting on the contrast between this testing methodology and the practical scenarios the tech will be deployed in, Birmingham Law School’s Yeung said one-to-one is for use in stable environments to provide admission to spaces with limited access, such as airport passport gates, where only one person’s biometric data is scrutinised at a time. “One-to-many is entirely different – it’s an entirely different process, an entirely different technical challenge, and therefore cannot typically achieve equivalent levels of accuracy,” she said. Computer Weekly contacted Corsight about every aspect of the story related to its algorithmic testing, including where the “0.0006” figure is drawn from and its various claims to have the “least biased” algorithm. “The facts presented in your article are partial, manipulated and misleading,” said a company spokesperson. “Corsight AI’s algorithms have been tested by numerous entities, including NIST, and have been proven to be the least biased in the industry in terms of gender and ethnicity. 
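The scaling Grother describes follows from basic probability. This sketch assumes each of the N one-to-one comparisons is independent and shares the same false match rate (a simplification for illustration, not a description of any vendor's implementation):

```python
# Illustrative sketch, assuming independent 1:1 comparisons (a
# simplification): how a one-to-one false match rate (FMR) compounds
# when a 1:N search is implemented as N separate 1:1 comparisons.

def search_false_positive_prob(fmr: float, watchlist_size: int) -> float:
    """Probability that one search against the watchlist raises at
    least one false alert, given a per-comparison false match rate."""
    return 1.0 - (1.0 - fmr) ** watchlist_size

fmr = 0.0006   # the 1:1 figure cited in the EIA
n = 1000       # enrolled identities on the watchlist

p = search_false_positive_prob(fmr, n)
# For small rates this is roughly n * fmr, i.e. about 1,000 times the
# 1:1 rate: here close to a one-in-two chance per search.
print(f"P(false alert per search) = {p:.3f}")
```

This is why a figure measured for one-to-one verification cannot be quoted directly as the error rate of a one-to-many deployment.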
This is a major factor for our commercial and government clients.” However, Corsight was either unable or unwilling to specify which facts are “partial, manipulated or misleading” in response to Computer Weekly’s request for clarification. Computer Weekly also contacted Corsight about whether it has done any further testing by running N one-to-one comparisons, and whether it has changed the system’s threshold settings for detecting a match to suppress the false positive rate, but received no response on these points. While most facial recognition developers submit their algorithms to NIST for testing on an annual or bi-annual basis, Corsight last submitted an algorithm in mid-2022. Computer Weekly contacted Corsight about why this was the case, given that most algorithms in NIST testing show continuous improvement with each submission, but again received no response on this point. The Essex Police EIA also highlights testing of the Corsight algorithm conducted in 2022 by the Department of Homeland Security (DHS), claiming it demonstrated “Corsight’s capability to perform equally across all demographics”. However, Big Brother Watch’s Hurfurt highlighted that the DHS study focused on bias in the context of true positives, and did not assess the algorithm for inequality in false positives. This is a key distinction for the testing of LFR systems, as false negatives where the system fails to recognise someone will likely not lead to incorrect stops or other adverse effects, whereas a false positive where the system confuses two people could have more severe consequences for an individual. The DHS itself also publicly came out against Corsight’s representation of the test results, after the firm claimed in subsequent marketing materials that “no matter how you look at it, Corsight is ranked #1. #1 in overall recognition, #1 in dark skin, #1 in Asian, #1 in female”. 
Speaking with IPVM in August 2023, DHS said: “We do not know what this claim, being ‘#1’ is referring to.” The department added that the rules of the testing required companies to get their claims cleared through DHS to ensure they do not misrepresent their performance. In its breakdown of the test results, IPVM noted that systems from multiple other manufacturers achieved similar results to Corsight. The company did not respond to a request for comment about the DHS testing.

Computer Weekly contacted Essex Police about all the issues raised around Corsight testing, but received no direct response to these points from the force.

While Essex Police claimed in its EIA that it “also sought advice from their own independent Data and Digital Ethics Committee in relation to their use of LFR generally”, meeting minutes obtained via FoI rules show that key impacts had not been considered. For example, when one panel member questioned how LFR deployments could affect community events or protests, and how the force could avoid the technology having a “chilling presence”, the officer present (whose name has been redacted from the document) said “that’s a pretty good point, actually”, adding that he had “made a note” to consider this going forward. The EIA itself also makes no mention of community events or protests, and does not specify how different groups could be affected by these different deployment scenarios.

Elsewhere in the EIA, Essex Police claims that the system is likely to have minimal impact across age, gender and race, citing the 0.6 threshold setting, as well as NIST and DHS testing, as ways of achieving “equitability” across different demographics. Again, this threshold setting relates to a completely different system used by the Met and South Wales Police. For each protected characteristic, the EIA has a section on “mitigating” actions that can be taken to reduce adverse impacts.
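For context on what a “0.6 threshold setting” means in practice, here is a minimal, hypothetical sketch (the scores are invented for illustration and do not come from any deployed system): an alert fires only when the similarity score between a live face and a watchlist image clears the threshold, so raising the threshold suppresses false positives at the cost of more missed matches.

```python
# Hypothetical illustration (invented scores, not the deployed system):
# how a similarity threshold such as 0.6 gates facial recognition alerts.

def alerts(similarity_scores, threshold):
    """Indices of watchlist comparisons whose similarity score meets
    or exceeds the alerting threshold."""
    return [i for i, score in enumerate(similarity_scores) if score >= threshold]

# One live face compared against five watchlist entries:
scores = [0.31, 0.58, 0.62, 0.95, 0.44]

strict = alerts(scores, 0.6)   # fewer alerts, fewer false positives
loose = alerts(scores, 0.5)    # more alerts, more false-positive risk
print(strict, loose)           # [2, 3] [1, 2, 3]
```

The point of contention in the EIA is that an appropriate threshold is specific to the algorithm it was measured on, so a setting tuned for one system says little about another.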
While the “ethnicity” section again highlights the National Physical Laboratory’s testing of a completely different algorithm, most other sections note that “any watchlist created will be done so as close to the deployment as possible, therefore hoping to ensure the most accurate and up-to-date images of persons being added are uploaded”.

However, Yeung noted that the EIA makes no mention of the specific watchlist creation criteria beyond high-level “categories of images” that can be included, or of the claimed equality impacts of that process. For example, it does not consider how people from certain ethnic minority or religious backgrounds could be disproportionately impacted as a result of their over-representation in police databases, or the issue of unlawful custody image retention, whereby the Home Office is continuing to hold millions of custody images illegally in the Police National Database (PND). While the ethics panel meeting minutes offer greater insight into how Essex Police is approaching watchlist creation, the custody image retention issue was also not mentioned there.

Responding to Computer Weekly’s questions about the meeting minutes and the lack of scrutiny of key issues related to UK police LFR deployments, an Essex Police spokesperson said: “Our policies and processes around the use of live facial recognition have been carefully scrutinised through a thorough ethics panel.”

In the minutes, the officer present explained how watchlists and deployments are decided based on the “intelligence case”, which then has to be justified as both proportionate and necessary. On the “Southend intelligence case”, the officer said deploying in the town centre would be permissible because “that’s where the most footfall is, the most opportunity to locate outstanding suspects”. They added: “The watchlist [then] has to be justified by the key elements, the policing purpose.
Everything has to be proportionate and strictly necessary to be able to deploy… If the commander in Southend said, ‘I want to put everyone that’s wanted for shoplifting across Essex on the watchlist for Southend’, the answer would be no, because is it necessary? Probably not. Is it proportionate? I don’t think it is. Would it be proportionate to have individuals who are outstanding for shoplifting from the Southend area? Yes, because it’s local.” However, the officer also said that, on most occasions, the systems would be deployed to catch “our most serious offenders”, as this would be easier to justify from a public perception point of view. They added that, during the summer, it would be easier to justify deployments because of the seasonal population increase in Southend. “We know that there is a general increase in violence during those months. So, we don’t need to go down to the weeds to specifically look at grievous bodily harm [GBH] or murder or rape, because they’re not necessarily fuelled by a spike in terms of seasonality, for example,” they said. “However, we know that because the general population increases significantly, the level of violence increases significantly, which would justify that I could put those serious crimes on that watchlist.” Commenting on the responses given to the ethics panel, Yeung said they “failed entirely to provide me with confidence that their proposed deployments will have the required legal safeguards in place”. According to the Court of Appeal judgment against South Wales Police in the Bridges case, the force’s facial recognition policy contained “fundamental deficiencies” in relation to the “who” and “where” question of LFR. “In relation to both of those questions, too much discretion is currently left to individual police officers,” it said. 
“It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFR [automated facial recognition] can be deployed.”

Yeung added: “The same applies to these responses of Essex Police force, failing to adequately answer the ‘who’ and ‘where’ questions concerning their proposed facial recognition deployments.

“Worse still, the court stated that a police force’s local policies can only satisfy the requirements that the privacy interventions arising from use of LFR are ‘prescribed by law’ if they are published. The documents were obtained by Big Brother Watch through freedom of information requests, strongly suggesting that even these basic legal safeguards are not being met.”

Yeung added that South Wales Police’s use of the technology was found to be unlawful in the Bridges case because there was excessive discretion left in the hands of individual police officers, allowing undue opportunities for arbitrary decision-making and abuses of power.

“Every decision – where you will deploy, whose face is placed on the watchlist and why, and the duration of deployment – must be specified in advance, documented and justified in accordance with the tests of proportionality and necessity,” she said. “I don’t see any of that happening. There are simply vague claims that ‘we’ll make sure we apply the legal test’, but how?
They just offer unsubstantiated promises that ‘we will abide by the law’ without specifying how they will do so by meeting specific legal requirements.”

Yeung further added that these documents indicate the police force is not looking for specific people wanted for serious crimes, but setting up dragnets for a wide variety of “wanted” individuals, including those wanted for non-serious crimes such as shoplifting.

“There are many platitudes about being ethical, but there’s nothing concrete indicating how they propose to meet the legal tests of necessity and proportionality,” she said. “In liberal democratic societies, every single decision about an individual by the police made without their consent must be justified in accordance with law. That means that the police must be able to justify and defend the reasons why every single person whose face is uploaded to the facial recognition watchlist meets the legal test, based on their specific operational purpose.”

Yeung concluded that, assuming they can do this, police must also consider the equality impacts of their actions, and how different groups are likely to be affected by their practical deployments: “I don’t see any of that.”

In response to the concerns raised around watchlist creation, proportionality and necessity, an Essex Police spokesperson said: “The watchlists for each deployment are created to identify specific people wanted for specific crimes and to enforce orders. To date, we have focused on the types of offences which cause the most harm to our communities, including our hardworking businesses.

“This includes violent crime, drugs, sexual offences and thefts from shops. As a result of our deployments, we have arrested people wanted in connection with attempted murder investigations, high-risk domestic abuse cases, GBH, sexual assault, drug supply and aggravated burglary offences.
We have also been able to progress investigations and move closer to securing justice for victims.”
  • Trump admin tells Supreme Court: DOGE needs to do its work in secret

    DOGE in court

    DOJ complains of "sweeping, intrusive discovery" after DOGE refused FOIA requests.

    Jon Brodkin



    May 21, 2025 5:08 pm


    A protest over DOGE's reductions to the federal workforce outside the Jacob K. Javits Federal Office Building on March 19, 2025 in New York City.

    Credit:

    Getty Images | Michael M. Santiago



    The Department of Justice today asked the Supreme Court to block a ruling that requires DOGE to provide information about its government cost-cutting operations as part of court-ordered discovery.
    President Trump's Justice Department sought an immediate halt to orders issued by US District Court for the District of Columbia. US Solicitor General John Sauer argued that the Department of Government Efficiency is exempt from the Freedom of Information Act (FOIA) as a presidential advisory body and not an official "agency."
    The district court "ordered USDS [US Doge Service] to submit to sweeping, intrusive discovery just to determine if USDS is subject to FOIA in the first place," Sauer wrote. "That order turns FOIA on its head, effectively giving respondent a win on the merits of its FOIA suit under the guise of figuring out whether FOIA even applies. And that order clearly violates the separation of powers, subjecting a presidential advisory body to intrusive discovery and threatening the confidentiality and candor of its advice, putatively to address a legal question that never should have necessitated discovery in this case at all."
    The nonprofit watchdog group Citizens for Responsibility and Ethics in Washington (CREW) filed FOIA requests seeking information about DOGE and sued after DOGE officials refused to provide the requested records.
    US District Judge Christopher Cooper has so far sided with CREW. Cooper decided in March that "USDS is likely covered by FOIA and that the public would be irreparably harmed by an indefinite delay in unearthing the records CREW seeks," ordering DOGE "to process CREW's request on an expedited timetable."

    Judge: DOGE is not just an advisor
    DOGE then asked the district court for a summary judgment in its favor, and CREW responded by filing a motion for expedited discovery "seeking information relevant to whether USDS wields substantial authority independent of the President and is therefore subject to FOIA." In an April 15 order, Cooper ruled that CREW is entitled to limited discovery into the question of whether DOGE is wielding authority sufficient to bring it within the purview of FOIA. Cooper hasn't yet ruled on the motion for summary judgment.
    "The structure of USDS and the scope of its authority are critical to determining whether the agency is 'wield[ing] substantial authority independently of the President,'" the judge wrote. "And the answers to those questions are unclear from the record."
    Trump's executive orders appear to support CREW's argument by suggesting "that USDS is exercising substantial independent authority," Cooper wrote. "As the Court already noted, the executive order establishing USDS 'to implement the President's DOGE Agenda' appears to give USDS the authority to carry out that agenda, 'not just to advise the President in doing so.'"
    Not satisfied with the outcome, the Trump administration tried to get Cooper's ruling overturned in the US Court of Appeals for the District of Columbia Circuit. The appeals court ruled against DOGE last week. The appeals court temporarily stayed the district court order in April, but dissolved the stay on May 14 and denied the government's petition.
    "The government contends that the district court's order permitting narrow discovery impermissibly intrudes upon the President's constitutional prerogatives," the appeals court said. But "the discovery here is modest in scope and does not target the President or any close adviser personally. The government retains every conventional tool to raise privilege objections on the limited question-by-question basis foreseen here on a narrow and discrete ground."

    US argues for secrecy
    A three-judge panel at the appeals court was unswayed by the government's claim that this process is too burdensome.
    "Although the government protests that any such assertion of privilege would be burdensome, the only identified burdens are limited both by time and reach, covering as they do records within USDS's control generated since January 20," the ruling said. "It does not provide any specific details as to why accessing its own records or submitting to two depositions would pose an unbearable burden."
    Yesterday, the District Court set a discovery schedule requiring the government to produce all responsive documents within 14 days and complete depositions within 24 days. In its petition to the Supreme Court today, the Trump administration argued that DOGE's recommendations to the president should be kept secret:
    The district court's requirement that USDS turn over the substance of its recommendations—even when the recommendations were "purely advisory"—epitomizes the order's overbreadth and intrusiveness. The court's order compels USDS to identify every "federal agency contract, grant, lease or similar instrument that any DOGE employee or DOGE Team member recommended that federal agencies cancel or rescind," and every "federal agency employee or position that any DOGE employee or DOGE team member recommended" for termination or placement on administrative leave. Further, USDS must state "whether [the] recommendation was followed."
    It is difficult to imagine a more grievous intrusion and burden on a presidential advisory body. Providing recommendations is the core of what USDS does. Because USDS coordinates with agencies across the Executive Branch on an ongoing basis, that request requires USDS to review multitudes of discussions that USDS has had every day since the start of this Administration. And such information likely falls within the deliberative-process privilege almost by definition, as internal executive-branch recommendations are inherently "pre-decisional" and "deliberative."
    Lawsuit: “No meaningful transparency” into DOGE
    The US further said the discovery "is unnecessary to answer the legal question whether USDS qualifies as an 'agency' that is subject to FOIA," and is merely "a fishing expedition into USDS's advisory activities under the guise of determining whether USDS engages in non-advisory activities—an approach to discovery that would be improper in any circumstance."

    CREW, like others that have sued the government over DOGE's operations, says the entity exercises significant power without proper oversight and transparency. DOGE "has worked in the shadows—a cadre of largely unidentified actors, whose status as government employees is unclear, controlling major government functions with no oversight," CREW's lawsuit said. "USDS has provided no meaningful transparency into its operations or assurances that it is maintaining proper records of its unprecedented and legally dubious work."
    The Trump administration is fighting numerous DOGE-related lawsuits at multiple levels of the court system. Earlier this month, the administration asked the Supreme Court to restore DOGE's access to Social Security Administration records after losing on the issue in both a district court and appeals court. That request to the Supreme Court is pending.
    There was also a dispute over discovery when 14 states sued the federal government over Trump "delegat[ing] virtually unchecked authority to Mr. Musk without proper legal authorization from Congress and without meaningful supervision of his activities." A federal judge ruled that the states could serve written discovery requests on Musk and DOGE, but the DC Circuit appeals court blocked the discovery order. In that case, appeals court judges said the lower-court judge should have ruled on a motion to dismiss before allowing discovery.

    Jon Brodkin
    Senior IT Reporter

    Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

    73 Comments
    #trump #admin #tells #supreme #court
    Trump admin tells Supreme Court: DOGE needs to do its work in secret
    DOGE in court Trump admin tells Supreme Court: DOGE needs to do its work in secret DOJ complains of "sweeping, intrusive discovery" after DOGE refused FOIA requests. Jon Brodkin – May 21, 2025 5:08 pm | 73 A protest over DOGE's reductions to the federal workforce outside the Jacob K. Javits Federal Office Building on March 19, 2025 in New York City. Credit: Getty Images | Michael M. Santiago A protest over DOGE's reductions to the federal workforce outside the Jacob K. Javits Federal Office Building on March 19, 2025 in New York City. Credit: Getty Images | Michael M. Santiago Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more The Department of Justice today asked the Supreme Court to block a ruling that requires DOGE to provide information about its government cost-cutting operations as part of court-ordered discovery. President Trump's Justice Department sought an immediate halt to orders issued by US District Court for the District of Columbia. US Solicitor General John Sauer argued that the Department of Government Efficiency is exempt from the Freedom of Information Actas a presidential advisory body and not an official "agency." The district court "ordered USDSto submit to sweeping, intrusive discovery just to determine if USDS is subject to FOIA in the first place," Sauer wrote. "That order turns FOIA on its head, effectively giving respondent a win on the merits of its FOIA suit under the guise of figuring out whether FOIA even applies. And that order clearly violates the separation of powers, subjecting a presidential advisory body to intrusive discovery and threatening the confidentiality and candor of its advice, putatively to address a legal question that never should have necessitated discovery in this case at all." 
The nonprofit watchdog group Citizens for Responsibility and Ethics in Washingtonfiled FOIA requests seeking information about DOGE and sued after DOGE officials refused to provide the requested records. US District Judge Christopher Cooper has so far sided with CREW. Cooper decided in March that "USDS is likely covered by FOIA and that the public would be irreparably harmed by an indefinite delay in unearthing the records CREW seeks," ordering DOGE "to process CREW's request on an expedited timetable." Judge: DOGE is not just an advisor DOGE then asked the district court for a summary judgment in its favor, and CREW responded by filing a motion for expedited discovery "seeking information relevant to whether USDS wields substantial authority independent of the President and is therefore subject to FOIA." In an April 15 order, Cooper ruled that CREW is entitled to limited discovery into the question of whether DOGE is wielding authority sufficient to bring it within the purview of FOIA. Cooper hasn't yet ruled on the motion for summary judgment. "The structure of USDS and the scope of its authority are critical to determining whether the agency is 'wieldsubstantial authority independently of the President,'" the judge wrote. "And the answers to those questions are unclear from the record." Trump's executive orders appear to support CREW's argument by suggesting "that USDS is exercising substantial independent authority," Cooper wrote. "As the Court already noted, the executive order establishing USDS 'to implement the President's DOGE Agenda' appears to give USDS the authority to carry out that agenda, 'not just to advise the President in doing so.'" Not satisfied with the outcome, the Trump administration tried to get Cooper's ruling overturned in the US Court of Appeals for the District of Columbia Circuit. The appeals court ruled against DOGE last week. 
    ARSTECHNICA.COM
    Trump admin tells Supreme Court: DOGE needs to do its work in secret
    DOJ complains of "sweeping, intrusive discovery" after DOGE refused FOIA requests.
    Jon Brodkin – May 21, 2025 5:08 pm
    A protest over DOGE's reductions to the federal workforce outside the Jacob K. Javits Federal Office Building on March 19, 2025 in New York City. Credit: Getty Images | Michael M. Santiago
    The Department of Justice today asked the Supreme Court to block a ruling that requires DOGE to provide information about its government cost-cutting operations as part of court-ordered discovery. President Trump's Justice Department sought an immediate halt to orders issued by the US District Court for the District of Columbia. US Solicitor General John Sauer argued that the Department of Government Efficiency is exempt from the Freedom of Information Act (FOIA) as a presidential advisory body and not an official "agency."
    The district court "ordered USDS [US Doge Service] to submit to sweeping, intrusive discovery just to determine if USDS is subject to FOIA in the first place," Sauer wrote. "That order turns FOIA on its head, effectively giving respondent a win on the merits of its FOIA suit under the guise of figuring out whether FOIA even applies. And that order clearly violates the separation of powers, subjecting a presidential advisory body to intrusive discovery and threatening the confidentiality and candor of its advice, putatively to address a legal question that never should have necessitated discovery in this case at all."
    The nonprofit watchdog group Citizens for Responsibility and Ethics in Washington (CREW) filed FOIA requests seeking information about DOGE and sued after DOGE officials refused to provide the requested records. US District Judge Christopher Cooper has so far sided with CREW. Cooper decided in March that "USDS is likely covered by FOIA and that the public would be irreparably harmed by an indefinite delay in unearthing the records CREW seeks," ordering DOGE "to process CREW's request on an expedited timetable."
    Judge: DOGE is not just an advisor
    DOGE then asked the district court for a summary judgment in its favor, and CREW responded by filing a motion for expedited discovery "seeking information relevant to whether USDS wields substantial authority independent of the President and is therefore subject to FOIA." In an April 15 order, Cooper ruled that CREW is entitled to limited discovery into the question of whether DOGE is wielding authority sufficient to bring it within the purview of FOIA. Cooper hasn't yet ruled on the motion for summary judgment.
    "The structure of USDS and the scope of its authority are critical to determining whether the agency is 'wield[ing] substantial authority independently of the President,'" the judge wrote. "And the answers to those questions are unclear from the record."
    Trump's executive orders appear to support CREW's argument by suggesting "that USDS is exercising substantial independent authority," Cooper wrote. "As the Court already noted, the executive order establishing USDS 'to implement the President's DOGE Agenda' appears to give USDS the authority to carry out that agenda, 'not just to advise the President in doing so.'"
    Not satisfied with the outcome, the Trump administration tried to get Cooper's ruling overturned in the US Court of Appeals for the District of Columbia Circuit. The appeals court ruled against DOGE last week.
    The appeals court temporarily stayed the district court order in April, but dissolved the stay on May 14 and denied the government's petition. "The government contends that the district court's order permitting narrow discovery impermissibly intrudes upon the President's constitutional prerogatives," the appeals court said. But "the discovery here is modest in scope and does not target the President or any close adviser personally. The government retains every conventional tool to raise privilege objections on the limited question-by-question basis foreseen here on a narrow and discrete ground."
    US argues for secrecy
    A three-judge panel at the appeals court was unswayed by the government's claim that this process is too burdensome. "Although the government protests that any such assertion of privilege would be burdensome, the only identified burdens are limited both by time and reach, covering as they do records within USDS's control generated since January 20," the ruling said. "It does not provide any specific details as to why accessing its own records or submitting to two depositions would pose an unbearable burden."
    Yesterday, the district court set a discovery schedule requiring the government to produce all responsive documents within 14 days and complete depositions within 24 days.
    In its petition to the Supreme Court today, the Trump administration argued that DOGE's recommendations to the president should be kept secret:
    The district court's requirement that USDS turn over the substance of its recommendations—even when the recommendations were "purely advisory"—epitomizes the order's overbreadth and intrusiveness.
    The court's order compels USDS to identify every "federal agency contract, grant, lease or similar instrument that any DOGE employee or DOGE Team member recommended that federal agencies cancel or rescind," and every "federal agency employee or position that any DOGE employee or DOGE team member recommended" for termination or placement on administrative leave. Further, USDS must state "whether [each] recommendation was followed." It is difficult to imagine a more grievous intrusion and burden on a presidential advisory body. Providing recommendations is the core of what USDS does. Because USDS coordinates with agencies across the Executive Branch on an ongoing basis, that request requires USDS to review multitudes of discussions that USDS has had every day since the start of this Administration. And such information likely falls within the deliberative-process privilege almost by definition, as internal executive-branch recommendations are inherently "pre-decisional" and "deliberative."
    Lawsuit: "No meaningful transparency" into DOGE
    The US further said the discovery "is unnecessary to answer the legal question whether USDS qualifies as an 'agency' that is subject to FOIA," and is merely "a fishing expedition into USDS's advisory activities under the guise of determining whether USDS engages in non-advisory activities—an approach to discovery that would be improper in any circumstance."
    CREW, like others that have sued the government over DOGE's operations, says the entity exercises significant power without proper oversight and transparency. DOGE "has worked in the shadows—a cadre of largely unidentified actors, whose status as government employees is unclear, controlling major government functions with no oversight," CREW's lawsuit said. "USDS has provided no meaningful transparency into its operations or assurances that it is maintaining proper records of its unprecedented and legally dubious work."
    The Trump administration is fighting numerous DOGE-related lawsuits at multiple levels of the court system. Earlier this month, the administration asked the Supreme Court to restore DOGE's access to Social Security Administration records after losing on the issue in both a district court and an appeals court. That request to the Supreme Court is pending.
    There was also a dispute over discovery when 14 states sued the federal government over Trump "delegat[ing] virtually unchecked authority to Mr. Musk without proper legal authorization from Congress and without meaningful supervision of his activities." A federal judge ruled that the states could serve written discovery requests on Musk and DOGE, but the DC Circuit appeals court blocked the discovery order. In that case, appeals court judges said the lower-court judge should have ruled on a motion to dismiss before allowing discovery.
    Jon Brodkin is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.
  • Darth Jar Jar is now in Fortnite but it’s locked behind a tedious grind

    You can trust VideoGamer. Our team of gaming experts spends hours testing and reviewing the latest games to ensure you're reading the most comprehensive guide possible. Rest assured, all imagery and advice is unique and original. Check out how we test and review games here

    Epic Games took players to a Star Wars fantasy with Fortnite’s Galactic Battle mini-season, transforming the island into a war of Jedi and Sith, featuring new locations, force powers, lightsabers, and starfighters. Players are completing challenges, such as the Han Solo Found challenges, and gaining XP in Battle Royale, Creative, and Zero Build modes to earn exclusive rewards.
    The season’s Battle Pass and Item Shop are brimming with Star Wars skins, such as General Grievous, Mace Windu, Poe Dameron, Captain Phasma, and a customizable Mandalorian, as well as added rewards like the Vanguard Zadie. To add to the excitement, an AI-powered Darth Vader NPC, voiced by James Earl Jones, was also added to the game for fans.
    The Darth Jar Jar skin, the galaxy's most anticipated arrival, transforms the clumsy Gungan into a terrifying Sith Lord, finally realizing a long-held fan theory. However, to get the skin, players must endure a tedious grind that has become a major source of frustration.
    Fortnite players feel exploited with new Darth Jar Jar skin grind
    The Darth Jar Jar skin, which was released on May 17 as part of Fortnite's Galactic Battle mini-season, has sparked outrage among players due to its steep 1.28 million XP requirement—roughly 16 Battle Pass levels—to unlock the purchase option in the Item Shop's Prove Your Power tab, as well as a 1,500 V-Bucks ($15) price tag for the skin and 3,200 V-Bucks for the full bundle.
    To buy Darth Jar Jar, you’ll need to earn around 1 million XP. Image by VideoGamer.
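    For context, here's a quick sketch of the grind math. The inputs (1.28 million XP, roughly 16 Battle Pass levels, the May 17 release, and the June 6 deadline) come from this article; the derived per-level and per-day rates are our own back-of-the-envelope arithmetic, not official Epic figures.

    ```python
    from datetime import date

    # Figures reported in the article; the rates below are simple derivations.
    TOTAL_XP = 1_280_000   # XP required to unlock the purchase option
    LEVELS = 16            # rough Battle Pass level equivalent

    # Days between the skin's release and the season deadline.
    days_left = (date(2025, 6, 6) - date(2025, 5, 17)).days

    print(TOTAL_XP // LEVELS)     # XP per Battle Pass level: 80000
    print(days_left)              # days to finish the grind: 20
    print(TOTAL_XP // days_left)  # XP needed per day: 64000
    ```

    In other words, a player starting on release day needs to bank about 64,000 XP every single day to make the cutoff, which is why casual players are calling the grind unreasonable.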
    This dual barrier has left players feeling exploited, and many have taken to social media to vent. Casual gamers juggling work and school will find the grind—demanding hours in Battle Royale, Creative, or Zero Build modes—nearly impossible to finish by the season's June 6 deadline. One X user complained, "Epic has turned a fun meme into a full-time job. 1.28M XP? I'm not giving up my life for this!"
    The XP gate, combined with the high price, has drawn allegations of exploitative monetization. Players believe Epic is cashing in on Star Wars nostalgia by not offering a free unlock path like previous event rewards. Others underscore the grind's inaccessibility, with one user saying, "I'm a Jar Jar stan, but this feels like a cash grab disguised as a challenge." Even dedicated players who have cleared the XP hurdle balk at the V-Bucks cost, especially after putting in so much effort.
    Some have compared it unfavorably to previous skins, such as General Grievous, which required less grinding. Players feel Epic has squandered a beloved fan theory's big moment, with one comment summarizing: "Darth Jar Jar could've been legendary, but Epic's greed made it a chore."
    While this isn't the first time Epic has gated an Item Shop skin behind XP, the Renegade Raider comeback was at least worth the grind. If you're looking to rack up XP faster, check out our full list of the best XP maps to play.

    Fortnite

    Platform:
    Android, iOS, macOS, Nintendo Switch, PC, PlayStation 4, PlayStation 5, Xbox One, Xbox Series S/X

    Genre:
    Action, Massively Multiplayer, Shooter


  • Here’s How To Beat That Damn Lampmaster In Clair Obscur: Expedition 33

    No matter who it is, from Goro to General Grievous, you just can't trust a guy with more than two arms. The creep factor is part of it, but I think you just learn fast that while two of those arms are fine, the most devastating attack always comes from the other two arms you didn't even know were there.


    That goes double for the Lampmaster boss, who's one of the more persistent pains in the ass you'll meet in your travels through the Paintress' domain in Clair Obscur, and they're all arms. That doesn't mean they're invincible, but it does mean you're gonna be putting in a little overtime taking this ugly son of a bitch down. Let's hit the lights.
    Screenshot: Sandfall Interactive / Justin Clark / Kotaku
    You're gonna be playing around with Free Aim a lot during this fight; that's really the main gimmick with this guy. That won't be a huge problem later on, for reasons we'll get into, but at the outset, it means blowing a lot of AP on things that aren't the major skills you need to do the real damage. So anything you can slap onto your characters to keep them flush with AP is gonna be your best friend. Thankfully, aside from an obvious resistance to Light, they don't have any other major restrictions, so getting some good Burn Pictos/Lumina going is gonna be pretty effective here, same for letting Sciel load up some Foretell.
    Screenshot: Sandfall Interactive / Justin Clark / Kotaku
    The second you get control, you're gonna want to spend the AP to Free Aim and shatter the lamps floating around Lampmaster's head before they fire a couple of hard-hitting projectiles at you. Feel free to take out all four, but if you've got the reflexes, you can keep one lamp alive just to parry that shot for massive damage. Aside from that, Lampmaster's got a fairly basic move set of combos and jump attacks. The big issue is that there are so many goddamn arms that it's hard to figure out where the hit's coming from. The easy tip is that only one of the arms is holding the sword, but even that can be hard to see at this stage thanks to the dark background.
    Still, if you can get a visual grasp on it, that's what you're keying off of. The only other move of note is an energy blast where the lights in their hands turn purple and they hit you with a wave of Dark energy. The dodge window there comes after the second gathering of purple energy, so go with the visual cue. Aside from those, they have no weaknesses and are only resistant to Light. You're mostly just laying into this guy with any and everything you've got. Stack up a ton of Burns and Overdrives and that lifebar will go down fast.
    Screenshot: Sandfall Interactive / Justin Clark / Kotaku
    And just when everything seems to be going so well, you find out this asshole has a second phase. Here, their sword combo is now four hits and has some genuinely weird delays and timing that make it hard to dodge. The best we can tell you is that their sword will shake a little before the first hit swings down, and the big delay is between the second and third hits. The good news is that it's easier to see them against the new backdrop, and no individual hit in the combo is dangerous on its own; just try your damndest not to eat all four strikes. They also get a second four-hit combo that's easier to dodge, but the final hit has them flying backwards to charge at you. There's a subtle audio cue when the charge is coming, and it's easily parried. Take advantage of that.
    The real new shit in their arsenal is a move where they summon a sword made of light after a strange ritual with the lamps. The dodge timing on the Sword of Light isn't terrible; just wait for them to spin around after drawing the sword. But here's a better idea: stop them from pulling that shit in the first place. Just before they bust out the new sword, you'll get a message about them performing a strange ritual with their lamps. The lamps in their hand will glow red in a particular order. Your job is to shoot those four lamps in Free Aim in that exact order.
    Get it right, and the Sword of Light never happens, and you do a nice chunk of damage to Lampmaster in the process. They also have a Ball of Light attack that telegraphs itself from a mile away, but you'll need to jump over that one rather than dodge it.
    Screenshot: Sandfall Interactive / Justin Clark / Kotaku
    As before, keep up the steady Burning for passive damage and unload with everything you have, and they'll go down, but recognizing the dodge on each of these attacks is crucial. When they're done for, you'll get the At Death's Door Picto along with a Shape of Life, some Chroma Catalysts, and some permanent emotional damage once a certain someone shows up and...well, you'll see. Bring Kleenex.
    Clair Obscur: Expedition 33 is available now on PS5, Xbox Series X/S, and Windows PCs.
    #heres #how #beat #that #damn
    Here’s How To Beat That Damn Lampmaster In Clair Obscur: Expedition 33
    Jump ToNo matter who it is, from Goro to General Grievous, you just can’t trust a guy with more than two arms. The creep factor is part of it, but I think you just learn fast that while two of those arms are fine, the most devastating attack always comes, from the other two arms you didn’t even know were there. Suggested ReadingNintendo Switch 2 Price Is Set at for Now, But Could Go Higher Share SubtitlesOffEnglishSuggested ReadingNintendo Switch 2 Price Is Set at for Now, But Could Go Higher Share SubtitlesOffEnglishNintendo Switch 2 Price Is Set at for Now, But Could Go HigherRead More: That goes double for the Lampmaster boss, who’s one of the more persistent pains in the ass you’ll meet in your travels through the Paintress’ domain in Clair Obscur, and they’re all arms. That doesn’t mean they’re invincible, but it does mean you’re gonna be putting in a little overtime taking this ugly son of a bitch down. Let’s hit the lights. Screenshot: Sandfall Interactive / Justin Clark / KotakuYou’re gonna be playing around with Free Aim a lot during the course of this fight; That’s really the main gimmick with this guy. That won’t be a huge problem later on, for reasons we’ll get into, but at the outset, it means blowing a lot of AP on things that aren’t the major skills you need to do the real damage. So, anything you can slap onto your characters to keep them flush with AP is gonna be your best friend. Thankfully, aside from an obvious resistance to Light, they don’t have any other major restrictions. So, getting some good Burn Pictos/Lumina going is gonna be pretty effective here, same for letting Sciel load up some Foretell. Screenshot: Sandfall Interactive / Justin Clark / KotakuSo, the second you get control, you’re gonna want to spend the AP to Free Aim and shatter the lamps floating around Lampmaster’s head before they fire a couple of hard-hitting projectiles at you. 
Feel free to take out all four, but if you’ve got the reflexes, you can keep one lamp alive just to parry that shot for massive damage. Aside from that, Lampmaster’s got a fairly basic move set of combos and jump attacks. The big issue is that there’s so many goddamn arms that it’s hard to figure out where the hit’s coming from. The easy tip is that only one of the arms is holding the sword, but even that can be hard to see at this stage thanks to the dark background. Still, if you can get a visual grasp on it, that’s what you’re keying off of.The only other move of note is an energy blast where the lights in their hands turn purple and they hit you with a wave of Dark energy. The dodge timing on that is that it’s hitting after the second gathering of purple energy. Go with the visual cue on that.Aside from those, they have no weaknesses and is only resistant to Light. You’re mostly just laying into this guy with any and everything you’ve got. Stack up a ton of Burns and Overdrives, that lifebar will go down fast. Screenshot: Sandfall Interactive / Justin Clark / KotakuAnd just when everything seems to be going so well, you find out this asshole has a second phase. Here, their sword combo is now four hits and has some genuinely weird delays and timing that makes it hard to dodge. Best we can tell you there is that their sword will shake a little before the first hit swings down; and the big delay is between the second and third hits. The good news is it’s easier to see them against the new backdrop, and no individual hit in the combo is dangerous on its own, just try your damndest not to eat all four strikes. They also get a second four-hit combo that’s easier to dodge, but the final hit has them flying backwards to charge at you. There’s a subtle audio cue when the charge is coming, and it’s easily parried. Take advantage of that. The real new shit in their arsenal includes a move where they summon a sword made of light after a strange ritual with the lamps. 
The dodge timing on the sword of light’s not terrible, just wait for them to spin around after drawing the sword. But here’s a better idea: Stop them from pulling that shit in the first place. Just before he busts out the new sword, you’ll get a message about them performing a strange ritual with their lamps. The lamps in their hand will glow red in a particular order. Your job is to shoot those four lamps in Free Aim in that exact order. Get it right, and Sword of Light never happens, and you do a nice chunk of damage to Lampmaster in the process. They also have a Ball of Light attack that telegraphs itself from a mile away, but you’ll need to jump the attack. Screenshot: Sandfall Interactive / Justin Clark / KotakuAs before, keep up the steady Burning for passive damage, and unload with everything you have, they’ll go down, but recognizing the dodge on these attacks is crucial. When they’re done for, you’ll get the At Death’s Door Picto along with a Shape of Life, some Chroma Catalysts, and some permanent emotional damage once a certain someone shows up and...well, you’ll see. Bring Kleenex. Clair Obscur: Expedition 33 is available now on PS5, Xbox Series X/S, and Windows PCs. #heres #how #beat #that #damn
    KOTAKU.COM
    Here’s How To Beat That Damn Lampmaster In Clair Obscur: Expedition 33
    Jump ToNo matter who it is, from Goro to General Grievous, you just can’t trust a guy with more than two arms. The creep factor is part of it, but I think you just learn fast that while two of those arms are fine, the most devastating attack always comes, from the other two arms you didn’t even know were there. Suggested ReadingNintendo Switch 2 Price Is Set at $450 for Now, But Could Go Higher Share SubtitlesOffEnglishSuggested ReadingNintendo Switch 2 Price Is Set at $450 for Now, But Could Go Higher Share SubtitlesOffEnglishNintendo Switch 2 Price Is Set at $450 for Now, But Could Go HigherRead More: That goes double for the Lampmaster boss, who’s one of the more persistent pains in the ass you’ll meet in your travels through the Paintress’ domain in Clair Obscur, and they’re all arms. That doesn’t mean they’re invincible, but it does mean you’re gonna be putting in a little overtime taking this ugly son of a bitch down. Let’s hit the lights. Screenshot: Sandfall Interactive / Justin Clark / KotakuYou’re gonna be playing around with Free Aim a lot during the course of this fight; That’s really the main gimmick with this guy. That won’t be a huge problem later on, for reasons we’ll get into, but at the outset, it means blowing a lot of AP on things that aren’t the major skills you need to do the real damage. So, anything you can slap onto your characters to keep them flush with AP is gonna be your best friend. Thankfully, aside from an obvious resistance to Light, they don’t have any other major restrictions. So, getting some good Burn Pictos/Lumina going is gonna be pretty effective here, same for letting Sciel load up some Foretell. Screenshot: Sandfall Interactive / Justin Clark / KotakuSo, the second you get control, you’re gonna want to spend the AP to Free Aim and shatter the lamps floating around Lampmaster’s head before they fire a couple of hard-hitting projectiles at you. 
Feel free to take out all four, but if you’ve got the reflexes, you can keep one lamp alive just to parry that shot for massive damage. Aside from that, Lampmaster’s got a fairly basic move set of combos and jump attacks. The big issue is that there are so many goddamn arms that it’s hard to figure out where the hit’s coming from. The easy tip is that only one of the arms is holding the sword, but even that can be hard to see at this stage thanks to the dark background. Still, if you can get a visual grasp on it, that’s what you’re keying off of.

The only other move of note is an energy blast where the lights in their hands turn purple and they hit you with a wave of Dark energy. The dodge timing is that it lands after the second gathering of purple energy, so go with the visual cue.

Aside from those, they have no weaknesses and are only resistant to Light. You’re mostly just laying into this guy with any and everything you’ve got. Stack up a ton of Burns and Overdrives, and that lifebar will go down fast.

Screenshot: Sandfall Interactive / Justin Clark / Kotaku

And just when everything seems to be going so well, you find out this asshole has a second phase. Here, their sword combo is now four hits with some genuinely weird delays and timing that make it hard to dodge. The best we can tell you is that their sword will shake a little before the first hit swings down, and the big delay is between the second and third hits. The good news is that they’re easier to see against the new backdrop, and no individual hit in the combo is dangerous on its own; just try your damndest not to eat all four strikes. They also get a second four-hit combo that’s easier to dodge, but the final hit has them flying backwards to charge at you. There’s a subtle audio cue when the charge is coming, and it’s easily parried, so take advantage of that.

The real new shit in their arsenal includes a move where they summon a sword made of light after a strange ritual with the lamps.
The dodge timing on the Sword of Light isn’t terrible; just wait for them to spin around after drawing the sword. But here’s a better idea: stop them from pulling that shit in the first place. Just before they bust out the new sword, you’ll get a message about them performing a strange ritual with their lamps. The lamps in their hands will glow red in a particular order. Your job is to shoot those four lamps in Free Aim in that exact order. Get it right, and the Sword of Light never happens, and you do a nice chunk of damage to Lampmaster in the process. They also have a Ball of Light attack that telegraphs itself from a mile away, but you’ll need to jump to avoid it.

Screenshot: Sandfall Interactive / Justin Clark / Kotaku

As before, keep up the steady Burning for passive damage and unload with everything you have. They’ll go down, but recognizing the dodge on these attacks is crucial. When they’re done for, you’ll get the At Death’s Door Picto along with a Shape of Life, some Chroma Catalysts, and some permanent emotional damage once a certain someone shows up and... well, you’ll see. Bring Kleenex.

Clair Obscur: Expedition 33 is available now on PS5, Xbox Series X/S, and Windows PCs.
  • How to get the General Grievous skin in Fortnite

    At long last, you can play as General Grievous in Fortnite, so long as you have the Chapter 6 Galactic Battle Pass. Grievous and his second outfit, Warlord Grievous, are locked to it, but unlike previous special pass skins there aren’t any specific quests you have to complete to unlock him.

    Below we explain how to unlock the General Grievous skin in Fortnite.

    How to get the General Grievous and Warlord Grievous skins in Fortnite

    All you have to do to grab both Grievous skins is own the battle pass and level up. You need to gain 28 new levels to unlock both versions of the skin, but if you just want the base General Grievous look, then you only need 14 levels. You can see the Warlord Grievous skin below.

    Between the two skins, there are also cosmetics that unlock every two levels, like a “Magnaguard Electrostaff” pickaxe and a “Separatist Cape” back bling.

    A few of the cosmetics can actually be claimed even if you don’t have the battle pass unlocked, as sweet little freebies. The full list of Grievous cosmetics and the levels required to get them are as follows:

    “Tsmeu-6 Wheel Bike” glider (two levels)
    “Grievous Grasp” emote (four levels)
    “Grievous vs. Windu” loading screen (six levels, no battle pass required)
    “Magnaguard Electrostaff” pickaxe (eight levels)
    Grievous banner icon (10 levels)
    “Separatist Cape” back bling (12 levels, no battle pass required)
    General Grievous skin and “Dual Staff Slam” built-in emote (14 levels)
    “Saber Collector” wrap (16 levels)
    “The Soulful One” guitar/back bling (18 levels)
    “Warlord’s Origin” loading screen (20 levels, no battle pass required)
    “Kaleesh Cape” back bling (22 levels)
    “Grievous Stare” emote (24 levels)
    “Soulless One Dropper” contrail (26 levels, no battle pass required)
    Warlord Grievous skin (28 levels)

    In short, if you’re a battle pass owner, you should be able to unlock Grievous just by playing the game.
    WWW.POLYGON.COM