Police Use of Facial Recognition Backfires Spectacularly When It Renders Them Unable to Convict Alleged Murderer
futurism.com
We may be hurtling toward a violent techno-dystopia, but it isn't here yet.

The Cleveland Plain Dealer revealed that police in Cleveland, Ohio just botched an important murder case by relying on evidence given to them by AI.

Cleveland detectives were stumped while investigating the February 2024 murder of 33-year-old Cleveland native Blake Story. So they did what any good investigator would do: they sent security footage of a suspect to the Northeast Ohio Regional Fusion Center, a command center for intelligence and data gathering shared by local, state, and federal agencies, to plug into its AI program.

The Fusion Center has been employing bold new AI products to help its staff cut down on their workload. One of those products was Clearview, the controversial New York-based facial-recognition platform that's facing a class-action federal lawsuit following the New York Times' bombshell revelation of its existence.

The Fusion Center kicked back a positive ID on a man named Qeyeon Tolbert. Local police wasted no time securing a "no-knock" warrant for Tolbert's girlfriend's home, where they allegedly gathered additional evidence to build a case leading to his arrest and subsequent murder trial.

The issue comes from the chain of logic that led police to identify Tolbert. Rather than building a case on gunpowder residue, DNA matching, eyewitness accounts, or cell tower pings, police essentially used Clearview to match CCTV footage of the murder to CCTV footage of a random man they thought matched the suspect's description.

A judge has since suppressed the AI evidence and all evidence predicated on the faulty warrant, though county prosecutors are appealing the ruling.

Clearview will likely want to have nothing to do with the incident.
A disclaimer on its website states that "Clearview makes no guarantees as to the accuracy of its search-identification software."

That copy, however, is not nearly as prominent on the startup's website as blurbs bragging about "50+ Billion images in our law enforcement database, the largest in the world by far," or a claimed "99+ percent accuracy for all demographics."

And though Clearview's software may not be intended to prosecute American citizens, yet the website is clearly billed as a tool for law enforcement and judicial entities, bragging on its front page that a "public defender [used] facial recognition to locate a key witness in a vehicular homicide exoneration case."

Legalese notwithstanding, a Washington Post investigation has found that at least eight Americans have been falsely arrested based solely on the positive IDs of facial recognition AI. In every one of those cases, the person arrested was Black.

And that number is increasing as overzealous police, whether aware of AI's reliability issues or not, take AI panopticon tech as gospel, often choosing to arrest first and ask questions later. The ACLU notes that "even when police heed warnings to take additional investigative steps, they exacerbate the unreliability of [facial recognition] results."

As tech companies seek bold and innovative use cases in order to recoup astronomical startup costs, the burden of proof should be shared by the judicial system, law enforcement, and tech companies alike, not buried deep in the metadata.

More on AI surveillance: Schools Are Using AI Surveillance to Catch Students Vaping Inside Bathrooms