WWW.TECHNOLOGYREVIEW.COM
How the largest gathering of US police chiefs is talking about AI
This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.

It can be tricky for reporters to get past certain doors, and the door to the International Association of Chiefs of Police conference is one that's almost perpetually shut to the media. So I was pleasantly surprised when I was able to attend for a day in Boston last month. It bills itself as the largest gathering of police chiefs in the United States, where leaders from many of the country's 18,000 police departments and even some from abroad convene for product demos, discussions, parties, and awards.

I went along to see how artificial intelligence was being discussed, and the message to police chiefs seemed crystal clear: If your department is slow to adopt AI, fix that now. The future of policing will rely on it in all its forms.

In the event's expo hall, the vendors (of which there were more than 600) offered a glimpse into the ballooning industry of police-tech suppliers. Some had little to do with AI: booths showcased body armor, rifles, and prototypes of police-branded Cybertrucks, and others displayed new types of gloves promising to protect officers from needles during searches. But one needed only to look to where the largest crowds gathered to understand that AI was the major draw.

The hype focused on three uses of AI in policing. The first was virtual reality training. The pitch is that in the long run, it can be cheaper and more engaging than training with actors or in a classroom. "If you're enjoying what you're doing, you're more focused and you remember more than when looking at a PDF and nodding your head," V-Armed CEO Ezra Kraus told me.

The effectiveness of VR training systems has yet to be fully studied, and they can't completely replicate the nuanced interactions police have in the real world. AI is not yet great at the soft skills required for interactions with the public.
At a different company's booth, I tried out a VR system focused on de-escalation training, in which officers were tasked with calming down an AI character in distress. It suffered from lag and was generally quite awkward: the character's answers felt overly scripted and programmatic.

The second focus was on the changing way police departments are collecting and interpreting data. Police chiefs attended classes on how to build these systems, like one taught by Microsoft and the NYPD about the Domain Awareness System, a web of license plate readers, cameras, and other data sources used to track and monitor crime in New York City. Crowds gathered at massive, high-tech booths from Axon and Flock, both sponsors of the conference. Flock sells a suite of cameras, license plate readers, and drones, offering AI to analyze the data coming in and trigger alerts. These sorts of tools have come in for heavy criticism from civil liberties groups, which see them as an assault on privacy that does little to help the public.

Finally, as in other industries, AI is also coming for the drudgery of administrative tasks and reporting. "We've got this thing on an officer's body, and it's recording all sorts of great stuff about the incident," Bryan Wheeler, a senior vice president at Axon, told me at the expo. "Can we use it to give the officer a head start?"

On the surface, it's a writing task well suited for AI, which can quickly summarize information and write in a formulaic way. It could also save lots of time officers currently spend on writing reports. But given that AI is prone to hallucination, there's an unavoidable truth: Even if officers are the final authors of their reports, departments adopting these sorts of tools risk injecting errors into some of the most critical documents in the justice system.
"Police reports are sometimes the only memorialized account of an incident," wrote Andrew Ferguson, a professor of law at American University, in July in the first law review article about the serious challenges posed by police reports written with AI. Because criminal cases can take months or years to get to trial, the accuracy of these reports is critically important. Whether certain details were included or left out can affect the outcomes of everything from bail amounts to verdicts.

By showing an officer a generated version of a police report, the tools also expose officers to details from their body camera recordings before they complete their report, a document intended to capture the officer's memory of the incident. That poses a problem.

"The police certainly would never show video to a bystander eyewitness before they ask the eyewitness about what took place, as that would just be investigatory malpractice," says Jay Stanley, a senior policy analyst with the ACLU Speech, Privacy, and Technology Project, who will soon publish work on the subject.

A spokesperson for Axon says this concern isn't reflective of how the tool is intended to work, and that Draft One has robust features to make sure officers read the reports closely, add their own information, and edit the reports for accuracy before submitting them.

My biggest takeaway from the conference was simply that the way US police are adopting AI is inherently chaotic. There is no one agency governing how they use the technology, and the roughly 18,000 police departments in the United States (the precise figure is not even known) have remarkably high levels of autonomy to decide which AI tools they'll buy and deploy. The police-tech companies that serve them will build the tools police departments find attractive, and it's unclear if anyone will draw proper boundaries for ethics, privacy, and accuracy. That will only be made more apparent in an upcoming Trump administration.
In a policing agenda released last year during his campaign, Trump encouraged more aggressive tactics like stop-and-frisk, deeper cooperation with immigration agencies, and increased liability protection for officers accused of wrongdoing. The Biden administration is now reportedly attempting to lock in some of its proposed policing reforms before January.

Without federal regulation on how police departments can and cannot use AI, the lines will be drawn by departments and police-tech companies themselves. "Ultimately, these are for-profit companies, and their customers are law enforcement," says Stanley. "They do what their customers want, in the absence of some very large countervailing threat to their business model."

Now read the rest of The Algorithm

Deeper Learning

The AI lab waging a guerrilla war over exploitative AI

When generative AI tools landed on the scene, artists were immediately concerned, seeing them as a new kind of theft. Computer security researcher Ben Zhao jumped into action in response, and his lab at the University of Chicago started building tools like Nightshade and Glaze to help artists keep their work from being scraped up by AI models. My colleague Melissa Heikkilä spent time with Zhao and his team to look at the ongoing effort to make these tools strong enough to stop AI's relentless hunger for more images, art, and data to train on.

Why this matters: The current paradigm in AI is to build bigger and bigger models, and these require vast data sets to train on. Tech companies argue that anything on the public internet is fair game, while artists demand compensation or the right to refuse. Settling this fight in the courts or through regulation could take years, so tools like Nightshade and Glaze are what artists have for now. If the tools disrupt AI companies' efforts to make better models, that could push them to the negotiating table to bargain over licensing and fair compensation. But it's a big if. Read more from Melissa Heikkilä.
Bits and Bytes

Tech elites are lobbying Elon Musk for jobs in Trump's administration
Elon Musk is the tech leader who most has Trump's ear. As such, he's reportedly the conduit through which AI and tech insiders are pushing to have an influence in the incoming administration. (The New York Times)

OpenAI is getting closer to launching an AI agent to automate your tasks
AI agents, models that can do tasks on your behalf, are all the rage. OpenAI is reportedly closer to releasing one, news that comes a few weeks after Anthropic announced its own. (Bloomberg)

How this grassroots effort could make AI voices more diverse
A massive volunteer-led effort to collect training data in more languages, from people of more ages and genders, could help make the next generation of voice AI more inclusive and less exploitative. (MIT Technology Review)

Google DeepMind has a new way to look inside an AI's mind
Autoencoders let us peer into the black box of artificial intelligence. They could help us create AI that is better understood and more easily controlled. (MIT Technology Review)

Musk has expanded his legal assault on OpenAI to target Microsoft
Musk has expanded his federal lawsuit against OpenAI, which alleges that the company has abandoned its nonprofit roots and obligations. He's now going after Microsoft too, accusing it of antitrust violations in its work with OpenAI. (The Washington Post)