• Microsoft and Atom Computing combine for quantum error correction demo
    arstechnica.com
Atomic power? Microsoft and Atom Computing combine for quantum error correction demo. New work provides a good view of where the field currently stands. John Timmer | Nov 19, 2024 4:00 pm

The first-generation tech demo of Atom's hardware. Things have progressed considerably since. Credit: Atom Computing

In September, Microsoft made an unusual combination of announcements. It demonstrated progress with quantum error correction, something that will be needed for the technology to move much beyond the interesting demo phase, using hardware from a quantum computing startup called Quantinuum. At the same time, however, the company also announced that it was forming a partnership with a different startup, Atom Computing, which uses a different technology to make qubits available for computations.

Given that, it was probably inevitable that the folks in Redmond, Washington, would want to show that similar error correction techniques would also work with Atom Computing's hardware. It didn't take long, as the two companies are releasing a draft manuscript describing their work on error correction today. The paper serves as both a good summary of where things currently stand in the world of error correction and a good look at some of the distinct features of computation using neutral atoms.

Atoms and errors

While we have various technologies that provide a way of storing and manipulating bits of quantum information, none of them can be operated error-free. At present, errors make it difficult to perform even the simplest computations that are clearly beyond the capabilities of classical computers.
More sophisticated algorithms would inevitably encounter an error before they could be completed, a situation that would remain true even if we could somehow improve the hardware error rates of qubits by a factor of 1,000, something we're unlikely to ever be able to do.

The solution to this is to use what are called logical qubits, which distribute quantum information across multiple hardware qubits and allow the detection and correction of errors when they occur. Since multiple qubits get linked together to operate as a single logical unit, the hardware error rate still matters. If it's too high, then adding more hardware qubits just means that errors will pop up faster than they can possibly be corrected.

We're now at the point where, for a number of technologies, hardware error rates have passed the break-even point, and adding more hardware qubits can lower the error rate of a logical qubit based on them. This was demonstrated using neutral atom qubits by an academic lab at Harvard University about a year ago. The new manuscript demonstrates that it also works on a commercial machine from Atom Computing.

Neutral atoms, which can be held in place using a lattice of laser light, have a number of distinct advantages when it comes to quantum computing. Every single atom will behave identically, meaning that you don't have to manage the device-to-device variability that's inevitable with fabricated electronic qubits. Atoms can also be moved around, allowing any atom to be entangled with any other. This any-to-any connectivity can enable more efficient algorithms and error-correction schemes. The quantum information is typically stored in the spin of the atom's nucleus, which is shielded from environmental influences by the cloud of electrons that surrounds it, making these relatively long-lived qubits.

Operations, including gates and readout, are performed using lasers. The way the physics works, the spacing of the atoms determines how the laser affects them.
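The break-even behavior described above can be illustrated with a minimal sketch. This is not the code Microsoft or Atom Computing use; it models the simplest possible scheme (a classical repetition code with majority vote, standing in for a true quantum code) to show why adding hardware qubits only helps once the hardware error rate is low enough:

```python
from math import comb

def logical_error_rate(p, n):
    """Probability that a majority of n hardware qubits flip, which makes
    majority-vote correction pick the wrong value (a logical error)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Below break-even (p < 0.5 for this toy code), adding qubits helps:
p = 0.01
print(logical_error_rate(p, 3))   # ~3e-4, better than the raw 1% rate
print(logical_error_rate(p, 5))   # ~1e-5, better still

# Above break-even, more qubits make things worse:
print(logical_error_rate(0.6, 3))  # ~0.648, worse than the raw 60% rate
```

Real quantum codes must handle phase errors as well as flips and so need more overhead than this toy model, but the qualitative break-even behavior is the same.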
If two atoms are a critical distance apart, the laser can perform a single operation, called a two-qubit gate, that affects both of their states. Anywhere outside this distance, a laser only affects each atom individually. This allows fine control over gate operations.

That said, operations are relatively slow compared to some electronic qubits, and atoms can occasionally be lost entirely. The optical traps that hold atoms in place are also contingent upon the atom being in its ground state; if an atom ends up stuck in a different state, it will be able to drift off and be lost. This is actually somewhat useful, in that it converts an unexpected state into a clear error.

Atom Computing's system. Rows of atoms are held far enough apart so that a single laser sent across them (green bar) only operates on individual atoms. If the atoms are moved to the interaction zone (red bar), a laser can perform gates on pairs of atoms. Spaces where atoms can be held can be left empty to avoid performing unneeded operations. Credit: Reichardt, et al.

The machine used in the new demonstration hosts 256 of these neutral atoms. Atom Computing has them arranged in sets of parallel rows, with space in between to let the atoms be shuffled around. For single-qubit gates, it's possible to shine a laser across the rows, causing every atom it touches to undergo that operation. For two-qubit gates, pairs of atoms get moved to the end of the row and positioned a specific distance apart, at which point a laser will cause the gate to be performed on every pair present.

Atom's hardware also allows a constant supply of new atoms to be brought in to replace any that are lost. It's also possible to image the atom array in between operations to determine whether any atoms have been lost and whether any are in the wrong state.

It's only logical

As a general rule, the more hardware qubits you dedicate to each logical qubit, the more simultaneous errors you can identify.
This identification can enable two ways of handling the error. In the first, you simply discard any calculation with an error and start over. In the second, you can use information about the error to try to fix it, although the repair involves additional operations that can potentially trigger a separate error.

For this work, the Microsoft/Atom team used relatively small logical qubits (meaning they used very few hardware qubits), which meant they could fit more of them within the 256 total hardware qubits the machine made available. They also checked the error rate of both error detection with discard and error detection with correction.

The research team did two main demonstrations. One was placing 24 of these logical qubits into what's called a cat state, named after Schrödinger's hypothetical feline. This is when a quantum object simultaneously has a non-zero probability of being in two mutually exclusive states. In this case, the researchers placed 24 logical qubits in an entangled cat state, the largest ensemble of this sort yet created. Separately, they implemented what's called the Bernstein-Vazirani algorithm. The classical version of this algorithm requires individual queries to identify each bit in a string of them; the quantum version obtains the entire string with a single query, making it a notable case where a quantum speedup is possible.

Both of these showed a similar pattern. When done directly on the hardware, with each qubit being a single atom, there was an appreciable error rate. By detecting errors and discarding those calculations where they occurred, it was possible to significantly improve the error rate of the remaining calculations.
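The query gap behind the Bernstein-Vazirani algorithm just described is easy to see in a small simulation. This sketch is purely illustrative (the hidden string and the brute-force statevector approach are not how the team's hardware runs it): the classical routine must call the oracle once per bit, while the simulated quantum circuit recovers the whole string from a single oracle call.

```python
def oracle(s_bits, x_bits):
    """The black box f(x) = s.x mod 2 hiding the string s."""
    return sum(si & xi for si, xi in zip(s_bits, x_bits)) % 2

def classical_bv(s_bits):
    """Classical recovery: one oracle query per bit (probe each unit vector)."""
    n = len(s_bits)
    return [oracle(s_bits, [int(j == i) for j in range(n)]) for i in range(n)]

def quantum_bv(s_bits):
    """Statevector simulation of the quantum circuit: Hadamards, one phase-oracle
    call, Hadamards again -- the register ends up exactly in state |s>."""
    n = len(s_bits)
    N = 2 ** n
    s = int("".join(map(str, s_bits)), 2)
    # State after the first Hadamard layer and the single oracle query
    # (normalization dropped, since we only need the largest amplitude):
    amps = [(-1) ** bin(s & x).count("1") for x in range(N)]
    # Final Hadamard layer; all amplitude concentrates on the index y = s:
    out = [sum(a * (-1) ** bin(x & y).count("1") for x, a in enumerate(amps))
           for y in range(N)]
    measured = max(range(N), key=lambda y: abs(out[y]))
    return [int(b) for b in format(measured, f"0{n}b")]

s = [1, 0, 1, 1, 0, 1]
assert classical_bv(s) == s   # needed len(s) = 6 oracle calls
assert quantum_bv(s) == s     # the simulated circuit needed just one
```

On real hardware each of those logical operations is noisy, which is exactly why the error rates discussed above determine how far such algorithms can scale.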
Note that this doesn't eliminate errors, as it's possible for multiple errors to occur simultaneously, altering the value of the qubit without leaving an indication that can be spotted with these small logical qubits.

Discarding has its limits; as calculations become increasingly complex, involving more qubits or operations, they will inevitably contain an error, so you'd end up wanting to discard everything. That is why we'll ultimately need to correct the errors.

In these experiments, however, the process of correcting the error, which involves taking an entirely new atom and setting it into the appropriate state, was also error-prone. So, while it could be done, it ended up having an overall error rate that was intermediate between the rate when catching and discarding errors and the rate when operations were done directly on the hardware.

In the end, the current hardware has an error rate that's good enough that error correction actually improves the probability that a set of operations can be performed without producing an error, but not good enough that we can perform the sort of complex operations that would give quantum computers an advantage in useful calculations. And that's not just true for Atom's hardware; similar things can be said for other error-correction demonstrations done on different machines.

There are two ways to go beyond these current limits. One is simply to improve the error rates of the hardware qubits further, as fewer total errors make it more likely that we can catch and correct them. The second is to increase the qubit counts so that we can host larger, more robust logical qubits. We're obviously going to need to do both, and Atom's partnership with Microsoft was formed in the hope that it will help both companies get there faster.

John Timmer, Senior Science Editor. John is Ars Technica's science editor. He has a Bachelor of Arts in Biochemistry from Columbia University and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.
  • AI and the War Against Plastic Waste
    www.informationweek.com
Carrie Pallardy, Contributing Reporter | November 19, 2024 | 10 Min Read

Pollution floating in river, Mumbai, India. Credit: paul kennedy via Alamy Stock Photo

Plastic pollution is easy to visualize given that many rivers are choked with such waste and the oceans are littered with it. The Great Pacific Garbage Patch, a massive collection of plastic and other debris, is an infamous result of plastics proliferation. Even if you don't live near a body of water to see the problem firsthand, you're unlikely to walk far without seeing some piece of plastic crushed underfoot. But untangling this problem is anything but easy.

Enter artificial intelligence, which is being applied to many complex problems, including plastics pollution. InformationWeek spoke to research scientists and startup founders about why plastics waste is such a complicated challenge and how they use AI in their work.

The Plastics Problem

Plastic is ubiquitous today, as food packaging, clothing, medical devices, cars, and so much more rely on this material. Since 1950, nearly 10 billion metric tons of plastic has been produced, and over half of that was just in the last 20 years. "So, it's been this extremely prolific growth in production and use. It's partially due to just the absolute versatility of plastic," Chase Brewster, project scientist at Benioff Ocean Science Laboratory, a center for marine conservation at the University of California, Santa Barbara, says.

Plastic isn't biodegradable, and recycling is imperfect. As more plastic is produced and more of it is wasted, much of that waste ends up back in the environment, polluting land and water as it breaks down into microplastics and nanoplastics.

Even when plastic products end up at waste management facilities, processing them is not simple. "A lot of people think of plastic as just plastic," Bradley Sutliff, a former National Institute of Standards and Technology (NIST) researcher, says.
In reality, there are many different complex polymers that fall under the plastics umbrella. Recycling and reuse isn't just a matter of sorting; it's a chemistry problem, too. Not every type of plastic can be mixed and processed into a recycled material.

Plastic is undeniably convenient as a low-cost material used almost everywhere. It takes major shifts in behavior to reduce its consumption, a change that is not always feasible. Virgin plastic is cheaper than recycled plastic, which means companies are more likely to use the former. In turn, consumers are faced with the same economic choice, if they even have one.

There is no one single answer to solving this environmental crisis. "Plastic pollution is an economic, technical, educational, and behavioral problem," Joel Tasche, co-CEO and cofounder of CleanHub, a company focused on collecting plastic waste, says in an email interview.

So, how can AI arm organizations, policymakers, and people with the information and solutions to combat plastic pollution?

AI and Quantifying Plastic Waste

The problem of plastic waste is not new, but the sheer volume makes it difficult to gather the granular data necessary to truly understand the challenge and develop actionable solutions. "If you look at the body of research on plastic pollution, especially in the marine environment, there is a large gap in terms of actually in situ collected data," says Brewster.

The Benioff Ocean Science Laboratory is working to change that through the Clean Currents Coalition, which focuses on removing plastic waste from rivers before it has the chance to enter the ocean.
The Coalition is partnered with local organizations in nine different countries, representing a diverse group of river systems, to remove and analyze plastic pollution. "We started looking into what artificial intelligence can do to help us to collect that more fine data that can help drive our upstream action to reduce plastic production and plastic leaking into the environment in the first place," says Brewster.

The project is developing a machine learning model with hardware and software components. A web cam is positioned above the conveyor belts of large trash wheels used to collect plastic waste in rivers. Those cameras count and categorize trash as it is pulled from the river. "This system automatically [sends] that to the cloud, to a data set, visualizing that on a dashboard that can actively tell us what types of trash are coming out of the river and at what rate," Brewster explains. "We have this huge data set from all over the world, collected synchronously over three years during the same time period, very diverse cultures, communities, river sizes, river geomorphologies."

That data can be leveraged to gain more insight into what kinds of plastic end up in rivers, which flow to our oceans, and to inform targeted strategies for prevention and cleanup.

AI and Waste Management

Very little plastic is actually recycled: just 5%, with some being combusted and the majority ending up in landfills. Waste management plants face the challenge of sorting through a massive influx of material, some recyclable and some not.
And, of course, plastic is not one uniform group that can easily be processed into reusable material. AI and imaging equipment are being put to work in waste management facilities to tackle the complex job of sorting much more efficiently.

During Sutliff's time with NIST, a US government agency focused on industrial competitiveness, he worked with a team to explore how AI could make recycling less expensive. Waste management facilities can use near-infrared (NIR) light to visualize and sort plastics. Sutliff and his team looked to improve this approach with machine learning. "Our thought was that the computer might be a lot better at distinguishing which plastic is which if you teach it," he says. "You can get a pretty good prediction of things like density and crystallinity by using near infrared light if you train your models correctly."

The results of that work show promise, and Sutliff released the code to NIST's GitHub page. More accurate sorting can help waste management facilities monetize more recyclable materials, rather than incinerate them, send them to landfills, or potentially leak them back into the environment. "Recyclers are based off of sorting plastics and then selling them to companies that will use them. And obviously, the company buying them wants to know exactly what they're getting. So, the better the recyclers can sort it, the more profitable it is," Sutliff says.

There are other organizations working with waste collectors to improve sorting and identification. CleanHub, for example, developed a track-and-trace process. Waste collectors take photos and upload them to its AI-powered app. The app creates an audit trail, and machine learning predicts the composition and weight of the collected bags of trash.
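The "teach the computer which plastic is which" idea can be sketched in miniature. This is not the NIST code (which is on NIST's GitHub); the polymer labels, peak positions, and spectra below are synthetic stand-ins for real NIR measurements, and a nearest-centroid rule stands in for a trained model:

```python
import random
from math import exp

random.seed(0)
WAVELENGTHS = [1100 + 10 * i for i in range(60)]      # nm; a toy NIR band
PEAKS = {"PET": 1660, "HDPE": 1210, "PP": 1390}       # illustrative, not calibrated

def spectrum(polymer, noise=0.05):
    """Fake absorbance spectrum: one Gaussian peak per polymer, plus sensor noise."""
    c = PEAKS[polymer]
    return [exp(-((w - c) / 40) ** 2) + random.gauss(0, noise)
            for w in WAVELENGTHS]

def train(n=50):
    """'Teach' the sorter: average n noisy spectra per class into a centroid."""
    return {poly: [sum(col) / n
                   for col in zip(*(spectrum(poly) for _ in range(n)))]
            for poly in PEAKS}

def classify(spec, centroids):
    """Label an unknown spectrum by its nearest centroid (squared distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda poly: dist(spec, centroids[poly]))

centroids = train()
labels = list(PEAKS) * 20
hits = sum(classify(spectrum(poly), centroids) == poly for poly in labels)
print(f"{hits}/{len(labels)} correct")
```

Real sorting lines work from calibrated spectrometer data and use richer models, but the workflow is the same: collect labeled spectra, fit a model, then classify each object on the belt.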
"We focus on collecting both recyclable and non-recyclable plastics, directing recyclables back into the economy and converting non-recyclables into alternative fuels through co-processing, which minimizes environmental impact compared to traditional incineration," explains Tasche.

Greyparrot is an AI waste analytics company that started out by partnering with about a dozen recycling plants around the world, gathering a global data set to power its platform. Today, that platform provides facilities with insights into more than 89 different waste categories. Greyparrot's analyzers sit above the conveyor belts in waste management facilities, capturing images and sharing AI-powered insights. The latest generation of these analyzers is made of recyclable materials. "If a given plant processes 10 tons or 15 tons of waste per day, that accumulates to around like 20 million objects. We actually are looking at individually all those 20 million objects moving at two to three to four meters a second, very high-speed in real time," says Ambarish Mitra, co-founder of Greyparrot. "We are not only doing classification of the objects, which goes through a waste flow, we are [also] doing financial value extraction."

The more capable waste management facilities are of sorting and monetizing the plastic that flows into their operations, the more competitive the market for recycled materials can become. "The entire waste and recycling industry is in constant competition with the virgin material market. Everything that either lowers cost or increases the quality of the output product is a step towards a circular economy," says Tasche.

AI and a Policy Approach

Plastic waste is a problem with global stakes, and policymakers are paying attention. In 2022, the United Nations announced plans to create an international legally binding agreement to end plastic pollution.
The treaty is currently going through negotiations, with another session slated to begin in November. Scientists at the Benioff Ocean Science Laboratory and the Eric and Wendy Schmidt Center for Data Science & Environment at UC Berkeley developed the Global Plastics AI Policy Tool with the intention of understanding how different high-level policies could reduce plastic waste. "This is a real opportunity to actually quantify or estimate what the impact of some of the highest priority policies that are on the table for the treaty [is] going to be," says Neil Nathan, a project scientist at the Benioff Ocean Science Laboratory.

Of the 175 nations that agreed to create the global treaty to end plastic pollution, 60 have agreed to reach that goal by 2040. "Ending plastic pollution by 2040 seems like an incredibly ambitious goal. Is that even possible?" asks Nathan. "One of the biggest findings for us is that it actually is close to possible."

The AI tool leverages historic plastic consumption data, global trade data, and population data. Machine learning algorithms, such as random forests, uncover historical patterns in plastic consumption and waste and project how those patterns could change in the future. The team behind the tool has been tracking the policies up for discussion throughout the treaty negotiation process to evaluate which could have the biggest impact on outcomes like mismanaged waste, incinerated waste, and landfill waste.

Nathan offers the example of a minimum recycled content mandate. This is essentially requiring that new products are made with a certain percentage, in this case 40%, of post-consumer recycled content.
"This alone actually will reduce plastic mismanaged waste leaking into [the] environment by over 50%," he says. "It's been a really wonderful experience engaging with the plastic treaty, going into the United Nations meetings, working with delegates, putting this in their hands and seeing them being able to visualize the data and actually understanding the impact of these policies," Nathan adds.

AI and Product Development

How could AI impact plastic waste further upstream? Data collected and analyzed by AI systems could change how consumer packaged goods (CPG) companies produce plastic goods before they ever end up in the hands of consumers, waste facilities, and the environment. For example, data gathered at waste management facilities can give product manufacturers insight into how their goods are actually being recycled, or not. "No two waste plants are identical," Mitra points out. "If your product gets recycled in plant A, [that] doesn't mean [it will] get recycled in plant B." That insight could show companies where changes need to be made in order to make their products more recyclable.

Companies could increasingly be driven to make those kinds of changes by government policy, like the European Union's Extended Producer Responsibility (EPR) policies, as well as their own ESG goals. "Millions of dollars [go] into packaging design. So, whatever will come out in '25 or '26, it's already designed, and whatever is being thought [of] for '26 and '27, it's in R&D today," says Mitra. "[Companies] definitely have a large appetite to learn from this and improve their packaging design to make it more recyclable rather than just experimenting with material without knowing how [it] will actually go through these mechanical sorting environments."

In addition to optimizing the production of plastic products and packaging for recyclability, AI can hunt for viable alternatives; novel materials discovery is a promising AI application.
As it sifts through vast repositories of data, AI might bring to light a material that has economic viability and less environmental impact than plastic.

Plastic has a long lifecycle, persisting for decades or even longer after it is produced. AI is being applied to every point of that lifecycle: from creation, to consumer use, to garbage and recycling cans, to waste management facilities, and to its environmental pollution. As more data is gathered, AI will be a useful tool for making strides toward achieving a circular economy and reducing plastic waste.

About the Author: Carrie Pallardy, Contributing Reporter. Carrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.
  • Being in space makes it harder for astronauts to think quickly
    www.newscientist.com
There is a lot to keep track of when working in space. Credit: NASA via Getty Images

Astronauts aboard the International Space Station (ISS) had slower memory, attention, and processing speed after six months, raising concerns about the impact of cognitive impairment on future space missions to Mars.

The extreme environment of space, with reduced gravity, harsh radiation, and the lack of regular sunrises and sunsets, can have dramatic effects on astronaut health, from muscle loss to an increased risk of heart disease. However, the cognitive effects of long-term space travel are less well documented. Now, Sheena Dev at NASA's Johnson Space Center in Houston, Texas, and her colleagues have looked at the cognitive performance of 25 astronauts during their time on the ISS.

The team ran the astronauts through 10 tests, some of which were done on Earth, once before and twice after the mission, while others were done on the ISS, both early and later in the mission. These tests measured certain cognitive capacities, such as finding patterns on a grid to test abstract reasoning or choosing when to stop an inflating balloon before it pops to test risk-taking.

The researchers found that the astronauts took longer to complete tests measuring processing speed, working memory, and attention on the ISS than on Earth, but they were just as accurate. While there was no overall cognitive impairment or lasting effect on the astronauts' abilities, some of the measures, like processing speed, took longer to return to normal after they came back to Earth.
Having clear data on the cognitive effects of space travel will be crucial for future human spaceflight, says Elisa Raffaella Ferrè at Birkbeck, University of London, but it will be important to collect more data, both on Earth and in space, before we know the full picture.

"A mission to Mars is not only longer in terms of time, but also in terms of autonomy," says Ferrè. "People there will have a completely different interaction with ground control because of distance and delays in communication, so they will need to be fully autonomous in taking decisions, so human performance is going to be key. You definitely don't want to have astronauts on Mars with slow reaction time, in terms of attention-related tasks or memory or processing speed."

It isn't surprising that there were some specific decreases in cognitive performance given the unusual environment of space, says Jo Bower at the University of East Anglia in Norwich, UK. "It's not necessarily a great cause for alarm, but it's something that's useful to be aware of, especially so that you know your limits when you're in these extreme environments," she says.

That awareness could be especially helpful for astronauts on longer missions, adds Bower. "It's not just how you do in those tests, but also what your perception of your ability is," she says. "We know, for example, if you're sleep deprived, that quite often your performance will decline, but you won't realise your performance has declined."

Journal reference: Frontiers in Physiology, DOI: 10.3389/fphys.2024.1451269
  • Einstein's theories tested on the largest scale ever: he was right
    www.newscientist.com
The DESI instrument observing the sky from the Nicholas U. Mayall Telescope during a meteor shower. Credit: KPNO/NOIRLab/NSF/AURA/R. Sparks

Albert Einstein's theory of general relativity has been proven right on the largest scale yet. An analysis of millions of galaxies shows that the way they have evolved and clustered over billions of years is consistent with his predictions.

Ever since Einstein put forward his theory of gravity more than a century ago, researchers have been trying to find scenarios where it doesn't hold up. But there had not been such a test at the level of the largest distances in the universe until now, says Mustapha Ishak-Boushaki at the University of Texas at Dallas. He and his colleagues used data from the Dark Energy Spectroscopic Instrument (DESI) in Arizona to conduct one.

Details of cosmic structure and how it has changed over time are a potent test of how well we understand gravity, because it was this force that shaped galaxies as they evolved out of the small variations in the distribution of matter in the early universe. DESI has so far collected data on how nearly 6 million galaxies clustered over the course of the past 11 billion years. Ishak-Boushaki and his colleagues combined this with results from several other large surveys, such as those mapping the cosmic microwave background radiation and supernovae. Then, they compared this with predictions from a theory of gravity that encompassed both Einstein's ideas and more contemporary competing theories of modified gravity. They found no deviation from Einstein's gravity.
Ishak-Boushaki says that even though there are some uncertainties in the measurements, there is still no strong evidence that any theory that deviates from Einstein's would capture the state of the universe more accurately.

Itamar Allali at Brown University in Rhode Island says that while general relativity has been shown to hold in extremely precise tests conducted in laboratories, it is important to be able to test it at all scales, including across the entire cosmos. This helps eliminate the possibility that Einstein made correct predictions for objects of one size but not another, he says.

The new analysis also offers hints for how dark energy, a mysterious force thought to be responsible for the accelerating expansion of the universe, fits within our theories of gravity, says Nathalie Palanque-Delabrouille at Lawrence Berkeley National Laboratory in California. Einstein's earliest formulations of general relativity included a cosmological constant, a kind of anti-gravitational force that played the same role as dark energy, but previous DESI results have suggested that dark energy isn't constant. It may have changed as the universe aged, says Palanque-Delabrouille.

"The fact that we see agreement with [general relativity] and still see this departure from the cosmological constant really opens the Pandora's box of what the data could actually be telling us," says Ishak-Boushaki.

DESI will keep collecting data for several more years and ultimately record the positions and properties of 40 million galaxies, which the three scientists all say will bring clarity on how to correctly marry general relativity and theories of dark energy.
This new analysis only used one year of DESI's data, but in March 2025 the team will share takeaways from the instrument's first three years of observations. Allali says he is anticipating these results to be consequential in several important ways, such as pinpointing shifts in the Hubble constant, which is a measure of the rate of the universe's expansion, narrowing down the mass of elusive particles called neutrinos, and even searching for new cosmic ingredients like dark radiation. "This analysis will weigh in on a lot more than gravity; it will weigh in on all of cosmology," he says.
  • Inside Clear's ambitions to manage your identity beyond the airport
    www.technologyreview.com
If you've ever been through a large US airport, you're probably at least vaguely aware of Clear. Maybe your interest (or irritation) has been piqued by the pods before the security checkpoints, or the attendants in navy blue.

Its position in airports has made Clear Secure, with its roughly $3.75 billion market cap, the most visible biometric identity company in the United States. Over the past two decades, Clear has put more than 100 lanes in 58 airports across the US, and in the past decade it has entered 17 sports arenas and stadiums, from San Jose to Denver to Atlanta. Now you can also use its identity verification platform to rent tools at Home Depot, put your profile in front of recruiters on LinkedIn, and, as of this month, verify your identity as a rider on Uber. And soon enough, if Clear has its way, it may also be in your favorite retailer, bank, and even doctor's office, or anywhere else that you currently have to pull out a wallet (or, of course, wait in line).

The company that has helped millions of vetted members skip airport security lines is now working to expand its frictionless, face-first line-cutting service from the airport to just about everywhere, online and off, by promising to verify that you are who you say you are and you are where you are supposed to be. In doing so, CEO Caryn Seidman Becker told investors in an earnings call earlier this year, it has designs on being no less than the "identity layer of the internet," as well as the "universal identity platform" of the physical world. All you have to do is show up, and show your face.

This is enabled by biometric technology, but Clear is far more than just a biometrics company. As Seidman Becker has told investors, biometrics aren't the product; they are a feature.
    Or, as she put it in a 2022 podcast interview, Clear is ultimately a platform company "no different than Amazon or Apple," with dreams, she added, of making experiences safer and easier, of giving people back their time, of giving people control, of using technology for frictionless experiences. (Clear did not make Seidman Becker available for an interview.) While the company has been building toward this sweeping vision for years, it now seems the time has finally come. A confluence of factors is currently accelerating the adoption of, even necessity for, identity verification technologies: increasingly sophisticated fraud, supercharged by artificial intelligence that is making it harder to distinguish who or what is real; data breaches that seem to occur on a near daily basis; consumers who are more concerned about data privacy and security; and the lingering effects of the pandemic's push toward contactless experiences. All of this is creating a new urgency around ways to verify information, especially our identities, and, in turn, generating a massive opportunity for Clear. For years, Seidman Becker has been predicting that biometrics will go mainstream. But now that biometrics have, arguably, gone mainstream, what, and who, bears the cost? Because convenience, even if chosen by only some of us, leaves all of us wrestling with the effects. Some critics warn that not everyone will benefit from a world where identity is routed through Clear: maybe because it's too expensive, maybe because biometric technologies are often less effective at identifying people of color, people with disabilities, or those whose gender identity may not match what official documents say. What's more, says Kaliya Young, an identity expert who has advised the US government, having a single private company disintermediating our biometric data, especially facial data, is "the wrong architecture" to manage identity.
    "It seems they are trying to create a system like login with Google, but for everything in real life," Young warns. While the single sign-on option that Google (or Facebook or Apple) provides for websites and apps may make life easy, it also poses greater security and privacy risks by putting both our personal data and the keys to it in the hands of a single profit-driven entity: "We're basically selling our identity soul to a private company, who's then going to be the gatekeeper everywhere one goes." Though Clear remains far less well known than Google, more than 27 million people have already helped it become that very gatekeeper, and "one of the largest private repositories of identities on the planet," as Nicholas Peddy, Clear's chief technology officer, put it in an interview with MIT Technology Review this summer. With Clear well on the way to realizing its plan for a frictionless future, it's time to try to understand both how we got here and what we have (been) signed up for.

    A new frontier in identity management

    Imagine this: On a Friday morning in the near future, you are rushing to get through your to-do list before a weekend trip to New York. In the morning, you apply for a new job on LinkedIn. During lunch, assured that recruiters are seeing your professional profile because it's been verified by Clear, you pop out to Home Depot, confirm your identity with a selfie, and rent a power drill for a quick bathroom repair. Then, in the midafternoon, you drive to your doctor's office; having already verified your identity (prompted by a text message sent a few days earlier), you confirm your arrival with a selfie at a Clear kiosk. Before you go to bed, you plan your morning trip to the airport and set an alarm, but not too early, because you know that with Clear, you can quickly drop your bags and breeze through security.
    Once you're in New York, you head to Barclays Center, where you'll be seeing your favorite singer; you skip the long queue out front to hop in the fast-track Clear line. It's late when the show is over, so you grab an Uber home and barely need to wait for a driver, who feels more comfortable thanks to your verified rider profile. At no point did you pull out your driver's license or fill out repetitive paperwork. All that was already on file. Everything was easy; everything was frictionless. This, at least, is the world that Clear is actively building toward. Part of Clear's power, Seidman Becker often says, is that it can wholly replace our wallets: our credit cards, driver's licenses, health insurance cards, perhaps even building key fobs. But you can't just suddenly be all the cards you carry. For Clear to link your digital identity to your real-world self, you must first give up a bit of personal data: specifically, your biometric data. Biometrics refers to the unique physical and behavioral characteristics (faces, fingerprints, irises, voices, and gaits, among others) that identify each of us as individuals. For better or worse, they typically remain stable during our lifetimes. Relying on biometrics for identification can be convenient, since people are apt to misplace a wallet or forget the answer to a security question. But on the other hand, if someone manages to compromise a database of biometric information, that convenience can become dangerous: we cannot easily change our face or fingerprint to secure our data again, the way we could change a compromised password. On a practical level, there are generally two ways that biometrics are used to identify individuals. The first, generally referred to as one-to-many or one-to-n matching, compares one person's biometric identifier with a database full of them.
    This is sometimes associated with a stereotypical idea of dystopian surveillance in which real-time facial recognition from live video could allow authorities to identify anyone walking down the street. The other, one-to-one matching, is the basis for Clear; it compares a biometric identifier (like the face of a live person standing before an airport agent) with a previously recorded biometric template (such as a passport photo) to verify that they match. This is usually done with the individual's knowledge and consent, and it arguably poses a lower privacy risk. Often, one-to-one matching includes a layer of document verification, like checking that your passport is legitimate and matches a photograph you used to register with the system. The US Congress urgently saw the need for better identity management following the September 11 terrorist attacks; 18 of the 19 hijackers used fake identity documents to board their flights. In the aftermath, the newly created Transportation Security Administration (TSA) implemented security processes that slowed down air travel significantly. Part of the problem was that everybody was just treated the same at airports, recalls the serial media entrepreneur Steven Brill, including, famously, former vice president Al Gore. "It sounded awfully democratic, but in terms of basic risk management and allocation of resources, it just didn't make any sense." Congress agreed, authorizing the TSA to create a program that would allow people who passed background checks to be recognized as trusted travelers and skip some of the scrutiny at the airport. (In 2007, San Francisco's then-mayor, Gavin Newsom, had his irises scanned by Clear at the San Francisco International Airport. Photo: David Paul Morris/Getty.) In 2003, Brill teamed up with Ajay Amlani, a technology entrepreneur and former adviser to the Department of Homeland Security, and founded a company called Verified Identity Pass (VIP) to provide biometric identity verification in the TSA's new program.
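The difference between the two matching modes described above can be made concrete in a few lines of Python. This is a sketch only: real systems derive embeddings from learned face-recognition models, and the toy vectors, cosine-similarity matcher, and 0.95 threshold here are all hypothetical stand-ins.

```python
# Illustrative sketch of the two biometric matching modes.
# Toy vectors stand in for face embeddings; cosine similarity
# stands in for a real matcher's score.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

THRESHOLD = 0.95  # hypothetical decision threshold

def verify_one_to_one(live_embedding, enrolled_template):
    """One-to-one: does this live capture match this one enrolled template?"""
    return cosine_similarity(live_embedding, enrolled_template) >= THRESHOLD

def identify_one_to_many(live_embedding, gallery):
    """One-to-many: who in the whole gallery, if anyone, is this?"""
    best_id, best_score = None, THRESHOLD
    for person_id, template in gallery.items():
        score = cosine_similarity(live_embedding, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None if nobody clears the threshold

# A Clear-style airport check is the one-to-one case:
# one live selfie compared against one stored template.
alice_template = [0.9, 0.1, 0.3]
alice_live = [0.88, 0.12, 0.31]   # same face, slightly different capture
stranger = [0.1, 0.9, 0.4]

print(verify_one_to_one(alice_live, alice_template))  # close match -> True
print(verify_one_to_one(stranger, alice_template))    # different face -> False
```

The privacy distinction in the article maps directly onto the two functions: `verify_one_to_one` touches a single template the user enrolled, while `identify_one_to_many` must scan an entire gallery, which is what makes the surveillance scenario possible.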
    The vision, says Amlani, was a unified fast lane, similar to a toll lane. It appeared to be a win-win solution. The TSA had a private-sector partner for its registered-traveler program; VIP had a revenue stream from user fees; airports got a cut of the fees in exchange for leasing VIP space. By 2005, VIP had launched in its first airport, Orlando International in Florida. Members, initially paying $80, received Clear cards that contained a cryptographic representation of their fingerprint, iris scans, and a photo of their face taken at enrollment. They could use those cards at the airport to be escorted to the front of the security lines. The defense contracting giant Lockheed Martin, which already provided biometric capabilities to the US Department of Defense and the FBI, was responsible for deploying and providing technology for VIP's system, with additional technical expertise from Oracle and others. This left VIP to focus on "marketing, pricing, branding, customer service, and consumer privacy policies," as the president of Lockheed Transportation and Security Solutions, Don Antonucci, said at the time. By 2009, nearly 200,000 people had joined. The company had received $116 million in investments and signed contracts with about 20 airports. It all seemed so promising, if VIP had not already inadvertently revealed the risks inherent in a system built on sensitive personal data.

    A lost laptop and a big opportunity

    From the beginning, there were concerns about the implications of VIP's Clear card for privacy, civil liberty, and equity, as well as questions about its effectiveness at actually stopping future terrorist attacks.
    Advocacy groups like the Electronic Privacy Information Center (EPIC) warned that the biometrics-based system would result in a surveillance infrastructure built on sensitive personal information, but data from the Pew Research Center shows that a majority of the public at the time felt that it was generally necessary to sacrifice some civil liberties in the name of safety. Then a security lapse sent the whole operation crumbling. In the summer of 2008, VIP reported that an unencrypted company laptop containing addresses, birthdays, and driver's license and passport numbers of 33,000 applicants had gone missing from an office at San Francisco International Airport (SFO), even though the TSA's security protocol required it to encrypt all laptops holding personal data. The laptop was found about two weeks later, and the company said no data was compromised. But it was still a mess for VIP. Months later, investors pushed Brill out, and associated costs led the company to declare bankruptcy and close the following year. Disgruntled users filed a class action lawsuit against VIP to recoup membership fees and punitive damages. Some users were upset they had recently renewed their subscriptions, and others worried about what would happen to their personal information. A judge temporarily prevented the company from selling user data, but the decision didn't hold. Seidman Becker and her longtime business partner Ken Cornick, both hedge fund managers, saw an opportunity. In 2010, they bought VIP, and its user data, in a bankruptcy sale for just under $6 million and registered a new company called Alclear. "I was a big believer in biometrics," Seidman Becker told the tech journalists Kara Swisher and Lauren Goode in 2017. "I wanted to build something that made the world a better place, and Clear was that platform."
    Initially, the new Clear followed closely in the footsteps of its predecessor: Lockheed Martin transferred the members' information to the new company, which had acquired VIP's hardware and continued to use Clear cards to hold members' biometrics. After the relaunch, Clear also started building partnerships with other companies in the travel industry, including American Express, United Airlines, Alaska Airlines, Delta Air Lines, and Hertz, to bundle its service for free or at a discount. (Clear declined to specify how many of its users have such discounts, but in earnings calls the company has stressed its efforts to reduce the number of members paying reduced rates.) By 2014, improvements in internet latency and biometric processing speeds allowed Clear to eliminate the cards and migrate to a server-based system, without compromising data security, the company says. Clear emphasizes that it meets industry standards for keeping data secure, with methods including encryption, firewalls, and regular penetration testing by both internal and external teams. The company says it also maintains "locked boxes" around data relating to air travelers. Still, the reality is that every database of this kind is ultimately a target, and "almost every day there's a massive breach or hack," says Chris Gilliard, a privacy and surveillance researcher who was recently named co-director of the Critical Internet Studies Institute. Over the years, even apparently well-protected biometric information has been compromised. Last year, for instance, a data breach at the genetic testing company 23andMe exposed sensitive information, including geographic locations, birth years, family trees, and user-uploaded photos, from nearly 7 million customers.
    This is what Young, who helped facilitate the creation of the open-source identity management standards OpenID Connect and OAuth, means when she says that Clear has the wrong architecture for managing digital identity; it's too much of a risk to keep our digital identities in a central database, cryptographically protected or not. She and many other identity and privacy experts believe that the most privacy-protecting way to manage digital identity is to use credentials, like a mobile driver's license, stored on people's devices in digital wallets, she says. These digital credentials can include biometrics, but no biometrics in a central database are being pinged for day-to-day use. But it's not just data that's potentially vulnerable. In 2022 and 2023, Clear faced three high-profile security incidents in airports, including one in which a passenger successfully got through the company's checks using a boarding pass found in the trash. In another, a traveler in Alabama used someone else's ID to register for Clear and, later, to successfully pass initial security checks; he was discovered only when he tried to bring ammunition through a subsequent checkpoint. This spurred an investigation by the TSA, which turned up more alarming information: nearly 50,000 photos used by Clear to enroll customers were flagged as non-matches by the company's facial recognition software. Some photos didn't even contain full faces, according to Bloomberg. (In a press release after the incident, the company disputed the reporting, describing it as "a single human error" having "nothing to do with our technology" and stating that the images in question were not relied upon during the secure, multi-layered enrollment process.)

    How do you get to be the one?

    When I spoke to Brill this spring, he told me he'd always envisioned that Clear would expand far beyond the airport.
    "The idea I had was that once you had a trusted identity, you would potentially be able to use it for a lot of different things," he said, "but the trick is to get something that is universally accepted. And that's the battle that Clear and anybody else has to fight, which is: How do you get to be the one?" Goode Intelligence, a market research firm that focuses on the booming identity space, estimates that by 2029, there will be 1.5 billion digital identity wallets around the world, with use for travel leading the way and generating an estimated $4.6 billion in revenue. Clear is just one player, and certainly not the biggest. ID.me, for instance, provides similar face-based identity verification and has over 130 million users, dwarfing Clear's roughly 27 million. It's also already in use by numerous US federal and state agencies, including the IRS. But as Goode Intelligence CEO Alan Goode tells me, Clear's early-mover advantage, particularly in the US, puts it "in a good space within North America [to] be more pervasive," or to become what Brill called "the one" that is most closely stitched into people's daily lives. Clear began growing beyond travel in 2015, when it started offering biometric fast-pass access to what was then AT&T Park in San Francisco. Stadiums across California, Colorado, and Washington, and in major cities in other states, soon followed. Then came the pandemic, hitting Clear (and the entire travel industry) hard. But the crisis for Clear's primary business actually accelerated its move into new spaces with Health Pass, which allowed organizations to confirm the health status of employees, residents, students, and visitors who sought access to a physical space.
    Users could upload vaccination cards to the Health Pass section in the Clear mobile app; the program was adopted by nearly 70 partners in 110 unique locations, including NFL stadiums, the Mariners' T-Mobile Park, and the 9/11 Memorial Museum. Demand for vaccine verification eventually slowed, and Health Pass shut down in March 2024. But as Jason Sherwin, Clear's senior director of health-care business development, said in a podcast interview earlier this year, it was the company's first foray into health care, the business line that currently represents its primary focus "across everything we're doing outside of the airport." Today, Clear kiosks for patient sign-ins are being piloted at Georgia's Wellstar Health Systems, in conjunction with one of the largest providers of electronic health records in the United States: Epic (which is unrelated to the privacy nonprofit). What's more, Health Pass enabled Clear to expand at a time when the survival of travel-focused businesses wasn't guaranteed. In November 2020, Clear had roughly 5 million members; today, that number has grown fivefold. The company went public in 2021 and has experienced double-digit revenue growth annually. These doctor's-office sign-ins, in which the system verifies patient identity via a selfie, rely on what's called Clear Verified, a platform the company has rolled out over the past several years that allows partners (health-care systems, as well as brick-and-mortar retailers, hotels, and online platforms) to integrate Clear's identity checks into their own user-verification processes. It again seems like a win-win situation: Clear gets more users and a fee from companies using the platform, while companies confirm customers' identity and information, and customers, in theory, get that valuable frictionless experience.
    One high-profile partnership, with LinkedIn, was announced last year: "We know authenticity matters and we want the people, companies and jobs you engage with every day to be real and trusted," Oscar Rodriguez, LinkedIn's head of trust and privacy, said in a press release. All this comes together to create the foundation for what is Clear's biggest advantage today: its network. The company's executives often speak about its "embedded" users across various services and platforms, as well as its "ecosystem," meaning the venues where it is used. As Peddy explains, the value proposition for Clear today is not necessarily any particular technology or biometric algorithm, but how it all comes together and can work universally. Clear would be wherever "our consumers need us to be," he says; it would "sort of just be this ubiquitous thing that everybody has." (Clear CEO Caryn Seidman Becker, at left, rings the bell at the New York Stock Exchange in 2021. Photo: NYSE via Twitter.) A prospectus to investors from the company's IPO makes the pitch simple: "We believe Clear enables our partners to capture not just a greater share of their customers' wallet, but a greater share of their overall lives." The more Clear is able to reach into customers' lives, the more valuable customer data it can collect. All user interactions and experiences can be tracked, the company's privacy policy explains. While the policy states that Clear will not sell data and will never share biometric or health information without express consent, it also lays out the non-health and non-biometric data that it collects and can use for consumer research and marketing. This includes members' demographic details, a record of every use of Clear's various products, and even digital images and videos of the user.
    Documents obtained by OneZero offer some further detail into what Clear has at least considered doing with customer data: David Gershgorn writes about a 2015 presentation to representatives from Los Angeles International Airport, titled "Identity Dashboard: Valuable Marketing Data," which showed off what the company had collected, including the number of sports games users had attended and with whom, which credit cards they had, their favorite airlines and top destinations, and how often they flew first class or economy. Clear representatives emphasized to MIT Technology Review that the company does not share or sell information without consent, though they had nothing to add in response to a question about whether Clear can or does aggregate data to derive its own marketing insights, a business model popularized by Facebook. "At Clear, privacy and security are job one," spokesperson Ricardo Quinto wrote in an email. "We are opt-in. We never sell or share our members' information and utilize a multilayered, best-in-class infosec system that meets the highest standards and compliance requirements." Nevertheless, this influx of customer data is not just good for business; it's risky for customers. It creates another attack surface, Gilliard warns. "This makes us less safe, not more," because a consistent identifier across your entire public and private life is "the dream of every hacker, bad actor, and authoritarian."

    A face-based future for some

    Today, Clear is in the middle of another major change: replacing its use of iris scans and fingerprints with facial verification in airports, part of a TSA-required upgrade in identity verification, a TSA spokesperson wrote in an email to MIT Technology Review. For a long time, facial recognition technology "for the highest security purposes was not ready for prime time," Seidman Becker told Swisher and Goode back in 2017. It wasn't operating with "five nines," she added; that is, 99.999% "from a matching and an accuracy perspective."
    But today, facial recognition has significantly improved, and the company has invested in enhancing image quality through improved capture, focus, and illumination, according to Quinto. The move is part of a broader shift toward facial recognition technology in US travel, bringing the country in line with practices at many international airports. The TSA began expanding facial identification from a few pilot programs this year, while airlines including Delta and United are also introducing face-based boarding, baggage drops, and even lounge access. And the International Air Transport Association, a trade group for the airline industry, is rolling out a contactless travel process that will allow passengers to check in, drop off their bags, and board their flights, all without showing either passports or tickets, just their faces. Privacy experts worry that relying on faces for identity verification is even riskier than other biometric methods. After all, it's a lot easier to scan people's faces passively than it is to scan irises or take fingerprints, Senator Jeff Merkley of Oregon, an outspoken critic of government surveillance and of the TSA's plans to employ facial verification at airports, said in an email. The point is that once a database of faces is built, it is potentially far more useful for surveillance purposes than, say, fingerprints. "Everyone who values privacy, freedom, and civil rights should be concerned about the increasing, unchecked use of facial recognition technology by corporations and the federal government," Merkley wrote. Even if Clear is not in the business of surveillance today, it could, theoretically, pivot or go bankrupt and (again) sell off its parts, including user data.
    Jeramie Scott, senior counsel and director of the Project on Surveillance Oversight at EPIC, says that ultimately, the lack of federal [privacy] regulation means that we're just taking the promises of companies like Clear at face value: "Whatever they say about how they implement facial recognition today does not mean that that's how they'll be implementing facial recognition tomorrow." Making this particular scenario potentially more concerning is that the images stored by this private company are generally going to be of much higher quality than those collected by scraping the internet, which Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project (STOP), says would make its data far more useful for surveillance than that held by more controversial facial recognition companies like Clearview AI. Even a far less pessimistic read of Clear's data collection reveals the challenges of using facial identification systems, which, as a 2019 report from the National Institute of Standards and Technology revealed, have been shown to work less effectively for certain populations, particularly people of African and East Asian descent, women, and elderly and very young people. NIST has also not tested identification accuracy for individuals who are transgender, but Gilliard says he expects the algorithms would fall short. More recent testing shows that some algorithms have improved, NIST spokesperson Chad Boutin tells MIT Technology Review, though accuracy is still short of the "five nines" that Seidman Becker once said Clear was aiming for. (Quinto, the Clear representative, maintains that Clear's recent upgrades, combined with the fact that the company's testing involves comparing member photos to smaller galleries, rather than the millions used in NIST scenarios, means its technology remains accurate and suitable for secure environments like airports.)
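The arithmetic behind those accuracy figures is worth making concrete. The back-of-the-envelope sketch below uses a hypothetical daily volume standing in for "hundreds of thousands of times a day"; none of the figures come from Clear or NIST.

```python
# Back-of-the-envelope sketch: expected daily errors at a given accuracy.
# DAILY_CHECKS is a hypothetical stand-in for airport-scale volume.

def expected_daily_errors(daily_checks: int, accuracy: float) -> float:
    """Expected number of erroneous decisions per day at a fixed accuracy."""
    return daily_checks * (1.0 - accuracy)

DAILY_CHECKS = 300_000

for label, accuracy in [("three nines", 0.999),
                        ("four nines", 0.9999),
                        ("five nines", 0.99999)]:
    errors = expected_daily_errors(DAILY_CHECKS, accuracy)
    print(f"{label} ({accuracy:.5f}): ~{errors:.0f} errors/day")
```

Even at the "five nines" target, roughly three checks per day at that volume would go wrong; at the lower accuracies that independent testing has actually measured for some systems and populations, the count grows by one or two orders of magnitude.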
    Even a very small error rate in a system that is deployed hundreds of thousands of times a day could still leave a lot of people at risk of misidentification, explains Hannah Quay-de La Vallee, a technologist at the Center for Democracy & Technology, a nonprofit based in Washington, DC. All this could make Clear's services inaccessible to some, even if they can afford it, which is less likely given the recent increase in the subscription fee for travelers to $199 a year. The free Clear Verified platform is already giving rise to access problems in at least one partnership, with LinkedIn. The professional networking site encourages users to verify their identities either with an employer email address or with Clear, which marketing materials say will yield more engagement. But some LinkedIn users have expressed concerns, claiming that even after uploading a selfie, they were unable to verify their identities with Clear if they were subscribed to a smaller phone company or if they had simply not had their phone number for enough time. As one Reddit user emphasized, "Getting verified is a huge deal when getting a job." LinkedIn said it does not enable recruiters to filter, rank, or sort by whether a candidate has a verification badge, but also said that verified information does help people make more informed decisions as they build their network or apply for a job. Clear only said it "works with our partners to provide them with the level of identity assurance that they require for their customers" and referred us back to LinkedIn.

    An opt-in future that may not really be optional

    Maybe what's worse than waiting in line, or even being cut in front of, is finding yourself stuck in what turns out to be the wrong line, perhaps one that you never want to be in. That may be how it feels if you don't use Clear and similar biometric technologies.
    "When I look at companies stuffing these technologies into vending machines, fast-food restaurants, schools, hospitals, and stadiums, what I see is resignation rather than acceptance; people often don't have a choice," says Gilliard, the privacy and surveillance scholar. "The life cycle of these things is that even when it is optional, oftentimes it is difficult to opt out." And while the stakes may seem relatively low (Clear is, after all, a voluntary membership program), they will likely grow as the system is deployed more widely. As Seidman Becker said on Clear's latest earnings call in early November, "The lines between physical and digital interactions continue to blur. A verified identity isn't just a check mark. It's the foundation for everything we do in a high-stakes digital world." Consider a job ad posted by Clear earlier this year, seeking to hire a vice president for business development; it noted that the company has its eye on a number of additional sectors, including "financial services, e-commerce, P2P networking, online trust, gaming, government, and more." Increasingly, companies and the government are making the submission of your biometrics a barrier to participation in society, Gilliard says. This will be particularly true at the airport, with the increasing ubiquity of facial recognition across all security checks and boarding processes, and where time-crunched travelers could be particularly vulnerable to Clear's sales pitch. Airports have even privately expressed concerns about these scenarios to Clear. Correspondence from early 2022 between the company and staff at SFO, released in response to a public records request, reveals that the airport received a number of complaints about Clear staff "improperly and deceitfully" soliciting approaching passengers in the security checkpoint lanes outside of its premises, with an airport employee calling it "completely unacceptable" and "aggressive and deceptive behavior."
    Of course, this isn't to say everyone with a Clear membership was coerced into signing up. Many people love it; the company told MIT Technology Review that it had a nearly 84% retention rate earlier this year. Still, for some experts, it's worrisome to think that what Clear users are comfortable with ends up setting the ground rules for the rest of us. "We're going to normalize potentially a bunch of biometric stuff but not have a sophisticated conversation about where and how we're normalizing what," says Young. She worries this will empower actors who want to move toward "a creepy surveillance state, or corporate surveillance capitalism on steroids." Without understanding what we're building or how or where the guardrails are, she adds, "I also worry that there could be major public backlash, and then legitimate uses [of biometric technology] are not understood and supported." But in the meantime, even superfans are grumbling about an uptick in wait times in the airports' Clear lines. After all, if everyone decides to cut to the front of the line, that just creates a new long line of line-cutters.
  • How the largest gathering of US police chiefs is talking about AI
    www.technologyreview.com
    This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here. It can be tricky for reporters to get past certain doors, and the door to the International Association of Chiefs of Police conference is one that's almost perpetually shut to the media. Thus, I was pleasantly surprised when I was able to attend for a day in Boston last month. It bills itself as the largest gathering of police chiefs in the United States, where leaders from many of the country's 18,000 police departments, and even some from abroad, convene for product demos, discussions, parties, and awards. I went along to see how artificial intelligence was being discussed, and the message to police chiefs seemed crystal clear: if your department is slow to adopt AI, fix that now. The future of policing will rely on it in all its forms. In the event's expo hall, the vendors (of which there were more than 600) offered a glimpse into the ballooning industry of police-tech suppliers. Some had little to do with AI: booths showcased body armor, rifles, and prototypes of police-branded Cybertrucks, and others displayed new types of gloves promising to protect officers from needles during searches. But one needed only to look to where the largest crowds gathered to understand that AI was the major draw. The hype focused on three uses of AI in policing. The first was virtual-reality training. The pitch on VR training is that in the long run, it can be cheaper and more engaging to use than training with actors or in a classroom. "If you're enjoying what you're doing, you're more focused and you remember more than when looking at a PDF and nodding your head," V-Armed CEO Ezra Kraus told me. The effectiveness of VR training systems has yet to be fully studied, and they can't completely replicate the nuanced interactions police have in the real world. AI is not yet great at the soft skills required for interactions with the public.
At a different company's booth, I tried out a VR system focused on deescalation training, in which officers were tasked with calming down an AI character in distress. It suffered from lag and was generally quite awkward: the character's answers felt overly scripted and programmatic. The second focus was on the changing way police departments are collecting and interpreting data. Police chiefs attended classes on how to build these systems, like one taught by Microsoft and the NYPD about the Domain Awareness System, a web of license plate readers, cameras, and other data sources used to track and monitor crime in New York City. Crowds gathered at massive, high-tech booths from Axon and Flock, both sponsors of the conference. Flock sells a suite of cameras, license plate readers, and drones, offering AI to analyze the data coming in and trigger alerts. These sorts of tools have come in for heavy criticism from civil liberties groups, which see them as an assault on privacy that does little to help the public. Finally, as in other industries, AI is also coming for the drudgery of administrative tasks and reporting. "We've got this thing on an officer's body, and it's recording all sorts of great stuff about the incident," Bryan Wheeler, a senior vice president at Axon, told me at the expo. "Can we use it to give the officer a head start?" On the surface, it's a writing task well suited for AI, which can quickly summarize information and write in a formulaic way. It could also save lots of time officers currently spend on writing reports. But given that AI is prone to hallucination, there's an unavoidable truth: Even if officers are the final authors of their reports, departments adopting these sorts of tools risk injecting errors into some of the most critical documents in the justice system.
"Police reports are sometimes the only memorialized account of an incident," wrote Andrew Ferguson, a professor of law at American University, in July in the first law review article about the serious challenges posed by police reports written with AI. Because criminal cases can take months or years to get to trial, the accuracy of these reports is critically important. Whether certain details were included or left out can affect the outcomes of everything from bail amounts to verdicts. By showing an officer a generated version of a police report, the tools also expose officers to details from their body camera recordings before they complete their report, a document intended to capture the officer's memory of the incident. That poses a problem. "The police certainly would never show video to a bystander eyewitness before they ask the eyewitness about what took place, as that would just be investigatory malpractice," says Jay Stanley, a senior policy analyst with the ACLU Speech, Privacy, and Technology Project, who will soon publish work on the subject. A spokesperson for Axon says this concern isn't reflective of how the tool is intended to work, and that Draft One has robust features to make sure officers read the reports closely, add their own information, and edit the reports for accuracy before submitting them. My biggest takeaway from the conference was simply that the way US police are adopting AI is inherently chaotic. There is no one agency governing how they use the technology, and the roughly 18,000 police departments in the United States (the precise figure is not even known) have remarkably high levels of autonomy to decide which AI tools they'll buy and deploy. The police-tech companies that serve them will build the tools police departments find attractive, and it's unclear if anyone will draw proper boundaries for ethics, privacy, and accuracy. That will only be made more apparent in an upcoming Trump administration.
In a policing agenda released last year during his campaign, Trump encouraged more aggressive tactics like stop and frisk, deeper cooperation with immigration agencies, and increased liability protection for officers accused of wrongdoing. The Biden administration is now reportedly attempting to lock in some of its proposed policing reforms before January. Without federal regulation on how police departments can and cannot use AI, the lines will be drawn by departments and police-tech companies themselves. "Ultimately, these are for-profit companies, and their customers are law enforcement," says Stanley. "They do what their customers want, in the absence of some very large countervailing threat to their business model."

Now read the rest of The Algorithm

Deeper Learning

The AI lab waging a guerrilla war over exploitative AI

When generative AI tools landed on the scene, artists were immediately concerned, seeing them as a new kind of theft. Computer security researcher Ben Zhao jumped into action in response, and his lab at the University of Chicago started building tools like Nightshade and Glaze to help artists keep their work from being scraped up by AI models. My colleague Melissa Heikkilä spent time with Zhao and his team to look at the ongoing effort to make these tools strong enough to stop AI's relentless hunger for more images, art, and data to train on.

Why this matters: The current paradigm in AI is to build bigger and bigger models, and these require vast data sets to train on. Tech companies argue that anything on the public internet is fair game, while artists demand compensation or the right to refuse. Settling this fight in the courts or through regulation could take years, so tools like Nightshade and Glaze are what artists have for now. If the tools disrupt AI companies' efforts to make better models, that could push them to the negotiating table to bargain over licensing and fair compensation. But it's a big if. Read more from Melissa Heikkilä.
Bits and Bytes

Tech elites are lobbying Elon Musk for jobs in Trump's administration
Elon Musk is the tech leader who most has Trump's ear. As such, he's reportedly the conduit through which AI and tech insiders are pushing to have an influence in the incoming administration. (The New York Times)

OpenAI is getting closer to launching an AI agent to automate your tasks
AI agents, models that can do tasks on your behalf, are all the rage. OpenAI is reportedly closer to releasing one, news that comes a few weeks after Anthropic announced its own. (Bloomberg)

How this grassroots effort could make AI voices more diverse
A massive volunteer-led effort to collect training data in more languages, from people of more ages and genders, could help make the next generation of voice AI more inclusive and less exploitative. (MIT Technology Review)

Google DeepMind has a new way to look inside an AI's mind
Autoencoders let us peer into the black box of artificial intelligence. They could help us create AI that is better understood and more easily controlled. (MIT Technology Review)

Musk has expanded his legal assault on OpenAI to target Microsoft
Musk has expanded his federal lawsuit against OpenAI, which alleges that the company has abandoned its nonprofit roots and obligations. He's now going after Microsoft too, accusing it of antitrust violations in its work with OpenAI. (The Washington Post)
  • Save Over $50 On The Illustrated Biography Of Final Fantasy's Original Designer
    www.gamespot.com
A lot of talent has helped shape the Final Fantasy franchise over the years, but artist Yoshitaka Amano has arguably had the biggest impact on the visual direction of the series. The Final Fantasy designer's career is celebrated in a gorgeous 328-page book titled Yoshitaka Amano: The Illustrated Biography. Final Fantasy fans can save $56 on this unique biography, as Amazon has dropped the price from $150 down to $94. You can pair it with The Sky: The Art of Final Fantasy Box Set for $125 (was $200), a three-volume set with art from the first 10 mainline games.

Yoshitaka Amano: The Illustrated Biography -- $94 ($150)

The book itself contains nearly 400 illustrations and photos from across Amano's Final Fantasy portfolio, and it also comes with a 96-page landscape-style softcover book. This is a collection of sketches from Amano during a tour of Paris, and is a lovely companion piece to his Final Fantasy work. Additionally, there's a Blu-ray with almost three hours of material, two mini-lithographs made by Amano exclusively for this release, and an individually signed card from the illustrator. Continue Reading at GameSpot
  • Several Of The Best Marvel Board Games Have Supersized Discounts For Black Friday
    www.gamespot.com
Amazon has excellent deals on hundreds of board games ahead of Black Friday, including a wide assortment of tabletop adventures starring superheroes and villains from the Marvel Cinematic Universe. Marvel Champions: The Card Game is available for only $37.33, over 50% off the strategy card game's $80 list price. You can also save big on several campaign expansion sets for Marvel Champions. If you like miniatures games, Marvel: Crisis Protocol is definitely worth checking out for $59.49 (was $100). Other highlights include Marvel Villainous for only $14.69 and Marvel Splendor for $35. We've rounded up the best Marvel Black Friday board game deals below. Some of these Marvel games are eligible for Amazon's Buy Two, Get One Free Board Game Sale, which we've noted next to the price.

Marvel Champions: The Card Game -- $37.33 (was $80) with coupon

Marvel Champions is a cooperative strategy game for up to four players. The base game includes 350-plus cards, 100 tokens, five hit point counters, status cards, and movable dials to track stats throughout each run. Champions is designed for Marvel fans ages 14 and up, and each game, on average, takes anywhere from 45 to 90 minutes. Though designed as a cooperative experience, Marvel Champions is known for being an excellent single-player game. Several Marvel Champions campaign expansion sets are on sale for steep discounts, too. The Galaxy's Most Wanted, a Guardians of the Galaxy-themed campaign, is available for only $20.66 (was $45); the X-Men-focused Mutant Genesis campaign is down to $28 (was $45); and The Mad Titan's Shadow starring Thanos is up for grabs for $16.66.
Marvel Champions Core Set -- $37.33 ($80) with coupon
Marvel Champions: The Galaxy's Most Wanted Expansion -- $20.66 ($45) with coupon
Marvel Champions: The Mad Titan's Shadow Expansion -- $16.66 ($25)
Marvel Champions: Mutant Genesis Campaign Expansion -- $28 ($45)
Marvel Champions: NeXt Evolution Expansion -- $36 ($45)
Marvel Champions: Sinister Motives Expansion -- $40.40 ($45)

Marvel: Crisis Protocol Miniatures Game -- $59.49 (was $100)

Marvel: Crisis Protocol is a two-player tactical strategy game featuring an ensemble cast of heroes and villains. You and your opponent create squads of miniatures and race to see who can complete missions the fastest. Each game lasts roughly 90 minutes. The Core Set comes with 10 miniatures and display stands for each character, various terrain pieces, cards, tokens, dice, and more. Here are the characters you get with the Crisis Protocol Core Set:

Spider-Man
Iron Man
Captain America
Black Widow
Captain Marvel
Ultron
Crossbones
Baron Zemo
Doctor Octopus
Red Skull

You can expand your roster of heroes and villains with character packs, several of which are on sale, too:

Marvel: Crisis Protocol Core Set -- $59.49 ($100)
Marvel: Crisis Protocol - Thanos Expansion Pack -- $38 ($65)
Marvel: Crisis Protocol - Deadpool & Hydra Agent Bob Character Pack -- $32 ($55)
Marvel: Crisis Protocol - Emma Frost & Psylocke Character Pack -- $30 ($40)
Marvel: Crisis Protocol - Electro/Sandman/Shocker/Vulture Character Pack -- $55 ($80)

Continue Reading at GameSpot
  • RPGs With The Most Romance Options
    gamerant.com
    For some gamers, romance is an integral part of any RPG experience. In the midst of saving the world, grinding out enemies, chasing down better gear, and leveling up, some players feel the need to find that special someone for their characters. For many, it's an important part of role-playing within the world of the game.
  • How to Farm Hide in Towers of Aghasba
    gamerant.com
    Hide is one of the most necessary resources in Towers of Aghasba, and players will need to scour the islands in search of huge quantities of these items if they want to progress further in the Main Quest, as well as complete some of the secondary Quests in the game.