Palmer Luckey's vision for the future of mixed reality
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

"War is a catalyst for change," an expert in AI and warfare told me in 2022. At the time, the war in Ukraine had just started, and the military AI business was booming. Two years later, things have only ramped up as geopolitical tensions continue to rise.

Silicon Valley players are poised to benefit. One of them is Palmer Luckey, the founder of the virtual-reality headset company Oculus, which he sold to Facebook for $2 billion. After Luckey's highly public ousting from Meta, he founded Anduril, which focuses on drones, cruise missiles, and other AI-enhanced technologies for the US Department of Defense. The company is now valued at $14 billion. My colleague James O'Donnell interviewed Luckey about his new pet project: headsets for the military.

Luckey is increasingly convinced that the military, not consumers, will see the value of mixed-reality hardware first: "You're going to see an AR headset on every soldier, long before you see it on every civilian," he says. In the consumer world, any headset company is competing with the ubiquity and ease of the smartphone, but he sees entirely different trade-offs in defense. Read the interview here.

The use of AI for military purposes is controversial. Back in 2018, Google pulled out of the Pentagon's Project Maven, an attempt to build image recognition systems to improve drone strikes, following staff walkouts over the ethics of the technology. (Google has since returned to offering services for the defense sector.) There has been a long-standing campaign to ban autonomous weapons, also known as "killer robots," which powerful militaries such as the US's have refused to agree to.

But the voices that boom even louder belong to an influential faction in Silicon Valley, such as Google's former CEO Eric Schmidt, who has called for the military to adopt and invest more in AI to get an edge over adversaries. Militaries all over the world have been very receptive to this message.

That's good news for the tech sector. Military contracts are long and lucrative, for a start. Most recently, the Pentagon purchased services from Microsoft and OpenAI to do search, natural-language processing, machine learning, and data processing, reports The Intercept. In the interview with James, Palmer Luckey says the military is a perfect testing ground for new technologies. Soldiers do as they are told and aren't as picky as consumers, he explains. They're also less price-sensitive: militaries don't mind paying a premium to get the latest version of a technology.

But there are serious dangers in adopting powerful technologies prematurely in such high-risk areas. Foundation models pose serious national security and privacy threats by, for example, leaking sensitive information, argue researchers at the AI Now Institute and Meredith Whittaker, president of the communications privacy organization Signal, in a new paper. Whittaker, who was a core organizer of the Project Maven protests, has said that the push to militarize AI is really more about enriching tech companies than improving military operations.

Despite calls for stricter rules around transparency, we are unlikely to see governments restrict their defense sectors in any meaningful way beyond voluntary ethical commitments. We are in the age of AI experimentation, and militaries are playing with the highest stakes of all.
And because of the military's secretive nature, tech companies can experiment with the technology without the need for transparency or even much accountability. That suits Silicon Valley just fine.

Now read the rest of The Algorithm

Deeper Learning

How Wayve's driverless cars will meet one of their biggest challenges yet

The UK driverless-car startup Wayve is headed west. The firm's cars learned to drive on the streets of London, but Wayve has announced that it will begin testing its tech in and around San Francisco as well. And that brings a new challenge: its AI will need to switch from driving on the left to driving on the right.

Full speed ahead: As visitors to or from the UK will know, making that switch is harder than it sounds. Your view of the road, the way the vehicle turns: it's all different. The move to the US will be a test of Wayve's technology, which the company claims is more general-purpose than what many of its rivals are offering. Across the Atlantic, the company will now go head to head with the heavyweights of the growing autonomous-car industry, including Cruise, Waymo, and Tesla. Join Will Douglas Heaven on a ride in one of its cars to find out more.

Bits and Bytes

Kids are learning how to make their own little language models
Little Language Models is a new application from two PhD researchers at MIT's Media Lab that helps children understand how AI models work by getting them to build small-scale versions themselves. (MIT Technology Review)

Google DeepMind is making its AI text watermark open source
Google DeepMind has developed a tool for identifying AI-generated text called SynthID, which is part of a larger family of watermarking tools for generative AI outputs. The company is applying the watermark to text generated by its Gemini models and making it available for others to use too. (MIT Technology Review)

Anthropic debuts an AI model that can use a computer
The tool enables the company's Claude AI model to interact with computer interfaces and take actions such as moving a cursor, clicking on things, and typing text. It's a very cumbersome and error-prone version of what some have said AI agents will be able to do one day. (Anthropic)

Can an AI chatbot be blamed for a teen's suicide?
A 14-year-old boy died by suicide, and his mother says it was because he was obsessed with an AI chatbot created by Character.AI. She is suing the company. Chatbots have been touted as cures for loneliness, but critics say they actually worsen isolation. (The New York Times)

Google, Microsoft, and Perplexity are promoting scientific racism in search results
The internet's biggest AI-powered search engines are featuring the widely debunked idea that white people are genetically superior to other races. (Wired)