OpenAI Shuts Down Developer Who Made AI-Powered Gun Turret
OpenAI has cut off a developer who built a device that could respond to ChatGPT queries to aim and fire an automated rifle. The device went viral after a video on Reddit showed its developer reading firing commands aloud, after which a rifle beside him quickly began aiming and firing at nearby walls.

"ChatGPT, we're under attack from the front left and front right," the developer told the system in the video. "Respond accordingly."

The speed and accuracy with which the rifle responds is impressive, relying on OpenAI's Realtime API to interpret audio input and then return directions the contraption can understand. It would only require some simple training for ChatGPT to receive a command such as "turn left" and understand how to translate it into a machine-readable instruction.

In a statement to Futurism, OpenAI said it had viewed the video and shut down the developer behind it. "We proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry," the company told the outlet.

The potential to automate lethal weapons is one fear that critics have raised about AI technology like that developed by OpenAI. The company's multi-modal models are capable of interpreting audio and visual inputs to understand a person's surroundings and respond to queries about what they are seeing. Autonomous drones are already being developed that could be used on the battlefield to identify and strike targets without a human's input. That is, of course, a war crime, and it risks humans becoming complacent, letting an AI make decisions and making it difficult to hold anyone accountable.

The concern does not appear to be theoretical, either. A recent report from the Washington Post found that Israel has already used AI to select bombing targets, sometimes indiscriminately. "Soldiers who were poorly trained in using the technology attacked human targets without corroborating Lavender's predictions at all," the story reads, referring to a piece of AI software. "At certain times the only corroboration required was that the target was a male."

Proponents of AI on the battlefield say it will make soldiers safer by allowing them to stay away from the front lines and neutralize targets, like missile stockpiles, or conduct reconnaissance from a distance. And AI-powered drones could strike with precision. But that depends on how they are used. Critics say the U.S. should instead get better at jamming enemy communications systems, so adversaries like Russia have a harder time launching their own drones or nukes.

OpenAI prohibits the use of its products to develop or use weapons, or to automate certain systems that can affect personal safety. But the company last year announced a partnership with defense-tech company Anduril, a maker of AI-powered drones and missiles, to create systems that can defend against drone attacks. OpenAI says the work will "rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness."

It is not hard to understand why tech companies are interested in moving into warfare. The U.S. spends nearly a trillion dollars annually on defense, and cutting that spending remains an unpopular idea. With President-elect Trump filling his cabinet with conservative-leaning tech figures like Elon Musk and David Sacks, a whole slew of defense-tech players are expected to benefit greatly, potentially supplanting established defense contractors like Lockheed Martin.
Although OpenAI blocks its customers from using its AI to build weapons, a whole host of open-source models could be employed for the same purpose.
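Part of why that capability is so hard to contain is that the underlying pattern, turning a natural-language command into a machine-readable instruction, is commodity functionality in modern AI APIs. As a rough illustration only: the sketch below uses OpenAI's standard Chat Completions tool-calling interface (a simpler cousin of the Realtime API the article mentions) to parse a command like "turn left" into structured output for a generic motorized mount. The function name `move_mount`, its schema, and the model choice are all invented for this example; this is not the developer's actual code.

```python
# Minimal sketch: translating a natural-language command into a
# machine-readable instruction via OpenAI tool calling.
# The move_mount schema below is hypothetical, for illustration only.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "move_mount",  # hypothetical actuator command
        "description": "Rotate a motorized mount by a relative angle.",
        "parameters": {
            "type": "object",
            "properties": {
                "axis": {"type": "string", "enum": ["pan", "tilt"]},
                "degrees": {
                    "type": "number",
                    "description": "Positive = right/up, negative = left/down.",
                },
            },
            "required": ["axis", "degrees"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": "Turn left about thirty degrees."}],
    tools=tools,
    tool_choice="required",  # force the model to emit a structured call
)

# The model returns a structured tool call rather than free-form prose.
call = response.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)
print(call.function.name, args)  # e.g. move_mount {'axis': 'pan', 'degrees': -30}
```

The point of the sketch is that no special "training" is really required: the model maps speech-like input onto whatever schema the developer defines, which is why the same pattern works just as readily with open-source models.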