# AI in Warfare: Should Robots Decide Who Lives or Dies?
*The Terrifying Rise of Killer Algorithms, and Why Humanity Can't Afford to Look Away*

*Photo by Sergey Koznov on Unsplash*

## A Drone Strike in Libya Changed Everything

In 2020, a Turkish-made Kargu-2 drone hunted down a human target autonomously. No joystick. No pilot. Just code. A UN report described the incident as possibly the first recorded attack by an autonomous weapon on a human target. The machine decided. The machine executed.

This isn't sci-fi. It's Tuesday.

## How We Got Here: From Roomba to Terminator

**We skipped the ethics chapter.**

AI in warfare started innocently enough: logistics algorithms to track socks and ammo. Then came target-recognition systems. Now we're fielding swarms of drones that communicate like pack animals.

The slippery slope:

- 2016: DARPA launched "Sea Hunter," an autonomous warship that later sailed from San Diego to Hawaii with no crew aboard.
- 2020: In DARPA's AlphaDogfight Trials, an AI agent beat a veteran human F-16 pilot 5-0 in simulated dogfights.
- 2023: Israel's "Lavender" system allegedly marked some 37,000 Gazans as Hamas targets using AI (Haaretz report).

The problem? AI doesn't understand "collateral damage." It calculates probabilities.

## The Black Box of Death

> "I'm sorry, Dave. I can't explain that decision."

Modern combat AI runs on machine learning, a tangle of math that even its own developers can't fully decode. When a drone blows up a convoy, why did it choose that truck? Was it the license plate? The heat signature? A glitch?

Real-world fallout:

- In Afghanistan, flawed facial recognition reportedly led to airstrikes on civilians misidentified as Taliban fighters (NYT).
- Russia's "Marker" robots in Ukraine reportedly prioritize targets based on social-media intelligence.

The nightmare scenario: an AI arms race where mistakes scale exponentially.

## The Myth of "Ethical Autonomy"

Tech giants swear their algorithms follow rules. Lockheed Martin claims its systems…