
Eric Schmidt Suggests Countries Could Engage in Mutual Assured AI Malfunction (MAIM)
gizmodo.com
By Thomas Maxwell | Published March 6, 2025

Former Google CEO Eric Schmidt and others say the U.S. is approaching a Cold War-style arms race in AI defense technology.

Former Google CEO Eric Schmidt and Scale AI founder Alexandr Wang are co-authors of a new paper called Superintelligence Strategy that warns the U.S. government against creating a Manhattan Project for so-called Artificial General Intelligence (AGI), because it could quickly spiral out of control around the world. The gist of the argument: such a program would invite retaliation or sabotage by adversaries as countries race to field the most powerful AI capabilities on the battlefield. Instead, the paper argues, the U.S. should focus on developing methods like cyberattacks that could disable threatening AI projects.

Schmidt and Wang are big boosters of AI's potential to advance society through applications like drug development and workplace efficiency. Governments, meanwhile, see it as the next frontier in defense, and the two industry leaders are essentially concerned that countries will end up in a race to create weapons with increasingly dangerous potential. Much as international agreements have reined in the development of nuclear weapons, Schmidt and Wang believe nation-states should go slow on AI development and not fall prey to racing one another to build AI-powered killing machines.

At the same time, however, both Schmidt and Wang are building AI products for the defense sector. The former's White Stork is building autonomous drone technologies, while Wang's Scale AI this week signed a contract with the Department of Defense to create AI agents that can assist with military planning and operations. After years of shying away from selling technology that could be used in warfare, Silicon Valley is now patriotically lining up to collect lucrative defense contracts.

All military defense contractors have a conflict of interest: an incentive to promote kinetic warfare, even when it is not morally justified. Other countries have their own military-industrial complexes, the thinking goes, so the U.S. needs to maintain one too. But in the end, innocent people suffer and die while powerful people play chess.

Palmer Luckey, the founder of defense tech darling Anduril, has argued that AI-powered targeted drone strikes are safer than launching nukes with a larger impact zone or planting land mines that have no targeting at all. And if other countries are going to keep building AI weapons, the argument goes, the U.S. should have the same capabilities as a deterrent. Anduril has been supplying Ukraine with drones that can target and attack Russian military equipment behind enemy lines.

Anduril recently ran an ad campaign that displayed the plain text "Work at Anduril.com" covered by the word "Don't" written in giant, graffiti-style spray-painted letters, seemingly playing to the idea that working for the military-industrial complex is the counterculture now.

Schmidt and Wang have argued that humans should always remain in the loop on any AI-assisted decision-making. But as recent reporting has demonstrated, the Israeli military is already relying on faulty AI programs to make lethal decisions. Drones have long been a divisive topic, as critics say that soldiers are more complacent when they are not directly in the line of fire or do not see the consequences of their actions firsthand.
Image recognition AI is notorious for making mistakes, and we are quickly heading toward a point where killer drones fly back and forth hitting imprecise targets.

The Schmidt and Wang paper also rests on the assumption that AI will soon be superintelligent, capable of performing as well as, if not better than, humans at most tasks. That is a big assumption, as even the most cutting-edge "thinking" models continue to produce major gaffes, and companies are being flooded with poorly written, AI-assisted job applications. These models are crude imitations of humans with often unpredictable and strange behavior.

Schmidt and Wang are selling a vision of the world along with their solutions to its problems. If AI is going to be all-powerful and dangerous, governments should go to them and buy their products, because they are the responsible actors. In the same vein, OpenAI's Sam Altman has been criticized for making lofty claims about the risks of AI, which some say is an attempt to influence policy in Washington and capture power. It is a bit like saying, "AI is so powerful it can destroy the world, but we have a safe version we are happy to sell you."

Schmidt's warnings are not likely to have much impact as President Trump drops Biden-era guidelines around AI safety and pushes for the U.S. to become a dominant force in AI. Last November, a congressional commission proposed the very Manhattan Project for AI that Schmidt is warning about, and as people like Sam Altman and Elon Musk gain greater influence in Washington, it's easy to see the idea gaining traction. If that happens, the paper warns, countries like China might retaliate in ways such as intentionally degrading models or attacking physical infrastructure. It is not an unheard-of threat: China has wormed its way into major U.S. tech companies like Microsoft, and others like Russia are reportedly using freighter ships to strike undersea fiber-optic cables. Of course, we would do the same to them. It's all mutual. It is unclear how the world could come to any agreement to stop developing these weapons. In that sense, the idea of sabotaging AI projects in order to defend against them might be a good thing.