Eric Schmidt argues against a Manhattan Project for AGI
techcrunch.com
In a policy paper published Wednesday, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks said that the U.S. should not pursue a Manhattan Project-style push to develop AI systems with superhuman intelligence, also known as AGI.

The paper, titled "Superintelligence Strategy," asserts that an aggressive bid by the U.S. to exclusively control superintelligent AI systems could prompt fierce retaliation from China, potentially in the form of a cyberattack, which could destabilize international relations.

"[A] Manhattan Project [for AGI] assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the co-authors write. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."

Co-authored by three highly influential figures in America's AI industry, the paper comes just a few months after a U.S. congressional commission proposed a Manhattan Project-style effort to fund AGI development, modeled after America's atomic bomb program in the 1940s. U.S. Secretary of Energy Chris Wright recently said the U.S. is at "the start of a new Manhattan Project" on AI while standing in front of a supercomputer site alongside OpenAI co-founder Greg Brockman.

The Superintelligence Strategy paper challenges the idea, championed by several American policy and industry leaders in recent months, that a government-backed program pursuing AGI is the best way to compete with China.

In the opinion of Schmidt, Wang, and Hendrycks, the U.S. is in something of an AGI standoff not dissimilar to mutually assured destruction. Just as global powers do not seek monopolies over nuclear weapons, since doing so could trigger a preemptive strike from an adversary, Schmidt and his co-authors argue that the U.S. should be cautious about racing toward dominating extremely powerful AI systems.

While likening AI systems to nuclear weapons may sound extreme, world leaders already consider AI to be a top military advantage. The Pentagon, for instance, says that AI is helping speed up the military's kill chain.

Schmidt et al. introduce a concept they call Mutual Assured AI Malfunction (MAIM), in which governments could proactively disable threatening AI projects rather than waiting for adversaries to weaponize AGI.

Schmidt, Wang, and Hendrycks propose that the U.S. shift its focus from winning the race to superintelligence to developing methods that deter other countries from creating superintelligent AI. The co-authors argue the government should "expand [its] arsenal of cyberattacks to disable threatening AI projects" controlled by other nations, as well as limit adversaries' access to advanced AI chips and open-source models.

The co-authors identify a dichotomy that has played out in the AI policy world. On one side are the "doomers," who believe that catastrophic outcomes from AI development are a foregone conclusion and advocate for countries slowing AI progress. On the other are the "ostriches," who believe nations should accelerate AI development and essentially just hope it'll all work out.

The paper proposes a third way: a measured approach to developing AGI that prioritizes defensive strategies.

That strategy is particularly notable coming from Schmidt, who has previously been vocal about the need for the U.S. to compete aggressively with China in developing advanced AI systems. Just a few months ago, Schmidt released an op-ed saying DeepSeek marked a turning point in America's AI race with China.

The Trump administration seems dead set on pushing ahead in America's AI development.
However, as the co-authors note, America's decisions around AGI don't exist in a vacuum.

As the world watches America push the limits of AI, Schmidt and his co-authors suggest it may be wiser to take a defensive approach.