This year, hundreds of billions of dollars will be spent to scale AI systems in pursuit of superhuman capabilities. CEOs of leading AI companies, such as OpenAI's Sam Altman and xAI's Elon Musk, expect that within the next four years their systems will be smart enough to do most cognitive work (think: any job that can be done with just a laptop) as effectively as or better than humans.

Such an advance, leaders agree, would fundamentally transform society. Google CEO Sundar Pichai has repeatedly described AI as "the most profound technology humanity is working on." Demis Hassabis, who leads Google's AI research lab Google DeepMind, argues AI's social impact will be more like that of fire or electricity than the introduction of mobile phones or the Internet.

In February, in the wake of an international AI Summit in Paris, Anthropic CEO Dario Amodei restated his belief that by 2030 AI systems will be best thought of as "akin to an entirely new state populated by highly intelligent people." In the same month, Musk, speaking on the Joe Rogan Experience podcast, said, "I think we're trending toward having something that's smarter than the smartest human in the next few years." He continued: "There's a level beyond that which is smarter than all humans combined, which frankly is around 2029 or 2030."

If these predictions are even partly correct, the world could soon radically change. But there is no consensus on how this transformation will or should be handled.

With exceedingly advanced AI models released on a monthly basis, and the Trump administration seemingly uninterested in regulating the technology, the decisions of private-sector leaders matter more than ever. But those leaders differ in their assessments of which risks are most salient, and of what's at stake if things go wrong. Here's how:

Existential risk or unmissable opportunity?

"I always thought AI was going to be way smarter than humans and an existential risk, and that's turning out to be true," Musk said in February, noting that he thinks there is a 20% chance of human annihilation by AI. While estimates vary, the idea that advanced AI systems could destroy humanity traces back to the origins of many of the labs developing the technology today. In 2015, Altman called the development of superhuman machine intelligence "probably the greatest threat to the continued existence of humanity." Alongside Hassabis and Amodei, he signed a statement in May 2023 declaring that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

"It strikes me as odd that some leaders think that AI can be so brilliant that it will solve the world's problems, using solutions we didn't think of, but not so brilliant that it can't escape whatever control constraints we think of," says Margaret Mitchell, Chief Ethics Scientist at Hugging Face. She notes that discourse sometimes conflates AI that supplements people with AI that supplants them. "You can't have the benefits of both and the drawbacks of neither," she says.

For Mitchell, risk increases as humans cede control to increasingly autonomous agents.
"Because we can't fully control or predict the behaviour of AI agents, we run a massive risk of AI agents that act without consent to, for example, drain bank accounts, impersonate us saying and doing horrific things, or bomb specific populations," she explains.

"Most people think of this as just another technology, and not as a new species, which is the way you should think about it," says Professor Max Tegmark, co-founder and president of the Future of Life Institute. He explains that the default outcome when building machines at this level is losing control over them, which could lead to unpredictable and potentially catastrophic outcomes.

But despite these apprehensions, other leaders avoid the language of superintelligence and existential risk, focusing instead on the upside. "I think when history looks back it will see this as the beginning of a golden age of innovation," Pichai said at the Paris Summit in February. "The biggest risk could be missing out."

Similarly, asked in mid-2023 whether he thinks we're on a path to creating superintelligence, Microsoft CEO Satya Nadella said he was "much more focused on the benefits to all of us. I am haunted by the fact that the industrial revolution didn't touch the parts of the world where I grew up until much later. So I am looking for the thing that may be even bigger than the industrial revolution, and really doing what the industrial revolution did for the West, for everyone in the world. So I'm not at all worried about AGI [artificial general intelligence] showing up, or showing up fast," he said.

A race between countries and companies

Even among those who do believe AI poses an existential risk, there is a widespread belief that any slowdown in America's AI development will allow foreign adversaries, particularly China, to pull ahead in the race to create transformative AI. Future AI systems could be capable of creating novel weapons of mass destruction, or of covertly hacking a country's nuclear arsenal, effectively flipping the global balance of power overnight.

"My feeling is that almost every decision I make is balanced on the edge of a knife," Amodei said earlier this month, explaining that building too fast risks humanity losing control, whereas "if we don't build fast enough, then the authoritarian countries could win."

These dynamics play out not just between countries, but between companies. As Helen Toner, a director at Georgetown's Center for Security and Emerging Technology, explains, there is often a disconnect between the idealism in companies' public statements and the hard-nosed business logic that drives their decisions. Toner points to competition over release dates as a clear example. "There have been multiple instances of AI teams being forced to cut corners and skip steps in order to beat a competitor to launch day," she says.

For Meta CEO Mark Zuckerberg, ensuring advanced AI systems are not controlled by a single entity is key to safety. "I kind of liked the theory that it's only God if only one company or government controls it," he said in January. "The best way to make sure it doesn't get out of control is to make it so that it's pretty equally distributed," he claimed, pointing to the importance of open-source models.

Parameters for control

While almost every company developing advanced AI models has its own internal policies and procedures around safety, and most have made voluntary commitments to the U.S. government regarding issues of trust, safety, and allowing third parties to evaluate their models, none of this is backed by the force of law. Tegmark is optimistic that if the U.S. national security establishment accepts the seriousness of the threat, safety standards will follow. Safety standard number one, he says, will be requiring companies to demonstrate how they plan to keep their models under control.

Some CEOs are feeling the weight of their power. "There's a huge amount of responsibility, probably too much, on the people leading this technology," Hassabis said in February. The Google DeepMind leader has previously advocated for the creation of new institutions, akin to the European Organization for Nuclear Research (CERN) or the International Energy Agency, to bring together governments to monitor AI developments. "Society needs to think about what kind of governing bodies are needed," he said.

This is easier said than done. While creating binding international agreements has always been challenging, "it's more unrealistic than ever," says Toner. On the domestic front, Tegmark points out that right now "there are more safety standards for sandwich shops than for AI companies" in America.

Nadella, discussing AGI and superintelligence on a podcast in February, emphasized his view that legal infrastructure will be the biggest rate limiter on the power of future systems, potentially preventing their deployment. "Before it is a real problem, the real problem will be in the courts," he said.

An 'Oppenheimer moment'

Mitchell says that AI's corporate leaders bring different levels of their own human concerns and thoughts to these discussions. Tegmark fears, however, that some of these leaders are falling prey to wishful thinking by believing they're going to be able to control superintelligence, and that many are now facing their own "Oppenheimer moment." He points to a poignant scene in the film Oppenheimer in which scientists watch their creation being taken away by military authorities. "That's the moment where the builders of the technology realize they're losing control over their creation," he says. "Some of the CEOs are beginning to feel that right now."