TIME.COM
Silicon Valley Takes Artificial General Intelligence Seriously. Washington Must Too
Ideas
By Daniel Colson
October 18, 2024 7:10 AM EDT
Daniel Colson is the Executive Director of the AI Policy Institute.

Artificial General Intelligence (machines that can learn and perform any cognitive task that a human can) has long been relegated to the realm of science fiction. But recent developments show that AGI is no longer a distant speculation; it's an impending reality that demands our immediate attention.

On Sept. 17, during a Senate Judiciary Subcommittee hearing titled "Oversight of AI: Insiders' Perspectives," whistleblowers from leading AI companies sounded the alarm on the rapid advancement toward AGI and the glaring lack of oversight. Helen Toner, a former board member of OpenAI and director of strategy at Georgetown University's Center for Security and Emerging Technology, testified that "the biggest disconnect that I see between AI insider perspectives and public perceptions of AI companies is when it comes to the idea of artificial general intelligence." She continued that leading AI companies such as OpenAI, Google, and Anthropic are "treating building AGI as an entirely serious goal."

Toner's co-witness William Saunders, a former researcher at OpenAI who recently resigned after losing faith in OpenAI acting responsibly, echoed similar sentiments, testifying that "companies like OpenAI are working towards building artificial general intelligence" and that they are "raising billions of dollars towards this goal."

All three leading AI labs (OpenAI, Anthropic, and Google DeepMind) are more or less explicit about their AGI goals. OpenAI's mission states: "to ensure that artificial general intelligence—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Anthropic focuses on building "reliable, interpretable, and steerable AI systems," aiming for "safe AGI."
Google DeepMind aspires to "solve intelligence" and then to use the resultant AI systems "to solve everything else," with co-founder Shane Legg stating unequivocally that he expects "human-level AI will be passed in the mid-2020s." New entrants into the AI race, such as Elon Musk's xAI and Ilya Sutskever's Safe Superintelligence Inc., are similarly focused on AGI.

Policymakers in Washington have mostly dismissed AGI as either marketing hype or a vague metaphorical device not meant to be taken literally. But last month's hearing might have broken through in a way that previous discourse on AGI has not. Senator Josh Hawley (R-MO), Ranking Member of the subcommittee, commented that the witnesses are "folks who have been inside [AI] companies, who have worked on these technologies, who have seen them firsthand, and I might just observe don't have quite the vested interest in painting that rosy picture and cheerleading in the same way that [AI company] executives have."

Senator Richard Blumenthal (D-CT), the subcommittee Chair, was even more direct. "The idea that AGI might in 10 or 20 years be smarter or at least as smart as human beings is no longer that far out in the future. It's very far from science fiction. It's here and now: one to three years has been the latest prediction," he said. He didn't mince words about where responsibility lies: "What we should learn from social media, that experience is, don't trust Big Tech."

The apparent shift in Washington reflects public opinion that has been more willing to entertain the possibility of AGI's imminence. In a July 2023 survey conducted by the AI Policy Institute, the majority of Americans said they thought AGI would be developed "within the next 5 years." Some 82% of respondents also said we should "go slowly and deliberately" in AI development.

That's because the stakes are astronomical.
Saunders detailed that AGI could lead to cyberattacks or the creation of "novel biological weapons," and Toner warned that many leading AI figures believe that in a worst-case scenario AGI "could lead to literal human extinction."

Despite these stakes, the U.S. has instituted almost no regulatory oversight over the companies racing toward AGI. So where does this leave us?

First, Washington needs to start taking AGI seriously. The potential risks are too great to ignore. Even in a good scenario, AGI could upend economies and displace millions of jobs, requiring society to adapt. In a bad scenario, AGI could become uncontrollable.

Second, we must establish regulatory guardrails for powerful AI systems. Regulation should involve government transparency into what is going on with the most powerful AI systems being created by tech companies. Government transparency will reduce the chances that society is caught flat-footed by a tech company developing AGI before anyone else is expecting it. And mandated security measures are needed to prevent U.S. adversaries and other bad actors from stealing AGI systems from U.S. companies. These light-touch measures would be sensible even if AGI weren't a possibility, but the prospect of AGI heightens their importance.

In a particularly concerning part of Saunders' testimony, he said that during his time at OpenAI there were long stretches where he or hundreds of other employees could bypass access controls and steal the company's most advanced AI systems, including GPT-4. This lax attitude toward security is bad enough for U.S. competitiveness today, but it is an absolutely unacceptable way to treat systems on the path to AGI. The comments were another powerful reminder that tech companies cannot be trusted to self-regulate.

Finally, public engagement is essential. AGI isn't just a technical issue; it's a societal one.
The public must be informed and involved in discussions about how AGI could impact all of our lives.

No one knows how long we have until AGI (what Senator Blumenthal referred to as the "64 billion dollar question"), but the window for action may be rapidly closing. Some AI figures, including Saunders, think it may arrive in as little as three years. Ignoring the potentially imminent challenges of AGI won't make them disappear. It's time for policymakers to begin to get their heads out of the cloud.