
AGI is suddenly a dinner table topic
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The concept of artificial general intelligence (an ultra-powerful AI system we don't have yet) can be thought of as a balloon, repeatedly inflated with hype during peaks of optimism (or fear) about its potential impact and then deflated as reality fails to meet expectations. This week, lots of news went into that AGI balloon. I'm going to tell you what it means (and probably stretch my analogy a little too far along the way).

First, let's get the pesky business of defining AGI out of the way. In practice, it's a deeply hazy and changeable term shaped by the researchers or companies set on building the technology. But it usually refers to a future AI that outperforms humans on cognitive tasks. Which humans and which tasks we're talking about makes all the difference in assessing AGI's achievability, safety, and impact on labor markets, war, and society. That's why defining AGI, though an unglamorous pursuit, is not pedantic but actually quite important, as illustrated in a new paper published this week by authors from Hugging Face and Google, among others. In the absence of that definition, my advice when you hear "AGI" is to ask yourself what version of the nebulous term the speaker means. (Don't be afraid to ask for clarification!)

Okay, on to the news.

First, a new AI model from China called Manus launched last week. A promotional video for the model, which is built to handle agentic tasks like creating websites or performing analysis, describes it as potentially "a glimpse into AGI." The model is doing real-world tasks on crowdsourcing platforms like Fiverr and Upwork, and the head of product at Hugging Face, an AI platform, called it "the most impressive AI tool I've ever tried."

It's not clear just how impressive Manus actually is yet, but against this backdrop (the idea of agentic AI as a stepping stone toward AGI) it was fitting that New York Times columnist Ezra Klein dedicated his podcast on Tuesday to AGI. It's also a sign that the concept has been moving quickly beyond AI circles and into the realm of dinner table conversation. Klein was joined by Ben Buchanan, a Georgetown professor and former special advisor for artificial intelligence in the Biden White House.

They discussed lots of things, including what AGI would mean for law enforcement and national security and why the US government finds it essential to develop AGI before China, but the most contentious segments were about the technology's potential impact on labor markets. If AI is on the cusp of excelling at lots of cognitive tasks, Klein said, then lawmakers had better start wrapping their heads around what a large-scale transition of labor from human minds to algorithms will mean for workers. He criticized Democrats for largely not having a plan.

We could consider this to be inflating the fear balloon, suggesting that AGI's impact is imminent and sweeping. Following close behind and puncturing that balloon with a giant safety pin, then, is Gary Marcus, a professor of neural science at New York University and an AGI critic who wrote a rebuttal to the points made on Klein's show.

Marcus points out that recent news, including the underwhelming performance of OpenAI's new GPT-4.5, suggests that AGI is much more than three years away. He says core technical problems persist despite decades of research, and efforts to scale training and computing capacity have reached diminishing returns.
Large language models, dominant today, may not even be the thing that unlocks AGI. He says the political domain does not need more people raising the alarm about AGI, arguing that such talk actually benefits the companies spending money to build it more than it helps the public good. Instead, we need more people questioning claims that AGI is imminent. That said, Marcus is not doubting that AGI is possible. He's merely doubting the timeline.

Just after Marcus tried to deflate it, the AGI balloon got blown up again. Three influential people (Google's former CEO Eric Schmidt, Scale AI's CEO Alexandr Wang, and Center for AI Safety director Dan Hendrycks) published a paper called "Superintelligence Strategy."

By "superintelligence," they mean AI that would "decisively surpass the world's best individual experts in nearly every intellectual domain," Hendrycks told me in an email. "The cognitive tasks most pertinent to safety are hacking, virology, and autonomous-AI research and development, areas where exceeding human expertise could give rise to severe risks."

In the paper, they outline a plan to mitigate such risks: "mutual assured AI malfunction," inspired by the concept of mutual assured destruction in nuclear weapons policy. "Any state that pursues a strategic monopoly on power can expect a retaliatory response from rivals," they write. The authors suggest that chips, as well as open-source AI models with advanced virology or cyberattack capabilities, should be controlled like uranium. In this view, AGI, whenever it arrives, will bring with it levels of risk not seen since the advent of the atomic bomb.

The last piece of news I'll mention deflates this balloon a bit. Researchers from Tsinghua University and Renmin University of China came out with an AGI paper of their own last week. They devised a survival game for evaluating AI models that limits their number of attempts to get the right answers on a host of different benchmark tests. This measures their abilities to adapt and learn. It's a really hard test.

The team speculates that an AGI capable of acing it would be so large that its parameter count (the number of knobs in an AI model that can be tweaked to provide better answers) would be "five orders of magnitude higher than the total number of neurons in all of humanity's brains combined." Using today's chips, that would cost 400 million times the market value of Apple.

The specific numbers behind the speculation, in all honesty, don't matter much. But the paper does highlight something that is not easy to dismiss in conversations about AGI: Building such an ultra-powerful system may require a truly unfathomable amount of resources, including money, chips, precious metals, water, electricity, and human labor. But if AGI (however nebulously defined) is as powerful as it sounds, then it's worth any expense.

So what should all this news leave us thinking? It's fair to say that the AGI balloon got a little bigger this week, and that the increasingly dominant inclination among companies and policymakers is to treat artificial intelligence as an incredibly powerful thing with implications for national security and labor markets.

That assumes a relentless pace of development in which every milestone in large language models, and every new model release, can count as a stepping stone toward something like AGI. If you believe this, AGI is inevitable.
But it's a belief that doesn't really address the many bumps in the road AI research and deployment have faced, or explain how application-specific AI will transition into general intelligence. Still, if you keep extending the timeline of AGI far enough into the future, it seems those hiccups cease to matter.

Now read the rest of The Algorithm

Deeper Learning

How DeepSeek became a fortune teller for China's youth

Traditional Chinese fortune tellers are called upon by people facing all sorts of life decisions, but they can be expensive. People are now turning to the popular AI model DeepSeek for guidance, sharing AI-generated readings, experimenting with fortune-telling prompt engineering, and revisiting ancient spiritual texts.

Why it matters: The popularity of DeepSeek for telling fortunes comes during a time of pervasive anxiety and pessimism in Chinese society. Unemployment is high, and millions of young Chinese now refer to themselves as "the last generation," expressing reluctance about committing to marriage and parenthood in the face of a deeply uncertain future. But since China's secular regime makes religious and spiritual exploration difficult, such practices unfold in more private settings, on phones and computers. Read the whole story from Caiwei Chen.

Bits and Bytes

AI reasoning models can cheat to win chess games
Researchers have long dealt with the problem that if you train AI models by having them optimize ways to reach certain goals, they might bend rules in ways you don't predict. That's proving to be the case with reasoning models, and there's no simple way to fix it. (MIT Technology Review)

The Israeli military is creating a ChatGPT-like tool using Palestinian surveillance data
Built with telephone and text conversations, the model forms a sort of surveillance chatbot, able to answer questions about people it's monitoring or the data it has collected. This is the latest in a string of reports suggesting that the Israeli military is bringing AI heavily into its information-gathering and decision-making efforts. (The Guardian)

At RightsCon in Taipei, activists reckoned with a US retreat from promoting digital rights
Last week, our reporter Eileen Guo joined over 3,200 digital rights activists, tech policymakers, researchers, and a smattering of tech company representatives in Taipei at RightsCon, the world's largest digital rights conference. She reported on the foreign impact of cuts to US funding of digital rights programs, which are leading many organizations to do content moderation with AI instead of people. (MIT Technology Review)

TSMC says its $100 billion expansion in the US is driven by demand, not political pressure
Chipmaking giant TSMC had already been expanding in the US under the Biden administration, but it announced a new expansion with President Trump this week. The company will invest another $100 billion into its operations in Arizona. (Wall Street Journal)

The US Army is using CamoGPT to purge DEI from training materials
Following executive orders from President Trump, agencies are under pressure to remove mentions of anything related to diversity, equity, and inclusion. The US Army is prototyping a new AI model to do that, Wired reports. (Wired)