AI 'godfather' Yoshua Bengio says AI agents could be the 'most dangerous path'
www.businessinsider.com
Talk of AI agents is everywhere in Davos. AI pioneer Yoshua Bengio warned against them.

Bengio said that agents with the power of AGI could lead to "catastrophic scenarios."

Bengio is researching how to build non-agentic systems to keep the agents in check.

Artificial intelligence pioneer Yoshua Bengio has been at the World Economic Forum in Davos this week with a message: AI agents could end badly.

The topic of AI agents, artificial intelligence that can act independently of human input, has been one of the buzziest at this year's gathering in snowy Switzerland. The event has drawn a collection of pioneering AI researchers to debate where AI goes next, how it should be governed, and when we may see signs of machines that can reason as well as humans, a milestone known as artificial general intelligence (AGI).

"All of the catastrophic scenarios with AGI or superintelligence happen if we have agents," Bengio told BI in an interview. He said he believes it's possible to achieve AGI without building agentic systems.

"All of the AI for science and medicine, all the things people care about, is not agentic," Bengio said. "And we can continue building more powerful systems that are non-agentic."

Bengio, a Canadian research scientist whose early research in deep learning and neural networks laid the foundation for the modern AI boom, is considered one of the "AI godfathers" alongside Geoffrey Hinton and Yann LeCun. Like Hinton, Bengio has warned against the potential harms of AI and called for collective action to mitigate the risks.

After two years of testing AI, businesses recognize the tangible return on investment offered by AI agents, which could enter the workforce in a meaningful way as soon as this year. OpenAI, which doesn't have a presence at this year's Davos, this week revealed an AI agent that can surf the web for you and perform tasks such as booking restaurants or adding groceries to your basket.
Google has previewed a similar tool of its own.

The problem Bengio sees is that people will keep building agents no matter what, especially as competing companies and countries worry that others will get to agentic AI before them.

"The good news is that if we build non-agentic systems, they can be used to control agentic systems," he told BI.

One way would be to build more sophisticated "monitors" that can do that, although this would require significant investment, Bengio said.

He also called for national regulation that would prevent AI companies from building agentic models without first proving that the system would be safe.

"We could advance our science of safe and capable AI, but we need to acknowledge the risks, understand scientifically where it's coming from, and then do the technological investment to make it happen before it's too late, and we build things that can destroy us," Bengio said.

'I want to raise a red flag'

Before speaking with BI, Bengio spoke on a panel about AI safety with Google DeepMind CEO Demis Hassabis.

"I want to raise a red flag. This is the most dangerous path," Bengio told the audience when asked about AI agents. He pointed to ways AI can be used for scientific discovery, such as DeepMind's breakthrough in protein folding, as examples of how it can still be profound without being agentic. Bengio said he believes it's possible to get to AGI without giving AI agency.

"It's a bet, I agree," he said, "but I think it's a worthwhile bet."

Hassabis agreed with Bengio that measures should be taken to mitigate risks, such as cybersecurity protections or experimenting with agents in simulations before releasing them. This would only work if everyone agreed to build them the same way, he added.

"Unfortunately, I do think there's an economic gradient, beyond science and workers, that people want for their systems to be agentic," Hassabis said. "When you say, 'Recommend me a restaurant,' why would you not want the next step, which is: book the table?"