Which AI Companies Are the Safest and Least Safe?
As companies race to build more powerful AI, safety measures are being left behind. A report published Wednesday takes a closer look at how companies including OpenAI and Google DeepMind are grappling with the potential harms of their technology. It paints a worrying picture: flagship models from all the developers in the report were found to have vulnerabilities, and while some companies have taken steps to enhance safety, others lag dangerously behind.

The report was published by the Future of Life Institute, a nonprofit that aims to reduce global catastrophic risks. The institute's open letter calling for a pause on large-scale AI model training drew unprecedented support from 30,000 signatories, including some of technology's most prominent voices. For the report, the Future of Life Institute brought together a panel of seven independent experts, including Turing Award winner Yoshua Bengio and Sneha Revanur from Encode Justice, who evaluated technology companies across six key areas: risk assessment, current harms, safety frameworks, existential safety strategy, governance & accountability, and transparency & communication. Their review considered a range of potential harms, from carbon emissions to the risk of an AI system going rogue.

"The findings of the AI Safety Index project suggest that although there is a lot of activity at AI companies that goes under the heading of 'safety,' it is not yet very effective," said Stuart Russell, a professor of computer science at the University of California, Berkeley, and one of the panelists, in a statement.

Despite touting its responsible approach to AI development, Meta, Facebook's parent company and developer of the popular Llama series of AI models, was rated the lowest, scoring an F grade overall. x.AI, Elon Musk's AI company, also fared poorly, receiving a D- grade overall. Neither Meta nor x.AI responded to a request for comment.

OpenAI, the company behind ChatGPT, which earlier in the year was accused of prioritizing "shiny products" over safety by the former leader of one of its safety teams, received a D+, as did Google DeepMind. Neither company responded to a request for comment.

Zhipu AI, the Chinese company that made a commitment to AI safety during the Seoul AI Summit in May, was rated D overall. Zhipu could not be reached for comment.

Anthropic, the company behind the popular chatbot Claude, which has made safety a core part of its ethos, ranked the highest. Even still, the company received a C grade, highlighting that there is room for improvement among even the industry's safest players. Anthropic did not respond to a request for comment.

In particular, the report found that all of the flagship models evaluated were vulnerable to "jailbreaks," or techniques that override the system's guardrails. Moreover, the review panel deemed the current strategies of all companies inadequate for ensuring that hypothetical future AI systems which rival human intelligence remain safe and under human control.

"I think it's very easy to be misled by having good intentions if nobody's holding you accountable," says Tegan Maharaj, assistant professor in the department of decision sciences at HEC Montréal, who served on the panel. Maharaj adds that she believes there is a need for independent oversight, as opposed to relying solely on companies to conduct in-house evaluations.

There are some examples of "low-hanging fruit," says Maharaj, or relatively simple actions by some developers to marginally improve their technology's safety. "Some companies are not even doing the basics," she adds.
For example, Zhipu AI, x.AI, and Meta, which each rated poorly on risk assessments, could adopt existing guidelines, she argues.

However, other risks are more fundamental to the way AI models are currently produced, and overcoming them will require technical breakthroughs. "None of the current activity provides any kind of quantitative guarantee of safety; nor does it seem possible to provide such guarantees given the current approach to AI via giant black boxes trained on unimaginably vast quantities of data," Russell said. "And it's only going to get harder as these AI systems get bigger." Researchers are studying techniques to peer inside the black box of machine learning models.

In a statement, Bengio, who is the founder and scientific director of the Montreal Institute for Learning Algorithms, underscored the importance of initiatives like the AI Safety Index. "They are an essential step in holding firms accountable for their safety commitments and can help highlight emerging best practices and encourage competitors to adopt more responsible approaches," he said.