Anthropic CEO says DeepSeek was the worst on a critical bioweapons data safety test
techcrunch.com
Anthropic CEO Dario Amodei is worried about competitor DeepSeek, the Chinese AI company that took Silicon Valley by storm with its R1 model. And his concerns could be more serious than the typical ones raised about DeepSeek sending user data back to China.

In an interview on Jordan Schneider's ChinaTalk podcast, Amodei said DeepSeek generated rare information about bioweapons in a safety test run by Anthropic. DeepSeek's performance was "the worst of basically any model we'd ever tested," Amodei claimed. "It had absolutely no blocks whatsoever against generating this information."

Amodei stated that this was part of evaluations Anthropic routinely runs on various AI models to assess their potential national security risks. His team looks at whether models can generate bioweapons-related information that isn't easily found on Google or in textbooks. Anthropic positions itself as the AI foundational model provider that takes safety seriously.

Amodei said he didn't think DeepSeek's models today are literally dangerous in providing rare and dangerous information, but that they might be in the near future. Although he praised DeepSeek's team as talented engineers, he advised the company to take these AI safety considerations seriously.

Amodei has also supported strong export controls on chips to China, citing concerns that they could give China's military an edge.

Amodei didn't clarify in the ChinaTalk interview which DeepSeek model Anthropic tested, nor did he give more technical details about these tests. Anthropic didn't immediately reply to a request for comment from TechCrunch. Neither did DeepSeek.

DeepSeek's rise has sparked concerns about its safety elsewhere, too. For example, Cisco security researchers said last week that DeepSeek R1 failed to block any harmful prompts in its safety tests, achieving a 100% jailbreak success rate.

Cisco didn't mention bioweapons but said it was able to get DeepSeek to generate harmful information about cybercrime and other illegal activities. It's worth mentioning, though, that Meta's Llama-3.1-405B and OpenAI's GPT-4o also had high failure rates of 96% and 86%, respectively.

It remains to be seen whether safety concerns like these will make a serious dent in DeepSeek's rapid adoption. Companies like AWS and Microsoft have publicly touted integrating R1 into their cloud platforms, ironically enough, given that Amazon is Anthropic's biggest investor.

On the other hand, there's a growing list of countries, companies, and especially government organizations, like the U.S. Navy and the Pentagon, that have started banning DeepSeek.

Time will tell if these efforts catch on or if DeepSeek's global rise will continue. Either way, Amodei says he does consider DeepSeek a new competitor that's on the level of the U.S.'s top AI companies.

"The new fact here is that there's a new competitor," he said on ChinaTalk. "In the big companies that can train AI (Anthropic, OpenAI, Google, perhaps Meta and xAI) now DeepSeek is maybe being added to that category."