China's DeepSeek Suspects Cyberattack as Chatbot Prompts Security Concerns
www.informationweek.com
Shane Snider, Senior Writer, InformationWeek
January 28, 2025

DeepSeek, the China-based AI startup that upended US technology stocks Monday, said cyberattacks have disrupted services for its chatbot platform. The company's vulnerability raises concerns about how users' data is secured and used, experts say.

DeepSeek caused Wall Street panic with the launch of its low-cost, energy-efficient language model as nations and companies compete to develop superior generative AI platforms. Users raced to experiment with DeepSeek's R1 model, which dethroned ChatGPT from its No. 1 spot among free apps on Apple's mobile devices. Nvidia, the world's leading maker of high-powered AI chips, suffered a staggering $593 billion market capitalization loss -- a new single-day stock market loss record.

DeepSeek's wild ride continued Monday night as the company reported outages it said were the result of "large-scale malicious attacks," disrupting services and limiting new registrations.

Ilia Kolochenko, CEO at ImmuniWeb and adjunct professor of cybersecurity at Maryland's Capital Technology University, says it may be too early to accept the company's attack explanation. "It is not completely excluded that DeepSeek simply could not handle the legitimate user traffic due to insufficiently scalable IT infrastructure, while presenting this unforeseen outage as a cyberattack," he says in an email message.

He adds, "Most importantly, this incident indicates that while many corporations and investors are obsessed with the ballooning AI hype, we still fail to address foundational cybersecurity issues despite having access to allegedly super powerful GenAI technologies."

The Devil Is in the User Details

Considering the potential breach, security experts also worry about DeepSeek's access to users' data, which, under China's strict AI regulations, must be shared with the government.

"All AI models have the same risks that any other software has and should be treated the same way," Mike Lieberman, CTO of software supply chain security firm Kusari, says in an email interview. "Generally, AI could have vulnerabilities or malicious behaviors injected ... Assuming you're running AI following reasonable security practices, e.g., sandboxing, the big concerns are that the model is biased or manipulated in some way to respond to prompts inaccurately or maliciously."

China's access to potentially sensitive user information should be a top security concern, says Adrianus Warmenhoven, a cybersecurity expert at NordVPN. "DeepSeek's privacy policy, which can be found in English, makes it clear: User data, including conversations and generated responses, is stored in servers in China," Warmenhoven says in an email message. "This raises concerns because of the data collection outlined -- ranging from user-shared information to data from external sources -- which falls under the potential risks associated with storing such data in a jurisdiction with different privacy and security standards."
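For organizations that want to evaluate DeepSeek's openly published R1 model weights without sending prompts to servers in China, one commonly cited mitigation is to run the model on local hardware with the process forced offline. The sketch below is a minimal illustration under stated assumptions, not DeepSeek-specific guidance: it presumes the weights have already been downloaded to a local directory (the path shown is hypothetical) and uses the open-source Hugging Face Transformers library rather than DeepSeek's hosted service.

```python
# Minimal sketch: load an already-downloaded open-weights model entirely from
# local disk so prompts and responses never leave the machine.
import os

# Tell the Hugging Face libraries to work strictly from the local cache.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical local path where the model weights were previously downloaded.
MODEL_DIR = "/models/deepseek-r1-distill"

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

prompt = "Summarize the key risks of using a hosted chatbot service."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Running inference locally addresses the data-residency concern Warmenhoven describes, but it does nothing about the biased or manipulated model behavior Lieberman warns of, which still has to be evaluated separately.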
Warmenhoven says users need to be on guard: "To mitigate these risks, users should adopt a proactive approach to their cybersecurity. This includes scrutinizing the terms and conditions of any platform they engage with, understanding where their data is stored and who has access to it."

Optiv's Jennifer Mahoney, advisory practice manager for data governance, privacy and protection, says, "As generative AI platforms from foreign adversaries enter the market, users should question the origin of the data used to train these technologies ... When a service is free, you become the product and your user data is valuable. Should an unregulated and unsecure technology suffer a cyberattack, you could become a victim of identity theft or social engineering."

The Risk to National Security

China and the US have been locked in a strategic battle over AI dominance. The US, under the previous Biden administration, blocked China's access to powerful AI chips. DeepSeek's ability to create an AI chatbot comparable to the best US-produced GenAI models at a fraction of the cost and power could give the adversarial nation the upper hand as the countries race to develop artificial general intelligence (AGI).

"AI and associated cloud compute are now a nation's strategic asset," Gunter Ollman, CTO at security firm Cobalt, tells InformationWeek in an email interview. "Its security is paramount and is increasingly targeted by competing nations with the full cyber and physical resources they can muster. AI code/models are inherently more difficult to assess and preempt vulnerabilities ..."

Organizations should also be wary of using DeepSeek's open-source technology, Ollman says. "Organizations building atop open-source AI should plan for a potential bloodbath of vulnerabilities and exploits in the near future."

A popular GenAI tool could also lure unsuspecting users into falling for adversarial nation-state propaganda. "The definition of backdoor attacks that normally involve malicious code should be expanded to include malicious misinformation," Ollman says. "Backdoors may extend to political and social influence, such as a model's answers modifying history ... Perhaps country-led open-source AI models are the modern equivalent of religious missionaries of past centuries."

About the Author

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.