DeepSeek panic triggers tech stock sell-off as Chinese AI tops App Store
war of the weights

A new Chinese AI app is sparking existential panic in American AI companies and investors.

Benj Edwards, Jan 27, 2025 11:29 am

Credit: Luis Diaz Devesa via Getty Images

On Monday morning, Nvidia stock dove 11 percent amid worries over the rise of Chinese AI company DeepSeek. The company's R1 reasoning model stunned industry observers last week by challenging American AI supremacy with a low-cost, freely available AI model, and its AI assistant app jumped to the top of the iPhone App Store's "Free Apps" category over the weekend, overtaking ChatGPT.

What's the big deal about DeepSeek?

The drama started around January 20, when Chinese AI startup DeepSeek announced R1, a new simulated reasoning (SR) model that it claimed could match OpenAI's o1 in reasoning benchmarks. Like o1, R1 is trained to work through a simulated chain-of-thought process before providing an answer, which can potentially improve the accuracy or usefulness of the model's outputs for some types of questions.

That first part wasn't too surprising, since other AI companies like Google are hot on the heels of OpenAI with their own simulated reasoning models. In addition, OpenAI itself has announced an upcoming SR model (dubbed "o3") that it says can surpass o1 in performance.

There are three elements of DeepSeek R1 that really shocked experts. First, the Chinese startup appears to have trained the model for only $6 million as a so-called "side project" while using less powerful Nvidia H800 AI-acceleration chips due to US export restrictions on cutting-edge GPUs. Second, it appeared just four months after OpenAI announced o1 in September 2024. Finally, and perhaps most importantly, DeepSeek released the model weights for free under an open MIT license, meaning anyone can download the model, run it, and fine-tune (modify) it.

It suddenly seemed to many observers on social media that American tech companies like OpenAI and Google, which have so far thrived on proprietary, closed models, have "no moat," as tech insiders often say: their technological lead, access to cutting-edge hardware, and impressive bankrolls do not necessarily protect them from upstart market challengers.

On Friday, venture capitalist Marc Andreessen wrote on X that DeepSeek R1 is "one of the most amazing and impressive breakthroughs I've ever seen" and a "profound gift to the world." The endorsement from the Andreessen Horowitz cofounder added fuel to the growing buzz around DeepSeek.

On top of that, over the weekend, DeepSeek's app, which lets users experiment with both the R1 model and the company's V3 conventional large language model (LLM) for free, shot to the top of the US iPhone App Store.
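Because the R1 weights are openly licensed, they can in principle be downloaded and run with standard open source tooling rather than accessed only through DeepSeek's app. The snippet below is a minimal sketch of what that looks like using the Hugging Face Transformers library; the repository ID, model size, and generation settings are illustrative assumptions, so check DeepSeek's official release for the exact model names and the (substantial) hardware requirements of the larger variants.

```python
# Minimal sketch: loading an open-weights DeepSeek R1 variant locally with
# Hugging Face Transformers. The repo ID below is an assumption for
# illustration; the full R1 model is far too large for consumer hardware,
# so a smaller distilled variant is shown.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Ask a question; a simulated reasoning model typically emits a visible
# chain-of-thought trace before its final answer.
messages = [{"role": "user", "content": "How many prime numbers are there below 30?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Fine-tuning follows the same pattern: because the weights sit in local files, they can be further trained or modified with standard tooling, which is exactly what the MIT license permits.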
Multiple AI-related Reddit threads have suddenly been plastered with DeepSeek-related posts, leading to so-far unfounded accusations that someone in China is astroturfing (pretending to be ordinary users while actually posting with an agenda) to artificially drum up support for the Chinese AI company.

Over the past weekend, social media has been overtaken by a sort of "sky is falling" mentality about AI, coupled with geopolitical angst about US economic rival China catching up with America. That mood may have inspired a measure of panic in big tech investors and led to the Nvidia stock sell-off, despite the fact that DeepSeek used Nvidia chips for training.

As tempting as it is to frame this as a geopolitical tech battle, the "US versus China" framing has been overblown, according to some experts. On LinkedIn, Meta Chief AI Scientist Yann LeCun, who frequently champions open-weights AI models and open source AI research, wrote, "To people who see the performance of DeepSeek and think: 'China is surpassing the US in AI.' You are reading this wrong. The correct reading is: 'Open source models are surpassing proprietary ones.'"

But is DeepSeek R1 any good?

From the start, DeepSeek has claimed that R1 can match OpenAI's o1 model in AI benchmarks, but benchmarks have historically been easy to game and do not necessarily tell you much about how the models perform in everyday scenarios.

Over the past week, we have experimented with both DeepSeek-V3 (roughly the counterpart to OpenAI's GPT-4o) and DeepSeek-R1, and from informal testing, both seem roughly equivalent to OpenAI's ChatGPT models, although that can vary dramatically based on how they are used and prompted. DeepSeek's AI assistant, which you can try at chat.deepseek.com, can even search the web like ChatGPT. We will likely evaluate R1 more formally in a future article.

Ultimately, a cheaply trained open-weights AI model that can match America's best commercial models is genuinely a threat to closed-source AI companies, but it should not be a surprise to anyone who has been watching the rapid rate of progress in AI. The history of computing is replete with examples of information technology getting cheaper and smaller, becoming a commodity, and eventually being absorbed as a component into larger products.

Many software components of modern operating systems (including built-in apps, features, codecs, and utilities) were once separate products that retailed for thousands of dollars when they were first invented. Microprocessors supplanted massive, expensive computer systems and eventually became embedded into everything. We suspect that AI models and software that process data with simulated reasoning, even hypothetical human-level AI or beyond (if it is ever achieved), will be no different. Tech companies come and go, the next new thing is created, and the cycle repeats itself.

Benj Edwards, Senior AI Reporter
Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.