Former Intel CEO Pat Gelsinger is already using DeepSeek instead of OpenAI at his startup, Gloo
DeepSeek's new open source AI reasoning model, R1, sparked a sell-off of Nvidia's stock and sent the company's consumer app soaring to the top of the app stores.

Last month, DeepSeek said it trained a model using a data center of some 2,000 of Nvidia's H800 GPUs in about two months, at a cost of around $5.5 million. Last week, it published a paper showing that its latest model's performance matched that of the most advanced reasoning models in the world. Those rival models are being trained in data centers that are spending billions on Nvidia's faster, far pricier AI chips.

The reaction across the tech industry to DeepSeek's high-performance, lower-cost model has been wild.

Pat Gelsinger, for instance, took to X with glee, posting, "Thank you DeepSeek."

Gelsinger is, of course, the recently departed CEO of Intel, a hardware engineer, and now chairman of his own IPO-bound startup, Gloo, a messaging and engagement platform for churches. He left Intel in December after four years and an attempt to chase Nvidia with Intel's alternative AI accelerator, Gaudi 3.

Gelsinger wrote that DeepSeek should remind the tech industry of its three most important lessons: lower costs mean wider adoption; ingenuity flourishes under constraints; and open wins. DeepSeek "will help reset the increasingly closed world of foundational AI model work," he wrote. OpenAI and Anthropic both keep their models closed source.

Gelsinger told TechCrunch that R1 is so impressive that Gloo has already decided not to adopt, and pay for, OpenAI. Gloo is building an AI service called Kallm, which will offer a chatbot and other services.

"My Gloo engineers are running R1 today," he said. "They could've run o1 (well, they can only access o1, through the APIs)."

Instead, in two weeks, Gloo expects to have rebuilt Kallm from scratch "with our own foundational model that's all open source," he said. "That's exciting."

He said he thinks DeepSeek will make AI so affordable that AI won't just be everywhere; good AI will be everywhere. "I want better AI in my Oura Ring. I want better AI in my hearing aid. I want more AI in my phone. I want better AI in my embedded devices, like the voice recognition in my EV," he said.

Gelsinger's happy reaction was perhaps at odds with those who were less thrilled that foundational reasoning models now have a higher-performing and far more affordable challenger. AI has been growing more expensive, not less.

Others reacted by implying that DeepSeek must have fudged its numbers somehow and that training must have been more costly than claimed. Some thought it couldn't admit to using higher-end chips because of U.S. restrictions on AI chip exports to China. Others poked holes in its performance, finding spots where other models did better. Still others believe that OpenAI's next model, o3, will so outpace R1 when it is released that the status quo will be restored.

Gelsinger shrugs all of that off. "You will never have full transparency, given most of the work was done in China," he said. "But still, all evidence is that it's 10-50x cheaper in their training than o1."

DeepSeek proves that AI can be moved forward "by engineering creativity, not throwing more hardware power and compute resources at the problem. So that's thrilling," he said.

As for this being a Chinese developer, with everything that implies, like concerns over privacy and censorship, Gelsinger metaphorically shakes his head.

"Having the Chinese remind us of the power of open ecosystems is maybe a touch embarrassing for our community, for the Western world," he said.
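For readers curious what the distinction Gelsinger draws actually looks like, here is a minimal, illustrative sketch, not anything from Gloo: because R1's weights are open, it can be served from any infrastructure that exposes an OpenAI-compatible endpoint (a self-hosted vLLM or Ollama server, for example), whereas o1 is reachable only through OpenAI's hosted API. The base URL, API key, and model name below are placeholders, not a real deployment.

```python
# Illustrative sketch only: calling an open-weights R1 model through an
# OpenAI-compatible endpoint (e.g., a local vLLM or Ollama server).
# The base URL, API key, and model name are placeholders.
from openai import OpenAI

# Point the standard OpenAI client at the self-hosted endpoint instead of
# api.openai.com; a local server typically ignores or doesn't require a key.
r1_client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = r1_client.chat.completions.create(
    model="deepseek-r1",  # placeholder; depends on how the server registers the model
    messages=[
        {"role": "user", "content": "In one sentence, why do open-weights models lower serving costs?"}
    ],
)
print(response.choices[0].message.content)
```

Swapping the base URL back to OpenAI's hosted service (and supplying a paid API key) is the only way to reach a closed model like o1, which is the trade-off Gelsinger is pointing at.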