OpenAI's Deep Research Answers To DeepSeek Impact; And Mine
[Caption: ChatGPT's new Deep Research tool. NurPhoto via Getty Images]

Discussions about DeepSeek's impact on Nvidia are everywhere. Yesterday, I heard an investor on CNBC's "Fast Money" program pontificate that DeepSeek and its disruptive technology mean that nobody needs an Nvidia H100 anymore, much less a Blackwell. I struggle to square that with Jensen Huang's claim that the inference task will require 100 times more compute power, which could overwhelm any potential reduction in training systems. The financial center of AI gravity is shifting towards inference. (Nvidia is a client of Cambrian-AI Research.)

Immediately, I started writing a blog. But then I wondered: how well could AI's new reasoning capabilities research this assertion? So, I presented the query to ChatGPT's new Deep Research tool, both to learn more about the controversial topic and to see just how well the new reasoning capabilities could do the job (and put me out of one!).

Here's my query: "I would like some help writing a research report about the controversy surrounding DeepSeek and its impact on Nvidia and other semiconductor providers. Specifically, I watched a Smart Money episode yesterday where the talking head surmised that 'nobody needs an H100' anymore, much less a Blackwell. But Jensen Huang claims the inference of reasoning models, the future of conversational AI, requires 100 times the compute power of a simple ChatGPT search. Can you outline the cases, the growth of inference vs. training, and perhaps provide some forecasts? Focus on high-level, with a few financial details as support. Focus on Nvidia, but include AMD and the cloud providers' ASICs, which cannot (yet) run reasoning models well. Don't focus on DeepSeek, but rather the general market disruption over a 1-3 year timeframe."

Let Her (Him?) Rip!

After OpenAI Deep Research read 42 sources and thought for six minutes, it produced a well-laid-out report covering 13 pages and 4,720 words, which I have posted on my website (Cambrian AI Research Papers, at Cambrian AI Research) to avoid using AI to write a piece for Forbes. It is frankly really good, and much more complete than anything this blogger could produce.

What about the cost? If you assume it took an 8-GPU cluster to answer the query (normal for ChatGPT), the experiment cost about $2 ($1.992) to run. A normal ChatGPT response is widely estimated to cost about $0.01, so the query I posed cost about 200 times more than a simple inference. Once again, Jensen is right.

DeepSeek's Impact On Nvidia

The CNBC talking head knows a lot more about investing (theoretically) than AI and industry trends. While some may refute DeepSeek's claims about the number and type of GPUs used to create V3 and the reasoning chatbot R1, it will lower the cost of training and of reasoning inference, and we should see every foundation model builder adopt it or something like it. TuanChe Limited, Microsoft Azure, and Perplexity have already added DeepSeek R1.

However, the six minutes ChatGPT required to create the report, and the outstanding content it produced, validate Jensen Huang's assertion and the value of reasoning models. Now it becomes a matter of how the math balances out: the reduced cost of training, multiplied across an increased number of trained models, offset by the increase in computation needed to produce thoughtful answers. I have no idea exactly how that plays out, but I'd bet my house that the number of compute cycles overall will increase by one or two orders of magnitude.
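To make that bet concrete, here is a minimal back-of-envelope sketch in Python. Every input is an assumption: the $0.01 and $1.992 figures come from the cost estimate above, Huang's 100x claim sets the inference multiplier, and the remaining numbers are invented purely for illustration. Treat it as a scenario generator, not a forecast:

```python
# Back-of-envelope sketch of the training-vs-inference compute balance.
# All inputs are illustrative assumptions, not measured figures.

# Per-query economics, from the Deep Research experiment above.
simple_query_cost = 0.01    # assumed cost of a normal ChatGPT response, USD
deep_research_cost = 1.992  # estimated cost of the 6-minute Deep Research run, USD
print(f"Reasoning-query premium: {deep_research_cost / simple_query_cost:.0f}x")

# Aggregate compute in arbitrary "compute units". Baseline scenario:
train_compute_per_model = 100.0  # training compute per foundation model (made up)
n_models = 10                    # number of models trained (made up)
queries_per_day = 1e9            # daily inference volume (made up)
compute_per_query = 1.0          # compute per simple query (made up)

baseline = n_models * train_compute_per_model + queries_per_day * compute_per_query

# DeepSeek-style scenario: training gets 10x cheaper, so 3x more models get
# trained, while reasoning inference needs ~100x compute per query (Huang's claim).
scenario = (
    (3 * n_models) * (train_compute_per_model / 10)
    + queries_per_day * (100 * compute_per_query)
)

print(f"Total compute change: {scenario / baseline:.0f}x")
# Inference dominates: total compute rises ~100x even though training got cheaper.
```

Under these toy numbers, cheaper training barely registers; the 100x-per-query inference term swamps everything else, which is exactly the "one or two orders of magnitude" intuition.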
[Caption: Nvidia CEO Jensen Huang delivers a keynote address at the Consumer Electronics Show (CES) in Las Vegas, Nevada, on January 6, 2025. AFP via Getty Images]

So, How Good Was ChatGPT?

In short, it was amazing. The bot produced a report that includes:

- Training vs. Inference: Diverging Compute Demands in Conversational AI
- The Shift to Inference: From One-Time Training to Everyday AI Services
- Nobody Needs an H100 Anymore? The Push for Cheaper Inference
- NVIDIA's Blackwell Generation: Upping the Ante for Training and Inference
- Cloud Providers' ASICs: Google and Amazon Bet on In-House Silicon
- Outlook: Inference Growth, Market Forecasts, and Financial Implications

Each section was insightful and objective, and I did not detect any errors. If I were younger, I'd be looking for another job. Again, check it out on my website.

In Summary, What Did We Learn?

I learned that my intuition and experienced viewpoints on this controversy are the same as ChatGPT's, for better or worse. Jevons paradox aside, using these models (inference) will drive significantly more compute demand, perhaps significantly more than any reduction in training demand, as AI inference will become (or already is) the main growth driver of compute. The financial center of AI gravity is shifting towards inference. The next 1-3 years will likely redefine market share in accelerated computing more than any time in the last decade.

I also learned that ChatGPT Deep Research can be an excellent research tool, the likes of which can help analysts learn more about topics and test hypotheses. Or it could replace us!