Nvidia CEO claims reasoning models will boost GPU demand
www.computerweekly.com
By Cliff Saran, Managing Editor
Published: 27 Feb 2025 15:32

The availability of the DeepSeek-R1 model resulted in a big drop in Nvidia's share price, but CEO Jensen Huang believes this is just a blip.

Nvidia has continued its dominance of artificial intelligence (AI) datacentres, with its latest quarterly results showing revenue growth of 16% quarter on quarter, a 93% increase on the same period last year. The company's datacentre business reported quarterly revenue of $35.6bn, and revenue of $115bn for the full year, a 142% increase on last year.

In his prepared remarks, Nvidia CEO and founder Jensen Huang said: "Demand for Blackwell is amazing as reasoning AI adds another scaling law – increasing compute for training makes models smarter and increasing compute for long thinking makes the answer smarter.

"We've successfully ramped up the massive-scale production of Blackwell AI supercomputers, achieving billions of dollars in sales in its first quarter. AI is advancing at light speed as agentic AI and physical AI set the stage for the next wave of AI to revolutionise the largest industries."

During the earnings call, financial analysts questioned Nvidia over DeepSeek, which requires less powerful graphics processing units (GPUs), and over the fact that cloud service providers (CSPs) such as Microsoft are designing their own custom chips optimised for AI workloads.

According to a transcript of the earnings call posted on Seeking Alpha, CSPs account for about half of Nvidia's business, but there is also growing demand from enterprise customers. "We see the growth of enterprise going forward," said Huang, who believes the enterprise segment represents a larger opportunity to sell Nvidia GPUs in the long term.

Huang used the earnings call to discuss why he thinks new AI models will drive up demand, even as AI models become more computationally efficient. "The more the model thinks, the smarter the answer," he said.
"Models like OpenAI, Grok-3 and DeepSeek-R1 are reasoning models that apply inference-time scaling. Reasoning models can consume 100 times more compute. Future reasoning models can consume much more compute."

When asked about the risk that CSPs were developing application-specific integrated circuits (ASICs) instead of using GPUs, Huang responded by talking about the complexity of the technology stack that sits on top of the chips, implying that this would be a challenge if custom chips were deployed instead of standard GPUs. "The software stack is incredibly hard. Building an ASIC is no different to what we do – we build a new architecture," he said.

According to Huang, the technology ecosystem that sits on top of this Nvidia architecture is 10 times more complex today than it was two years ago. "That's fairly obvious," he said, "because the amount of software that the world is building on top of architecture is growing exponentially and AI is advancing very quickly. So bringing that whole ecosystem [together] on top of multiple chips is hard."

Discussing the Nvidia results, Forrester senior analyst Alvin Nguyen said: "Having yet another record performance from Nvidia seems commonplace despite the enormity of the feat. The record earnings represent the continued demand for Nvidia AI products.
"The emphasis on reasoning models driving more, not less, computation is a good verbal counter to the worries about DeepSeek impacting their demand."

However, in Nguyen's opinion, Huang's responses to questions on custom chips that offer an alternative to Nvidia's GPUs were dismissive.

"Their response to the question about custom chips from Amazon, Microsoft and Google threatening their business was dismissive and ignores the need for these companies to have options outside of Nvidia and to have semiconductors tailored specifically to their AI training and inferencing needs," he said.