DeepSeek claims 545% cost-profit ratio, challenging AI industry economics
www.computerworld.com
Chinese AI startup DeepSeek has claimed its V3 and R1 models achieve a theoretical daily cost-profit ratio of 545%, highlighting cost implications for enterprises adopting similar models from other cloud providers.

In a GitHub post published over the weekend, DeepSeek estimated its daily inference cost for the V3 and R1 models at $87,072, assuming a $2 per hour rental rate for Nvidia's H800 chips. Theoretical daily revenue was pegged at $562,027, implying a 545% cost-profit ratio and over $200 million in potential annual revenue. However, the company noted that actual earnings are significantly lower due to free web and app access, lower V3 model pricing, and discounted developer rates during off-peak hours.

This is the first time the Hangzhou-based company has disclosed profit margins from inference tasks, where trained AI models generate responses or perform functions such as chatbot interactions.

Authenticity and enterprise impact

Analysts say DeepSeek's focus on scalability and efficiency is notable, but caution that it is too early to view its claims as an industry benchmark applicable to companies in or outside China.

"Also, in theory versus practice, there is a significant difference, as cost metrics are also highly subjective to geography, resources, and revenue generation," said Neil Shah, partner and co-founder at Counterpoint Research. "However, we don't know the purpose of these public claims, but they will definitely put pressure on Western companies to at least reveal and/or internally optimize their costs."

If accurate, DeepSeek's profitability despite deep discounts would signal a sustainable low-cost AI model, potentially pressuring rivals to cut prices while prompting enterprises to reassess vendor choices and long-term AI strategies.

"There are no US-based AI firms of scale that are profitable right now," said Hyoun Park, CEO and chief analyst at Amalgam Insights. "OpenAI is not even close, and both Microsoft and Google are spending billions of dollars to enter the market."

Park noted that while DeepSeek's figures are theoretical and difficult to verify, one thing is clear: DeepSeek has massively reduced the cost of inference.

DeepSeek's AI models, when hosted on established cloud platforms such as AWS and Microsoft Azure, can offer enterprises a balance of performance, governance, and affordability, said Abhiram Srivasta, senior analyst at Everest Group. These models are reportedly more cost-efficient than those from leading US AI firms, requiring significantly less compute power, which translates to lower operational costs.

Threat to US companies

DeepSeek's claims of cost efficiency and its open-source approach could intensify competition in the AI market, particularly for US firms investing heavily in proprietary models. The company currently offers its models as open source, allowing US-based enterprises to audit and modify them. As long as the deployment does not rely on Chinese-hosted infrastructure, there may not be any significant barriers to global adoption.

"Given that many current models are good enough for established generative AI use cases, DeepSeek is absolutely a threat to US-based AI model builders," Park said. "AI developers focusing on theoretical artificial general intelligence are likely to be quickly surpassed by those making more practical agentic models that can get work done and provide interaction visibility."
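For readers who want to check the headline figures, the short sketch below reproduces the arithmetic implied by DeepSeek's GitHub post: the $2 per hour H800 rental rate, the $87,072 daily cost, and the $562,027 theoretical daily revenue are the numbers reported above; the 365-day annualization and the framing of "cost-profit ratio" as profit divided by cost are assumptions made here for illustration, not details confirmed by DeepSeek.

```python
# Back-of-the-envelope check of DeepSeek's reported figures.
# Inputs are taken from the article; the annualization and ratio
# definition are assumptions for illustration only.

H800_RENTAL_PER_HOUR = 2.00           # USD per GPU-hour, rate cited by DeepSeek
DAILY_INFERENCE_COST = 87_072         # USD per day for V3/R1 inference
THEORETICAL_DAILY_REVENUE = 562_027   # USD per day, theoretical

# Implied GPU-hours per day at the quoted rental rate.
gpu_hours_per_day = DAILY_INFERENCE_COST / H800_RENTAL_PER_HOUR  # ~43,536

# Cost-profit ratio: profit expressed as a share of cost (assumed definition).
daily_profit = THEORETICAL_DAILY_REVENUE - DAILY_INFERENCE_COST
cost_profit_ratio = daily_profit / DAILY_INFERENCE_COST          # ~5.45, i.e. ~545%

# Annualized theoretical revenue, matching the "over $200 million" figure.
annual_revenue = THEORETICAL_DAILY_REVENUE * 365                 # ~$205 million

print(f"Implied GPU-hours per day: {gpu_hours_per_day:,.0f}")
print(f"Cost-profit ratio: {cost_profit_ratio:.1%}")
print(f"Theoretical annual revenue: ${annual_revenue:,.0f}")
```

Under these assumptions the numbers are internally consistent: profit of roughly $475,000 per day against $87,072 of cost works out to about 545%, and the theoretical revenue annualizes to roughly $205 million.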