Sam Altman Says OpenAI Has Run Out of GPUs
futurism.com
OpenAI CEO Sam Altman has unveiled the company's latest large language model, GPT-4.5. The AI model isn't just powerful; it's extremely expensive for users. OpenAI is charging a whopping $75 per million tokens, the equivalent of around 750,000 words of input, a staggering 30 times as much as OpenAI's preceding GPT-4o model, as TechCrunch reports.

There's a good reason for that: the new model is so resource intensive that Altman claimed in a recent tweet the company has run "out of GPUs," the graphics processing units conventionally used to power AI models, forcing OpenAI to stagger its rollout.

"We will add tens of thousands of GPUs next week and roll it out to the plus tier then," he promised. "This isn't how we want to operate, but it's hard to perfectly predict growth surges that lead to GPU shortages."

It's a notable admission, highlighting just how hardware-reliant the technology is. AI industry leaders are racing to build out data centers to keep their increasingly unwieldy AI models running, and they are ready to put up hundreds of billions of dollars for the cause.

Companies are practically tripping over themselves to secure hardware, especially AI cards from leading chipmaker NVIDIA. The Jensen Huang-led firm announced on Wednesday that it had sold $11 billion of its next-gen AI chips, dubbed Blackwell, with CFO Colette Kress describing it as the "fastest product ramp in our company's history."

The payoff from all of this investment, however, has remained somewhat muted, as AI companies are still struggling to meaningfully address some of the tech's glaring shortcomings, from widespread "hallucinations" to considerable cybersecurity concerns.

Despite the sky-high price the company is charging for GPT-4.5, Altman attempted to manage expectations, tweeting in his announcement that "this isn't a reasoning model and won't crush benchmarks."

"It's a different kind of intelligence and there's a magic to it I haven't felt before," he added, without elaborating on what he meant. "Really excited for people to try it!"

"What sets the model apart is its ability to engage in warm, intuitive, naturally flowing conversations, and we think it has a stronger understanding of what users mean when they ask for something," OpenAI VP of research Mia Glaese told the New York Times.

Altman has previously complained that shortages in computing power have forced OpenAI to delay shipping new products.

Ironically, GPT-4.5 was designed to lower the amount of compute required. In its "system card" detailing the model's capabilities, OpenAI revealed that "GPT-4.5 is not a frontier model, but it is OpenAI's largest LLM, improving on GPT-4's computational efficiency by more than 10x."

That's despite Altman describing the model as being "giant" and "expensive."

According to the document, the model's "performance is below that of o1, o3-mini, and deep research on most preparedness evaluations."

Interestingly, as The Verge points out, a newer version of the document no longer includes that last quote, suggesting the company is still trying to figure out how to sell its underwhelming new AI.

OpenAI is still looking to follow up on GPT-4.5 with GPT-5, which Altman has described as a "system that integrates a lot of our technology."

But whether it'll live up to expectations, let alone bring the company closer to its purported goal of realizing what it refers to as artificial general intelligence, remains to be seen.