WWW.FASTCOMPANY.COM
AI vaporware: 7 products that didn't materialize in 2024
Whether to raise money, placate shareholders, or generate positive press, AI's biggest companies have a habit of announcing advancements that are nowhere near ready to ship. The industry term for this is vaporware: products that arrive much later than initially anticipated, or in some cases not at all. The AI industry has puffed out plenty of vaporware over the last year, making it tough to distinguish hype from reality.

As we head into 2025, let's look at the biggest AI products and promises that haven't quite panned out yet:

Alexa's generative AI overhaul

Back in September 2023, Amazon announced a smarter and more conversational Alexa powered by large language models. A future version of the voice assistant, Amazon claimed, would not only allow for more naturally flowing conversations, but would understand nonverbal cues such as body language and eye contact using the cameras built into certain Echo devices. Amazon also suggested that you'd have to pay for this optional Alexa overhaul once it arrived.

More than a year later, the future of Alexa is in limbo. A report by Bloomberg in October said that beta testers were unhappy with the new version, which gave stiff, long-winded responses and failed to work with existing smart home integrations. Reuters also reported on problems with Amazon's in-house language models, prompting the company to seek help from Anthropic's Claude. Having scrapped plans to debut the new Alexa in October, Amazon is now reportedly targeting a 2025 launch.

OpenAI's GPT-5

In fairness, OpenAI never officially said that it would release GPT-5 in 2024, but CEO Sam Altman confirmed in late 2023 that the company had started working on it.
In March, Business Insider cited unnamed sources in claiming that the new large language model would arrive around midyear, quoting an unnamed CEO who called it "materially better" after reportedly seeing a demo. But by midyear, Altman was already tempering expectations, saying that "we still have a lot of work to do on GPT-5," and in October he told an audience on Reddit not to expect the new model this year. The company instead released o1, a model it claims can process queries more thoroughly but that is also much slower and more expensive.

Claude 3.5 Opus

OpenAI isn't alone in pushing back new large language models. As recently as October, Anthropic's model documentation page was promising a 2024 launch for Claude 3.5 Opus, a new version of its largest language model. Now, that same page doesn't mention Opus at all.

Dario Amodei, Anthropic's cofounder and CEO, told Lex Fridman last month that the plan is still to have a Claude 3.5 Opus, but he didn't provide any timing. And while there's some talk of Anthropic simply using 3.5 Opus to feed synthetic data to its cheaper models, the delay could also be another sign of diminishing returns for cutting-edge models.

Apple Intelligence's Personal Context

Look back at the first press release for Apple Intelligence from June, and you'll notice a big focus on personal context. The phrase appears three times in the announcement's first two paragraphs, hinting at a version of Siri that can dig up information from emails, texts, notes, and more. "Our unique approach combines generative AI with a user's personal context to deliver truly helpful intelligence," CEO Tim Cook proclaimed at the time.

Although Apple Intelligence arrived in October, the personal context is still missing. Apple has opted to stagger its AI feature releases over the next year, and Siri's contextual powers didn't make the cut for 2024.
It's still unclear exactly when those capabilities will arrive.

Multi-step reasoning in Google Search

During its I/O conference in May, Google said it would soon bring multi-step reasoning capabilities to Search's AI Overviews, allowing users to ask complex questions with nuances and caveats. For instance, you could ask "Find the best yoga or pilates studios in Boston and show me details on their intro offers, and walking time from Beacon Hill," and Google's Gemini AI would pull all of that info together into a series of info boxes.

Google now says those capabilities will have to wait for the Gemini 2.0 version of AI Overviews, which just entered limited testing. Supposedly you'll start seeing multi-step reasoning in search results early next year.

Google's Project Astra

Google's biggest AI-related I/O reveal was Project Astra, a prototype app that can identify what you're looking at in real time, answer back-and-forth questions about it, and remember key points of the conversation for up to 10 minutes. In other words, it's supposed to be a usable version of last year's Gemini AI demo video that turned out to be staged.

Only you still can't use it today. Although Google originally planned to launch something like Astra this year, it's pushed the timeframe into 2025. And while Google is working with Samsung on an augmented reality headset and has given on-rails demos to the press, there's no price or release date on that product either.

Samsung's and LG's AI home robots

Last January's CES trade show was a great venue for demonstrating how desperate electronics vendors are to position themselves as AI leaders. Cases in point: Samsung and LG, which both showed off competing smart home robots with AI capabilities. A teaser video for Samsung's Ballie, for instance, showed the robot alerting its owner via text message that her dog was making a mess, then responding to instructions to give the dog a snack and put on its favorite video.
LG promised similar capabilities for its "Zero Labor Home" Smart Home AI Agent, touting its ability to "understand context and intentions" as well as "actively communicate with users."

Like so much other CES vaporware, neither of these robots has materialized as an actual product you can buy. But we shouldn't be surprised; Samsung has been talking about some version of Ballie, without shipping it, for nearly five years now.