For the first time ever, I wish Google would act more like Amazon
www.computerworld.com
Fair warning: This isn't your average article about what's happening with all the newfangled AI hullabaloo in this weird and wild world of ours.

Nope, there'll be no oohing and ahhing or talk about how systems like Gemini and ChatGPT and their brethren are, like, totally gonna revolutionize the world and change life as we know it.

Instead, I want to look at the state of these generative AI systems through as practical and realistic a lens as possible, focusing purely on how they work right now and what they're able to accomplish.

And with that in mind, my friend, there's no way around it: These things seriously suck.

Sorry for the bluntness, but for Goog's sake, someone's gotta say it. For all their genuinely impressive technological feats and all the interesting ways they're able to help with mundane work tasks, Google's Gemini and other such generative AI systems are doing us all a major disservice in one key area, and everyone seems content to look the other way and pretend it isn't a problem.

That's why I was so pleasantly surprised to see that one tech giant seemingly isn't taking the bait and is instead lagging behind and taking its time to get this right instead of rushing it out half-baked, like everyone else.

It's the antithesis of the strategy we're seeing play out from Google and virtually every other tech player right now. And my goodness, is it ever a refreshing contrast.

[Get level-headed insight in your inbox with my Android Intelligence newsletter. Three new things to know and try each Friday!]

The Google Gemini Bizarro World

I won't keep you waiting: The company that's getting it right, at least in terms of its process and philosophy, is none other than Amazon.

I'll be the first to admit: I'm typically not a huge fan of Amazon or its approach.
But within this specific area, it really is creating a model for how tech companies should be thinking about these generative AI systems.

My revelation comes via a locked-down article that went mostly unnoticed at The Financial Times last week. The report's all about how Amazon is scrambling to upgrade its Alexa virtual assistant with generative AI and relaunch it as a powerful agent for offering up complex answers and completing all kinds of online tasks.

More of the same, right? Sure sounds that way, but hang on: There's a twist.

Allow me to quote a pertinent passage from behind the paywall for ya:

Rohit Prasad, who leads the artificial general intelligence (AGI) team at Amazon, told the Financial Times the voice assistant still needed to surmount several technical hurdles before the rollout. This includes solving the problem of hallucinations, or fabricated answers, its response speed (or latency), and reliability. "Hallucinations have to be close to zero," said Prasad. "It's still an open problem in the industry, but we are working extremely hard on it."

(Insert exaggerated record-scratch sound effect here.)

Wait, what? Did we read that right?!

Let's look to another passage to confirm:

One former senior member of the Alexa team said while LLMs were very sophisticated, they came with risks, such as producing answers that were completely invented some of the time. At the scale that Amazon operates, that could happen large numbers of times per day, they said, damaging its brand and reputation.

Well, tickle me tootsies and call me Tito. Someone actually gives a damn.

If the contrast here still isn't apparent, let me spell it out: These large-language-model systems, the type of technology under the hood of Gemini, ChatGPT, and pretty much every other generative AI service we've seen show up over the past year or two, don't really know anything, in any human-like sense.
They work purely by analyzing massive amounts of data, observing patterns within that data, and then using sophisticated statistics to predict what word is likely to come next in any scenario, relying on all the info they've ingested as a guide.

Or, put into layman's terms: They have no idea what they're saying or if it's right. They're just coughing up characters based on patterns and probability.

And that gets us to the core problem with these systems and why, as I put it so elegantly a moment ago, they suck.

As I mused whilst explaining why Gemini is, in many ways, the new Google+ recently:

The reality is that large-language models like Gemini and ChatGPT are wildly impressive at a very small set of specific, limited tasks. They work wonders when it comes to unambiguous data processing, text summarizing, and other low-level, closely defined and clearly objective chores. That's great! They're an incredible new asset for those sorts of purposes.

But everyone in the tech industry seems to be clamoring to brush aside an extremely real asterisk to that, and that's the fact that Gemini, ChatGPT, and other such systems simply don't belong everywhere. They aren't at all reliable as creative tools or tools intended to parse information and provide specific, factual answers. And we, as actual human users of the services associated with this stuff, don't need this type of technology everywhere and might even be actively harmed by having it forced into so many places where it doesn't genuinely belong.

That, m'dear, is a pretty pressing problem.

Allow me to borrow a quote collected by my Computerworld colleague Lucas Mearian in a thoroughly reported analysis of how, exactly, these large-language models work:

"Hallucinations happen because LLMs, in their most vanilla form, don't have an internal state representation of the world," said Jonathan Siddharth, CEO of Turing, a Palo Alto, California, company that uses AI to find, hire, and onboard software engineers remotely. "There's no concept of fact."
"They're predicting the next word based on what they've seen so far. It's a statistical estimate."

And there we have it.

That's why Gemini, ChatGPT, and other such systems so frequently serve up inaccurate info and present it as fact, something that's endlessly amusing to see examples of, sure, but that's also an extremely serious issue. What's more, it's only growing more and more prominent as these systems show up everywhere and increasingly overshadow traditional search methods within Google and beyond.

And that brings us back to Amazon's seemingly accidental accomplishment.

Amazon and Google: A tale of two AI journeys

What's especially interesting about the slow-moving state of Amazon's Alexa AI rollout is how it's being presented as a negative by most market-watchers.

Back in that same Financial Times article I quoted a moment ago, the conclusion is unambiguous:

In June, Mihail Eric, a former machine learning scientist at Alexa and founding member of its conversational modelling team, said publicly that Amazon had dropped the ball on becoming the unequivocal market leader in conversational AI with Alexa.

But, ironically, that's exactly where I see Amazon doing something admirable and creating that striking contrast between its efforts and those of Google and others in the industry.

The reality is that all these systems share those same foundational flaws. Remember: By the very nature of the technology, generative-AI-provided answers are woefully inconsistent and unreliable.

And yet, Google's been going into overdrive to get Gemini into every possible place and get us all in the habit of relying on it for almost every imaginable purpose, including those where it simply isn't reliable. (Remember my analogy from a minute ago? Yuuuuuup.)

In doing so, it's chasing short-term market gains at the cost of long-term trust. All other variables aside, being wrong or misleading with basic information 20% of the time, or, heck, even just 10% of the time, is a pretty substantial problem.
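To see just how fact-free that "statistical estimate" really is, here's a deliberately tiny sketch: a toy bigram model over a made-up three-sentence corpus. (Real LLMs are vastly more sophisticated, but the underlying move is the same: pick the statistically likely next token, not the true one.)

```python
from collections import Counter, defaultdict

# A made-up toy corpus -- purely illustrative, not real training data.
corpus = (
    "the capital of france is paris . "
    "the capital of france is paris . "
    "the capital of idaho is boise ."
).split()

# Count which word follows which (a "bigram" model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically likeliest next word -- truth never enters into it."""
    return following[word].most_common(1)[0][0]

# "paris" followed "is" twice and "boise" only once, so "paris" wins every
# time -- even when the question was about Idaho. A hallucination in miniature.
print(predict_next("is"))  # -> paris
```

The toy model isn't lying; it has no notion of truth to lie about. It's simply emitting the statistically favored continuation, which is exactly the mechanism Siddharth describes.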
I've said it before, and I'll say it again: If something is inaccurate or unreliable 10% of the time, it's useful precisely 0% of the time.

And to be clear, the stakes here couldn't be higher. In terms of their answer-offering and info-providing capabilities, Gemini and other such systems are being framed, and certainly perceived, as magical answer machines. Most people aren't treating 'em with a hefty degree of skepticism and taking the time to ask all the right questions, verify answers, and so on. They're asking questions, seeing or hearing answers, and then assuming they're right.

And by golly, are they getting an awful lot of confidently stated inaccuracies as a result, something that, as we established a moment ago, is likely inevitable with this type of technology in its current state.

On some level, Google is clearly aware of this. The company had been developing the technology behind Gemini for years before rushing it out into the world following the success and attention around ChatGPT's initial rollout, but, as had been said in numerous venues over time, it hadn't previously felt the technology was mature enough to be ready for public use.

So what changed? Not the nature of the technology. Nope; by all counts, it was just the competitive pressure that forced Google to say "screw it, it's good enough" and go all-in with systems that weren't, and still aren't, ready for primetime, at least with all of their promoted purposes.

And that, my fellow accuracy-obsessed armadillo, is where Amazon is getting it right. Rather than just rushing to swap Alexa out for some half-baked replacement, the company is actually waiting until it feels like it's got the new system ready, with reliability, yes, but also with branding and a consistent-seeming user experience. (Anyone who's been trying to navigate the comically complex web of Gemini and Assistant on Android and beyond can surely relate!)

Whether Amazon will keep up this pattern or eventually relent and go the "good enough" route remains to be seen.
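And if that 10%-means-0% line sounds hyperbolic, a little back-of-the-envelope math shows why it isn't. Assuming, purely for illustration, that each answer is independently right 90% of the time, the odds of getting through a session unscathed collapse fast:

```python
# Hypothetical numbers: each answer independently correct 90% of the time.
accuracy = 0.90

for n in (1, 5, 10, 20):
    all_correct = accuracy ** n  # chance that every answer in the session is right
    print(f"{n:>2} questions: {all_correct:.0%} chance of zero wrong answers")

# -> 90%, 59%, 35%, and 12%, respectively: ask 20 questions, and you've
#    almost certainly swallowed at least one confident inaccuracy.
```

The independence assumption is a simplification, of course, but the compounding effect is the point: a system that's mostly right, used constantly, still misleads you routinely.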
Sooner or later, investor pressure may force it to follow Google's path and put its next-gen answer agent out there, even if it in all likelihood still isn't ready by any reasonable standard.

For now, though, man: I can't help but applaud the fact that the company's taking its time instead of prematurely fumbling to the finish line like everyone else. And I can't help but wish Google would have taken that same path, too, rather than doing its usual Google Thang and forcing some undercooked new concept into every last nook and cranny, no matter the consequences.

Maybe, hopefully, this'll all settle out in some sensible way and turn into a positive in the future. For the moment, though, Google's strategy sure seems like more of a minus than a plus for us, as users of its most important products. And especially in this arena, it sure seems like getting it right should mean more than getting it out into the world quickly, flaws and all, and at any cost.

Get plain-English perspective on the news that matters with my free Android Intelligence newsletter: three things to know and try in your inbox each Friday.