"You Can’t Lick a Badger Twice": Google's AI Is Making Up Explanations for Nonexistent Folksy Sayings
Have you heard of the idiom "You Can't Lick a Badger Twice"? We haven't, either, because it doesn't exist — but Google's AI seemingly has. As netizens discovered this week, adding the word "meaning" to nonexistent folksy sayings causes the AI to cook up invented explanations for them.

"The idiom 'you can't lick a badger twice' means you can't trick or deceive someone a second time after they've been tricked once," Google's AI Overviews feature happily suggests. "It's a warning that if someone has already been deceived, they are unlikely to fall for the same trick again."

Author Meaghan Wilson-Anastasios, who first noticed the bizarre bug in a Threads post over the weekend, found that when she asked for the "meaning" of the phrase "peanut butter platform heels," the AI feature suggested it was a "reference to a scientific experiment" in which "peanut butter was used to demonstrate the creation of diamonds under high pressure."

There are countless other examples. We found, for instance, that Google's AI also claimed that the made-up expression "the bicycle eats first" is a "humorous idiom" and a "playful way of saying that one should prioritize their nutrition, particularly carbohydrates, to support their cycling efforts."

Even this author's name wasn't safe. Asked to explain the meaningless phrase "if you don't love me at my Victor, you don't deserve me at my Tangermann," the AI dutifully reported that it means "if someone can't appreciate or love you when you're at your lowest point (Victor), then they're not worthy of the positive qualities you bring to the relationship (Tangermann)."

The bizarre replies are the perfect distillation of one of AI's biggest flaws: rampant hallucinations. Large language model-based AIs have a long and troubled history of rattling off made-up facts and even gaslighting users into thinking they were wrong all along.

And despite AI companies' extensive attempts to squash the bug, their models continue to hallucinate. Even OpenAI's latest reasoning models, dubbed o3 and o4-mini, tend to hallucinate even more than their predecessors, suggesting the company is actually headed in the wrong direction.

Google's AI Overviews feature, which the company rolled out in May of last year, still has a strong tendency to hallucinate facts as well, making it far more of an irritating nuisance than a helpful research assistant for users.

When it launched, it even told users that glue belongs on pizza to ensure that toppings don't slide off. Its other outrageous gaffes have included claiming that baby elephants are small enough to sit in the palm of a human hand.

Following public outrage over the feature's baffling — and often comedic — inaccuracy, Google admitted in a statement last year that "some odd, inaccurate or unhelpful AI Overviews certainly did show up."

To tackle the issue, Google kicked off a massive game of cat and mouse, limiting some responses when it detected "nonsensical queries that shouldn't show an AI Overview."

But with the feature still inventing explanations for fictional idioms almost a year after launch, Google clearly has a lot of work to do.

Even worse, the feature is hurting websites by limiting click-through rates to traditional organic listings, as Search Engine Land reported this week.
In other words, on top of spewing false information, Google's AI Overviews is undermining the business model of countless websites that host trustworthy info.

Nonetheless, Google is doubling down, announcing last month that it would be "expanding" AI Overviews in the US to "help with harder questions, starting with coding, advanced math and multimodal queries." Earlier this year, Google announced that AI Overviews is even being entrusted with medical advice.

The company claims that "power users" want "AI responses for even more of their searches." (For the time being, there are ways to turn off the feature.)

At least the AI model appears to be aware of its own limitations.

"The saying 'you can lead an AI to answer but you can't make it think' highlights the key difference between AI's ability to provide information and its lack of true understanding or independent thought," Google's AI Overviews told one Bluesky user.

More on AI Overviews: Google Says Its Error-Ridden "AI Overviews" Will Now Give Health Advice