Technology

‘You Can’t Lick a Badger Twice’: Google’s AI Blunders Reveal Major Flaw

2025-04-23

Author: Wei Ling

A Playful Yet Misleading Experiment

If you're looking for a quirky break from your work, just head to Google and type any random phrase followed by the word "meaning." What you’ll find is an amusing AI Overview that not only claims your nonsense is an accepted saying but also provides a detailed explanation of its 'meaning.'

For instance, the faux phrase "a loose dog won't surf" is described as a whimsical way to express something that’s unlikely to happen. Meanwhile, "wired is as wired does" is portrayed as an idiom emphasizing that someone’s behavior is a direct reflection of their inherent nature, akin to how a computer’s functionality hinges on its wiring.

Confidence Without Credibility

While these definitions sound plausible, they are fundamentally misleading. The uncanny confidence of the AI creates the illusion that these made-up phrases are widely recognized and meaningful, rather than mere word combinations. This phenomenon highlights a crucial flaw in generative AI’s capabilities.

As its disclaimers note, Google's AI Overviews feature is still experimental. Although the technology has significant potential, it operates as a 'probability machine': at each step it predicts a likely next word rather than grasping meaning, which is why it treats nonsensical queries as if they were real sayings.
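To make that concrete, here is a minimal sketch of next-word prediction. The bigram table and vocabulary below are invented for illustration; real systems like Google's use large neural language models, not lookup tables, but the failure mode is analogous.

```python
import random

# A toy "probability machine": at every step, pick a plausible next
# chunk of text from a (hypothetical, hand-built) table of likely
# continuations. Nothing ever checks whether the phrase being
# "explained" actually exists.
NEXT_WORD = {
    "<start>":          ["the phrase", "this idiom"],
    "the phrase":       ["means", "suggests"],
    "this idiom":       ["means", "implies"],
    "means":            ["that something", "that a situation"],
    "suggests":         ["that something"],
    "implies":          ["that a situation"],
    "that something":   ["is unlikely", "is inevitable"],
    "that a situation": ["is unlikely", "is inevitable"],
    "is unlikely":      ["to happen."],
    "is inevitable":    ["once it begins."],
}

def generate(seed: str = "<start>") -> str:
    """Chain locally plausible continuations into a fluent 'definition'."""
    words, current = [], seed
    while current in NEXT_WORD:
        current = random.choice(NEXT_WORD[current])
        words.append(current)
    return " ".join(words)

if __name__ == "__main__":
    # Produces grammatical output such as
    # "the phrase means that something is unlikely to happen."
    # for ANY input phrase, real saying or not.
    print(generate())
```

The point of the sketch: fluency emerges purely from local plausibility, and correctness never enters the loop, which is exactly why a confident-sounding "definition" appears for a phrase no one has ever used.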

The Dangers of AI Assumptions

Ziang Xiao, a computer scientist, explains that while AI bases its predictions on extensive training data, coherence is no guarantee of correctness. "In many cases, the next coherent word does not lead us to the right answer," he states, underlining the inherent unpredictability of AI.

Another critical factor is AI's inclination to please users, often echoing back what it thinks they want to hear. When a user enters an absurd phrase like "you can't lick a badger twice," the AI treats it as a legitimate request and invents a plausible interpretation rather than questioning the premise.

AI's Reluctance to Admit Ignorance

Compounding the problem is AI's unwillingness to admit limitations; rather than confessing ignorance, it fabricates information. Google acknowledges that when users present nonsensical or misleading queries, its systems attempt to generate the most pertinent results from the limited content available online.

A Google spokesperson noted, "This is consistent with search overall, where AI Overviews strive to offer helpful context, even when the underlying inquiry is flawed."

The Unpredictable Nature of AI Responses

However, Google doesn't consistently produce an AI Overview for every errant search. Cognitive scientist Gary Marcus points to these inconsistencies, saying, "The erratic nature of generative AI reflects its dependence on specific training examples and makes it far from achieving artificial general intelligence (AGI)."

Ultimately, while these AI missteps make for entertaining distractions, it's crucial to recognize that the same technology behind these playful blunders also powers the AI responses you rely on for more serious queries. Stay skeptical and take every output with a grain of salt.