Google: Nonsense answers in AI search results were not hallucinations

The strange and often inaccurate answers that Google's AI feature in its search engine has shown in recent weeks were not hallucinations, but usually the result of a 'misinterpreted search query', according to Google. The company also says that some of the queried information simply cannot be found online at all.

Google is responding in a blog post to recent posts on social media about its new AI Overview function, which the company showed during the I/O developer conference. AI Overview is a feature in which an AI based on Gemini tries to answer queries at the top of the search results. In recent weeks, strange results emerged and quickly went viral: the AI advised depressed people to jump off a bridge, for example, and told users they should ideally stare straight into the sun for fifteen minutes.

In the blog post, Google says there are multiple reasons why the search engine gave those answers, but also that many of the screenshots circulating were, according to the company, fake. Google mentions a number of examples, including one in which the AI advised pregnant women to smoke, and says that “those results never happened”. The company does not explain how it can say that with such certainty.

Google does acknowledge that other nonsense answers appeared. According to the company, in most cases this is because of a so-called data void: there is too little information available on the internet for the AI to give a good answer. In many cases these are questions that make no sense to begin with, such as 'how many stones should you eat every day?' “Before these screenshots went viral, virtually no one asked Google this,” the company says. For that reason there are virtually no websites that give a serious answer to the question, so AI Overview has little good material to draw on.

Google also says that in 'many examples' the answers that surfaced were based on 'sarcastic or troll-like content on forums'. The company does acknowledge that in 'a limited number of cases' AI Overview misinterpreted language and displayed incorrect information, but it denies that AI Overview hallucinates.

Generative AI is known for producing answers that are not grounded in actual content, but merely form a plausible linguistic approximation of what an answer might look like. According to Google, AI Overview is not just a language model: its information comes only from the top results the search engine would normally return. “When AI Overview gets something wrong, it's usually for another reason, such as misinterpreting search queries or the nuances of internet language, or because it has little good information available.”
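To illustrate the distinction Google is drawing, here is a minimal sketch of a retrieval-grounded answerer, in which the model may only answer from retrieved search snippets and declines when none exist (the 'data void' case). This is an illustration of the general technique, not Google's actual AI Overview pipeline; all names and the toy index are hypothetical.

```python
# Illustrative sketch only: a toy retrieval-grounded answerer, NOT Google's
# actual AI Overview implementation. search_top_results and summarize are
# hypothetical stand-ins invented for this example.

from typing import List, Optional

def search_top_results(query: str) -> List[str]:
    # Stand-in for a search backend: returns text snippets from top-ranked
    # web results. An empty list models a "data void" for the query.
    toy_index = {
        "how tall is the eiffel tower": [
            "The Eiffel Tower is about 330 metres tall including antennas."
        ],
        # No entry for nonsense queries -> data void.
    }
    key = query.lower().strip().rstrip("?").strip()
    return toy_index.get(key, [])

def summarize(snippets: List[str]) -> str:
    # Stand-in for the language-model step; here it just joins the snippets.
    return " ".join(snippets)

def grounded_answer(query: str) -> Optional[str]:
    """Answer only from retrieved results; decline on a data void."""
    snippets = search_top_results(query)
    if not snippets:
        # A pure language model would generate a plausible-sounding answer
        # anyway; a grounded system can refuse instead.
        return None
    return summarize(snippets)

if __name__ == "__main__":
    print(grounded_answer("How tall is the Eiffel Tower?"))
    print(grounded_answer("How many stones should you eat every day?"))  # None
```

The design point is the early return on an empty retrieval set: a system constrained to its sources can say "no good information available" where an unconstrained generator would confabulate.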

