Ambient misinformation
Even when giving you the "right" answer, LLM output can misinform along the way
One place where LLM output could serve a legitimate role is where questions are fuzzy or involve a couple of research steps. These are cases where getting to the right answer involves keying off of multiple embedded queries.
For instance, I might be looking for the answer to “In what other shows was the actor on LOST who plays the character who sells things?” That’s actually a series of things that need to be solved:
in what other shows was (the actor on LOST who plays (the character that sells things on LOST))
It’s got dependencies.
The first question is: who is the character on LOST who sells things?
The second: who plays him?
And the third: what other shows might I know him from?
These need to be solved in sequence. If I type a question like “where do I know the actor that plays the guy on lost that sells things from”, I am going to get nothing particularly useful. Google doesn’t just use keywords to return results anymore, but it’s still largely got that engine underneath it.
Now, if you know the character’s name is Sawyer, you instantly get a good search result: the actor is Josh Holloway, and he was in Colony and Mission Impossible: Ghost Protocol. But that first step is difficult.
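To make the sequencing concrete, here is a toy sketch of that chain. The resolve() helper and the canned answer table are stand-ins, not a real search client; the only point is that each step’s question can’t even be written until the previous step’s answer exists.

```python
# Toy illustration of the dependency chain above. The "answers" are just the
# facts from this example, hardcoded so the sequencing is visible; a real
# version would hand each sub-question off to a search engine or a person.

CANNED_ANSWERS = {
    "Which character on LOST sells things?": "Sawyer",
    "Which actor plays Sawyer on LOST?": "Josh Holloway",
    "What other shows has Josh Holloway been in?":
        "Colony, Mission Impossible: Ghost Protocol",
}

def resolve(question: str) -> str:
    # Stand-in for a real lookup; here it just reads the canned table.
    return CANNED_ANSWERS[question]

def answer_dependent_question() -> str:
    character = resolve("Which character on LOST sells things?")   # step 1: no dependencies
    actor = resolve(f"Which actor plays {character} on LOST?")     # step 2: needs step 1
    return resolve(f"What other shows has {actor} been in?")       # step 3: needs step 2

print(answer_dependent_question())  # -> Colony, Mission Impossible: Ghost Protocol
```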
AI results have often been offered as a solution to these dependent queries. Interestingly, for this question the answer that Gemini gives you is useful, but bad. It’s useful because it does give you the name Sawyer, and that in turn gives you a decent list of where you might have seen Josh Holloway in two steps:

Along the way, however, it manages to imply — without any evidence — that since the character Charlie Pace was a drug addict, he quite possibly sold drugs. Note that there is nothing in the series that implies this — the implication here is just “people who do drugs are often drug dealers”. I imagine this is in part statistical momentum from the mention of Mr. Eko, the drug smuggler in the previous bullet. Otherwise why not also Locke, who was an assistant manager at a toy store? Or Hurley, who sold chicken? In fact, why is Mr. Eko here for smuggling when those others are not here for actually selling things?
It’s an absolute mess, and it shows how, even when getting you to the right answer, LLM output can reinforce harmful stereotypes.
Note that this isn’t a trick question or a particularly odd one. And its result is not as dramatic as the Elmer’s Glue pizza. But it feels more insidious to me somehow.
Transparent assistance as an alternative
One thing I’ve been playing around with is AI for transparent assistance with search: that is, instead of looking to AI for the result, look to search for the result, but have AI help you formulate that search. It’s just one data point, but in this case that works really well. You request a search with the dependencies…
And then you execute it:

This actually gets you a pretty high-quality result, and allows you to tap into the richer Google Search interface. In this case it also sidesteps calling drug addicts dealers, which is good.
Of course, this is clunky here, but it’s evidence that, were the connection more fluid, there might be some use in having this tech formulate or suggest such dependent queries, rather than provide AI-produced results.
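For what it’s worth, the shape of that loop is simple enough to sketch. Everything below is hypothetical plumbing: the prompt wording, the formulate_search() function, and the way the query gets handed to Google are stand-ins for whatever a more fluid version would actually use.

```python
# Sketch of the "transparent assistance" loop: the model only drafts the
# search query; the answering is left to ordinary search. formulate_search()
# and the prompt text are placeholders, not any particular model's API.

from urllib.parse import quote_plus

PROMPT = (
    "Rewrite the following question as one well-formed web search query. "
    "Work out any embedded sub-questions first, but return only the final "
    "query text, not an answer:\n\n{question}"
)

def formulate_search(question: str) -> str:
    """Placeholder: send PROMPT.format(question=question) to whatever LLM
    you use and return the query string it proposes."""
    raise NotImplementedError("wire this up to a model of your choice")

def search_url(question: str) -> str:
    query = formulate_search(question)
    # The user runs (and sees) the actual search, with its richer interface,
    # instead of taking an AI-written answer on faith.
    return "https://www.google.com/search?q=" + quote_plus(query)
```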