Discussion about this post

Mandy Honeyman

"The term hallucination has become nearly worthless in the LLM discourse"... Will be taking this back as a starter for teaching both staff and students. Thank you. (Because of the difference between (mis)using llms for searching and analysis.)

Matt Crosslin

The first concern I have is that the example given, finding the truth about the Slade photo, takes most people seconds, not minutes, to figure out. I have done this many, many times myself. So the question remains: why take all the time to examine and refine AI output when you can quickly do the search yourself? The whole "oh, you anti-AI people don't know prompt engineering" response is getting very, very thin after all this time.

Second, this whole example seems to justify AI being this way because it is like human thinking - which it is not. I keep going back to Stephen Wolfram's description of what is happening inside AI, and it still holds. It's not working through responses like a human - it is ranking possible pattern completions based on how well each response completes the pattern. The fact that you don't get a correct answer the first time is actually the AI system working correctly: because of the misinformation out there, it is telling you the most likely response. This is because, again, AI is not human and does not view truth the way a human does. It is looking at the most likely pattern completion in a database trained on all the misinformation out there. You then asking for "evidence" is asking it to override its core programming. Asking for evidence tells the AI to stop with pattern completion and use a different algorithm - one that it should have used in the first place, but that the people who created it don't want it to use (red flag here!) because that makes it harder to control.
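To make that concrete, here is a rough toy sketch of the "most likely completion wins" idea. This is not how any real model is implemented (real LLMs use learned next-token probabilities, not frequency counts), and the corpus strings about the Slade photo are made up for illustration - the point is only that ranking by likelihood rewards whatever appears most often, true or not.

```python
from collections import Counter

def most_likely_completion(prompt: str, corpus: list[str]) -> str:
    """Toy picker: rank the continuations that follow `prompt` in the
    corpus by frequency and return the most common one. There is no
    notion of truth anywhere in this loop, only pattern frequency."""
    continuations = Counter()
    for doc in corpus:
        if prompt in doc:
            tail = doc.split(prompt, 1)[1].strip()
            if tail:
                continuations[tail] += 1
    # The "answer" is whatever completes the pattern most often,
    # which may simply be the most widespread misinformation.
    return continuations.most_common(1)[0][0] if continuations else ""

# Hypothetical corpus: the false caption appears more often than the
# correction, so the false caption is the "most likely" completion.
corpus = [
    "The photo shows Slade in 1973",
    "The photo shows Slade in 1973",
    "The photo shows another band entirely",
]
print(most_likely_completion("The photo shows", corpus))  # -> "Slade in 1973"
```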

The third problem I have is the gross misrepresentation of AI critics. Yes, we do know how to do prompt engineering, or whatever you want to call it now. Most AI usage is not a simple web-search photo-hoax investigation. There are lawyers using AI to write court filings with fake citations... MIT found that 95% of all business implementations of AI are currently failing... People are being told they are gods, or that they should kill themselves, or all kinds of horrific things... Saying these things are often "merely" a first pass when these are what you get through deep engagement? When it is a simple true/false fact about a person in a picture, sure, you can get to the truth through AI (even though I find I am faster on my own than going through that process with AI). But the more complex things people actually do with AI? Things get weirder and weirder the longer you go.

