Comparing the wrong things
Search might be a better approach for learning about the world, but the right comparisons matter
I romanticize search. I do. I have been to search what Julia Child was to cooking. (At least in my head. I could of course never be as cool as Julia Child.)1
I also work on AI education initiatives,2 figuring out what people need to know to use AI effectively. I was working on that earlier today.
Recently a particular issue has been gnawing at me. The discussion about whether people should use AI to get an answer to something — even an initial one — seems malformed. The general argument people make seems to be “I asked AI for a result, it got things wrong, therefore don’t use AI.”
I can break AI myself. I’m good at it. Programmers flee when they see me, intuiting, usually correctly, that I’ve discovered yet another embarrassingly muddled LLM gaffe.
But the “how close to accurate is it” approach to whether a person should consult an LLM for an answer seems flawed. I explain why in the video below. The key is that the real comparison is not between something like AI Mode or ChatGPT and a fully accurate description. The comparison is between the level (and nature!) of understanding achieved by a person who chooses to pursue search and that achieved by a person of similar ability who pursues an answer through an LLM.
How does that map out? I don’t know! It’s time-consuming to test systematically, though it’s the sort of thing that education researchers test all the time and have methods to explore. But you can’t answer that question by looking just at output accuracy, because it doesn’t account for the pitfalls of search, which quite famously does not propel everyone to perfect answers.3 You have to watch people engaging in the different approaches and then test their knowledge. In the video above I walk through a specific search task (which is fraught and skill-dependent) and the equivalent AI Mode task (which is straightforward, yields useful perspective, and is much less impacted by skill level).
I’m sure people will jump on here and I’ll be seen as shilling for AI. Maybe. But I mostly want us to just focus on meaningful questions, the sort we would ask about any other technology if we weren’t thinking of it as the Star Trek Computer.
I initially referenced the “joy of search” in this sentence, but the true king of the joy of search is Daniel M. Russell — find his book here.
Obligatory note that I engage in occasional paid consulting for Google on information literacy programs and issues, including both on search and AI, though that is not my day job.
I’m skipping over, for lack of time and space, the even bigger problem that there is no perfect answer, and that “understanding” is a much different sort of thing than “believes the right set of propositions”. Understanding in my information literacy work is roughly defined as knowing the structure and significance of the surrounding discourse and evidence, not resolving to a single set of “endorsed” propositional beliefs. So the failure of understanding in the above is not the belief that the Dogon knew this, but rather the lack of knowledge that there has been a wealth of research on this since the 1930s, and all of it has weighed against that belief.
My personal intuition is that I'm able to retain more detail about things that I've casually queried ChatGPT about, compared with search. The sense I have is that the effect is similar to reading a paper book, compared to Kindle.
Was having a version of this discussion this evening--take as given that we all hold flawed models of the world. Does AI help us achieve particular aims more or less effectively than some other tool (search, talking to "the best available human", guessing, etc)? Pragmatic in the William James sense.