Using AI Is a Process Too
Just as there is "search" and there is a "search process," there is a process around AI-assisted investigation too, and we should teach it
I’ve been thinking lately about what some of the deeper understandings are when it comes to using AI for sensemaking. I think the simplest is probably this: Using AI is a process.
What I mean is the distinction we draw between “search” (the technology) and the “search process.” It’s understood (mostly) that while we want search results to be as useful as possible, the usefulness of search to you will depend partly on your facility with the technology, and on how you fold it into your larger sensemaking efforts.
The pitched battle people seem to be having about AI isn’t very hospitable to that framework. There is a lot of techno-utopianism from the providers that will convince you that most any question you have in the future will be answered perfectly (whatever that means) in a single response. On the other side of the equation, many people argue that since AI responses are flawed the technology is worthless.
As someone who has studied search and sensemaking for a long time, I find neither position makes sense. First of all, there will never be such a thing as a technology that always provides a perfect answer. That’s impossible at an epistemological level. People disagree about things. An infinite number of things aren’t known. The information environment doesn’t always provide clear signals about what is true or where claims come from. Your information needs may be different from someone else’s even when asking the same question. You may be asking the wrong question for what you want. The list goes on.
Secondly, the idea that imperfect information technologies have no utility is also weird. Any given search result is likely to contain both right and wrong information, yet broad swathes of our personal, professional, and scholarly lives are built on search because we learn to work around its issues, mitigating its weaknesses and playing to its strengths.
In this video I walk through a simple question and show how seeing an AI Mode result as a first step in a larger process leads to understanding. I tried to make it short and conversational enough that it could be used for students in a college class.
The truth is that there are some very simple lessons we can provide students to get more out of AI-assisted investigations, and I hope to share some of those in the coming weeks.