Discussion about this post

Sara K-M:

When ChatGPT first came on the scene I was thinking it could be used to create "stepping stone" sources for students to use in the early stages of the research process. But I share your concern that things that look like answers are much more likely to be used as an end point rather than a starting point.

I wonder if you've seen any writing about the sources being linked in Google's AI summaries. I keep encountering poe.com (Quora's AI chatbot platform) among the linked sources, which seems... extremely bad. I wrote my own post about it here: https://necessarynuance.beehiiv.com/p/need-talk-google-722e

Sherman Dorn:

If an LLM had information retrieval and synthesis as its core functionality, this would make sense. But LLMs aren't like that at all -- they simulate discourse. To the extent that they look like they're synthesizing information, it's because their training set and add-on doodads contain enough discourse resembling the answer that the billions of parameters can produce a synthesis-looking chunk of text. But it's all text extrusion. No knowledge.

And what that means is that an AI answer to search queries can *add* cognitive load, because you have to sift for the bullshitting.

