When ChatGPT first came on the scene, I was thinking it could be used to create "stepping stone" sources for students to use in the early stages of the research process. But I share your concern that things that look like answers are much more likely to be used as an end point rather than a starting point.
I wonder if you've seen any writing about the sources being linked in Google's AI summaries. I keep encountering poe.com (Quora's AI chatbot) in the linked sources, which seems... extremely bad. I wrote my own post about it here: https://necessarynuance.beehiiv.com/p/need-talk-google-722e
If an LLM had information retrieval and synthesis as its core functionality, this would make sense. But LLMs aren't like that at all -- they simulate discourse. To the extent that they look like they're synthesizing information, it's because their training set and add-on doodads contain discourse close enough to the answer that the billions of parameters can produce a synthesis-looking chunk of text. But it's all text extrusion. No knowledge.
And what that means is that an AI answer to search queries can *add* cognitive load, because you have to sift it for the bullshitting.
I think the trouble is that Google's end goal is to make it *sufficiently answer-like* to keep the user within the bounds of the Googleverse. Faceted search was something Google always *could* do, but they quickly realized they'd rather users skip it and get straight to the part that makes Google money. I'm pretty skeptical that they'll take a more expansive, nuanced, multi-step approach in developing this offering.
(but overall yes, I think this is the right way to think about this! Just don't trust Google to get out of their own way [and the way of their business imperatives] to do it!)
I've worked with people at Google who care very much about these things, but I also know Google is sort of like a country -- there is no single "What is the U.S.'s desire here?"; there's a bunch of different people pursuing and advocating different visions or different rankings of priorities, and sometimes some people get the wheel and sometimes other people do. I agree that having to square the circle with an ads-based business model makes everything an order of magnitude more complex. But they made a lot of money before as a stepping-off point, and there are also risks if they *don't* continue to feed traffic to the sites they use. I'm not naive, it is capitalism, but there are some good incentives that people do sometimes miss.
Oh, and thanks for your comment!
Yes, for sure there are lots of folks who care about the right things! And the national analogy is a good one. But the one thing that Google has gotten wrong time and again, kind of no matter who is in charge, is anything that has to do with *people* rather than *systems*, and the necessary pivot is back toward people-centered versions of the Web. And on that end they are also climbing out of a hole of their own creation, where every tech early adopter just automatically assumes any new feature or product coming out of Google will be killed sooner rather than later. Building that good faith back up would be a pretty big lift, on top of everything else.
It’s not entirely clear to me that Google has an incentive to keep people on the SERP. In fact, quite the opposite!
If you see *just* the SERP, you miss all the ads on the ad network, spread out over all those pages you would have seen if you'd navigated to a bunch of results. And if you spend less time searching overall, you see *fewer* ad impressions.
Putting “answers” (such as they may be) directly on the SERP strikes me as something Google feels they *have* to do due to competitive pressure; it does not strike me as something—were it not for competitive pressure—that would be assumed to *increase ad revenue* (and in fact, the opposite).
As a disclaimer, I work at Google, but nowhere near Search or Ads, so this is uninformed speculation on my part.