Look before you leap: How AI changes source verification and how SIFT will respond
Surprisingly few people are talking about this?
I talked earlier this week about how things like short-form video and linkless social media have pushed me to change my information literacy approach. In a world where news is increasingly sourceless, it’s easier to just check the claim.
What I didn’t explain is there is an even bigger reason why people will spend less time verifying sources. I didn’t know quite how to explain it, but I’m going to try.
Most unknown sites you come across through search, not social
For all we talk about the weird sites that used to reach us through social media, we encounter most new sites through search. The percentages have shifted over time, of course, but a person who comes to search is motivated to click something, and that happens something like five trillion times a year.
So this will sound weird, but how is it that, through search, we end up on sites we think are something else?
Is that a clear question?
You do a search for something — say vitamin K and eczema — and you get to a site telling you it’s great for that. You forward it to someone you think could benefit and they say, hey, you know this is a blog post on a site that sells skin cream, right?
The way a lot of people conceptualize this issue is that the search gave you a bad result. It should have given you more “reliable” links, right? That’s why you got the bad source. But that’s not correct. Most of the time it’s not a reliability problem, it’s a relevance problem. The same is true if you get a magazine article when you want a research article or a research article when you want a magazine article.
The way traditional search works is it knows your keywords but has a difficult time guessing your search intent or need. When you do a search for something like vitamin K and eczema, you get a search result set that is a bit of a variety pack of sources serving the range of things that various people want when they search those terms. Blogs, academic research, holistic product sites, Reddit posts.
Each one of these is the right link for someone, but maybe not for you. So you click through and see if it meets your need and maybe sometimes you don’t notice that it’s not quite what you think it is. Part of what the original SIFT was about was making sure once you clicked through you took a moment to realize where you landed and ask if it met your need.
The relevance problem is solved — if you have the right skills
Nothing’s perfect, but compared to where we were three years ago, search relevance (source and content) went from one of the harder problems in search to one of the less pressing issues. I remember watching a video in 2020 or so on how BERT, one of Google’s deep learning models, did search term expansion from natural language queries, resulting in greater insights into intent. Users could ask questions and the questions were processed as more than a bag of keywords. “Why do lights on the top of buildings blink?” gave you different results than “lights top of buildings blink”.
Today, we are light years beyond that, with people getting full answers aimed directly at every nuance of the questions they ask or statements they make. People realize this, of course; it’s the main reason people search this way — what they need specifically is directly served.
What’s less recognized — in fact I haven’t seen any writing on it at all — is how this reduces the problem of encountering sites you misinterpret.
It does this on a couple of levels. First, AI search results are just more relevant in general. Instead of typing in “vitamin k eczema” and getting the grab bag of results that cover a variety of needs, even a naive user can type in something like “should I take vitamin k for eczema?” and, while the initial set of results is still a bit of a grab bag, you do get a paragraph at the top that gives you initial context for interpreting anything you might see. Your frame is no longer set by the first link you click, which is huge!
You still have the sourcing problem, though. If you’re an academic you might think the links there are “bad”. You’re wrong. If you wanted more scholarly sources in the mix, the problem isn’t reliability, it’s relevance. Most people don’t read academic papers to decide whether to buy skin cream, and for most people, getting a wall of NIH-funded research when they’re deciding whether to buy it is hell. If you want that, you’re the weirdo, right? (It’s OK, I’m a weirdo too.)
There’s a variety of stuff that could be behind those links, and the traditional approach has been for users to click through and check, and find out that the resource isn’t what they thought it might be. You know this happens a lot because if you run a website an awful lot of your web hits are “bounces” — people looking at your page for five seconds and saying, nope, not what I wanted. You know this also because you do this a lot personally.
And it is actually this process of “bouncing” that leads to a lot of error. Because sometimes you land on a web page from the search links, and it is not the sort of thing you think it is, but you don’t realize that. Your head was in the space of “I wonder what reputable medical associations say about this” and you hit a page that is actually selling supplements, and in your hurry you didn’t notice.
So you stuck when you should have bounced.
A lot of source verification is about getting people, once they have arrived at the page search gave them, to bounce appropriately.
AI Allows You to Refine Source Relevance Before You Make the Jump, and That Changes Everything
Ok, not everything, but this is the shift that people are not talking about in search. After you get the above results you can put in something like this:
can I get a table of sourced opinions with links to reputable medical associations and eczema experts
And it will give you a helpful table:
But just as importantly, the links it surfaces are now much more aligned to your search goal of finding organizational statements and expert positions, rather than research papers on the one hand or supplement sites on the other. It’s not perfect — search results have to have a bit of flexibility in them — but if you click anywhere in this list you’re going to get the sort of thing you want, especially if you use the generated table as a guide to the links.
Want more academic research? Want to drink from the firehose of ongoing correlational studies? You can do that too. Just give it the information about the sources that are relevant to you.
give me a table of recent journal articles on this topic of vitamin k and eczema with descriptions and links to sources
And of course you can ask questions about any of these sources before you click:
Is Clinics in Dermatology a well-respected journal
You’re not jumping blind anymore (or at least you don’t have to)
I hope you’re seeing what I’m seeing here. We aren’t in a world anymore where you get a list of search results and then visit each one, sifting through to check what it is. The better approach right now is to do that sifting before you jump to the link. And that — even more than short-form video — is why I’m looking at switching up SIFT.
I already do this intuitively. Whenever I explore a new topic, especially one I don’t have much expertise in, I turn to tools like ChatGPT or Claude. I ask them to identify experts, whether individuals or institutions, and to explain their general reputation and why they are considered authoritative. From there, I use that information as a starting point. This approach has saved me a lot of time and headaches.