"Is this what people think it is?" is a shockingly effective prompt
I don't know why this didn't occur to me until now.
In our book Verified, Sam Wineburg and I discuss a later development in SIFT, from around 2020 or so: reformulating the core question of personal fact-checking as “Is this what people think it is?”
There are actually some deep epistemological insights under that shift, but the simplest way to conceptualize it is this: being misinformed (either by yourself or others) is not about the relation between something you are looking at and the truth. Being misinformed is usually about bad evidence. The most common pattern is this: without context, something looks like good evidence for a claim. Once you have the context of that evidence, it no longer does.
You see this in the example from my replacement billboard walkthrough.
Our point in Verified is that the question of whether this billboard is “true” is nonsensical. What would that even mean? It’s a real photo, it’s not doctored. (True!) On the other hand, the company does not exist. (False!) Tech companies do engage in behavior that resembles this on a less egregious scale. (True!) But they don’t do it on this scale. (False!) And so on.
Still, it’s pretty clear this is misinformation.1 Why?
What is actually happening in this case is that the person using this photo is presenting it as evidence that tech elites are playing fast and loose with the law to engage in a bunch of creepy behavior.
Now again, that larger belief (tech elites are reckless and creepy) is not the “misinformation”. Claims at that level of abstraction, such as that climate change is the major crisis of our time or that tech elites are creepy, aren’t directly resolvable as matters of fact. And those deeper, more abstract beliefs aren’t as central to the concept of misinformation as you might think. For instance, you can believe that tech elites are creepy and still see this post as misinformation. You can believe that climate change is a central societal challenge but reject a post that frames increases in heat-related deaths in absolute rather than per capita terms, or that claims non-climate-focused mitigations can’t work to avert fatalities.
When I say we don’t assess larger beliefs directly, the key is in the word directly. What we do is assess evidence offered for those beliefs. And the reason why this billboard is “misinformation” in this case is that once the context is known (this is a satirical billboard) it is in fact no evidence at all of what the arguer is arguing. The evidence is misrepresented. It’s not that you have a wrong belief. It’s that in arguing for your belief you are polluting the discourse space with misrepresented evidence. Which is at the very least quite careless, and depending on your knowledge of the context, potentially unethical.
So that’s a big intro to this point: the way I learned to encapsulate that insight over time, in the least academic way possible, was to stop asking students whether things were true or false and instead ask them “Is this what people think it is?” That is, if you look at what our guy in the constructed example above is doing, and how people are interpreting it, it’s clear the poster and his audience have wrong ideas about what the thing “is”. They are missing (or stripping!) context that impacts its interpretation as evidence.
And here’s the thing: that question, which worked well with students, turns out to work wonders with LLMs too. In this video I go through a series of examples that I pulled together and just pose the simple question “Is this what people think it is?” And lo and behold, this follow-up produces short, succinct, and accurate fact-checking context for a variety of artifacts with different sorts of evidentiary problems. I was actually shocked by how well it worked, and embarrassed I hadn’t asked this central question as a follow-up sooner.
Check it out, it’s really interesting:
We have prompts that dig deeper, of course. But as a general follow-up producing succinct answers, this performs ridiculously well. The sequence in the video above also illustrates my assertion that the misinformation we care about (and can agree is bad) is not false belief but wrongly contextualized evidence, which, left unchecked, undermines the sensemaking benefits of social argument. I lost the battle to get this insight into misinformation studies more generally, but maybe with these tools now available it’s worth resurrecting.
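If you want to try the same follow-up pattern outside a chat window, here is a minimal sketch in Python. It assumes the OpenAI Python SDK and a multimodal chat model; the model name, image URL, and circulating caption are all placeholder assumptions, and any client that supports image input would work the same way.

```python
# Minimal sketch of the "Is this what people think it is?" follow-up.
# Assumptions: the OpenAI Python SDK (openai>=1.0), an OPENAI_API_KEY in
# the environment, and placeholder model/image/caption values.
from openai import OpenAI

client = OpenAI()

IMAGE_URL = "https://example.com/billboard.jpg"  # hypothetical artifact

# First turn: present the artifact the way it circulated, caption and all.
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Saw this billboard going around with the caption "
                        "'they're not even hiding it anymore.' Thoughts?",
            },
            {"type": "image_url", "image_url": {"url": IMAGE_URL}},
        ],
    }
]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append(
    {"role": "assistant", "content": first.choices[0].message.content}
)

# Second turn: not "is this true?" but the context-restoring question.
messages.append(
    {"role": "user", "content": "Is this what people think it is?"}
)

second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```

The shape of the exchange is the point: let the model react to the artifact as presented first, then ask the question that forces it to check whether the thing is being represented as what it actually is.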
1. I hate this term, but it’s the term we landed on as a field. So I use it here.


Love this. Not only a great prompt, but it also puts fact-checking in the wider dialogic context: what do I think of this, what do other people think of this, what do I think of what other people think of this, etc.
Thanks - am enjoying your video. Am wrestling with the question "Is this what people think it is?" How does AI (or even us, for that matter) know what people are thinking? Which people? Etc. What about "Tell me about this information source/opinion"? Also, what about AI being flooded with Russian dis/misinformation (at least according to NewsGuard)? How long can we trust AI's analysis?