Reasonableness: An Introduction
A middle road between naive theories of impact and "lol facts don't matter"
Part of a series on reasonableness and evidence; please check out my other pieces in the series: 2, 3, 4, 5, 6, 7, 8, 9
There are two primary accounts of the relation between evidence and belief in misinformation research, and neither is adequate.
The first mode is naive. The idea here is that you see misinformation and it shifts your belief. I see that Hillary Clinton is supposedly implicated in a murder on lizardpeople.com and, well, now I’m definitely not going to vote for Hillary Clinton. A more reasonable version might be that I see a video of a person collapsing or dying suddenly, supposedly from a vaccine, and I say, well, I’m not getting that vaccine then.
The second account is non-naive. In this model, misinformation doesn’t really have much impact at all. People here talk about narratives, deep stories, incentives, structural inequalities. The idea is that if I have a deep narrative that the government is corrupt and untrustworthy, I’ll believe things that support that and disbelieve things that don’t. The same goes if I have a deep resentment about increasing demographic diversity. If you want to address the belief, you have to address the larger narrative: fix inequalities, address self-identity, etc. Or so the story goes.
For as long as I’ve been in this field we’ve had a set of people running studies in the first mode, and people from the second mode rightfully critiquing them.
But the second mode has problems too. First among them is that to an audience there is no difference between misinformation and not-misinformation. The idea of misinformation is a construct we, as third parties, use. But to an audience it’s just information.
To take a simple example, let’s say you ask me when the bus arrives. In one scenario, I lie — 12:25, I say. In the other scenario, I tell the truth. It comes at 12:10.
If we accept that when I get the correct time I will adjust my behavior towards it (and arrive early), it goes without saying that I will also adjust my behavior towards the wrong time if that is what I get. Because the thing about misinformation, from the point of view of an unknowing audience, is that the misinformation one accepts as fact is felt as fact. If the killing of Breonna Taylor shifted your attitudes about the police, or racism in the U.S., it would be really weird to believe that false stories about Ashli Babbitt being intentionally targeted and “murdered” at the January 6th attempted coup didn’t shift the beliefs of others. To claim that misinformation has no effect on actions and belief you would have to claim that no information, of any sort, has impact on actions and belief. Not the reported cause of death of your father, not 9/11, not the APR of a credit card offer. Not the article someone wrote claiming that misinformation doesn’t change belief. The idea that information doesn’t change beliefs, ever, is ridiculous on its face.
The response to this from those focusing on narrative is that outside the cases where there are direct results from our beliefs (the bus schedule) and issues where we have no existing narrative (cause of parent’s death, usually) we generally select facts that support our underlying narratives. If we feel that we are victimized, we are more inclined to believe false information that supports that. If we think we are privileged we are more likely to accept stories that support that. We have a deep belief (a “narrative”), we seek evidence to support it, but the evidence is an afterthought, a retcon. In this version of reality, the hope that better information literacy or critical reasoning skills — of us, or of the friends who listen to us — could lead to belief change is a futile one. The facts are merely window-dressing on deeper beliefs that must be addressed directly.
At the edges, this isn’t tenable either. Would the Democratic convention and its aftermath in 2016 have been as much of a mess without Wikileaks and the mix of true and false information it spawned? Would extremists have stormed Congress to stop the formal certification of Biden’s win if no one had advanced the false idea that Mike Pence could overturn the results? The general retort to this seems to be that while misinformation may have resulted in these specific actions, attacking the misinformation would not have changed the belief, which is seen as unmovable by facts.
It’s possible (probable, really) that I’ll be accused of not presenting one of the two modes above fairly. But I mean only to sketch them briefly because I wish to come at this from a different angle altogether.
The Pursuit of Reasonableness
Fundamental to the “narrative” account is that we have the story backwards. We don’t collect evidence and decide what to believe. Rather, we believe something and then collect evidence. I actually agree with this, and have for a long time. Here’s a little snippet of something I wrote back in 2016 where I talked about rebuttal shopping:
… a lot of stuff that goes viral on Facebook is posted as an implicit rebuttal to arguments that the poster feels are being levied against their position. This stuff tends to go viral on Facebook because the minute the Facebook user sees the headline they know this is something they need, an answer to a question or criticism [of their position] that irks them.
The first thing to note, then, is that this idea is not particularly new. As I worked on SIFT, for example, this pattern, in which people tend to look for material that defends their position more often than material that explicitly informs it, was front of mind.
My issue with the “narrative account” is not its noting of this pattern, a pattern that has been noted since the beginning of recorded history. Rather it’s the unidirectional nature of the account. The “naive” account ran causality from information to belief. The “narrative” account runs it from belief to information.
But here’s a question — if you have a belief already, why spend all this time collecting evidence and in many cases sharing it? People spend a lot of time doing this, and people generally invest their time in things that have value to them. When we ask those advancing the second account, the “narrative” account, they’ll reference things outside of the logical — people share to self-express, people read things for reasons of self-identity. But this too is a bit odd. There are many different ways to express yourself, or connect with your self-identity, and a lot of them are quite low effort. If all this sharing of facts has nothing to do with logic, then why are you collecting facts?
What I’d propose (and I have borrowed from a mishmash of sources here, from Leo Festinger to Matthew McKeon) is that people spend all this time because they want their beliefs to seem reasonable. And while that is connected to identity, it is connected in a way that straddles the worlds of logic and self-conception.
I’ll give you an example from my own family. A family member of mine did not want to get the COVID vaccine. When I’d call her, I wouldn’t talk too much about it; I’d just ask what she was currently thinking about the vaccine. And she would reply with a long list of reasons why she did not trust it, as well as ask me a variety of questions. For the most part I did not argue, though I did occasionally get frustrated with some of the logic. Eventually this family member did get the vaccine, and the reason she gave was that she “just was tired of talking about it.”
The interesting thing is neither I nor anyone else was forcing her to defend her position. Rather, we knew her position, and she knew we knew her position and that we thought it was an unreasonable position. The talking on the phone calls was not to convince me to not get the vaccine. Rather, it was to convince me that her position was reasonable, and that she had come to it by reasonable means. She wanted two things at once: to not get the vaccine, and to be perceived as reasonable. Those things were in conflict, which meant that she had to spend quite a bit of time on phone calls introducing new evidence, new concerns, new stories. Eventually the maintenance of perceptions of reasonableness became too big a cost relative to just getting the vaccine.
This is not to “other” this position at all. We all do this all the time. We have beliefs, we would like to be thought reasonable, we supply reasons. Sometimes the effort is simply to be thought reasonable; other times we engage in persuasion, attempting to enhance the reasonableness of a position so that others will adopt it, as I was doing on those calls. We’re all doing this, quite a lot of the time.
To review — yes, we come to beliefs before evidence, but we wish not only to express our beliefs but to be thought reasonable. In some cases, we adopt beliefs considered reasonable by those around us. Sometimes we adopt those beliefs just to seem reasonable. When we adopt beliefs thought by some to be unreasonable, we supply reasons, often in the form of evidence. But far from being an afterthought, the evidence we supply is a necessary price we pay for the maintenance of our beliefs. If the reasonableness of a belief becomes too expensive or difficult to maintain, we lose the belief.
This model hopefully cuts a middle road between mode one (facts form beliefs) and mode two (facts are merely window-dressing on beliefs). People do select facts based on pre-existing beliefs quite often, but that does not mean that the facts are irrelevant. On the contrary, since a sense of reasonableness is required for belief maintenance, facts and evidence matter quite a bit, and people confronted with counter-evidence or a lack of supporting facts may find their beliefs difficult to maintain socially, and ultimately personally.
In the next couple of posts, I’ll show how the concept of reasonableness significantly shifts models of misinformation and information environments and crucially how it can inform educational approaches.
Notes
I tried to keep the sources out of the way during this. It’s a high-level mish-mash of sorts, because my ultimate aim here is not to enter the multiple intersecting domains I have encroached on, but to get to more practical implications: a model that can inform educational interventions and be tested directly.
Still, here are some of the inspirations and the basis for this.
Reasonableness is used by a lot of different scholars in a lot of ways. Rawlsian reasonableness is at the center of a certain view of political morality. Reasonableness also plays a role in law. Epistemologists have used the term in different ways, including as a standard for relevance. I mean something more constrained and at the same time more general. To argue is to enhance the reasonableness of a belief (or other position, such as fear) relative to a proposition. To be thought reasonable as a person may require a range of things, but one requirement is that one has, and can supply, reasons for the beliefs one holds. Following Toulmin (1958), what constitutes “reasons” varies by culture, profession, domain, and era.
Outside of Toulmin’s work on argument, probably one of the main influences here is A Theory of Cognitive Dissonance, by Leo Festinger (1957). Most people don’t realize that Festinger’s work on dissonance began as an attempt to make sense of misinformation that spread after an earthquake in India. What Festinger found was that people in the regions that experienced the least threat spread the most misinformation about imagined threats. He postulated that people who were safe needed to rationalize their fear, and hence had to create reasons for the fear. In our terms here, they had to make the fear reasonable.
Festinger is fascinating to me, because he is the author of a foundational, data-informed theory about misinformation, including the relation between identity and patterns of information-seeking and information-avoidance. And yet relatively few of the major works on misinformation and identity cite it in any meaningful way. It’s not unknown in the field — but it’s on the edges, and I’d argue it shouldn’t be. At the core of the work is a vision where individual facts don’t matter much on their own but exert a cumulative effect over time that can trigger significant shifts when belief coherence becomes too stressed.
A lot of what I say here is guided by newer work in argumentation theory (2000s - present). Crucially, I am influenced by observations in a 2013 work by McKeon, who makes the provocative (and I think correct) claim that there is no hard line between argument and explanation, which are both rationale-giving activities. I also regret to inform you all that I have a scrawled note in my notebook from an argumentation theory reading binge half a year ago that says “argumentation enhances the reasonableness of a position” and for the life of me I don’t know if that was a quote, a summary, or a thought of my own. It’s a shame, because that formulation has been really useful to me. But wherever it came from, it’s not far from either the work of McKeon or Robert Pinto.
Richard Stalnaker (1970s - present). In terms of linguistics, this presentation is highly influenced by Stalnaker’s work from the 1970s on, which looked in part at how conversational participants negotiate the introduction of new evidence into the “common ground”. This work is extended by relevance-theoretic work in formal pragmatics in the 1990s, including Craige Roberts’s model of the QUD (“questions under discussion”) in conversational discourse.
"At the core of the work is a vision where individual facts don’t matter but exert a cumulative effect over time that can trigger significant shifts when belief coherence becomes too stressed."
I know you weren't trying to include every source, but your description of Festinger's work could just as easily be applied to Kuhn's idea of a paradigm shift.