Against epistemic discounting
We usually argue in order to settle specific issues. How much do masks help against the spread of Covid? How important is it to reduce carbon emissions? How does incarceration affect crime? We put a particular question in the spotlight, and bring various arguments to bear on it.
However, the arguments don’t just affect the spotlight question, but also have further implications. For instance, suppose that someone has argued that experts claim that it’s very important to reduce carbon emissions, and that someone else counters by saying that experts are biased and untrustworthy. If so, that should affect our credence in all kinds of other propositions that are partly based on trust in experts. Thus, even if we make arguments in order to affect our listeners’ credences in the spotlight question, their effects don’t stay there.
Neglect of the further implications of supporting arguments might be related to a Soldier Mindset. Suppose that you’re convinced of some spotlight proposition P, and that you also think that arguments for or against P only affect the believability of P. If so, you might not be very interested in whether those arguments are actually correct, as long as they make people more likely to believe P. But if you realise that those arguments should affect our credences in all sorts of other theories and claims as well, you might adopt more of a Scout Mindset, and become more interested in whether the arguments actually make sense.
And it’s not just arguments that have implications beyond the spotlight question. What terms or other linguistic expressions you use also have downstream consequences. When you’re discussing a particular issue, you might think it doesn’t matter much exactly what terms you use, since what you mean is clear from the context anyway. Likewise, you might think that it’s fine to rely on various forms of conversational implicature instead of speaking literally. However, my experience is that people often find it easier to generalise to other contexts if you’re being literal and if you’re using standard (or technical) terms. Hence, I think these further epistemic effects—beyond the spotlight question—can be a reason to insist on high levels of clarity and literalness.
But most people tend to focus on the spotlight question when they argue, and fail to consider how their arguments and their use of language affect other issues. In effect, they engage in what can be called epistemic discounting. Just as we value present benefits more than future benefits—temporal discounting—so we value our arguments’ epistemic effects on the spotlight question more than their epistemic effects on other questions.*
In my view, we should correct for this tendency. We engage in too much epistemic discounting, much like we engage in too much temporal discounting. Instead, we should argue in a way that has positive indirect effects beyond the spotlight question. Specifically, we should:
- Make sure supporting arguments are actually correct, whether or not they support our view on the spotlight question
- Go beyond the minimum levels of clarity necessary from the perspective of the spotlight question
- Use technical or standard terms where available
- Follow good epistemic norms that have positive downstream consequences
It’s likely no coincidence that these points are all old-school scientific/academic ideals. Science has developed norms that are consistent with a lower rate of epistemic discounting than the human default. It’s not as myopically focused on the spotlight question, but takes a more zoomed-out perspective. (This is not to say that contemporary academics couldn’t improve in this regard, though.)
Temporal discounting is sometimes explained by reference to our evolutionary history. In the environment of evolutionary adaptedness, the future was much more uncertain than it is now, meaning it may have been adaptive to discount the future heavily. Similarly, you could give an evolutionary explanation of epistemic discounting. Since our distant ancestors were illiterate, arguments and claims were liable to be forgotten. That would have reduced the downstream consequences of arguments beyond the spotlight question. Likewise, they likely didn’t have a well-defined technical terminology, meaning they had no choice but to use language in an intuitive and often haphazard way when constructing arguments. More generally, they lacked systematic methods for acquiring knowledge (science).
That might be what biases us in the direction of solving spotlight questions in ad hoc ways, without worrying too much about wider epistemic effects. We should seek to avoid this bias, and ensure that our arguments don’t just solve our spotlight questions, but improve our beliefs and our epistemics more generally.
* You might argue that epistemic discounting is just temporal discounting applied to beliefs and arguments, since the effects of arguments on spotlight questions naturally precede their effects on other questions. I think that’s one factor, but I also think the epistemic case is somewhat special. For instance, the downstream consequences of arguments for other questions seem quite unsalient, and that might make us underestimate them. By contrast, it seems that people often engage in temporal discounting even when the near-term and long-term effects of their actions are fairly salient. Also, I suspect that epistemic discounting may in part be due to individual and social benefits coming apart: people might use whatever means are at their disposal to win the debate on the spotlight question, even if that has negative downstream epistemic externalities for their community. But in any event, the relationship between epistemic and temporal discounting isn’t central to my argument.
This post is inspired by Lars Harhoff Andersen’s insightful discussion of the downstream consequences of lying. See also my post on the importance of paying attention to “the scaffolding”—e.g. examples and the finer details of thought-experiments—in arguments.