Against cluelessness: pockets of predictability
A common argument against longtermism is that we’re “clueless” about the longterm effects of our interventions: we can’t say whether they make the future better or worse even in expectation. Therefore the longtermist project is hopeless, the argument goes. We just can’t reliably affect the future, so we shouldn't even try.
I disagree with this argument: I don’t think we are clueless, in the relevant sense. Thus I think this argument against longtermism fails.*
One important thing to notice is that for the cluelessness argument to work, it is not enough that we’re clueless about the longterm effects of most of our interventions. It has to be the case that we’re clueless about the longterm effects of all of our interventions. If we’re not, the longtermist can simply focus their efforts on the interventions we’re not clueless about.
And in fact, I think there are some interventions we plausibly are not clueless about. One example is interventions directed towards near-term existential risks and artificial intelligence lock-in scenarios. Such scenarios don’t seem that unlikely - and were they to occur, we should have some chance of affecting them. For instance, effective altruists’ position in artificial intelligence seems sufficiently strong to allow us to affect artificial intelligence existential risk and lock-in scenarios. That means that we should be able to pursue interventions targeting those scenarios that have a positive expected value.**
Another example is interventions aimed at increasing the effective altruist longtermist community’s own capacity - in particular in terms of numbers and monetary assets, but also in terms of more intangible assets such as reputation and knowledge. The aim of this strategy would be to build capacity to use later, when an opportunity to have more direct longterm effects (e.g. via a lock-in event or the averting of an existential catastrophe) presents itself.
It seems to me that effective altruists have been able to build such capacity successfully over the last 10-15 years, and that it’s fairly likely that we’ll be able to continue to do so. There are, of course, risks: e.g. the movement might experience value drift, or might implode for some reason. But even if those risks are substantial, it still seems to me that efforts to build effective altruist capacity will, in expectation, produce more value-aligned effective altruist capacity over the medium term (many decades or centuries). That additional capacity should, in turn, increase our ability to affect potential existential risks and lock-in scenarios that could appear over that period. Since it seems fairly likely that such scenarios will occur in the medium term, capacity-building interventions should have a positive expected value.
Thus, I think we have at least two classes of interventions whose longterm effects have a positive expected value. Therefore, I would reject the cluelessness argument against longtermism.
*
But while I reject the cluelessness argument, I share an intuition that I think underlies it: namely that it is in general incredibly hard to reliably affect the longterm future. It is true that our overall epistemic position with regard to the longterm future is impoverished. What saves us is the existence of what we may call “pockets of predictability” (cf. Stephen Wolfram’s “pockets of reducibility”): particular aspects of reality which we can affect with partially predictable longterm results.
On an intuitive level, we may think that the predictability of different longtermist interventions doesn’t vary that much. (We may generally default to such low-variance distributions in the absence of specific knowledge.) Thus we may think that because many longtermist interventions have low predictability, they all do. But on the contrary, the variance of longterm predictability is likely high: some longtermist interventions (e.g. those affecting existential risk) are probably much more predictable than most. And that's good news, because whether we're clueless or not isn't determined by the epistemically average intervention, but rather by the epistemically best interventions - since those are the ones we will choose to implement.
It's possible that similar intuitions about low-variance predictability long held back scientific and technological progress.*** Much of the world was once unknowable to humans, and people may have generalised from that, thinking that systematic study wouldn't pay off. But in fact knowability varied widely: there were pockets of knowability or predictability that people could understand even with the tools of the day (e.g. naturally simple systems like the planetary movements, or artificially simple systems like low-friction planes). Via these pockets of knowability, we could gradually expand our knowledge - and thus the world was more knowable than it seemed. As Ernest Gellner points out, the Scientific and Industrial Revolutions largely consisted in the realisation that the world is surprisingly knowable:
the generic or second-order discovery that successful systematic investigation of Nature, and the application of the findings for the purpose of increased output, are feasible, and, once initiated, not too difficult.
Similarly, we may be on our way to realising that it's surprisingly feasible to have a reliable longterm impact, thanks to pockets of predictability that we've failed to take sufficiently seriously.****
* Another interesting counterargument says that even if we were clueless about the longterm effects of our interventions, it wouldn't follow that we should abandon longtermism - since we would be at least as clueless about the longterm effects of interventions directed towards near-term causes.
** I also think that these interventions have sufficiently large chances of affecting the longterm future that they can’t be said to constitute Pascal’s muggings. (That is not to say that one shouldn’t maximise expected value in Pascal’s mugging-style cases - that’s an issue for another day.)
*** The arguments about pockets of knowability are influenced by Wolfram, but since I'm not entirely sure how to understand Wolfram's claims, I have chosen to couch them in my own terms.
**** See Robin Hanson’s fascinating talk on related issues.