Deliberate altruism and costly signalling
Yesterday I gave an argument against the view that effective altruists should engage in costly signalling. Here’s an additional complication for that view.
The costly-signalling view says that we should engage in certain behaviours (e.g. accept a lower salary, or adopt a vegan diet) even if their direct impact is relatively low, because they serve as costly signals of other high-impact traits, like cause-neutrality or good epistemics.* And we should reward those who engage in those behaviours.
According to this view, there is a general value-alignment trait which predicts a range of behaviours, from a tendency to accept a lower salary to cause-neutrality. It's like a quantity that you can dial up or down, and various behaviours will change accordingly. You can therefore infer that quantity from observing some of those behaviours.
People use similar reasoning outside of effective altruism, of course. E.g. there's the well-known adage that you shouldn't trust anyone who mistreats service staff, because it's evidence of poor character (though I think that adage, too, is sometimes a bit exaggerated). In some contexts, such reasoning may work. In particular, it may work when people aren't being very deliberate or strategic, because then their behaviour will reflect their dispositions relatively directly.
However, it can be more difficult to employ such reasoning about some sorts of deliberate or strategic behaviour. Suppose, for instance, that some committed effective altruists have done the maths and concluded that a lower salary and a vegan diet don't make that much of a difference in terms of direct impact. Moreover, suppose that they, rightly or wrongly, reject costly-signalling arguments (and other appeals to indirect impact). For instance, they may think that in the long run, effective altruism is better off focusing on direct impact. In such circumstances, their not engaging in those behaviours isn't at all indicative of a lack of commitment. Rather, it's indicative of their being deliberate about these behaviours, and having concluded (rightly or wrongly) that they're not a priority. It may also be indicative of a bullet-biting inclination that is characteristic of, e.g., many utilitarians (cf. David Moss's comment). That introduces an additional complication for anyone wanting to use these behaviours as signals or evidence of character.
In fact, this means there's even a risk that rewarding this kind of costly signalling could backfire. You might penalise people who think carefully about impact, while rewarding those who, e.g. out of inertia, stick with common-sense conceptions of what an altruistic person should be like.
Reasoning about costly signalling in effective altruism is thus quite complex. My sense is that some of those who advocate it rely on overly simplistic theories of effective altruists' motivations, and therefore underestimate the epistemic difficulties of inferring character from behaviour.
* Of course, if there are other compelling reasons (e.g. direct impact) for these behaviours, then we should still engage in them. My criticism concerns the argument that we should engage in them specifically for reasons of costly signalling.