An analysis of reputational arguments for sacrificial behaviour in effective altruism
Some argue that effective altruists should make certain sacrifices—e.g. work for a lower salary, adopt a vegan diet, or lower their personal emissions—even if doing so has limited direct impact. Recently, I’ve discussed two such arguments: the argument that such behaviour is a useful costly signal of other, high-impact traits, and the argument that not engaging in such behaviours causes bias. In this post, I’ll turn to a third argument: that effective altruists adopting such behaviour helps effective altruism’s reputation outside the movement. This argument says that if effective altruists don’t, e.g., adopt modest salaries or a vegan diet, effective altruism’s reputation will suffer.*
This reputational argument seems more complex than the other two, which concern correlational or causal relations between individual psychological variables (e.g. between willingness to work for a low salary and cause neutrality, or between a vegan diet and the ability to think clearly about the distant future). Here, we instead have to look at people’s beliefs and judgements about those who engage in these sacrificial behaviours, and at how much those beliefs and judgements matter. For instance, people may believe that those who request a higher salary are morally dubious in various ways, and if so, that may cause reputational harm even if those beliefs are wrong. We also have to factor in what strategies the effective altruism movement wants to pursue; e.g. to what extent we want a high public profile (in which case reputational arguments may be more important).
I definitely do think that some people expect those calling themselves “effective altruists” to engage in certain forms of sacrificial behaviour, and that they would think less of effective altruists if they learned that they didn’t.** At the same time, one needs to put that in perspective. Some of the groups that dislike effective altruism aren’t particularly important anyway—and it was always unlikely that they would become effective altruists. The fact that they’re vocal and salient may bias us towards overrating their importance.
We should also remember that sacrificial behaviours, and expectations thereof, can carry reputational costs as well. In particular, a reputation for paying low salaries may deter people who could have a large impact from applying to effective altruist organisations. Conversely, higher compensation may help us acquire a more normal reputation, which in turn could help with recruitment.
There are also a bunch of more nebulous considerations—less directly related to impact—that I’m unsure how to think about. If you think that it is morally unproblematic to behave in some way, then you may feel bad about changing your behaviour for reputational reasons—as it may feel like you’re giving in to moral pressure. It may even seem disingenuous to engage in certain behaviours for reputational reasons. Evan Hubinger argues that it may thus in fact be reputationally harmful:
One thing that bugged me when I first got involved with EA was the extent to which the community seemed hesitant to spend lots of money on stuff like retreats, student groups, dinners, compensation, etc. despite the cost-benefit analysis seeming to favor doing so pretty strongly. I know that, from my perspective, I felt like this was some evidence that many EAs didn't take their stated ideals as seriously as I had hoped—e.g. that many people might just be trying to act in the way that they think an altruistic person should rather than really carefully thinking through what an altruistic person should actually do.
This is in direct contrast to the point you make that spending money like this might make people think we take our ideals less seriously—at least in my experience, had I witnessed an EA community that was more willing to spend money on projects like this, I would have been more rather than less convinced that EA was the real deal. I don't currently have any strong beliefs about which of these reactions is more likely/concerning, but I think it's at least worth pointing out that there is definitely an effect in the opposite direction to the one that you point out as well.
I agree that there’s something authentic and attractive about just doing what seems right on a first analysis. Ironically, refusing to signal sends a signal of its own.
However, how you think about these issues probably depends on your intuitions about the behaviour itself. Some people have strong nonconsequentialist moral gut instincts about, e.g., a vegan diet or a low-emissions lifestyle. They may on average find reputation-based arguments for those behaviours less disingenuous than do those who both lack those gut instincts and think that those behaviours have a low direct impact.
In any event, my overall sense is that the reputational argument is the strongest of the three arguments for sacrificial behaviours that I’ve discussed. That said, I think that reputational concerns normally shouldn’t make us refrain from increasing effective altruist compensation. I think that the boost to effective altruist productivity from such increases is the dominant consideration, and that the reputational effects are normally smaller (and their sign may not always even be clear). I also don’t think that effective altruists should pursue a low-emissions lifestyle for reputational reasons. But I do think it’s an important argument regarding, e.g., whether EA Global should be vegan or vegetarian.***
Lastly, my general sense is that effective altruists are a bit too willing to depart, for reputational reasons, from what’s highest-impact on a first-order analysis. For instance, I think that we’ve sometimes emphasised global poverty a bit too much in outreach relative to longtermism for that reason. Thus, even though I don’t think that reputational considerations are unimportant, I’d in general prefer us to put less weight on them relative to direct impact when the two are in conflict.****
* It is thus distinct from the costly-signalling argument I discussed previously, which concerned what effective altruists could infer about other effective altruists based on their behaviour.
** This may even be an argument for using different branding. However, there are many other relevant considerations, and I won’t discuss them here.
*** Here there is also another argument, namely that some effective altruists simply have a strong aversion to meat-eating that they’re not able or willing to overcome. This argument says that, e.g., EA Global should be vegan or vegetarian not because doing so has positive downstream consequences (e.g. in terms of reputation or reduced bias), but because some effective altruists would feel bad about attending if it served, e.g., meat. That argument in effect says that there’s an insurmountable psychological obstacle to means-neutral impact maximisation, which we thus have to respect. This argument is rarely mentioned in these contexts, and I won’t discuss it further, but I don’t think it should be dismissed out of hand.
**** This isn’t to say that we shouldn’t pay more attention to reputation when it comes to other kinds of decisions. For instance, it could be argued that effective altruists should spend more money on media outreach, which could be seen as a form of reputation management. Thanks to Daniel Eth for this point.
Thanks to Daniel Eth and Ryan Carey for their incisive comments.
(Slightly revised 30 June 2022.)