Moral circle expansion isn’t the key value change we need
Most longtermist work is focused on risks of human extinction and civilisational collapse. Some argue, however, that we should pursue a different strategy: to try to improve the quality of the future, conditional on survival. That could, in turn, be done in several different ways. A popular approach is to try to improve people’s moral values, so that they become more likely to take actions that have beneficial long-run effects. That’s the strategy that I discuss in this post.*
In the effective altruism community, the moral values strategy often seems to be equated with moral circle expansion: trying to spread the view that animals, digital people, and other groups are (full) moral patients. The idea is that moral circle expansion would decrease the risk of discrimination against such groups in the future.
I agree that narrow moral circles are a problem, but I don’t think they are the key problem with our current moral values. Narrow moral circles might cause people to do harm in the future, but in my view a more important problem is that people’s values might cause them not to take great opportunities to make the world better.
Many effective altruists think that we may be close to developing technologies that could radically transform life on Earth and beyond. Human enhancement technologies could drastically improve the quality of our lives. And we might be able to create new forms of sentience, whose quality of life would be much higher still. In my and many other (though not all) effective altruists’ views, we should be open to deploying such technologies (depending on their nature). The expected positive value of doing so is, I would argue, much greater than the expected negative value of future harm due to discrimination against animals or digital people. (Though I appreciate that some would disagree with that.)
The deployment of such technologies would likely face considerable opposition. However, I don't think that it would mostly stem from narrow moral circles. People would surely view enhanced humans as moral patients, yet they remain largely opposed to human enhancement. And even if they viewed new forms of sentience as moral patients, many would not want to create them.
Instead, I think the opposition would have other sources. People have a range of worries about human enhancement, from equality concerns to more conservative or purity-based judgements. Many have person-affecting intuitions, and thus don’t see failure to create new, happier forms of sentience as a cost. More generally, most people have the nonconsequentialist intuition that failing to act on opportunities to make the world better is not nearly as bad as causing harm. That leads them, in my view, to underrate the moral importance of deploying technologies that could radically improve the conditions for Earth-originating life.
To increase the chances that such technologies are deployed, we thus don’t primarily need moral circle expansion. Ideally, we want a much broader swathe of value changes. For instance, we may want to advocate directly for total utilitarianism, if we’re sympathetic to that view. Alternatively (or perhaps complementarily), we could identify a specific technology and campaign for its deployment in a more ad hoc manner. There are many interesting approaches to value change, and we shouldn’t equate value change with moral circle expansion, especially since moral circle expansion is unlikely to be the most promising strategy.
* However, I’m not comparing this strategy with the impact of existential risk reduction.
Thanks to Pablo Stafforini for comments.