Introduction
We’re quick to admit that we haven’t left our tribal stone-age brains behind yet, so we need to be continuously wary of the biases that come with having a mind that “runs on broken hardware.” But in some domains, it’s not easily settled who is biased and who is applying useful simplifying heuristics.
Moral Uncertainty
Effective altruism comprises a number of camps. For the sake of this example I’ll simplify the situation down to two clusters and call them the Visionaries and the Stoics.
The Visionaries are people who yearn for space colonization, abhor existential risks, value making happy beings as much as they value making beings happy, and generally behave sort of like classic (hedonistic) utilitarians. They are usually not very vocal about suffering.
The ones who are vocal about suffering, however, are the Stoics. They are wary of space colonization, abhor suffering risks, value making beings happy more than making happy beings, and generally behave sort of like preference utilitarians (with an antifrustrationist bent). They are usually not very vocal about maximizing our use of the cosmic commons.
My hypothesis is that this division is driven by our tribal minds and discomfort with uncertainty.
The Visionaries formed or consolidated around a pre-existing and influential group of people working on existential risks, primarily from AI. They had built expertise in the area and a community around their work. Eventually they recognized some problems with their work – e.g., that they increase the probability of astronomical suffering as a side effect of increasing the probability of astronomical bliss or eudaimonia, or that it’s a common intuition among thoughtful people today to have no or few offspring unless a very high bar for the offspring’s expected life satisfaction is met.2
They were also averse to great uncertainty; in particular, they preferred to be able to believe that they were likely making things better rather than worse. But being the rationalists they are, they knew their peers and their own minds wouldn’t let them get away with believing provably false things about reality. The only recourse was to change the element of their decisions that was arguably more antirealist than their epistemology, namely their moral goals. So they gradually self-modified to the de facto rather hedonistic utilitarian position that allowed them to care to an overriding extent about such things as astronomical waste.
Likewise, the Stoics formed amidst an existing animal advocacy community. They were heavily invested in reducing the suffering of farmed animals by preventing them from coming into existence in the first place. Eventually they recognized some problems with their work – e.g., that the logic of the larder may go through for most cows farmed for meat, or that they may deprive some sufficiently insensitive individuals of their net positive lives as a side effect of preventing other net negative lives, with all of it hinging on thresholds with huge uncertainty, such as which life is worth living.
They were also averse to great uncertainty; in particular, they preferred to be able to believe that they were likely making things better rather than worse. But being the rationalists they are, they knew their peers and their own minds wouldn’t let them get away with believing provably false things about reality. The only recourse was to change the element of their decisions that was arguably more antirealist than their epistemology, namely their moral goals. So they gradually self-modified to the de facto rather antifrustrationist preference utilitarian position (or negative, negative-leaning, prioritarian, etc. utilitarian position) that allowed them to care to an overriding extent about suffering.
Surely, this is not true of everyone in those groups. Degrees of pain sensitivity, a sense of security from being loved by one’s parents in early childhood, one’s density of conscious experiences, agreeableness, and various other influences might be hypothesized to contribute to the effect.
Such hypotheses give either set of moral preferences a sense of arbitrariness that undermines my motivation for pursuing it, which, ipso facto, undermines the intensity of the preferences.
This can easily be misunderstood in various ways, so let me be clear that I distinguish at least four things here: (1) happiness/suffering themselves, (2) the moral evaluation according to which happiness/suffering is good/bad, (3) the moral preference for maximal aggregate happiness/minimal aggregate suffering, and (4) one’s focus on actions that maximize happiness or minimize suffering.
Questioning the relative importance of happiness or suffering doesn’t impinge on what happiness or suffering feel like (1) and needn’t (but might) impinge on (2) or (3). But doing so will change the relative intensity of the preferences and thus what actions end up most choiceworthy.
Moral Cooperation
Moral preferences that maximize or minimize something benefit from having many actors collaborate on satisfying them. (Unless they minimize something rare and unguessably obscure or one is omnipotent.) The more outré one’s moral preferences, the fewer collaborators one is likely to find. So there is a trade-off where, even with perfectly crisp moral preferences, one is better off compromising on them to some degree in order to satisfy them to a greater degree.
My conclusion from this consideration has been to maximize cooperation in five ways and maybe even to interpret moral cooperation as a terminal moral goal of mine.
- I empathize with a mess of moral preferences whose boundaries and levels of abstraction are fuzzy. So even internally, I need to compromise to maximize my motivation for my actions.
  The idea of a moral parliament helps me to achieve this. I imagine that all the different moral theories I empathize with are factions in a moral parliament. Then I intuit some rough fractions of the parliament and assign the factions to the fractions. So we have a smaller faction of something deontological-looking, a larger faction of something preference utilitarian, etc.5
  Theoretically, you could now just crunch the numbers for every decision (a rough sketch of what that could look like follows after this list), but unfortunately my model is not that precise – not just in the simple sense that the fractions are rough, but also in that I’m unsure whether my deontological-looking faction is actually a smaller, say, Kantian one or a much larger two-level utilitarian one that includes the preference utilitarian faction among others. I imagine that many people will face such issues.
  But the model is still helpful in that a lot of potential actions that I can take elicit near-unanimous votes (with abstentions). A vote on whether I want to start space colonization and fill the Hubble volume with happy people (or, better yet, very simple, very happy, simulated beings) elicits some agreement and some shrugs, but also many loud voices urging me to consider the individuals that are suffering, that will be suffering, or that may be suffering if something goes wrong. It’s not met with unanimous approval. Neither would be a goal such as blowing up the planet.
- My moral preferences have changed over time through reflection and changing “tribes.”3 This may continue to happen, so I already want to cooperate with my future versions.
- Other agents alive today are potential collaborators or saboteurs depending on our game-theoretic behaviors; the degree to which our preferences complement, augment, or clash with each other; our respective power; and how well we communicate.
  If we are going for a more or less power-weighed compromise anyway,1 one may wonder why we shouldn’t just let a negotiation decide and be maximally partisan (more partisan than we would normally be, so that we can make token concessions to the other side in return for concessions from them, who we have to assume exaggerate their stakes too).
  But I think one should avoid escalation when possible, as it would only make the process more costly in terms of transaction costs, time, and risk of further escalation. This strikes me as cooperation in a Prisoner’s dilemma–like situation, which is probably a good idea so long as the other side also continues to cooperate. I see no reason to doubt that today.
- Agents in the future can’t causally affect my actions today, but they can acausally reward or punish me by satisfying or frustrating my other-directed preferences. This is distinct from the fifth case in that I can causally affect them – but I can’t do so with certainty, and probably only to a small degree in expectation, so I’m well-advised to take into account the preferences that they are likely to have as opposed to those that I would like them to have.
- Fully acausal cooperation:
  - Conversely, it may be advisable to cooperate with agents in the past, as doing so gives me evidence that agents in the future who are sufficiently correlated with me will cooperate with my moral preferences. This indicates an increased importance of maintaining traditions, at least when our ancestors would not themselves have abandoned them if they had had better information of the sort that we may have today.
  - What may also be called for is cooperating with beings who are not agenty in the sense that they could or would support or thwart our efforts, in that doing so gives us evidence that beings so much more powerful than us that we couldn’t support or thwart their efforts would cooperate with us.
  - Agents throughout the universe or multiverses may also be more likely to cooperate with us in worlds where we cooperate with them. This is where the concept of superrationality comes in.
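To make the parliament model above a bit more concrete, here is a minimal sketch of what “crunching the numbers” could look like. All faction names, weights, and votes are invented for illustration – they are assumptions of mine, not figures from this post.

```python
# A toy moral-parliament vote. Faction weights and votes are purely illustrative.
# Abstentions are modeled as None and excluded from the tally.

factions = {
    "deontological": 0.15,
    "preference utilitarian": 0.45,
    "classical utilitarian": 0.25,
    "virtue ethics": 0.15,
}

# Hypothetical votes on one action, in [-1, 1]; None means the faction abstains.
votes_space_colonization = {
    "deontological": None,            # shrugs / abstains
    "preference utilitarian": -0.6,   # worried about suffering individuals
    "classical utilitarian": 0.9,     # in favor of filling the Hubble volume
    "virtue ethics": -0.2,
}

def tally(factions, votes):
    """Weighted average over non-abstaining factions plus a rough unanimity check."""
    voting = {f: v for f, v in votes.items() if v is not None}
    weight = sum(factions[f] for f in voting)
    if weight == 0:
        return 0.0, True  # everyone abstains: no objection, but also no mandate
    score = sum(factions[f] * v for f, v in voting.items()) / weight
    near_unanimous = all(v > 0 for v in voting.values()) or all(v < 0 for v in voting.values())
    return score, near_unanimous

score, unanimous = tally(factions, votes_space_colonization)
print(f"weighted score: {score:+.2f}, near-unanimous: {unanimous}")
```

The exact numbers matter less than the decision rule: as described above, I mainly use the model to flag actions that fail to win near-unanimous approval, not to rank everything by its weighted score.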
Compromise Goals
The resulting goals are hard to infer precisely, but we can make educated guesses about them. Many gray areas remain because of unknown unknowns, because the compromise goals are sometimes in conflict, or because they are sometimes silent.
- Chesterton’s fence becomes more important, because not only may old norms still be relevant in ways we haven’t understood, but maintaining them out of respect for our ancestors who cared about them also gives us evidence that our descendants may continue to maintain what we care about. Some caveats:
  - We wouldn’t want our descendants with better information to maintain norms that we would rescind if we had such better information. This is a difficult call to make.
  - This is in tension with moral progress, if that’s a meaningful concept, so a world with a short or sparsely populated future may call for a more reactionary morality than a world with a long, populous future.
- For a wide range of other-directed goals, it is instrumentally necessary to exist in order to reach them. Also, a nontrivial number of agents with such goals may exist or continue to exist in expectation. This suggests that opposing existential risks is fairly convergent, because it helps these agents to exist. Some caveats:
  - I know enough people who would prefer not to have been born (but who shy away from suicide because of its low success rate and associated risks). Existential risk reduction may defect against them in the same way that a hypothetical ban on suffering would defect against those who enjoy suffering. My main objection to this line of thought is that agents who are more agenty about other-directed goals will be more likely to welcome their own existence, because for the converse to be the case, their personal suffering would have to outweigh the opportunity cost in moral preference satisfaction that they would pay if they could choose not to have existed. In a purely power-weighed compromise, this would probably settle the issue, because these more agenty agents will lend the greater support (or pose the greater risks) to our goals. But I prefer to live in a world where I’ll not be heavily defected against for having minority preferences (that don’t themselves constitute a defection), and being considerate toward minority preferences myself gives me evidence that others may be considerate toward my minority preferences too. Easier access to safe, institutionalized suicide may be a win-win.
  - It is highly unclear whether continued existence increases or decreases aggregate suffering, depending on what shape it takes, what sorts of minds can suffer, whether we are in a simulation and what the simulator’s intent is, etc. These considerations will need to be weighed against the first compromise goal for each individual scenario whose probability we want to increase or decrease.
  - There may be powerful agents in the future whose other-directed goals may be better served through non-existence.
- The most widely shared moral foundations (according to the eponymous theory) are care and justice, where care can be understood as an antifrustrationist aspect of one’s moral preferences. Especially among the agents (other than my own person-moments) that I’m most likely to want to cooperate with – utilitarians of various stripes – the care foundation is strong. This leads me to believe that there are very few agents who don’t care about suffering reduction to some extent and even fewer who would oppose it.2 Opposing suffering seems to me like the most robust compromise goal at the moment. One caveat:
  - In some cases, this is in tension with the second likely compromise goal, as mentioned in its second caveat. I think that, absent ways of arbitrating such trade-offs, we should steer clear of actions that are partisan in one direction or the other.
The result could be to focus on actions that maximize the sum of these goals as well as one can guess them, but because there are many more constraints, such as crowdedness, personal aptitudes, etc., for me it rather takes the shape of focusing on actions that maximize one goal without hindering the others.
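To illustrate the difference between those two decision rules, here is a toy sketch. The action names and goal scores are invented assumptions of mine, purely for illustration, with the three goals above standing in as columns.

```python
# Toy comparison of two decision rules over hypothetical actions.
# All action names and scores are invented purely for illustration.
actions = {
    "action A": {"tradition": 0.9, "existence": 0.8, "suffering reduction": -0.4},
    "action B": {"tradition": 0.1, "existence": 0.2, "suffering reduction": 0.7},
    "action C": {"tradition": 0.0, "existence": 0.4, "suffering reduction": 0.5},
}

# Rule 1: maximize the sum of the compromise goals.
best_by_sum = max(actions, key=lambda a: sum(actions[a].values()))

# Rule 2: maximize one focal goal among actions that don't hinder any goal.
focal = "suffering reduction"
admissible = [a for a, scores in actions.items() if all(v >= 0 for v in scores.values())]
best_constrained = max(admissible, key=lambda a: actions[a][focal]) if admissible else None

print("sum-maximizing choice:", best_by_sum)       # "action A" (sum 1.3) despite hindering suffering reduction
print("constrained choice:   ", best_constrained)  # "action B" (best focal score among actions that hinder nothing)
```

In this toy data, the sum rule accepts a loss on one goal because the gains on the others compensate; the constrained rule never makes that trade, which is closer to the shape my prioritization actually takes.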
Footnotes

1. Power-weighed compromise is an attractor state because powerful agents could and would want to use their power to change any other compromise into a power-weighed one. This would create additional conflict and then return the system to the power-weighed attractor state anyway. But I think we can aim for a compromise that has stronger minority-protection properties than a pure power-weighed compromise, so long as these concessions cost the more powerful agents less than the conflict would cost them. The marginal value of such concessions is also probably much greater for the minority agents than for the powerful agents.
2. Note that I think this also extends to some degree into population ethics. For example, most people in my circles endorse a “Pro Choice” stance even in the case where (1) in world A the child is not born and in world B has a net positive life, (2) the decrease in the parents’ well-being in world B compared to A is offset by the additional well-being of the child in world B, and (3) the resources freed by not having to raise the child in world A are not invested into an at least equal gain in happiness for an existing or marginal person. These assumptions are perfectly plausible for non-EA parents and run counter to the implications of classical utilitarianism (i.e., total utilitarianism). The “Pro Life” position is very similar so long as it allows abstinence. You have to turn to fringe movements like Quiverfull to find a group that decides in accordance with classical utilitarian population ethics.
3. Interestingly, these changes happened, at least in some cases, before I realized that my environment largely shared them. It would be flattering to think that I arrived at them independently because they are true. It would also be convenient, because they are arguably more likely to be true if more people arrive at them independently. But my use of “flattering” and “convenient” may have betrayed that I’m skeptical. The geographic clustering rather suggests that there is maybe something to the broader cultures of the regions that encourages one view or the other – perhaps differences in thinking about property, somehow connected to social security paid from taxes? But social influences within my small peer groups may also have subtle effects (as described in the introductory paragraphs). No one went out to convince me or even tell me about suffering-focused ethics. It didn’t even have a name. But my efforts to make sense of the actions and priorities of my peers crystallized into models of morality and the world that, ipso facto, led me to prioritize the same actions. Now these models feel fundamentally, axiomatically, and intuitively correct to me, just like one’s own dialect feels intuitively correct to oneself. But we know that it’s just as arbitrary as anyone else’s. (Not fully arbitrary, since it needs to serve as a communication tool and error-correcting code. Just roughly similarly arbitrary.)
4. Back when I hadn’t thought about metaethics and assumed that moral realism must be true, I predictably dedicated a lot of time to the search for the true morality and for some form of test of the truth value of moral theories. That should probably be the top priority for moral realists.
5. There are two things that I’ve decided not to merge down to this level, namely my decision procedures (I’m a big fan of the integrity one) and heuristics that follow from considerations of cooperation (more on that in the second section).