Dated Content

I tend to update articles only when I remember their content and realize that I want to change something about them. But once about two years have passed, I rarely remember an article well enough for that. Older articles are therefore likely to contain some statements that I no longer endorse or would frame differently today.

Introduction

The situations in which I have had to refer to these modes were usually failure scenarios that I sought to explain. Some were mild, like preaching to the choir with the Child in the Pond analogy or discussing minutiae of the applicability of RCTs with someone who has not yet understood the point of altruism. A more worrisome scenario is the one Brian uses, where a deep ecologist – someone who intrinsically values diversity in nature – comes to see preference or hedonistic utilitarians as enemies.

In a recent discussion, friends of mine pointed out that there are certain people who seek competition, so that it may be effortful for them to suspend it in favor of cooperation, as all the modes require to different degrees. The result of the discussion was that it may be more feasible to funnel this competitiveness into channels where it is unlikely to do harm than to simply tell such people to suspend it. I will leave any ideas for such a mechanism outside the scope of this article.

Agents interested in one or several of the same moral goals will soon notice that they can reach these goals more easily when they cooperate on realizing them, convince more agents to join their cause, avoid zero-sum competition, and possibly even trade with groups that have different goals:

  1. Cooperation. Agents can agree on some of their (not necessarily terminal) moral goals and cooperate toward their realization.

  2. Education. Education comes in two subforms, both of which can generally be thought of as cooperative behaviors.

    1. Correcting misinformation. Helping other agents realize that some of their assumptions about the world are mistaken.

    2. Correcting ignorance. Educating other agents on topics they had not considered or did not know enough about.

  3. Trade. When it has been established that both sides disagree on preferences rather than beliefs, or when both sides find the other so resistant to education that persuasion would be prohibitively costly, then both sides can usually gain from avoiding a zero-sum fight in favor of compromise.

These are explained in more detail below.

Cooperation

When agents have several moral goals each, the situation resembles a stag hunt in that there are two Nash equilibria – one where both agents cooperate and one where both defect. (In the Prisoner’s Dilemma, for comparison, the only Nash equilibrium is mutual defection.) Which option carries more risk and which the greater payoff, however, can go either way.
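
To make the contrast concrete, here is a minimal sketch in Python that enumerates the pure-strategy Nash equilibria of a 2×2 game. The payoff numbers are illustrative choices of mine, not taken from any source:

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """Return the pure-strategy Nash equilibria of a 2x2 game.

    `payoffs` maps (row_action, col_action) -> (row_payoff, col_payoff).
    A profile is an equilibrium if neither player gains by unilaterally
    deviating from it.
    """
    actions = ("cooperate", "defect")
    equilibria = []
    for r, c in product(actions, actions):
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in actions)
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in actions)
        if row_ok and col_ok:
            equilibria.append((r, c))
    return equilibria

# Stag hunt: joint cooperation pays best, but cooperating alone pays worst.
stag_hunt = {
    ("cooperate", "cooperate"): (4, 4),
    ("cooperate", "defect"):    (0, 3),
    ("defect",    "cooperate"): (3, 0),
    ("defect",    "defect"):    (2, 2),
}

# Prisoner's Dilemma: defecting is the dominant strategy for each player.
prisoners_dilemma = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 4),
    ("defect",    "cooperate"): (4, 0),
    ("defect",    "defect"):    (1, 1),
}

print(pure_nash_equilibria(stag_hunt))          # (c, c) and (d, d)
print(pure_nash_equilibria(prisoners_dilemma))  # only (d, d)
```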

To cooperate, agents will need to agree on pursuing a moral goal that they share. A concrete example may be that one activist interested in the proliferation of knowledge and the reduction of suffering and a second activist interested in the reduction of suffering and in not lying can agree on their shared goal of reducing suffering. A variation is the situation where at least one agent shares only an instrumental goal with the other agent. Thus, for example, a queer activist and an anti-/ activist can both agree to support secularization efforts in Uganda. Of course, the agents can enter into several such cooperative efforts if their resources allow.

Education

Education comes in two different forms depending on whether the recipient is mistaken or merely oblivious.

Correcting Misinformation

When people do not seem to share any moral goals, it may be the case that one party has not realized that what they took for terminal moral goals are merely instrumental ones, and possibly badly chosen ones, so that the relationship between the goals is not apparent. In that case, one cost-effective and cooperative way of winning them over to the cooperative effort is to provide them with the information they are lacking.

A person might be enrolled as the sponsor of a child in Kenya and might scoff at people who support obscure interventions like the treatment of schistosomiasis, believing that they themselves care about the well-being of their sponsored child. But upon learning about the identifiable victim effect and scope neglect, the person may do further research and find that the charity running the child sponsorship program realized years ago that providing such aid to individual children is inefficient compared to communal programs, and that it states in the fine print that it interprets child sponsorship such that the funds are used to provide communal programs, such as the treatment of schistosomiasis, to the village where the child lives.

After some soul-searching, the person may come to the conclusion that they cannot begrudge the charity that decision, and that it would in fact be consistent with their morality to provide more than one child with the same level of well-being if they can do so at the same cost. The person may come to realize that what they really care about is the well-being of the greatest number of people, not only that of a specific child, and they will then be ready to cooperate with a group that focuses entirely on providing schistosomiasis treatment.

Correcting Ignorance

A variation of this mode occurs when a person has never considered some key question. They may not be aware that there is a decision they have constantly been making by omission, simply because they did not know it existed. Telling such people about the options they have may greatly enrich their thinking and their lives. It may also benefit the internal consistency of their moral system or their identity and thus reduce cognitive dissonance. Some people may never have considered philanthropy, for example, so simply by telling them about this possibility, one can create new cooperation partners.

Trade

Other people, however, may hold the same convictions as the misinformed person above without actually lacking any information. They may be well aware of all one’s arguments and still not be swayed by them. Here the education approach will fail. What remains is trade.

We should note, however, that the distinction between ethical and epistemic differences can be a fuzzy one. When someone’s (epistemic) beliefs are very different from one’s own, it can be very costly to convince the other person all the way to one’s own view even if one is objectively right, so that compromise is again the more efficient option.

When these people have a strong interest in spreading their values, we encounter a situation that resembles the Prisoner’s Dilemma. Much research has been invested in this scenario. Axelrod, for example, boiled down his insights from a tournament of algorithms playing the repeated Prisoner’s Dilemma into four rules:

  1. Don’t be envious.
  2. Don’t be the first to defect.
  3. Reciprocate both cooperation and defection.
  4. Don’t be too clever.

Visualization of mutual gains from trade by Brian Tomasik.

One very simple algorithm that implemented this behavior was Tit for Tat, which cooperated unless the other agent had defected in the previous round. Thus it quickly forgave defection, cooperated by default, punished defection and rewarded cooperation, and was perfectly predictable.
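
As a minimal sketch, Tit for Tat can be pitted against an unconditional defector in a few lines of Python. The payoffs below are the standard tournament values (T = 5, R = 3, P = 1, S = 0), chosen here for illustration:

```python
def tit_for_tat(my_history, their_history):
    """Cooperate by default; otherwise copy the opponent's last move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    """Defect unconditionally."""
    return "D"

# Payoffs as (row_score, col_score) for moves (row_move, col_move).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        payoff_a, payoff_b = PAYOFFS[(move_a, move_b)]
        score_a += payoff_a
        score_b += payoff_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited only once: (9, 14)
```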

Brian Tomasik has investigated what this could mean for altruists. He compares the case of deep ecologists, who terminally value the diversity of life and noninterference, and animal welfarists, who care about the well-being of all sentient beings. These two are often opposed. Humans have been able to reduce the suffering due to disease, starvation, adverse climate, and many other sources by developing technical, medical, and economic remedies. Wild animals are still fully exposed to all of these and cannot change these conditions for themselves, chiefly because of their lower intelligence. Hence improving animal welfare will require intervention in nature, presumably by humans directly or proximately.

When these opposed factions both want to control the future to implement their moral preferences, they would have to fight each other, a fight that faction \(i\) would win with some probability \(p_i < 1\). The values of \(p_i\) sum to at most 1, and to less than 1 if the fight can end with neither side in control.

However, both sides know which aspects of the future they care about more strongly than others, and these priorities differ between the factions. Animal welfarists, for example, may strongly prefer a world with few animals over a world with many, since fewer wild animals means less suffering, while deep ecologists may be indifferent about the number of animals per species so long as diversity is maintained.

Based on their estimates of how likely each faction would be to overpower the other in a fight and the value they put on different aspects of their utopias, they can arrive at a compromise that is better with certainty than the expected value of a fight.
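
A toy calculation can make this concrete. All numbers below are hypothetical and chosen only to exhibit the structure of the argument, not taken from Tomasik’s essay:

```python
# Each faction values its own utopia at 1 and the opposing utopia at 0.
p_welfarist = 0.5  # chance the animal welfarists would win an outright fight
p_ecologist = 0.4  # chance the deep ecologists would win; the sum is below 1
                   # because a fight can end with neither side in control

# Expected value of fighting (ignoring the direct costs of the fight itself):
ev_fight_welfarist = p_welfarist * 1 + (1 - p_welfarist) * 0  # 0.5
ev_fight_ecologist = p_ecologist * 1 + (1 - p_ecologist) * 0  # 0.4

# A compromise world, e.g. few animals per species but diversity preserved,
# gives each side most of what it cares about most, because their priorities
# concern different aspects of the future.
compromise_welfarist = 0.7  # hypothetical valuation of the compromise
compromise_ecologist = 0.8  # hypothetical valuation of the compromise

assert compromise_welfarist > ev_fight_welfarist
assert compromise_ecologist > ev_fight_ecologist
print("The compromise beats the expected value of a fight for both sides.")
```

Because the factions prioritize different aspects of the future, the compromise valuations can exceed both expected fight values at once, which is exactly the mutual gain from trade.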

This is already happening: when the VEBU (“Vegetarian Union”) cooperated with the sausage producer Rügenwalder on vegetarian products that reduce animal suffering and are a financial success for the company, both sides had to make some concessions, but the result was a mutual gain.

