Introduction
We have a variety of views when it comes to axiology – how good various world histories are relative to one another, whether such assessments are subjective or god- or nature-given, and how decision-relevant that is in practice (section 3.1). But there’s also the related question of which individual behaviors or social norms bring about the best world histories given some axiology. This article focuses on the second question and ignores most of the complexity of the first.
Paul Christiano has argued that integrity is crucial. Scott Behmer has warned that the EA community tends to free-ride and that it’s unclear when free-riding is really appropriate. (I’ve commented on the second post.)
Once you’ve settled on some axiology, calculating and comparing the expected choiceworthiness of all actions may not be feasible. So what considerations should guide your actions? I’ve come across a number of such considerations in isolation. In the following, I want to sketch a framework that I’ve found helpful to consolidate many of them.
It doesn’t answer whether a particular cooperation or moral trade is worth it, but it has helped me structure my thinking about this question. If two people both accept a framework like this one, they can also communicate much better about their needs and what they want to offer in return.
Here are a few questions that it has helped me understand better:
- Intimacy of cooperation: Do you collaborate with a highly value-aligned cofounder on a charity startup? Or do you cooperate with a somewhat aligned group on some common instrumental goal? Or do you trade with a mostly unaligned group on an issue where some of your opposing preferences are more intense than others, allowing for gains from trade? Or do you clash with an opponent in a verbal debate in which you both refrain from insults, blackmail, and violence? I address this in the Levels section.
- Level of escalation: Do you and an opponent clash in a debate in which you maintain high epistemic standards – you refrain from lies and misrepresenting your opponent’s points, and you concede points that you think they’re right about? Do you and your opponent raise your voices but refrain from blackmailing each other with private information you have? Do you and your opponent scream incomprehensibly at each other but refrain from physical violence? Do you and your opponent beat each other up but leave your guns in the holsters because of a shared understanding that it’s a fist fight? I address this in the Levels section.
- Reciprocation of your cooperation: You may be happy to pay it forward a bit, but you don’t want to be free-ridden on for long. You also don’t want to end up in mutual defection just because of noise in your communication. I address this in the Levels section.
- Reasons for imperfect alignment: Are you factually wrong about something, are they, or both – and can either of you be convinced of the other’s or a third position? Or are their terminal moral preferences different from yours? Or is it potentially the first, but finding out would be more costly than compromising? I address this in the Problems section.
- Wanting to help beings who can’t or are unlikely to reciprocate: Nonhuman animals or people who’ll be alive only when you’re dead are examples. You need to weigh your resource expenditure on this goal against your resource expenditure on gains from trade. I address this in the Problems section.
- The neutral level of cooperation: Is it neutral to stay completely silent on a problem, or is the problem so widely reviled (in your country, city, or among your peers) that silence reads as reactionary and the neutral stance is to express support in various customary, low-cost ways? I address this in the Problems section.
Some of this will become clear when I introduce the levels of cooperation that I’ve come up with. The other questions are specific to particular situations; these I’ll address in the Problems section. Finally I’ll try to guess how important all of this might be for us.
Levels
For now, I’ll assume that, all else equal, closer cooperation is preferable to more reserved cooperation or conflict, be it because of reduced overhead, gains from trade, or less zero-sum mutual undoing of efforts. That’s perhaps more plausible if I also count as “cooperation partners” two people who don’t know of each other but work toward the same goal without duplicating efforts. (They might work on something that is not verifiable, so that it’s valuable to know whether two people arrived at similar conclusions completely independently.)
Below are some illustrative levels of cooperation. These are lines drawn somewhere in the middle of huge gray areas, so don’t take them too seriously. For example, some collaborators may be partially motivated by improving their CVs and so may be more like trade partners, and some people may be collaborators on one issue but mere nonaggressors on another.
This list starts with a level of maximal escalation and goes up to perfect cooperation:
- Uncivil aggressors: People outside civilization, in the state of nature, be it because they don’t care about anyone, including their future person moments, or because they are untouchably powerful. They flout all social norms of interaction. They might kill you if it seems advantageous to them.
- Ambivalent aggressors: People who compromise on some social norms. Say, they might lie or blackmail you, but they wouldn’t kill you even if they could get away with it. That’s just an example: Some may find lying more objectionable than killing. You may be able to predict which social norms they are more ready to flout than others.
- Civil aggressors: People who may attack you but in basically civil ways. Say, they may attack you through honest but pointed critiques in public forums. A rough guide may be that they’ll not resort to methods that are illegal in their jurisdictions.
- Nonaggressors: Everyone you trust not to attack you. You can trust them to stay out of your way but not to live up to any norms that are not legally required of them.
- Neutrals: Everyone who takes care to do just enough not to hinder your efforts. They may be quite committed to maintaining at least this neutral level. They actively seek out feedback on their behavior. They may not be interested in praise but they’ll change their behavior if they hear complaints.
- Trade partners: Everyone you trade with. Say, a musician you pay to create the theme song for your startup. If you pay them well, you’ll get a great theme song from them. This also includes opponents with whom you can agree on compromise solutions. (Failing that, you’ll probably fall straight to nonaggressor level with them rather than neutral.)
- Collaborators: Everyone you collaborate with because you share some goals, terminal or instrumental. Those might be people of another organization with an only subtly different vision. They don’t require payment from you if they can afford to forgo it.
- Unified actors: You and everyone you’re highly value-aligned with. Maybe your cofounders of a charity startup, your partners, or whole subsets of the EA community.
This progression doesn’t settle how we should resolve the tension between limiting free-riding and paying it forward, or between limiting free-riding and maintaining the moral high ground, or between limiting free-riding and improving social norms. But it can lend some structure and language to discussions of the topic. The following is an example.
To a first approximation, it seems that you should engage with someone on the level that they chose for the interaction. If you start the interaction, then you can engage them on the level that is your best guess of the level they would choose.
But I think that’s risky. Communication is noisy. Communication that hasn’t happened yet is particularly noisy. Lifting an interaction with someone to a higher level takes a lot of work. Losing them or getting caught in defect-defect cycles is much easier. The government and our friends and peers give us powers and incentives that make it cheaper and less risky to maintain a moral high ground.
All in all, the costs, risks, and opportunities are sufficiently asymmetrical that it’s usually warranted to err on the side of starting an interaction on too high of a level, say, by one level. (Unless you really can’t afford it.)
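To make the noise worry concrete, here’s a minimal sketch of an iterated prisoner’s dilemma with noisy perception – the payoffs, the noise rate, and the 30% forgiveness probability are all toy assumptions of mine, not anything established in this article. It compares plain tit-for-tat with a “generous” variant that sometimes forgives an apparent defection, which is one way of operationalizing “err on the side of a higher level”:

```python
import random

# Toy payoffs for (my_move, their_move): C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(observed):
    """Copy the opponent's last observed move; start out cooperating."""
    return observed[-1] if observed else "C"

def generous_tit_for_tat(observed):
    """Like tit-for-tat, but forgive an observed defection 30% of the time."""
    if observed and observed[-1] == "D" and random.random() > 0.3:
        return "D"
    return "C"

def flip(move):
    return "C" if move == "D" else "D"

def play(strategy_a, strategy_b, rounds=1000, noise=0.05):
    """Average payoff per round for A when each move is misperceived
    by the other player with probability `noise`."""
    seen_by_a, seen_by_b = [], []  # what each player *believes* the other did
    total = 0
    for _ in range(rounds):
        a = strategy_a(seen_by_a)
        b = strategy_b(seen_by_b)
        total += PAYOFF[(a, b)]
        # Noisy perception: occasionally a move comes across as its opposite.
        seen_by_a.append(b if random.random() > noise else flip(b))
        seen_by_b.append(a if random.random() > noise else flip(a))
    return total / rounds

random.seed(0)
print("TFT  vs TFT :", play(tit_for_tat, tit_for_tat))
print("GTFT vs GTFT:", play(generous_tit_for_tat, generous_tit_for_tat))
```

With noise, two plain tit-for-tat players tend to get locked into long retaliation cycles and score well below the mutual-cooperation payoff of 3 per round, while the generous players recover from misperceptions and stay close to it. That asymmetry is the quantitative version of starting an interaction one level too high rather than one too low.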
Problems
Here is how I would frame various problems that I’ve observed.
Failing to Notice Trade Opportunities
One failure mode is to ignore someone – to behave neutrally or even just as a nonaggressor toward them – who might be a trade partner or collaborator if only they (or you) realized some minor fact. The information exchange can happen efficiently and is well worth the time investment. It may be particularly worth it for you if you’re more wrong than the other party. Conversely, one might overinvest in trying to resolve differences at the expense of putting more work into finding people or groups who would readily enter into close collaborations or trades right away.
Examples:
- I’ve often talked with people who seemed to be opposed to effective altruism for various reasons that were easy to resolve. One person I remember thought that the likes of GiveWell were oblivious to the research of J-PAL and were trying to reinvent the development-economic wheel. I could easily convince her that that’s not the case.
- Michelle Graham argues that the wild animal welfare community missed out on important contributions from conservationists and that the EA and animal advocacy communities still miss out on important contributions from people of the global majority (people of color). Perhaps that can also be remedied.
- But there’s also often an effect that has been termed the “narcissism of small differences.” Very different groups may never talk to each other, but groups that are only very slightly different may take those remaining differences as occasion for hostility or (more relevantly in this context) years-long, tedious debates. Up to some point it’s worth discussing a disagreement, but if resolution fails for too long, it may be cheaper to compromise even if the issue should hypothetically lend itself to an empirical resolution. (The discussion can continue once it’s no longer blocking the collaboration.)
Many Currencies
Second, when it comes to bargaining, there are many currencies involved, and the parties may disagree on how to convert them into one another. Nonhuman animal trade partners or future generations may remunerate us with increased aggregate well-being, which we value. Present human trade partners may contribute back with their skills or contacts or money. Those are already four currencies whose exchange rates potential trade partners can hold greatly different opinions on. That can make causal bargaining difficult and gameable and can make acausal bargaining quite unclear. In any case, it’s perhaps helpful to think of all these benefits as having conversion rates that you just need to come to agree on. (A toy sketch follows the example below.)
Example:
- One activist may argue that funding should go to the wild animal initiative because the number of wild animals is so enormous and our potential to help them so largely untapped that research into ways to do that improves enormous numbers of animals’ person moments. The other activist may argue that beings who don’t contribute to human society should not be considered moral patients, so that it’s improper to use such beings to inflate the importance of an alleged problem. To resolve this conflict, one would have to bet on moral realism and solve some key issues of ethics in ways that convince both. That may be too hard, so they’ll have to settle on some (perhaps unprincipled) compromise to convert the currency of improved wild animals’ person moments to a currency the other activist values.
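Here’s a minimal sketch of the conversion-rate framing – the currencies, rates, and trade amounts are made-up placeholders, not a claim about any real negotiation. The point is that a trade can be worthwhile even when the parties’ exchange rates disagree wildly, as long as each side comes out ahead by its own rates:

```python
# Each party converts the "currencies" into its own units of value.
# All numbers are made up for illustration.
values_a = {"money": 1.0, "skills": 2.0, "wellbeing": 5.0}  # A prizes well-being
values_b = {"money": 1.0, "skills": 0.5, "wellbeing": 0.1}  # B barely values it

def value_of(deltas, values):
    """Value of a bundle of currency changes under one party's conversion rates."""
    return sum(values[currency] * amount for currency, amount in deltas.items())

# A proposed trade: A pays B to do skilled work that improves well-being.
a_deltas = {"money": -100, "wellbeing": +30}  # A loses money, gains well-being it values
b_deltas = {"money": +100, "skills": -40}     # B gains money, spends skilled labor

surplus_a = value_of(a_deltas, values_a)  # -100*1.0 + 30*5.0 = +50
surplus_b = value_of(b_deltas, values_b)  # +100*1.0 - 40*0.5 = +80

print(f"A's surplus by A's rates: {surplus_a:+}")
print(f"B's surplus by B's rates: {surplus_b:+}")
print("Trade worthwhile for both:", surplus_a > 0 and surplus_b > 0)
```

Bargaining is then the search for bundles that keep both surpluses positive – the parties don’t need to agree on a common exchange rate, only on the deal itself.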
Neutrality and Nuance
Third, there is neutrality and nuance: Nonaggression is a very fundamental tenet in many parts of the world. Various countries and in particular various small communities and friend groups will have much more refined and varied norms, and many of them will serve good purposes. Even just being neutral will then mean more than just abiding by the law. If you don’t want to hinder the efforts of those in your community, you will need to adhere to a higher standard.
Examples:
- In much of the world the neutral stance toward animal rights may be to not insult vegans or activists. In some parts of the EA community, however, the neutral stance may be to be at least a lacto-vegetarian nonactivist.
- In some workplaces it’s neutral to refrain from shouting others down when they try to speak. But in others it’s neutral to be respectful of what they have to say and to interrupt someone only in exceptional circumstances.
High-Dimensional Beating
Fourth, it’s hard to recognize multidimensional beating without becoming overly cautious about collaboration. Beating is “the procedure by which a ship moves on a zig-zag course to make progress directly into the wind (upwind).” Someone may have instrumental goals opposite to yours, but you have resources that they desire. So they may pretend to compromise with you to get some of your resources in return. But really they keep shifting (or rotating – imagine a spiral) what they compromise on. At any particular moment they may seem like a trade partner, but over a sufficiently long time it becomes clearer that they are using your resources to work against you.
This can be less obvious than the illustration in the Wikipedia article linked above because the tacking points can be smoothed out. It’s also less obvious because social interaction is very high-dimensional, so a slight, gradual reversal on one agreed-upon compromise can be hidden among, or cushioned by, a strengthening of a number of other compromises. This can arise seemingly naturally if you press the trade partner to adhere to some particular parts of your agreement: They may strengthen their momentary commitment to those parts while reversing their commitment to parts that are not currently in focus.
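Here’s a minimal simulation of that pattern – the six “compromises,” the oscillation, and the drift rate are all made-up parameters for illustration. Each compromise oscillates out of phase with the others, so at every check-in some of them have visibly strengthened, while a small drift term hidden in each one steadily turns the total against you:

```python
import math

DIMS = 6  # number of separate agreed-upon compromises (toy number)

def commitment(i, t, drift=0.02):
    """Partner's commitment to compromise i at time t: the dimensions
    oscillate out of phase, with a slow drift against you hidden inside."""
    return math.sin(t / 3 + 2 * math.pi * i / DIMS) - drift * t

for t in range(5, 31, 5):
    strengthened = sum(commitment(i, t) > commitment(i, t - 5) for i in range(DIMS))
    total = sum(commitment(i, t) for i in range(DIMS))
    print(f"t={t:2}: {strengthened}/{DIMS} compromises strengthened since last check, "
          f"total commitment = {total:+.2f}")
```

At every check-in, a few compromises have genuinely improved – enough to point at if pressed – while the total commitment falls monotonically (the out-of-phase sines cancel exactly, so the total is just the drift).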
Examples:
- Abusive relationships can work this way.
- Some companies with large environmental and welfare externalities (e.g., sellers of animal products) may be attempting this when they sign various pledges while there’s a lot of pressure on them but, years later, fail to follow through once the pressure has moved on to some other issue.
Problems with Aggregation
Fifth and finally, there are sometimes problems with aggregation. For example, when whole organizations want to trade or cooperate with other organizations – or even bigger and looser groups want to do the same – there may be tensions between what the group as a whole wants or does and what individual people in the group do. Conversely, two people may trade or cooperate while one of them is part of a bigger group; the group may violate agreements even when the individual doesn’t. In both cases there are additional costs to expelling a group member or to leaving a group, which may or may not be more important to the trade partner than the trade. This can also be exploited on purpose.
Examples:
- Whenever I hear someone complain that people with label X believe A and B, where A and B contradict each other, I wonder whether the fraction of Xists who believe both A and B is really tiny and the actual problem is that Xists are divided over whether A or B. (The division may not be one that they’re wont to fight over, so they may hardly notice it.)
- Conversely, this effect can be exploited by companies that agree with other companies or a government to follow certain guidelines but then implement incentives internally that subtly push against those policies. They can claim that they just haven’t quite figured out the best incentives yet, and there may never be any proof that they set the present incentives to violate the policy on purpose, even if recordings of all the relevant meetings leak.
Importance
I’m quite unsure how important this is in general. It probably varies a lot and is more important on smaller scales than on larger ones – e.g., it may be important for EA startups, but a country may only benefit mildly from improving cooperation nationally. Three heuristics come to mind:
- A powerful actor (I use “power” and “resources” interchangeably here) whose marginal utility doesn’t diminish quickly may rather want to just do their thing with no help (but also no interference) from others. (Like some fictional superheroes – Batman, Superman, Iron Man.)
- Someone who doesn’t have enough power will benefit greatly from trading with a similarly or more powerful partner. (In Germany I saw a lot of animal rights organizations cooperate like that.)
- Someone powerful with diminishing marginal utility may get to the point where they are forced to trade with others to make progress. (Evidential cooperation in large worlds may be particularly interesting for them. Thanks to Daniel Kokotajlo for this idea! Maybe tech companies that can’t find enough developers to hire, so that they have to “acquihire” other companies, count here.)
Someone may also be powerful in one way (e.g., have a lot of money) but weak in another (e.g., unable to recognize the first excellent developer who could build up a team of excellent developers), which again makes trade more valuable for them.
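A toy calculation of the third heuristic – the wealth levels, the transfer size, and the logarithmic utility function u(x) = log(x) are all my own illustrative assumptions. With diminishing marginal utility, the same transfer is nearly free to a resource-rich actor but substantial to a less-resourced partner, which is why trade starts to look attractive once marginal returns flatten out:

```python
import math

def utility_change(wealth_before, wealth_after):
    """Change in log-utility u(x) = log(x) when wealth moves from before to after."""
    return math.log(wealth_after) - math.log(wealth_before)

transfer = 1_000
cost_to_rich = -utility_change(1_000_000, 1_000_000 - transfer)  # what paying costs the rich actor
gain_to_poor = utility_change(10_000, 10_000 + transfer)          # what receiving is worth to the partner

print(f"utility cost to the rich actor:  {cost_to_rich:.4f}")  # ~0.0010
print(f"utility gain to the poorer one:  {gain_to_poor:.4f}")  # ~0.0953
```

Here the rich actor gives up about 0.001 units of utility while the partner gains about 0.095 – a near-hundredfold asymmetry that leaves a lot of room for mutually beneficial terms.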
On a large scale, improved cooperation may not make a big difference. The advent of the internet has probably made a lot of cooperation on a national and international scale a lot easier, and yet global GDP has increased by barely one third since then, even imagining that the 2008 crash hadn’t happened. And that increase is the product of much more than just the internet.
But maybe a 5- or 10-fold increase is realistic in more limited contexts that either suffer from low levels of cooperation or have high upsides to cooperation. In fact, I once increased the fundraising revenue of an organization 20-fold at about constant cost by adapting completely to a particular audience, and I might’ve increased it 200-fold or more if I had known then what I know now. That seems similar to a shift from a neutral role to a trade-partner or collaborator role for that audience.
Beyond that, I suspect that there are risks from failing to keep up (to stay at an at least neutral level) with positive developments in one’s social context, which can cause hostility rather than just lost gains from trade. The negative effects of such hostility on small or less powerful groups may rival (in absolute terms) the positive effects above. Just as positive effects can have windows that open and close, negative effects can permanently destroy opportunities. So the EA community may fail to seize an opportunity to forge an important alliance before such a window closes, but it may also be destroyed if it fails to compromise before a powerful opposition forms. The original emergence of the community may have involved a lot of luck. Our new experience and any remaining networks may help to make up for some of the luck that will be lacking the second time around, but such a collapse seems very costly even if it is eventually reversible.