Introduction

There are a number of questions that I think could greatly change or refine dominant EA opinions on prioritization depending on the answers we can find for them. I don’t have time to research all or even a significant fraction of them, so I’m hoping that by compiling them here, others will get interested in them too.

Some of these questions will seem intended to cast doubt on the opinions of one group or another. That’s just because I think that’s the job of prioritization research. But there’s probably a rough positive correlation between how much I like to probe theories and how likely I think they are to be correct.

If you’re interested in researching one of these questions, please leave a comment or ask me to edit the post so we don’t duplicate effort.

Value of Prioritization Research

Questions

  1. Which paths to impact does prioritization research have, and how plausible is each of them?
  2. Are they sufficiently valuable on the margin to beat more direct interventions?

Introduction

I can think of four paths to impact already, but I don’t have strong intuitions for how to weigh them, nor do I think the list is complete.

  1. Research may show that we’ve been sufficiently wrong that continuing to pursue our current strategy would incur a significant opportunity cost.
  2. Research may uncover new, highly cost-effective interventions.
  3. Research may produce a more nuanced picture of the cost-effectiveness landscape – a combination of paths 1 and 2.
  4. Research may cement our current strategy with enough evidence to secure the support of more uncertainty-averse contributors.

Path 4 has some appeal to me, though I’m divided on whether that’s rational.1 Path 3, however, is the one I find most interesting, and perhaps it’s also the one that requires some more explanation, because it touches on the question of how I would like the knowledge we’ll hopefully generate to be structured.

I’ll discuss this question further in the next section.

Denise Melchin’s and Sam Bankman-Fried’s observations with regard to multiplicative effects between interventions are likely relevant here (see also Carl Shulman’s comment on that article), because the most relevant spaces – such as prioritization itself, AI safety, welfare biology, and community building – are all still very small and may have a lot of low-hanging fruit left to pluck. Small size and high output elasticity of labor are indicators that we need to be careful not to allocate more resources to an area than it takes to resolve its bottleneck, because doing so would only exacerbate a bottleneck elsewhere.

Based on the low number of people working on prioritization at the moment – probably lower than in AI safety and community building, but tell me if that’s wrong – it’s possible that prioritization is a greater bottleneck than the work of the areas it leverages.5
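
To make the multiplication idea above slightly more concrete, here is a toy sketch, with entirely made-up headcounts and elasticities, of why an additional person adds the most impact in the scarcer of two multiplicatively interacting inputs.

```python
# Toy Cobb-Douglas-style model of multiplicative impact between a small
# prioritization space and a larger direct-work space. All headcounts and
# elasticities are made up purely for illustration.
def total_impact(prioritization_people, direct_people,
                 elasticity_p=0.5, elasticity_d=0.5):
    """Impact is multiplicative, so the scarcer input acts as the bottleneck."""
    return (prioritization_people ** elasticity_p) * (direct_people ** elasticity_d)

base = total_impact(5, 100)
print(total_impact(6, 100) - base)  # one more prioritization researcher: ~2.1
print(total_impact(5, 101) - base)  # one more direct worker: ~0.11
```

Under these toy assumptions, an additional person in the tiny space is worth roughly twenty times as much as an additional person in the large one, and the advantage only disappears once the two spaces are similarly sized.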

Relevance

If we’re considering investing time and money into prioritization research, it would be useful to have some evidence that it’s plausibly the best use of our time. If not, it would still be useful to find out how we can 80/20 it to reap any low-hanging fruit.

Assumptions

  1. One’s moral system is aggregative and consequentialist.
  2. Such moral systems are common enough that the research has value for enough people.
  3. Infinite ethics is somehow solved.

Unified Model

Questions

  1. What structure lends itself to parallel research on many questions while minimizing coordination overhead and maintaining a focus on only the most important issues?
  2. How does this structure integrate all that information into a ranking?
  3. To this end, what metrics are feasible to determine for a wide range of interventions?
  4. What are the best ways to trade off value of information and option value against terminal goal realization, such as minimizing suffering?
  5. What interventions lend themselves to several metrics so that more evidence on their cost-effectiveness can provide information on the priorities of several others at once?

Introduction

The typical EA approach, perhaps pioneered by GiveWell, has served its purpose: Identify the most pressing problems, find the best interventions that address these problems, and then support the organizations that implement these interventions best. But it only separates interventions worth investigating further from interventions to dismiss; it is not a clean way of prioritizing within the first group.

It is these interventions, not the problems, that can be more or less cost-effective. Interventions, however, don’t usually solve only one problem, especially not if everyone gets to decide for themselves how to categorize undesirable things into problems.

My Vision

  1. One problem is the vagueness of what constitutes one problem: whether it could just as well be subdivided (academia being inefficient vs. psychology being inefficient and gender studies being inefficient) or whether it’s instrumental/proximate (meat-eating vs. speciesism). In the first case, academia being inefficient doesn’t cause the subfields to be inefficient; in the second, speciesism is one cause of meat-eating (assuming that theory holds).
  2. The second problem is whether an intervention has one main effect and other effects (flow-through effects) are minor or whether the configuration is somehow different.
  3. Finally, we don’t have a common metric that would always allow us to compare the magnitude of the effects an intervention has on a problem.3

So taken together, we’d like to have a set of problems, an intervention, and a common metric that allows us to compare the effect of the intervention on the problems. Then we could determine the effect distribution of the intervention over the problems and say whether it’s sort of log-normal (very few problems get a lot of effect, others can be ignored), more heavy-tailed (very few problems get a lot of effect but many others get sizable effects too, so they can’t be ignored), uniform (every problem gets addressed about equally much), or some other variation.4
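
As a minimal sketch of how such a classification could work, with hypothetical problem names and effect sizes, one can normalize an intervention’s effects into shares and use a couple of crude concentration statistics to label the shape of the distribution:

```python
# Minimal sketch: given an intervention's effect on each problem, expressed in
# one common metric, summarize how concentrated the effect distribution is.
# Problem names and effect sizes are hypothetical.
effects = {"speciesism": 10.0, "meat-eating": 2.0,
           "wild-animal suffering": 1.0, "zoonotic risk": 1.0}

total = sum(effects.values())
shares = sorted((v / total for v in effects.values()), reverse=True)

top_share = shares[0]         # weight of the single largest effect
tail_share = sum(shares[2:])  # weight of everything outside the top two

if top_share > 0.8 and tail_share < 0.05:
    shape = "one problem dominates; the rest can plausibly be ignored"
elif tail_share > 0.1:
    shape = "heavy-tailed: a few problems dominate, but the tail still matters"
else:
    shape = "fairly uniform across problems"

print(shape, [round(s, 2) for s in shares])
```

The thresholds here are arbitrary; the point is only that once effects are expressed in a common metric, the shape of the distribution becomes an empirical question rather than a matter of framing.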

These are all significant challenges. To make progress on them, we’ll need much more research on how to translate various effects into the metrics we care about (or some useful proxy measures) and then aggregate them into a partial ordering. This task can be distributed over many researchers and will require them to answer highly specific questions – much more specific than, say, whether technical or policy AI safety research is more important.

Meanwhile, other researchers can investigate what interventions show which of the investigated effects and can suggest more effects or different taxonomies of effects for further investigation.

Only then can we derive insights from the complete model. (But as an MVP, we can just make up taxonomies and performance metrics, aggregate them into a model and share them. That would lend much more structure to prioritization research already. I’ve not seen even that much so far.)
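
To illustrate what even such an MVP could look like, here is a sketch with made-up interventions and made-up scores on three metrics (two of them from the questions above); comparing interventions by Pareto dominance yields exactly a partial ordering, since genuinely incomparable pairs simply remain incomparable.

```python
# Sketch of an MVP: made-up interventions with made-up scores on a few common
# metrics, compared by Pareto dominance to obtain a partial ordering.
scores = {
    "bednet distribution":     {"terminal impact": 7, "value of information": 2, "option value": 5},
    "prioritization research": {"terminal impact": 4, "value of information": 9, "option value": 8},
    "policy advocacy":         {"terminal impact": 3, "value of information": 2, "option value": 4},
}

def dominates(a, b):
    """a dominates b if it is at least as good on every metric and strictly better on one."""
    return all(a[m] >= b[m] for m in a) and any(a[m] > b[m] for m in a)

for x in scores:
    for y in scores:
        if x != y and dominates(scores[x], scores[y]):
            print(f"{x} dominates {y}")
# Pairs never printed (e.g. bednets vs. prioritization research) remain incomparable.
```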

With that basic structure in place, we would incur little coordination overhead and could retain a clear focus on what is most important while we start making finer distinctions between interventions, work out the important differences, and study them separately.

In the end, I hope we’ll no longer be studying broad questions such as whether it’s better to slow down negative developments or to improve them, but will invest significant resources into very detailed questions while being confident enough that the value of information will be worth it.

Poor Economics has demonstrated how a space can move from broad questions such as “Does aid help?” to detailed questions such as how cost-effective a certain approach to mass deworming of primary school children in a certain region of Kenya is at producing improvements along some highly comparable metrics. I want to see a similar movement in the prioritization space at large, because at the moment I feel like we’re still asking a number of questions that may be too broad and vague to answer.

Some Inchoate Ideas

In order to trade off interventions that are highly unlikely to succeed but abysmal if they fail against interventions that change the status quo in incremental, unspectacular ways, or against interventions that have great upsides but destroy option value, or against interventions that are speculative but have great value of information, etc., we need common metrics to compare them.

That does not necessarily have to be one common metric for all of them. If one intervention lends itself to two types of cost-effectiveness metrics, it can serve as a bridge for comparing two other interventions, each of which lends itself to only one of the two metrics – and not both to the same one.

For example, in one case we may have a risk over time of a sudden explosion of suffering happening while in the other we have a continuous function of total expected suffering over time. Here we can probably use survival analysis to convert the first into another function of expected total suffering over time.
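
Here is a minimal numerical sketch of that conversion, with purely hypothetical numbers: a constant yearly probability of a sudden explosion of suffering is accumulated, weighted by the probability that the explosion has not happened yet, into a curve of expected total suffering over time, which is the same kind of object the second intervention is already scored on.

```python
# Minimal sketch of the conversion (all numbers hypothetical): a constant
# yearly probability p of a sudden explosion of suffering of size S becomes a
# curve of expected cumulative suffering over a given time horizon.
def expected_cumulative_suffering(p_per_year, explosion_size, horizon_years):
    curve = []
    survival = 1.0   # probability that the explosion has not happened yet
    expected = 0.0
    for _ in range(horizon_years):
        expected += survival * p_per_year * explosion_size  # chance it happens this year times size
        survival *= 1 - p_per_year
        curve.append(expected)
    return curve

curve = expected_cumulative_suffering(p_per_year=0.001, explosion_size=1e9, horizon_years=100)
print(curve[0], curve[-1])  # expected suffering after 1 year and after 100 years
```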

In other cases we may have only importance, tractability, and neglectedness scores; density and centrality; or fundamentality to go on.

Relevance

About as relevant as the whole of prioritization research. See above.

Assumptions

  1. Enough people will care about the output of the system.
  2. We’ll be able to disentangle questions sufficiently that people don’t have to work together extremely closely to solve several of them.

Resilience

Questions

  1. Where do important arguments still rely on overly “absolute” models?
    1. For simplicity, models may ignore expected decay over time,
    2. rely on “sequence thinking” only,2 or
    3. treat a multidimensional optimization problem as if it were one-dimensional.
  2. How long can “shmingletons” (explained below) ideally last – decades or millions of years?
  3. If we use methods like stabilizing feedback and path dependence to create a highly resilient system, how long can we make it last?
  4. How long will extinction last?
  5. The answers to these may be functions of a few variables – which ones should we use, and how do we recognize them in real or hypothetical systems?

Introduction

This is where it gets a bit more specific. The models of the future that a lot of effective altruists base their decisions on are currently mostly qualitative in nature. In order to reason about systems qualitatively, effects need to point in the same direction or be sufficiently strong or likely that you don’t face non-negligible trade-offs. Where trade-offs are unavoidable, they force you either to limit yourself to strategies that are robust enough to be beneficial either way or to coordinate well with many others to try out many strategies that are each highly unlikely to succeed.

Limiting ourselves to robust strategies can be costly in terms of impact and option value. Without the evidence showing that, for example, malaria prevention with bednets is fairly cost-effective, we’d be limited to interventions that we might consider robust (depending on our moral goals), such as building up a better health system (horizontal rather than vertical interventions) or improving infrastructure, which may be radically less cost-effective. Similarly, trying many risky strategies is inferior to trying only the best among them.

Therefore, more research that could narrow down the set of the most cost-effective interventions can be very valuable, and that will require quantifying the relevant factors in trade-offs more rigorously.

One example of a concept that is still heavily “rounded” is the singleton:

As I introduced the notion, the term [singleton] refers to a world order in which there is a single decision-making agency at the highest level. Among its powers would be (1) the ability to prevent any threats (internal or external) to its own existence and supremacy, and (2) the ability to exert effective control over major features of its domain (including taxation and territorial allocation).

I’d be more interested in something like a “shmingleton,” which can “prevent almost any threats … to its own existence and supremacy.” Say, an entity with a vanishingly small but nonzero probability of collapse over time. It seems much more likely and natural to me that we’ll see shmingletons than actual singletons, and given the vastness of the future, even this small probability can make a large difference.

Here we can turn to systems theory and complexity theory to help us model the resilience of systems and get a feeling for the yearly probabilities of collapse of systems, because we can think of a shmingleton as a highly resilient system.
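
As a back-of-the-envelope illustration with an assumed collapse probability: if a shmingleton faces a constant yearly collapse probability p, its expected lifetime is 1/p years and its probability of still standing after t years is (1 - p)^t.

```python
# Back-of-the-envelope sketch with an assumed (hypothetical) yearly collapse probability.
p = 1e-6  # assumed probability of collapse per year

expected_lifetime_years = 1 / p            # ~1,000,000 years
survival_1k_years = (1 - p) ** 1_000       # ~0.999
survival_1m_years = (1 - p) ** 1_000_000   # ~0.37

print(expected_lifetime_years, survival_1k_years, survival_1m_years)
```

Even at a collapse probability as low as one in a million per year, the expected lifetime comes out at around a million years rather than the effectively unbounded lifetime a true singleton is usually imagined to have, and given the vastness of the future that difference can matter a great deal.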

Relevance

If we’re at a key point in history at which we can uniquely influence the values that will shape the billions of years of the future, and we lose that ability once those values are set in stone, then everything we do matters only insofar as it influences which values get set in stone.

But if that stone is more like sandstone, the argument becomes less diamondclad. Particularly low tractability, some sort of cluster-thinking penalty, low personal fit, etc. suddenly become much more important considerations.

Assumptions

I can’t think of any nontrivial assumptions on which the relevance of these questions depends.

Fundamentality

Questions

  1. How can we recognize bottlenecks in the dependency tree that leads to impact such as suffering minimization?
  2. What sorts of dependencies are we dealing with? Do we need a minimum amount of something? Are there several fundamental qualities that are necessary but interchangeable? Are there others of which we need a minimum amount in sum?
  3. Can we use something like PageRank to determine the relative importance of dependencies to trade off against tractability and neglectedness?

Introduction

How do we compare very fundamental and thus hopefully very robust interventions (like the research I’m trying to encourage here) to “last brick” types of interventions like bednet distributions?

In the linked article I argue that fundamentality is a big asset thanks to the tremendous leverage it has over interventions it enables or unearths. But even fundamental research builds upon other research as well as communication, money transfer, safety, trust, etc. (Let’s call these things “stocks.”)

Maybe the dependencies between these stocks come in various shapes: maybe we need a minimum level of trust, an equilibrium between research that enhances capabilities of something and research that enhances control over it, some combination of both research and marketing (compare the considerations above concerning multiplication of impact), or various other types of interdependencies.

Given the dependencies between stocks and the shapes of those dependencies, can we derive the relative importance of stocks? And observing them in our environment, can we derive the marginal effect of increasing them? And from that the marginal cost-effectiveness of increasing them?

The PageRank algorithm solves this problem for one type of (literal) link. That may be a start, and further research into this class of algorithms may reveal more appropriate algorithms or ways to adjust PageRank itself.
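
As a sketch of that idea under the simplifying graph assumptions listed below, here is a toy power-iteration PageRank over a made-up dependency graph of stocks; an edge from A to B means that A depends on B, so importance flows toward the stocks that much else builds on.

```python
# Toy power-iteration PageRank over a made-up dependency graph of "stocks".
# An edge A -> B means that A depends on B, so rank flows toward the stocks
# that much other work builds on. Names and edges are hypothetical.
depends_on = {
    "bednet distribution": ["logistics", "trust", "funding"],
    "prioritization research": ["prior research", "funding"],
    "prior research": ["trust", "funding"],
    "logistics": ["funding"],
    "funding": ["trust"],
    "trust": [],
}

nodes = list(depends_on)
rank = {n: 1 / len(nodes) for n in nodes}
damping = 0.85

for _ in range(50):  # power iteration until the ranks stabilize
    new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
    for source, targets in depends_on.items():
        if targets:
            for target in targets:
                new_rank[target] += damping * rank[source] / len(targets)
        else:  # dangling node: spread its rank evenly over all nodes
            for n in nodes:
                new_rank[n] += damping * rank[source] / len(nodes)
    rank = new_rank

# "funding" and "trust" come out on top, i.e. they are the stocks that the
# most other work directly or indirectly depends on.
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```

Comparing such ranks against how cheaply each stock can be increased would then be a crude first analogue of trading importance off against tractability and neglectedness.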

Relevance

In the for-profit market, businesses that are too meta are too expensive to run compared to the margins they can charge, so they fail. Conversely, if businesses are too vertically integrated, effects such as economies of scale will lead to meta-businesses popping up that the businesses can outsource some of their work to.

But outside costly experiments, we don’t currently have any principled way of predicting such dynamics in altruistic contexts.

Assumptions

When considering this problem, I typically think about it as a graph of

  1. discrete, unique nodes,
  2. of which there is a finite number, and
  3. whose interconnections are sufficiently few or simple (e.g., noncircular).

I imagine that this problem will need to be heavily simplified with a lot of assumptions about what constitutes a node in the graph in order to merge complicated cases and ignore hopefully irrelevant cases before it’ll become computationally tractable.

When qualitative arguments yield unintuitive results but otherwise appear sound, they pattern-match (for me) to the types of arguments a better informed or more intelligent person can produce for almost any position to convince a less informed or less intelligent person – Scott Alexander talks about epistemic learned helplessness.6

When an argument still feels that way to people with a 99th percentile IQ after years of reading about the relevant field, it seems plausible to me that the bottleneck is that the argument needs to be improved. But no one would produce motivated arguments intentionally (I suppose), so turning an argument into a quantitative model can help us discover gaps in our own reasoning and expose assumptions more clearly.

There does not need to be a symmetry in the other direction, i.e., interventions need not have a plurality of effects, but they might unless we define the effects sufficiently inclusively. In health, where we have a taxonomy of diseases, interventions are classified into vertical and horizontal, and the horizontal interventions are the ones with particularly many effects.


  1. The question of whether a better informed, more intelligent community would be convinced by an argument can be answered by becoming this better informed, more intelligent community or through a Vingean reflection–like process – “a theory of formal reasoning that allows an agent to reason with high reliability about similar agents, including agents with considerably more computational resources, without simulating those agents” (Daniel Dewey). 

  2. Most of the population seems to overfit their experience to the point that they care surprisingly little about rare events or uncommon considerations. The EA community seems to do that to a much lesser degree, but that does not clarify whether we’ve overcome the bias and are now well calibrated in this regard or whether we just underfit as often as the rest of the population overfits. It may seem that the smaller divergence from the norm is more likely than the larger divergence, but strong divergences are important for informative in-group signaling: “A moral action that can be taken just as well by an outgroup member as an ingroup member is crappy signaling and crappy identity politics.” 

  3. For me, the expected difference in suffering over time between futures with and without an intervention has proved widely useful, as a metric and a proxy for what I care about. (I also care about other things.) But I can imagine cases where I’d have trouble applying it, e.g., interventions aimed at improving institutional decision-making. 

  4. Effects usually have a plurality of causes – John Mackie’s INUS conditions are a simplification, since the necessary but insufficient conditions can usually be refined further into insufficient conditions that depend not only on the presence but on a particular degree of another insufficient condition, and on the absence of a third, and so forth, before they become sufficient. 

  5. The particular application here is that (1) a group of researchers does the prioritization research and (2) a subset of people in AI safety, welfare biology, and other spaces will listen to prioritization research if it finds that their work needs to focus on a particular subproblem to be among the top most effective things for them to do. The effect of the first group on the second can be modeled as multiplication of the impact of the second. Because this is true across the board for all possible areas of altruistic work but is not true of every individual working on the problem (for reasons of personal fit and preferences) and because individuals can switch between spaces, I think it is cleaner to think of it not as prioritization research influencing a number of other spaces but as influencing a number of other EAs. So, barring a great drop in output elasticity in prioritization research, we may need about as many people in prioritization research as we have sufficiently flexible people in other spaces all combined before prioritization research reaches the limits to its growth according to this model. 

  6. Disclaimer: I don’t generally endorse the works of the author. Alexander originated a wealth of helpful ideas, so that I can’t help but cite him lest it seem that I plagiarize them. Unfortunately, (1) the community around his blog contains some insalubrious factions, and (2) until roughly 2016, he himself still published articles that presented issues in a skewed fashion reminiscent of the very dynamics he warns of in Toxoplasma of Rage. I’m adding these disclaimers to avoid the impression that I accept such intellectual wantonness or that it is accepted in my circles. I don’t know whether he still endorses his old approaches. 

