Two things happened in 2020: I switched out of earning to give and I looked into evidential cooperation in large worlds. Oh yeah, and a pandemic, various conflagrations, a market crash, Brexit, SpaceX starting the exodus, election fraud in Belarus, protests in Belarus, protests in the US, protests in Bulgaria, protests in Kyrgyzstan, riots in India, explosions in Beirut, water on the moon, earthquake, hurricane, volcano, terrorist attacks, deployment of murder hornets, black hole, free public transport, Félicien Kabuga arrested, Ladislas Ntaganzwa sentenced, the pope getting into AI safety, Africa free of polio, Biden-Harris, negligible election fraud in the US, Ethereum 2.0, first ascent of the second 9c route, birthday of one of my partners. But those things didn’t influence my donations.

Donor Lottery

I want to donate less because my income is now lower, and I want to donate to the donor lottery so that 1.4% of my copies on various Everett branches can dedicate a lot of time to allocating a really big chunk of money instead of all of me 80/20ing it again.
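To make those numbers concrete, here is a tiny sketch of the expectation argument behind donor lotteries. The donation amount and the $100,000 block size are illustrative assumptions of mine; the point is only that the expected amount you allocate equals what you paid in, while the lottery concentrates the allocation effort in the winning branch.

```python
# Sketch of the donor-lottery expectation argument (illustrative numbers,
# not from the post): the win probability is the donation's share of the
# block, so the expected allocation equals the donation itself.

donation = 1_400        # hypothetical donation, in dollars
block_size = 100_000    # hypothetical lottery block size, in dollars

win_probability = donation / block_size             # 0.014, i.e. 1.4%
expected_allocation = win_probability * block_size  # 1400.0, same as the donation

print(f"win probability: {win_probability:.1%}")
print(f"expected allocation: ${expected_allocation:,.0f}")
```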

The donor lottery FAQ says, “The winning number for each lottery block will be determined by taking the first ten hexadecimal digits of the NIST Beacon at the lottery draw date.” (Link mine.) So if something like the many-worlds interpretation is true, my plan should go through. If not, I fall back on good old expected value, and the lottery seems equally attractive.
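As an illustration of that mechanism, here is a minimal sketch in Python. It is my code, not CEA’s, and the interpretation of the ten digits as a share of the unit interval, with each participant owning a sub-interval proportional to their donation, is my assumption beyond what the FAQ quote says.

```python
# Minimal sketch of a draw along the lines of the FAQ quote above.
# Assumptions (mine): the first ten hex digits of the Beacon output are read
# as a fraction of 16**10, and each participant owns a sub-interval of [0, 1)
# proportional to their donation. Only the "first ten hexadecimal digits"
# part is taken from the FAQ.

def winning_fraction(beacon_output_hex: str) -> float:
    """Map the first ten hex digits of a NIST Beacon output into [0, 1)."""
    return int(beacon_output_hex[:10], 16) / 16**10

def is_winner(beacon_output_hex: str, share_start: float, share_end: float) -> bool:
    """Check whether the draw lands in a participant's allotted sub-interval."""
    x = winning_fraction(beacon_output_hex)
    return share_start <= x < share_end

# Hypothetical 512-bit Beacon output value (made up) and a 1.4% share
# starting at 0.
example_output = "00f3a9" + "c" * 122
print(winning_fraction(example_output))       # ≈ 0.003718
print(is_winner(example_output, 0.0, 0.014))  # True
```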

Center on Long-Term Risk

The Center on Long-Term Risk (CLR, previously known as the Foundational Research Institute) has been my favorite organization for several years. Because of the small size of this donation, I haven’t re-evaluated that opinion this year.

The reasoning is that, first, all the more mundane things check out: for example, I know that smart, cooperative, and committed people work there. But second, CLR is also the only place that focuses on two of the most important questions I’m aware of: how to avert suffering risks and what the optimal moral compromise looks like.

Extinction risks already receive a fair bit of attention in effective altruism circles. Fates much worse than extinction have received less attention, even though averting them is also in the interest of a wide range of value systems and may be no less urgent.

The optimal moral compromise, which follows from evidential cooperation in large worlds, is something that CLR prioritizes only nominally at the moment. As far as I know, no one on staff currently focuses on this topic.

Justin Shovelain (Convergence Analysis) told me that figuring it out exactly can perhaps wait until we have extinction risks a bit more under control.1 I would agree that averting extinction risks is very likely to be cooperative according to the compromise. The question of whether we’re sufficiently sure that it is in the interest of the compromise is itself an optimal stopping problem that we may want to prioritize, but for now I’m leaning toward yes.

There is the view that such investigations should be delegated to the time of the Long Reflection, after we’ve attained existential security. The Long Reflection may take centuries or millennia.

If I look at the world today, coordination looks very difficult. The Long Reflection would require reducing the risk of irreversible unilateral action from anyone anywhere in Earth’s vicinity to a negligible level, and not just a negligible level per year but over the course of a whole millennium. That seems hard, to say the least. Conversely, if the world is very different in that, for example, a singleton is in charge of it, then coordination is easy, but getting such a singleton to care about our values, or the moral compromise in particular, is a notoriously hard problem. I may be overlooking something, but such a Long Reflection seems to me about impossible (<< 0.1%) in the first case and very unlikely (< 1%) in the second.

But I’m still a big fan of all other aspects of the Long Reflection, so I think it’ll more likely take the shape of a race between priorities research and technological progress. In between, there’ll be some advocacy to try to influence the (hopefully conscientious) agents driving that progress. That race may have to start soon because technological progress won’t wait for us at the starting line.2

So in short, I think we need something like the Long Reflection soon, and with it work to narrow down the optimal moral compromise, but I can also understand that it’s not CLR’s top priority for now.

In the branches where I win the lottery, I may decide to incentivize work on this by people who wouldn’t otherwise work on something even more pressing.

Rethink Priorities also seems well-positioned to launch investigations into such questions. Rethink Priorities and the Wild Animal Initiative have long been close contenders for my personal “favorite organization” spot.

Notes


  1. It was a verbal conversation, so I can’t check the exact wording. 

  2. Michael Aird remarked that such a race-like “Long Reflection” lacks distinctive features of the Long Reflection and should receive a new name. I have yet to come up with one. 

