“If you value future people, why do you consider near term effects?” by Alex Holness-Tofts argues that many common reasons for focusing on near-term effects fall short of being persuasive. The case rests centrally on complex cluelessness, and the article closes with a series of possible objections and why they are not persuasive. (Alex also cites the amazing article “Growth and the case against randomista development”!)
I find it disconcerting that there are a lot of very smart people in the EA community who focus more on near-term effects than I currently find reasonable. It creates a tension between my assessment of the question before and after an Aumann update, i.e., after taking into account that I’m more likely to be wrong when a lot of smart people disagree with me.
The article invites discussion, and Michael St. Jules responded by explaining the shape of a utility function (bounded above and below) that would lead to a near-term focus and why it is a sensible utility function to have. The number of upvotes led me to believe that this is a common reason for near-term focus, but Michael notes in a comment on my Facebook post:
I’d actually guess that a bounded utility function is an uncommon reason for neartermist focus, based on the other answers and the way shorttermist EA orgs do cost-effectiveness analysis (risk-neutrally), so people are upvoting because they find my comment interesting. I think I’ve only heard it endorsed explicitly a few times by EAs.
I’d guess different person-affecting views might explain some neartermism, but except for those who actually think future/extra individuals, including those in pure misery (s-risks), at most barely matter or don’t matter at all, this wouldn’t be a good objection to longtermism on its own.
I think this is probably the closest to what neartermists would endorse (I left the comment late compared to others, so it might not have gotten much attention because of it):
“My guess is that people who support AMF, SCI, or GiveDirectly don’t think the negative long-term effects are significant compared to the benefits, compared to ‘doing nothing.’ These do more good than harm under a longtermist framework. Compared to ‘doing nothing,’ they might generally just be skeptical of the causal effects of any interventions primarily targeting growth and all other so far proposed longtermist interventions (the causal evidence is much weaker) or believe these aren’t robustly good because of complex cluelessness.
“I focus on animal welfare, and it’s basically the same argument for me. If I think doing X is robustly better than doing nothing in expectation, and no other action is robustly better than doing X in expectation, then I’m happy to do X.”
See also this conversation between Michael and Phil Trammell. I plan to revisit that conversation and Phil’s essay on the matter and may then update this post.
There are also hints in the discussion that there may be a reason to focus on near-term effects as a Schelling point in a coordination problem with future generations. But that point is not fully developed, and I don’t think I could steelman it.
I’ve heard smart people argue for the merits of bounded utility functions before. They have a number of appealing properties: they avoid Pascal’s mugging, the St. Petersburg game, and more. (Are there maybe even some benefits for dealing with infinite ethics?) But they’re also very unintuitive to me.
Besides, I wouldn’t know how to select the right parameters for such a function. With some parameters, it would be nearly linear over a third-degree-polynomial increase in aggregate positive or negative valence over the coming millennium, and that may be enough to prefer current longtermist over current near-termist approaches.
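To make that concrete, here is a minimal sketch, assuming a tanh-shaped bounded utility function and a cubic growth path for aggregate welfare (both are my illustrative choices, not anything from the article or Michael’s comment): the scale parameter alone decides whether the function behaves almost linearly over the relevant range or saturates long before the millennium is over.

```python
import numpy as np

# Illustrative bounded utility function: u(W) = tanh(W / s) is bounded in (-1, 1).
# The scale parameter s (my choice, purely illustrative) controls where the bound bites.
def u(welfare, s):
    return np.tanh(welfare / s)

# Suppose aggregate welfare over the coming millennium grows roughly like a
# third-degree polynomial in time, normalized so it reaches 1 at year 1000.
years = np.arange(0, 1001)
welfare = (years / 1000.0) ** 3

# With a large scale (s much bigger than total welfare), utility is nearly
# linear in welfare, so the longtermist calculus barely changes.
print(u(welfare[-1], s=100.0))   # ~0.01, i.e. roughly welfare / s

# With a small scale, utility saturates and extra future welfare adds
# almost nothing at the margin.
print(u(welfare[-1], s=0.01))    # ~1.0, already at the bound
```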
Another related article is Brian Tomasik’s “How the Simulation Argument Dampens Future Fanaticism”:
There’s a non-trivial chance that most of the copies of ourselves are instantiated in relatively short-lived simulations run by superintelligent civilizations, and if so, when we act to help others in the short run, our good deeds are duplicated many times over. Notably, this reasoning dramatically upshifts the relative importance of short-term helping even if there’s only a small chance that Nick Bostrom’s basic simulation argument is correct.
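To see the shape of that argument, here is a rough expected-value sketch with made-up numbers (mine, not Tomasik’s): even a small credence in the simulation scenario can multiply the expected value of short-term helping by a large factor, while long-term effects mostly accrue only if we are in basement reality.

```python
# Rough expected-value sketch of the simulation-argument point above.
# The numbers are invented for illustration; they are not Tomasik's.
p = 0.05          # small credence that most copies of us live in short-lived simulations
n_copies = 1000   # copies of us instantiated in those simulations

# Short-term helping happens in every copy, simulated or not:
short_term_multiplier = (1 - p) * 1 + p * n_copies   # 50.95

# Long-term effects mostly play out only if we are in basement reality,
# because the simulations are short-lived:
long_term_multiplier = (1 - p) * 1                   # 0.95

# Even with only a 5% credence, short-term helping gets upshifted ~54x
# relative to long-term effects under these assumptions.
print(short_term_multiplier / long_term_multiplier)
```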