I give an explanation for a phenomenon in the effective altruism community (related to this presentation) that might look like the streetlight effect, propose an idea for a piece of software that might help to further optimize this area, and ask for your input.
Open Philanthropy–Type Interventions
Insightful quantitative analyses give me the warm-fuzzies, but there may well be highly effective interventions, maybe even interventions more cost-effective than GiveWell’s top charities, that are not easily quantifiable. The Open Philanthropy Project has set out to find them, but so far it has not published any hard and fast recommendations. (Though their staff have.)
Meanwhile outsiders seem to have mistaken our enthusiasm for certain more easily quantifiable interventions for “effective altruism is about donating to easily quantifiable interventions” rather than “effective altruism is about doing the most good.” That’s weird. But in a comment on my article David Moss noted that there is something going on that does look like the streetlight effect. Still, I think this is the result of good judgment. That, however, doesn’t mean that the same good judgment could not lead to different decisions given more information.
Expected Utility and Limited Diversification
The streetlight effect describes a scenario where you lose your wallet in the park but search for it up on the street, since that’s where the streetlights are and you’d never find it in the dark of the park anyway. Our reality is more like this: you haven’t lost any particular wallet, but there are potentially a whole bunch of wallets lying around both on the street and in the park. Since the light cones are so small, you figure the biggest wallets are probably somewhere out in the dark.
Translating the metaphor back into reality, the darkness stands for areas of great uncertainty. You may, for example, be uncertain how important the cause is that an intervention is trying to address, how effective the intervention is at addressing said cause, whether the intervention is still cost-effective at the margin, and whether the charity implementing the intervention is any good at it. If these uncertainties were known probabilities, you’d have to multiply them all, but unfortunately they aren’t, so all you can do is something like a pseudomultiplication of rough guesses. The result of all these pseudomultiplications is that the expected utility of interventions in the dark becomes pretty small.
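To make the pseudomultiplication concrete, here’s a toy Monte Carlo sketch. All factor ranges are invented for illustration; the point is only that the product of several fractional uncertainties shrinks fast.

```python
import random

# Hypothetical sketch: each uncertain factor is drawn from a guessed range
# (all numbers invented), and the expected utility is their product.
def sample_utility():
    cause_importance = random.uniform(0.1, 1.0)    # how important is the cause?
    intervention_power = random.uniform(0.0, 0.5)  # how well does it address the cause?
    marginal_value = random.uniform(0.0, 0.8)      # still cost-effective at the margin?
    charity_quality = random.uniform(0.2, 0.9)     # is the charity any good at it?
    return cause_importance * intervention_power * marginal_value * charity_quality

samples = [sample_utility() for _ in range(100_000)]
expected_utility = sum(samples) / len(samples)
print(round(expected_utility, 3))  # a small number: products of fractions shrink fast
```

Even though each individual factor averages around a half, the product averages only a few percent.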
All the while there are a few, pretty few, really cool interventions in the cones of the streetlights, those of the GiveWell and Animal Charity Evaluators top charities. The interventions in the dark haze of uncertainty are many, many more, but you can only expect the tiniest fraction of them to have any considerable cost-effectiveness at the margin and even fewer of them to beat the known top charities.
Ordered by expected utility, the interventions will form something like a hyperbolic function (or more likely Pareto distribution) with very few top interventions and a long long tail of potentially interesting giving opportunities. This doesn’t include interventions that are fairly certain not to be cost-effective.
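A quick sketch of that shape (shape parameter and sample size picked arbitrarily): draw hypothetical expected utilities from a Pareto distribution and check how much of the total the top few account for.

```python
import random

random.seed(0)

# Toy illustration (alpha and counts are arbitrary): Pareto-distributed
# expected utilities yield a few standouts and a long, long tail.
alpha = 1.5
utilities = sorted((random.paretovariate(alpha) for _ in range(1000)), reverse=True)

top_share = sum(utilities[:5]) / sum(utilities)
print(f"The top 5 of 1000 interventions hold {top_share:.0%} of the total expected utility")
```

With a heavy-tailed distribution like this, a handful of interventions dominate the total, which is the shape the argument above predicts.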
When we choose charities to donate to, we can’t diversify infinitely, and that wouldn’t be a good idea anyway. Some even argue against any diversification, but that seems unnecessarily restrictive. In any case, few effective altruists will donate to more than five charities, and they will mostly focus on the charities with the highest expected utility or some slight variation thereof. Most will agree on the high expected utility of the known top charities, but opinions on the long-tail charities will vary widely. One person may be very familiar with a certain long-tail charity and hence think that they can tell with above-average certainty that it’s a good buy, but someone else could worry that this very familiarity biases the first person’s judgment and donate to a different charity or none at all. The result is that the hyperbolic function becomes even more extreme when the y axis is donations.
Expected Utility Auctions
This all seems perfectly logical to me, and I see no reason to criticize these people’s decisions. What would be very valuable, however, is what Open Phil does: trying to find the few good giving opportunities in the long tail and lifting them out of it.
Open Phil, however, has to prioritize interventions that are very scalable because there are eight billion dollars waiting to be invested. The interventions at least have to maintain a comparable marginal cost-effectiveness for long enough to warrant the time and money invested in finding them.
EA metacharities in particular could profit from such prioritization. Goals such as educating the public about effective giving, fundraising for effective charities, collecting donation pledges, conducting prioritization research, and much more might all conceivably be highly cost-effective so long as the charities haven’t exceeded the limits of their scalability and don’t suffer from any other hidden ailments. Worries about these latter problems are probably what’s holding back many EAs who would otherwise donate to metacharities.
Is there maybe a system that is less reliable than the proper Open Phil treatment but that might serve as a rough guide for these donors and as a training ground for prioritization research hobbyists? Impact certificates may develop into such a tool, but here’s another idea.
I envision an expected utility auction site a bit like Stack Exchange,
- where people can post their own estimates of the cost-effectiveness and scalability of their project and their reasoning and calculations behind it
- where other people can reply to such a bid with bids and calculations of their own
- where a widget at the side displays the current average of the bids weighted by their upvotes, the standard deviation, and some other metrics of the thread
- where a list gives a sorted overview of these metrics of all threads.
The unit could be something like 1 util/$ = what GiveDirectly can do for $1. The better estimates would influence the overall total more strongly, and the original poster would be incentivized to start out with a reasonable starting bid to earn upvotes and thus exposure for their project.
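The widget’s upvote weighting could be computed along these lines (all bids and upvote counts below are made up for illustration):

```python
import math

# Hypothetical bids in "utils per dollar", where 1 util/$ is what
# GiveDirectly can do for $1; each bid is paired with its upvote count.
bids = [(2.5, 12), (1.8, 30), (4.0, 3), (2.0, 15)]  # (estimate, upvotes)

total_weight = sum(upvotes for _, upvotes in bids)
weighted_mean = sum(est * upvotes for est, upvotes in bids) / total_weight

# Upvote-weighted standard deviation around the weighted mean.
weighted_var = sum(upvotes * (est - weighted_mean) ** 2 for est, upvotes in bids) / total_weight
weighted_std = math.sqrt(weighted_var)

print(f"weighted mean: {weighted_mean:.2f} utils/$, std: {weighted_std:.2f}")
```

This way a well-supported bid pulls the displayed average toward itself, while a large standard deviation signals that the thread is still contested.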
Later a link to the donation register of the EA Hub might be useful so that people who read a review a few months after it was published can estimate how much room for more funding is still left for them.
Do you think this could work? Do you think it’s worthwhile? What other features would such a site need? Who would like to use such a site? Who can implement the MVP? (I’d do it myself, but I should really be working on my thesis.)
Thanks for reading!
You can find a discussion in the EA Forum.