Nov 05, 2016

Should we give now or later? (continued)

2017-01-30 update: I no longer think the reasoning laid out in this post is correct. I was convinced by this comment on the EA Forum.


My recent post on whether Good Ventures should give now or give later contained an important error: I gave my subjective probabilities of various futures, then drew a conclusion about the question at hand directly from that probability distribution (I concluded that Good Ventures should emphasize present-day giving).

I still think my conclusion is sound, but the reasoning I laid out was faulty. When making decisions, it's better to assess the expected value of each choice rather than just the probability of each outcome. I was doing this implicitly, but I didn't make the reasoning explicit. So let's do that now.

In the post, I gave my best guess at the likelihood of each of a set of scenarios. To assess the (subjective) expected value of these scenarios, we also have to assign a utility score to each one (obviously, the following is incredibly rough):

  • Current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs = Baseline
  • Current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn't lead to post-scarcity = 2x as good as baseline
  • Strong AI leads to a post-scarcity economy = 100x as good as baseline
  • Strong AI leads to a global catastrophe (GCR) = 0x as good as baseline
  • A different GCR occurs = 0x as good as baseline

Before calculating the expected value of each scenario, let's unpack my assessments a bit. I'm imagining "baseline" goodness as essentially things as they are right now, with no dramatic changes to human happiness in the next 30 years. If quality of life broadly construed continues to improve over the next 30 years, I assess that as twice as good as the baseline scenario.

Achieving post-scarcity in the next 30 years is assessed as 100x as good as the baseline scenario of no improvement. (Arguably this could be nearly infinitely better than baseline, but to avoid Pascal's mugging we'll cap it at 100x.)

A global catastrophe in the next 30 years is assessed as 0x as good as baseline. It's a little weird to think of one thing being "zero times" as good as another thing, but it will suffice for our purposes here.

Again, this is all very rough.

Now, calculating the expected value of each outcome is straightforward (probabilities are taken from the first post on this topic):

  • Expected value of current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs = 0.3 x 1 = 0.3
  • Expected value of current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn't lead to post-scarcity = 0.56 x 2 = 1.12
  • Expected value of strong AI leads to a post-scarcity economy = 0.05 x 100 = 5
  • Expected value of strong AI leads to a global catastrophe = 0.02 x 0 = 0
  • Expected value of a different GCR occurs = 0.07 x 0 = 0
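
For concreteness, here's a minimal Python sketch of the same arithmetic. The probabilities and utility multipliers are the ones given above; the scenario labels and variable names are just shorthand I've made up for this illustration.

    # Subjective probabilities (from the earlier post) and rough utility
    # multipliers (baseline = 1) for each scenario; labels are shorthand.
    scenarios = {
        "improvement stalls or reverses":  {"p": 0.30, "utility": 1},
        "improvement continues":           {"p": 0.56, "utility": 2},
        "strong AI -> post-scarcity":      {"p": 0.05, "utility": 100},
        "strong AI -> global catastrophe": {"p": 0.02, "utility": 0},
        "other GCR occurs":                {"p": 0.07, "utility": 0},
    }

    # Expected value of each scenario = probability x utility.
    for name, s in scenarios.items():
        print(f"{name}: expected value = {s['p'] * s['utility']:.2f}")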

And each scenario maps to a now-or-later giving decision:

  • Current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs –> Give later (because new opportunities may be discovered)
  • Current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn't lead to post-scarcity –> Give now (because the best giving opportunities are the ones we're currently aware of)
  • Strong AI leads to a post-scarcity economy –> Give now (because philanthropy is obsolete in post-scarcity)
  • Strong AI leads to a global catastrophe (GCR) –> Give now (because philanthropy is nullified by a global catastrophe)
  • A different GCR occurs –> Give now (because philanthropy is nullified by a global catastrophe)

So, we can add up the expected values of all the "give now" scenarios and all the "give later" scenarios, and see which sum is higher:

  • Give now total expected value = 1.12 + 5 + 0 + 0 = 6.12
  • Give later total expected value = 0.3
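
If it helps, here's the bucketing step as a tiny Python sketch as well; the per-scenario expected values and the decision mapping are taken from the lists above, and the labels are again my own shorthand.

    # Per-scenario expected values (probability x utility, computed above),
    # paired with the giving decision each scenario maps to.
    expected_values = {
        "improvement stalls or reverses":  (0.30, "give later"),
        "improvement continues":           (1.12, "give now"),
        "strong AI -> post-scarcity":      (5.00, "give now"),
        "strong AI -> global catastrophe": (0.00, "give now"),
        "other GCR occurs":                (0.00, "give now"),
    }

    # Sum expected value within each giving bucket.
    totals = {"give now": 0.0, "give later": 0.0}
    for ev, decision in expected_values.values():
        totals[decision] += ev

    print(totals)  # {'give now': 6.12, 'give later': 0.3}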

Comparing the totals shows that, in expectation, giving now leads to substantially more value. Most of this is driven by the post-scarcity scenario, but even with post-scarcity outcomes excluded, the "give now" scenarios still have roughly 4x the expected value of the "give later" scenario (1.12 vs. 0.3).

One note: this calculation is a little strange, because GCR outcomes contribute zero expected value, but in reality, if we were faced with a substantial risk of a global catastrophe, that would strongly influence our decision-making. Maybe the proper way to do this is to assign a negative value to GCR outcomes and include them in the "give later" bucket, but that pushes even further in the direction of "give now", so I'm not going to fiddle with it here.

Obviously, different people will make different assessments about the appropriate values to use here, but this framework feels like a reasonable way to assess the question.

When I assign probabilities and utilities that seem reasonable, my intuition that present-day giving opportunities outperform future giving opportunities is confirmed. (Note that I didn't make ad hoc probability or utility assessments, or do any other fiddling to get the outcome I wanted – the estimates were made in good faith, which makes me feel better about where this ended up.)

Disclosure: I used to work at GiveWell.
[rereads: 1, edits: prose tightening, formatting, added disclaimer at top]