Dec 21, 2017

“Just take the expected value” – a possible reply to concerns about cluelessness

Cross-posted to the EA Forum.

This is the second in a series of posts exploring consequentialist cluelessness and its implications for effective altruism:

The first post describes cluelessness & its relevance to EA, arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact.

This post considers a potential reply to concerns about cluelessness – maybe when we are uncertain about a decision, we should just choose the option with the highest expected value.

Following posts will discuss how tractable cluelessness is, and what being clueless implies about doing good.

Consider reading the first post first.


A rationalist’s reply to concerns about cluelessness could be as follows:

Cluelessness is just a special case of empirical uncertainty.[1]

We have a framework for dealing with empirical uncertainty – expected value.

So for decisions where we are uncertain, we can determine the best course of action by multiplying our best-guess probability of each outcome by our best-guess utility of that outcome, summing these products for each option, then choosing the option with the highest expected value.
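In code, the proposal looks something like this (a minimal sketch with made-up numbers, not a real cost-effectiveness estimate):

```python
# A minimal sketch of "just take the expected value", with made-up
# probabilities and utilities for two hypothetical interventions.

def expected_value(outcomes):
    """Sum of probability * utility over an option's possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Each option maps to a list of (probability, utility) pairs.
options = {
    "intervention_A": [(0.9, 10), (0.1, -50)],  # likely modest win, small chance of a big loss
    "intervention_B": [(0.5, 4), (0.5, 2)],     # reliably small win
}

evs = {name: expected_value(outcomes) for name, outcomes in options.items()}
print(evs)                    # {'intervention_A': 4.0, 'intervention_B': 3.0}
print(max(evs, key=evs.get))  # intervention_A
```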

While this approach makes sense in the abstract, it doesn’t work well in real-world cases. The difficulty is that it’s unclear what “best-guess” probabilities & utilities we should assign, and unclear to what extent we should believe our best guesses.

Consider this passage from Greaves 2016 (“credence function” can be read roughly as “probability”):

The alternative line I will explore here begins from the suggestion that in the situations we are considering, instead of having some single and completely precise (real-valued) credence function, agents are rationally required to have imprecise credences: that is, to be in a credal state that is represented by a many-membered set of probability functions (call this set the agent’s ‘representor’). Intuitively, the idea here is that when the evidence fails conclusively to recommend any particular credence function above certain others, agents are rationally required to remain neutral between the credence functions in question: to include all such equally-recommended credence functions in their representor.

To translate a little, Greaves is saying that real-world agents don’t assign a single precise probability to each outcome; instead, they entertain multiple plausible probability assignments at once (taken together, these assignments make up the agent’s “representor”). Because an agent holds multiple probability assignments, and has no principled way to arbitrate between them, it cannot use a straightforward expected value calculation to determine the best option.
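A toy illustration (with made-up numbers, not drawn from Greaves) of how a representor blocks the calculation: when different credence functions in the representor rank the options differently, “choose the highest expected value” returns no unique answer.

```python
# Toy representor: two credence functions over the outcomes of a
# hypothetical intervention, each equally well supported by the evidence.
representor = [
    {"good": 0.7, "bad": 0.3},
    {"good": 0.4, "bad": 0.6},
]

utilities = {"good": 10, "bad": -10}  # doing nothing has utility 0 for sure

for credence in representor:
    ev = sum(credence[outcome] * utilities[outcome] for outcome in utilities)
    print(credence, "->", ev)

# {'good': 0.7, 'bad': 0.3} -> 4.0   (intervening beats doing nothing)
# {'good': 0.4, 'bad': 0.6} -> -2.0  (doing nothing beats intervening)
# The representor's members disagree about the ranking, so there is no
# single expected value to "just take".
```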

Intuitively, this makes sense. Probabilities can only be formally assigned when the sample space is fully mapped out, and for most real-world decisions we can’t map the full sample space (in part because the world is very complicated, and in part because we can’t predict the long-run consequences of an action).[2] We can make subjective probability estimates, but if a probability estimate does not flow out of a clearly articulated model of the world, its believability is suspect.[3]

Furthermore, because multiple probability estimates can seem sensible, agents can hold multiple estimates simultaneously (this set of estimates is their representor). For decisions where the full sample space isn’t mapped out (i.e. most real-world decisions), the method by which human decision-makers convert their multi-valued representor into a single-value, “best-guess” estimate is opaque.

The next time you encounter someone making a subjective probability estimate, ask “how did you arrive at that number?” The answer will frequently be along the lines of “it seems about right” or “I would be surprised if it were higher.” Answers like this indicate that the estimator doesn’t have visibility into the process by which they’re arriving at their estimate.

So we have believability problems on two levels:

   1) Whenever we make a probability estimate that doesn’t flow from a clear world-model, the believability of that estimate is questionable.

   2) And if we attempt to reconcile multiple probability estimates into a single best-guess, the believability of that best-guess is questionable, because our method of reconciling multiple estimates into a single value is opaque (see the sketch below).[4]
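To make the second problem concrete, here is a toy sketch (again with made-up numbers): several reconciliation rules, each about as defensible as the next, turn the same set of estimates into different single best-guesses.

```python
import statistics

# Made-up example: three equally sensible-seeming estimates of the
# probability that some intervention is net-positive.
estimates = [0.3, 0.5, 0.9]

# Several reconciliation rules, none privileged by the evidence:
best_guesses = {
    "mean": statistics.mean(estimates),
    "median": statistics.median(estimates),
    "midpoint of range": (min(estimates) + max(estimates)) / 2,
}

for rule, guess in best_guesses.items():
    print(f"{rule}: {guess:.2f}")

# mean: 0.57, median: 0.50, midpoint of range: 0.60
# Each rule yields a different "best guess", and the choice among rules
# is itself a judgment call the evidence doesn't settle.
```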

By now it should be clear that simply taking the expected value is not a sufficient reply to concerns about cluelessness. However, it’s possible that cluelessness can be addressed by other routes – perhaps by diligent investigation, we can grow clueful enough to make believable decisions about how to do good. The next post will consider this further.

Thanks to Jesse Clifton and an anonymous collaborator for thoughtful feedback on drafts of this post. Views expressed above are my own.


Footnotes

[1]: This is separate from normative uncertainty – uncertainty about what criterion of moral betterness to use when comparing options. Empirical uncertainty is uncertainty about the overall impact of an action, given a criterion of betterness. In general, cluelessness is a subset of empirical uncertainty.

[2]: Leonard Savage, who worked out much of the foundations of Bayesian statistics, considered Bayesian decision theory to apply only in “small world” settings. See p. 16 & p. 82 of the second edition of his Foundations of Statistics for further discussion of this point.

[3]: Thanks to Jesse Clifton for making this point.

[4]: This problem persists even if each input estimate flows from a clear world-model.

[rereads: many, edits: many]