When it comes to assessing which actions are good and which are bad, consequentialism doesn't align very well with my intuition.
Consequentialism is the conceptual framework I most frequently use when assessing ethical questions, so this misalignment crops up fairly regularly. Ignoring my intuition is, well, unintuitive, so I'd like to find a resolution that's more accommodating than simply shouting myself down: "Shut up Intuition, you're wrong, just go along with what Rational Brain says because that's the only thing that makes any sense!"
Before we dive into potential resolutions, let me illustrate the misalignment more fully.
Activities that have traditionally been considered sinful are good case studies of the misalignment. (I don't mean "sinful" in any precise, things-that-take-you-further-from-God sense. Rather, I mean "sinful" in a vague, that-activity-is-bad-and-we-shouldn't-think-about-it sense, which I often associate with mainline American Protestantism.) Consider masturbation as an example. I think many people feel shame & guilt about their masturbation habits, and it isn't something that can be discussed in polite company. Masturbation, in other words, is sinful in the vague sense.
People who feel shame or guilt around their masturbation habits probably have an intuition that it is a morally bad activity that they should stop doing. Yet, from a consequentialist viewpoint, masturbation isn't morally bad. Indeed, on a consequentialist view, the worst effect of masturbation is oftentimes the "unnecessary" guilt that the practitioner feels. We could even say that on a consequentialist view, the masturbation is morally neutral while the masturbatory guilt is morally bad.
This quality isn't unique to masturbation. Many vices bring about shame and guilt in those who partake, yet are assessed as morally neutral (or even mildly positive) on a consequentialist view. Here are some examples: intemperance, covetousness, pride, duplicity. My intuition says that these qualities are bad and ought to be avoided. But consequentialist assessment takes a more nuanced view – some vices can yield pleasant experiences without causing material harm (e.g. intemperance), and thus are moral goods themselves; other vices can be instrumentally useful for achieving other moral goods (e.g. duplicity), and are thus absolved when used as such.
Consequentialism and my intuition are misaligned in the other direction as well. On a consequentialist view, much good can be caused by giving money to effective causes. It follows that, when money is spent on other things (i.e. not given to effective causes), some amount of good is unrealized, which is morally bad (or morally suboptimal, for those who prefer softer language). When I buy a fancy $4 coffee, I have squandered that money, which could have otherwise been transformed into $3.60 in the pocket of a poor Kenyan subsistence farmer. Giving the money to the poor farmer would obviously have been higher utility than buying myself a fancy coffee. I knew this, yet I bought the coffee. On the consequentialist view, doing so was morally bad.
But my intuition is just not on board with this reasoning. I'm literally drinking a fancy $4 coffee as I write this, and I do not feel one iota of shame or guilt. I felt a pang of empathy as I wrote that line about the Kenyan farmer, but that pang was a desire to help them, not a reprimand of my morally bad purchase. Consequentialist reasoning does not seem to activate whatever brain module generates my feelings of moral badness.
So, what to do about this?
The standard consequentialist reply involves suppressing your intuition. It is what Peter Singer suggests in Famine, Affluence, and Morality: our moral intuitions & social norms are broken, and must be drastically revised. It's true that our moral intuitions are broken (Trolleyology does a good job of pointing out specific areas of brokenness), but I'm not convinced of the feasibility or desirability of a drastic revision.
Let's address feasibility first. For some people, moral intuition might be intractably hard to revise. I've experienced my moral intuitions change over time, but most of these alterations are around the margin rather than at the core. And I've never experienced my moral intuitions change as the result of an intentional project aimed at changing them. To be clear, I think that intuitions can change a lot over time; I'm just skeptical that intentionally trying to change one's intuitions is tractable. Beliefs can be altered intentionally, but not intuitions (e.g. I can start believing that I shouldn't eat meat after reading some compelling animal welfare literature and endorsing it, but I can't cultivate a sense of visceral disgust about eating hamburgers in this way). As far as I can make out, my intuition evolves by way of a hard-to-understand process that incorporates my experience, my worldview, and societal forces, all on a long lead time. There doesn't seem to be room in the process for intentional direction.
Now, let's turn to the desirability of drastically revising our intuition to align it with our ethical framework. One strange characteristic of Singer's pond argument (in which a direct analogy is drawn between refusing to help a drowning child in a shallow pond for fear of ruining your nice clothes, and refusing to donate money to distant people who need it desperately) is that, while the conclusion suggests a radical overthrow of our traditional moral intuitions, the argument itself rests on an appeal to those intuitions.
To expand on that: there are two ways to generalize the drowning-child-is-the-same-as-distant-sufferer equivalency. We can take our strong, intuitive desire to save the drowning child, and generalize this desire to apply to all desperate sufferers everywhere. Or we can take our general indifference to desperate sufferers everywhere and apply this indifference to the case of the drowning child (a direction considered briefly in this LessWrong post (a)).
Obviously, ignoring a drowning child and justifying my refusal to assist with "I don't give money to help distant sufferers in need, so why would I help a drowning child right in front of me?" is repugnant. But, repugnancy aside, it is just as consistent as going the other way. And this repugnancy is a gut reaction. An intuition. Without this visceral, intuitive feeling that we must help a drowning child when we encounter them, the pond argument deflates.
Given that the pond argument rests on an appeal to intuition, it does not seem obviously desirable to attempt drastically revising this intuition. If framed generously, we could view the drastic revision as a purification – we are merely making our morality stronger by ridding it of inconsistencies. But that cuts both ways. If we are willing to revise our morality such that our intuitive feeling for a single child in front of us propagates to all children everywhere, surely we should also at least entertain the idea of revising it such that we ignore the single child's plight, given all the other plights elsewhere that we're indifferent to.
This leaves us in a sticky position. On one hand, generalizing out from the drowning child, we place an extreme demand on ourselves. Giving to the level of marginal utility is a high, high bar for being a moral person (perhaps we decide to treat the level of marginal utility as an asymptote rather than as a threshold, but even still, most all of us are so far from this level that it cashes out the same). Going this direction, we resign ourselves to either asceticism or inescapable moral insufficiency.
On the other hand, applying our general indifference to the case of the drowning child, we endorse an extremely callous self-interest. We decide to be the sort of people who walk by kids drowning in kiddie pools.
It's pretty clear that if we want to walk out of this with a "reasonable" resolution, we're going to need to accept some amount of inconsistency (side note: not feeling like you are obligated to be as moral as possible takes much of the edge off here, but that's a topic for another time). Inconsistency is the nag that started this whole interrogation. Now, it seems like having intuitions misaligned with our ethical framework is the least shitty of three shitty options.
Okay, we're going to accept some amount of inconsistency. But how much? Where to set the trade-off?
I think each person sets the trade-off differently, at the point most palatable to their preferences. The more puritan-minded are likely inclined to attempt squelching their misguided intuitions and shouldering more of the extreme demandingness that giving to the level of marginal utility presents. The bon vivants are more likely to preserve more of their admittedly inconsistent intuitions and shirk more of the demandingness that giving to the level of marginal utility calls for. And very few are likely willing to walk the third path: accepting the implications that follow from applying their general indifference to the case of the drowning child.
Thanks to Carl Shulman, Jacy Reese, David Berlekamp, and Rebecca Raible for giving feedback on earlier drafts of this post.
[rereads: 3, edits: prose tightening, edits to clarify what I was saying about intentionally trying to change intuitions being intractable, added the "Shut up and divide?" link, removed the "this post also appeared on the EA Forum" note after I learned that you actually have to have 5 karma to create posts on the EA Forum, which seems rather silly.]