8.3 Problems, Problems

The difficulties faced by utilitarianism are of two types – technical problems that arise from its claim to be based on a scientific calculation of the costs and benefits of our actions, and deeper questions about its status as a moral theory. We will look at each of these in turn in this section.

Technical problems

These issues were briefly discussed in the last section, but they deserve further elaboration, since they may turn out to be more serious than defenders of utilitarianism seem willing to admit. The problems mentioned there were those of measuring and comparing happiness, as well as the problem of determining what exactly the outcomes of our actions might be. At first glance these may seem relatively minor, but it seems to me that they begin to call into question the very claim of utilitarianism to be a viable theory.

The first question is this: if utilitarianism claims that we can determine the moral worth of an action by its results, and measure those results by the amount of happiness produced, how exactly might we go about measuring that happiness? Of course it may seem obvious that I know for myself whether I am happy or not, whether my needs have been met, whether I am better off than I previously was in a variety of circumstances. It may also seem that we can compare different people and determine, at least in general terms, that one person is happier or better off than another. But that already suggests a problem for utilitarianism – how can we get beyond such vague and indefinite comparisons to support the claim that a calculation of costs and benefits can lead us to an objectively valid measure of moral worth for any particular situation we might face?

Economists and other social scientists frequently face similar issues, and so fall back on a convenient stand-in for happiness – dollar value. When they examine human decision-making from a scientific standpoint, they often ask people which of two alternatives they would pay more for, or how much would be required to compensate them for their efforts in a given scenario. This works well enough in some cases, but it is limited to those in which a monetary equivalent can be given, and there are many situations in which monetary value is a poor substitute for other, more qualitative valuations we might make. For example, how might we compare the long-term but not particularly intense satisfaction of seeing one's children graduate from college to a shorter-term and more intense pleasure, like the thrill one might get from going skydiving? Any attempt to find a single scale on which to make an objective comparison must seem completely arbitrary.
Hence Jeremy Bentham’s claim to have developed a “felicific calculus” that would enable us to calculate the precise quantity of happiness produced by any action invites parody, something which Bentham himself unwittingly provided with his convenient verse mnemonic:

Intense, long, certain, speedy, fruitful, pure –
Such marks in pleasures and in pains endure.
Such pleasures seek if private be thy end:
If it be public, wide let them extend.
Such pains avoid, whichever be thy view:
If pains must come, let them extend to few.[1]
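To see why the calculus invites parody, the dimensions Bentham's verse names can be turned into a toy calculation. Everything below is purely illustrative: the function, the numeric scales, and the two scenarios are invented for the example, not drawn from Bentham.

```python
# A toy sketch of a "felicific calculus" -- illustrative only.
# The dimensions (intensity, duration, certainty, extent) echo Bentham's
# verse; the numeric scales assigned to them are invented for this example.

def felicific_score(intensity, duration, certainty, people_affected):
    """Score one anticipated pleasure (positive) or pain (negative)."""
    return intensity * duration * certainty * people_affected

# An intense but brief private thrill...
skydiving = felicific_score(intensity=9, duration=0.5, certainty=0.8,
                            people_affected=1)
# ...versus a mild but long-lasting satisfaction shared by a family.
graduation = felicific_score(intensity=4, duration=20, certainty=0.9,
                             people_affected=3)

print(skydiving, graduation)  # the arithmetic picks a "winner"
```

The arithmetic does deliver a verdict, but only because we invented the scales – which is exactly the objection: there is no non-arbitrary way to put intensity, duration, and the rest on a single common measure.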

This brings up the related issue of how we might compare the results of an action across different people. Do we just end up counting heads? Is there a survey we can give to everyone involved to determine how much they really enjoyed the results of our actions?

The general problem of determining the precise payoff of our decisions gets even worse when we ask when that payoff even occurs. After all, the consequences of my actions continue to spread out in all kinds of ways, like the ripples on a pond after a stone is thrown in, and there seems to be no non-arbitrary way to determine when the further consequences no longer need to be taken into account. It is not as if the consequences of our decisions simply stop being relevant after a certain definite point.
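The arbitrariness can be made vivid with a small sketch. Nothing here comes from utilitarian theory itself: the hypothetical utilities and the discount rates are invented, and that is precisely the point – the theory supplies no non-arbitrary way to choose how heavily, or how long, future consequences should count.

```python
# Illustrative sketch: how much do distant consequences matter?
# Any discount rate (or cutoff) is a modeling choice the theory
# itself does not supply. All numbers here are invented.

def discounted_total(effects, discount):
    """Sum time-indexed utility effects, geometrically discounted."""
    return sum(u * discount**t for t, u in enumerate(effects))

# A choice with a modest immediate benefit and a large delayed harm.
ripples = [5, 0, 0, 0, -20]

print(discounted_total(ripples, discount=0.9))  # harm counts: negative verdict
print(discounted_total(ripples, discount=0.5))  # harm fades: positive verdict
```

With one discount rate the action comes out bad, with another it comes out good – the moral verdict flips depending on a parameter the calculus gives us no principled way to fix.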

And finally in this vein, we may wonder how we can even tell what consequences our actions may have. It seems to me no accident that the classic “trolley problem” – in which we are asked to decide whether to throw a switch that leads to the death of one person on one track, as opposed to doing nothing and causing the death of more than one person on another – is set on a railroad, where a speeding train car is locked into its trajectory. In the real world there are no such well-defined and pre-determined alternatives; instead we have to make at best educated guesses about what might happen as a result of our decisions.
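The point can be restated in the decision-theoretic terms a utilitarian would need outside the trolley's fixed rails: an expected-utility comparison over guessed probabilities. The probabilities and utilities below are entirely hypothetical, which is the problem being illustrated.

```python
# Illustrative only: off the rails, a utilitarian must compare actions
# by expected utility over *guessed* probabilities. All numbers invented.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# "Throw the switch": probably one death, small chance things go worse.
throw = expected_utility([(0.9, -1), (0.1, -5)])
# "Do nothing": probably five deaths, small chance of no harm at all.
do_nothing = expected_utility([(0.9, -5), (0.1, 0)])

print(throw, do_nothing)  # throwing the switch "wins" -- on these guesses
```

The comparison is only as good as the guessed probabilities, and in real situations those guesses are exactly what we lack.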

Deeper questions

The deeper questions that utilitarianism faces have to do with its fundamental claim that we can and should define what is right in terms of what is good. Well, can we, and should we? As we will see in more detail in the next chapter, the philosopher Immanuel Kant argues that the answer is no. Even without his particular arguments for why this is so, we can here at least appeal to our moral intuitions. This isn’t a definitive proof that utilitarianism is wrong, but it does at least suggest that we need to look at things more carefully – something Kant will offer a way of doing.

The big worry here is captured by the question, “Can the ends ever justify the means?” Utilitarians answer in the affirmative – given good enough outcomes, the pursuit of the greatest happiness for the greatest number of people may in certain cases lead us to endorse doing things that seem morally dubious. Should we ever risk the lives of innocent people in order to accomplish a “greater good”? Utilitarians might very well say yes, if they consider the payoff to be big enough. But if we can’t really determine how big the payoff is, how can we say when this might be the case? Are we really justified in causing real harm in the interests of avoiding even worse, but merely hypothetical, results had we acted differently? Since we have no way of rewinding the tape and replaying the scenario with a different choice at the crucial moment, we end up reducing the morality of any given decision to something basically unknowable – what would have happened if things had been otherwise. Real-life examples of this are easy to find, and it must always seem at least a little suspect to offer as a response to the victims of our actions, “Trust me, the outcome would have been much worse if I had done this instead of that.”


  1. Jeremy Bentham, An Introduction to the Principles of Morals and Legislation (Oxford: Clarendon Press, 1970).↩︎