8.2 Why should I care?

The argument for utilitarianism is straightforward and might go like so:

  1. Everyone is out for the same thing – happiness.
  2. The rational approach to happiness is maximizing utility.
  3. All of our interests count equally.

Thus we should all strive to maximize overall utility.

That is, given a standard set of assumptions about human motivation and rationality, and adding an explicitly ethical premise (the third), we get the result that we should all strive for the best possible outcome for the greatest number of people. Utilitarians claim that this argument shows why we should measure the ethical value of our actions by the good they do for whoever is affected by them.

So far, in our description of the position of utilitarianism, we have seen the grounds for buying the first two premises. The first of these claims is a straightforward, and pretty obvious, description of human behavior. The second depends on a clear and uncontroversial theory of rational action: acting rationally has to involve taking into account the costs of doing something and the probability of actually accomplishing it. So far so good, but these premises do not yet enable us to go beyond egoism. They are perfectly compatible with a complete lack of concern for others. The third premise goes well beyond anything that an egoist would accept and clearly calls for a more involved defense. The whole weight of utilitarianism rests on this claim.

So far, we seem to be admitting that people only care about themselves and getting what they individually want. Yet ethics is supposed to be about concern for others, or at least taking others into account when making a decision. Utilitarians, however, insist that we can construct an ethics on the basis of what we have said so far about human motivation. What we have to show next is that we have a compelling reason to take others' interests into account, and not just our own. We can't simply assume that others count; we have to demonstrate that ignoring others' interests is just not a rational option.

Consider the following, somewhat absurd, example. Suppose I have 10 dollars and would like to entertain myself on a hot sunny afternoon. I come up with the following two possibilities:

  1. I could buy a ticket to go see a Hollywood action movie.
  2. I could use the 10 dollars to buy a six-pack of beer and then go up to my roof, where there happen to be some bricks left over from the construction of the building. My plan would be to hang out and relax, enjoying the breeze, while occasionally throwing a brick down onto the crowded street below.

In case 1, I'd get some relief from the heat and a little relief from my boredom. But this relief would only last for an hour and a half, and I'd have to sit through yet another typical Hollywood movie in which the good guys get the bad guys, with all of the usual predictable car-chase and cliff-hanging scenes and all of the usual dazzling but fake-looking special effects. In short – it just doesn't seem worth the money.

In case 2, instead of the fake explosions, chaos, blood, and guts of a Hollywood movie, I could see the real thing. Maybe there would be multiple-vehicle accidents and perhaps even a gripping real-life chase, involving me too. This certainly seems more cost-effective, a way of getting the most for my entertainment dollar.

The only problem here, of course, is that option 2 requires me to assume that nobody else's interests, or lives for that matter, really count. In this scenario I am considering only my own interests and treating others as if their pain and suffering just didn't matter. The question, then, is whether I have any way to defend my actions. Can I possibly come up with good reasons for believing that my interests mean more than the interests of the people whose lives I am putting at risk for the sake of entertainment? It is not enough here just to insist that the people I am endangering wouldn't want me to endanger them, since I may genuinely not care what they think of my actions. I am acting selfishly here as a matter of fact. The real issue is whether I can in any way rationally defend my selfishness. Well, can I?

It seems like I can't. Of course I can act selfishly, but that's not the same as convincing others, the potential victims of my selfishness, that I am justified in acting selfishly. And that seems very unlikely to succeed: why should you, someone who has your own needs and interests, agree to let me endanger you unless you were getting something in return? But if I am selfishly endangering you, then you are not getting anything, or at least not enough, in return, so you'd have no reason to go along with my selfish schemes. Thus I cannot defend my own selfishness. And if I cannot do this, then I must accept the third premise of the argument for utilitarianism: "All of our interests count equally."

Where is all of this leading? To the claim that we don't really have any good reason for denying that others count just as much as we do. Once we are convinced of this, it is a short step to the basic principle of utilitarian ethics, otherwise known as "the principle of utility." This states that the right thing to do in any situation is to look at all of the available alternatives and choose the one that gets the most benefit for the most people, or that maximizes overall utility. All of morality boils down to this one simple principle: do whatever brings the most benefits and the least costs to the greatest number of people. On the account of utilitarianism I have been developing here, this principle follows from the fact that none of us really has any good reason for denying that others count just as much as we do.
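The decision procedure the principle describes can be sketched in a few lines of code. Everything here is a hypothetical illustration: the options and the utility numbers are made up to mirror the rooftop example, and treating "utility" as a single summable number per person is itself a simplifying assumption that utilitarians would need to defend.

```python
def overall_utility(effects):
    """Sum each affected person's benefit (positive) or cost (negative),
    weighting every person's interests equally -- premise 3 of the argument."""
    return sum(effects.values())

def best_option(options):
    """The principle of utility as a procedure: survey all available
    alternatives and pick the one with the highest overall utility."""
    return max(options, key=lambda name: overall_utility(options[name]))

# Hypothetical effects on each affected party, in arbitrary units of utility:
options = {
    "see the movie": {"me": 2},                      # mild fun; no one else affected
    "throw bricks":  {"me": 5, "passers-by": -1000}, # my fun vs. grave harm to others
}

print(best_option(options))  # prints "see the movie"
```

The egoist's mistake, on this picture, is to compute utility over a dictionary containing only `"me"`; once everyone affected is included on equal terms, the rooftop option loses decisively.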

This is perhaps not a surprising conclusion. After all, isn't accepting the idea that others' interests count just as much as ours a requirement for looking at things from a moral point of view? If we cannot accept this, then we are simply not moral agents. Utilitarianism simply makes this idea explicit and defends it with a clear argument.