So here’s the thing: I like utilitarianism. No matter what I do, no matter what I read, I always find that I am stuck in a utility-shaped box. (Here’s one reason: it is hard for me to applaud moral convictions if they treat rights as inviolable, even when the future of the right itself is at stake.) But trapped in this box as I am, sometimes I put my ear to the wall and hear what people outside the box are doing. And the voices outside tell me that utilitarianism is alienating and overly demanding.
I’m going to argue that act-utilitarianism is only guilty of these things if fatalism is incorrect. If fatalism is right, then integrity is nothing more than the capacity to make sense of a world in which we possess only limited information about the consequences of our actions. If I am right, then integrity has no other role in moral deliberation.
Supposedly, one of the selling points of act-utilitarianism is that it requires us to treat people impartially, by forcing us to examine a situation from some third-person standpoint and apply the principle of utility in a disinterested way. But if it were possible to do a definitive ‘moral calculus’, then we would be left with no legitimate moral choices to make. Independent judgment would be supplanted with each click of the moral abacus. It is almost as if one would need to be a Machiavellian psychopath in order to remain so impartial.
One consequence of being robbed of legitimate moral alternatives is that you might be forced to do a lot of stuff you don’t want to do. For instance, it looks as though detachment from our integrity could force us into the squalor of excessive altruism, where we must give away anything and everything we own and hold dear. Our mission would be to maximize utility by doing good works in a way that keeps our own happiness just above some subsistence minimum while improving the lot of people who are far away. Selfless asceticism would be the order of the day.
In short, it seems like act-utilitarianism is a sanctimonious schoolteacher, one that not only overrides our capacity for independent moral judgment, but also obliges us to sacrifice almost all of our more immediate interests for interests that are more remote: the people of the future, and the people geographically distant.
Friedrich Nietzsche, Samuel Scheffler, Bernard Williams: here are some passionate critics who have argued against utility in the above-stated ways. And hey, they’re not wrong. The desire to protect oneself, one’s friends, and one’s family from harm cannot simply be laughed away. Nietzsche can always be called upon to provide a mendacious quote: “You utilitarians, you, too, love everything useful only as a vehicle for your inclinations; you, too, really find the noise of its wheels insufferable?”
Well, it’s pretty clear that at least one kind of act-utilitarianism has noisy wheels. One might argue that nearer goods must be given the same value as farther goods; today is just as important as tomorrow. When stated as a piece of practical wisdom, this makes sense; grown-ups need what Bentham called a ‘firmness of mind’, meaning that they should be able to delay gratification in order to get the most happiness out of life. But a naive utilitarian might take this innocent piece of wisdom, twist it into a pure principle of moral conduct, and hence produce a major practical disaster.
Consider the sheer number of things you need to do in order to make far-away people happy. You need to clamp down on all possible unintended consequences of your actions, and spend the bulk of your time on altruistic projects. Now, consider the limited number of things you can do to make the small number of people closest to you happy. You can do your damnedest to seize the day, but presumably, you can only do so much to make your friends and loved ones happy without making yourself miserable in the process. So, all things considered, it would seem as though the naive utilitarian has to insist that we all become slaves to worlds other than our own; the table is tilted in favor of the greatest number. We would have to demand that we give up on today for the sake of the longer now.
But that’s not to say that the utilitarian has been reduced to such absurdities. Kurt Baier and Henry Sidgwick are two philosophers who have explicitly defended a form of short-term egoism, on the grounds that individuals are the best judges of their own desires. Maybe utilitarianism isn’t such an abusive schoolteacher after all.
Why does act-utilitarianism seem so onerous? Well, if you’ve ever taken an introductory ethics class, you’ve heard some variation on the same story. First you’re presented with a variety of exotic and implausible scenarios, involving threats to the wellbeing of conspecifics caught in a deadly Rube Goldberg machine (trolleys, organ harvesting, fat potholers, ill-fated hobos, etc.). When the issue is act-utilitarianism, the choice always comes down to two options: either you kill one person, or a greater number of others will die. In the thought-experiment, you are possessed of the power to avert disaster, and are by hypothesis acquainted with perfect knowledge of the outcomes of your choices. You are then asked for your intuitions about what counts as the right thing to do. Despite all the delicious variety of these philosophical horror stories, there is always one constant: they tell you that you are absolutely sure that certain consequences will follow if you perform this-or-that action. So, e.g., you know for sure that the trolley will kill the one and save the five, you know for sure that the forced transplant of the hobo’s organs will save the souls in the waiting room (and that the police will never charge you with murder), and so on.
This all sounds pretty ghoulish. And here’s the upshot: it is not intuitively obvious that the right answer in each case is to kill the one to save the five. It seems as though there is a genuine moral choice to be made.
Yet when confronted with such thought-experiments, squadrons of undergraduates have moaned: ‘Life is not like this. Choices are not so clear. We do not know the consequences.’ Sophomores are in a privileged position to see what has gone wrong with academic moralizing, since they are able to view the state of play with fresh eyes. For it is a morally important fact about the human condition that we don’t know much about the future. By imagining ourselves in a perfect state of information, we alienate ourselves from our own moral condition.
Once you see the essential disconnect between yourself and your hypothetical actor in the thought-experiment, the blinders ought to fall from your eyes. It is true that I may dislike pulling the switch to change the trolley’s track, but my moral feelings should not necessarily bear on the question of what my more perfect alternate would need to do. Our so-called ‘moral intuitions’ only make a difference to the actual morality of the matter on the assumption that our judgments can reliably track the intuitions of our theoretical alternate, an alternate who knows what they know right on down to the bone. But this assumption is something that needs to be argued for, not assumed.
While we know a lot about what human beings need, our most specific knowledge about what people want is limited to our friends and intimates. That knowledge makes the moral path all the more clear. When dealing with strangers, the range of moral options is much wider than the range of options at home; after all, people are diverse in temperament and knowledge, scowl and shoe size. Moral principles arise out of uncertainty about the best means of achieving the act-utilitarian goal. Strike out uncertainty about the situation, and the range of your moral choices whittles down to a nub.
So if we had perfect information, then there is no doubt that integrity should go by the boards. But then, that’s not the fault of act-utilitarianism. After all, if we knew everything about the past and the future, then any sense of conscious volition would be impossible. This is just what fatalism tells us: free will and the angst of moral choice are byproducts of limited information, and without a sense of volition the very idea of integrity could not even arise.
Perhaps all this fatalism sounds depressing. But here’s the thing — our limited information has bizarrely romantic implications for us, understood as the creatures we are. For if we truly are modest in our ability to know and process information, and the rest of the above holds, then it is absurd to say, as Nietzsche does, that “whatever is done from love always occurs beyond good and evil”. It is hard to conceive of a statement that could be more false. For whatever is done from love, from trust and familiarity, is the clearest expression of both good and evil.
Look. Trolley-style thought-experiments do not show us that act-utilitarianism is demanding. Rather, they show us that increased knowledge entails increased responsibility. Since we are the limited sorts of creatures that we are, we need integrity, personal judgment, and moral rules to help us navigate the wide world of moral choice. If the consequentialist is supposed to be worried about anything, the argument against them ought to be that we need the above-stated things for reasons other than as a salve to heal the gaps in what we know.