Tag Archives: Friedrich Nietzsche

Better to be Nothing?

There is an old legend that king Midas for a long time hunted the wise Silenus, the companion of Dionysus, in the forests, without catching him. When Silenus finally fell into the king’s hands, the king asked what was the best thing of all for men, the very finest. The daemon remained silent, motionless and inflexible, until, compelled by the king, he finally broke out into shrill laughter and said these words, “Suffering creature, born for a day, child of accident and toil, why are you forcing me to say what would give you the greatest pleasure not to hear? The very best thing for you is totally unreachable: not to have been born, not to exist, to be nothing. The second best thing for you, however, is this — to die soon.”

-Nietzsche, The Birth of Tragedy

One rather good metaphysical question is “why is there something rather than nothing?” An interesting question in the realm of value is “is it better to be nothing rather than something?” That is, is it better “not to have been born, not to exist, to be nothing?”

Addressing the question does require sorting out the measure of value that should be used to decide whether it is better to not exist or to exist. One stock approach is to use the crude currencies of pleasure and pain. A somewhat more refined approach is to calculate in terms of happiness and unhappiness. Or one could simply go generic and use the vague categories of positive value and negative value.

What also must be determined are the rules of the decision. For the individual, a sensible approach would be the theory of ethical egoism—that what a person should do is what maximizes the positive value for her. On this view, it would be better if the person did not exist if her existence would generate more negative than positive value for her. It would be better if the person did exist if her existence would generate more positive than negative value for her.

To make an argument in favor of never existing being better than existing, one likely approach is to make use of the classic problem of evil as laid out by David Hume. When discussing this matter, Hume contends that everyone believes that life is miserable, and he lays out an impressive catalog of pains and evils. While he allows that pain may be less frequent than pleasure, he notes that even if this is true, pain “is infinitely more violent and durable.” As such, Hume makes a rather good case that the negative value of existence outweighs its positive value.

If it is true that the negative value outweighs the positive value, and better is measured in terms of maximizing value, then it would thus seem to be better to have never existed. After all, existence will result (if Hume is right) in more pain than pleasure. In contrast, non-existence will have no pain (and no pleasure) for a total of zero. Doing the value math, since zero is greater than a negative value, never existing is better than existing.
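
To make the “value math” explicit, here is a minimal sketch in Python of the comparison described above. The function and the magnitudes are my own invention, chosen only for illustration; the argument merely assumes that the pain term outweighs the pleasure term in any actual life.

```python
# A minimal sketch of the value arithmetic described above.
# The magnitudes are invented for illustration; on the Humean
# assumption, pain outweighs pleasure in any actual life.

def net_value(pleasure: float, pain: float) -> float:
    """Crude hedonic balance: positive value minus negative value."""
    return pleasure - pain

existence = net_value(pleasure=40.0, pain=60.0)  # a life where pain predominates
nonexistence = 0.0                               # no pleasure, no pain

if nonexistence > existence:
    print("By this measure, never existing comes out 'better'.")
```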

There does seem to be something a bit odd about this sort of calculation. After all, if the person does not exist, then her pleasure and pain would not balance to zero. Rather it would seem that this sum would be an undefined value. It cannot be better for a person that she not exist, since there would (obviously) not be anyone for the nonexistence to be better for.

This can be countered by saying that this is but a semantic trick—the nonexistence would be better than the existence because of the relative balance of pleasure and pain. There is also another approach—to broaden the calculation from the individual to the world.

In this case, the question would not be whether it would be better for the individual to exist, but whether a world with the individual would be better than a world without the individual. If a consequentialist approach is assumed, pain and pleasure are taken as the measure of value, and the pain is assumed to outweigh the pleasure in every life, then the world would be better if a person never existed. This is because the absence of an individual would reduce the overall pain. Given these assumptions, a world with no humans at all would be a better world. This can be extended to its logical conclusion: if the suffering outweighs the pleasure in the case of all beings (Hume did argue that the suffering of all creatures exceeds their enjoyments), then it would be better that no feeling creatures existed at all. At this point, one might as well do away with existence altogether and have nothing. Thus, while it might not be known why there is something rather than nothing, this argument would seem to show that it would be better to have nothing rather than something.
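
The world-level version of the calculation is just the same arithmetic summed over lives. A minimal sketch, again with invented figures and with pain assumed to outweigh pleasure in each life, as the argument stipulates:

```python
# Extending the same arithmetic from one life to a world of lives.
# Each tuple is (pleasure, pain) for one sentient being; the figures
# are invented, with pain outweighing pleasure in every case.

lives = [(40.0, 60.0), (55.0, 70.0), (30.0, 45.0)]

world_with_creatures = sum(pleasure - pain for pleasure, pain in lives)
empty_world = 0.0

print(world_with_creatures)                 # -50.0 under these invented figures
print(empty_world > world_with_creatures)   # True: the empty world scores higher
```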

Of course, this reasoning rests on many assumptions that can be easily challenged. It can be argued that the measure of value is not to be done solely in terms of pleasures and pains—that is, even if life resulted in more pain than pleasure, the overall positive value could be greater than the negative value. For example, the creation of art and the development of knowledge could provide value that outweighs the pain. It could also be argued that the consequentialist approach is in error—that estimating the worth of life is not just a matter of tallying up the negative and positive. There are, after all, many other moral theories regarding the value of existence. It is also possible to dispute the claim that pain exceeds pleasure (or that unhappiness exceeds happiness).

One could also take a long view—even if pain outweighs pleasure now, humans seem to be making a better world and advancing technology. As such, it is easy to imagine that a better world lies ahead and that it depends on our existence. That is, if one looks beyond the pleasure and pain of one’s own life and considers the future of humanity, the overall balance could very well be that the positive outweighs the negative. If so, it would be better for a person to exist—assuming that she has a role in the causal chain leading to that ultimate result.



Our Father vs Big Brother

The tape of Mitt Romney speaking to his cohorts in what could be described as a proverbial back-room seems to have had a lasting effect – we’ll see if it turns out to make all the difference, but it certainly brought into focus the image of Romney as an oblivious aristocrat.

But even more interesting to me than the specifics of this candidate’s attitudes was the evidence of a change in certain social and technological expectations. Many people responded to Romney’s comments by shaking their heads at the fact that he would say those things out loud, that he would speak so candidly. Sure, he was at a fundraiser with other super-rich political puppeteers, but he must have known the information could get out…

Of course, a couple of decades ago, it probably would not have. Even if a member of the staff could afford a hidden camera, it would have taken a lot of planning and setting up to get the material, and once it was on tape it would have taken a lot of work to get it nationally aired. That may not seem like much commitment, but it’s definitely active and organized: hide tiny, expensive specialty technology beforehand, then transfer the incriminating material to a standard medium, and try to get a national news outlet’s attention without being dismissed as some kind of conspirator (in fact, many journalists back then might have rejected the tape as unethical simply because Romney clearly didn’t realize he was being taped).

Today, a person does not even have to really care about the consequences – sometimes people will record things just because they can. In a room with a famous person and some number of non-guests with iPhones, it is not at all surprising that someone recorded Romney speaking and then put a portion of it on YouTube—there did not even need to be intent behind it. The ease of catching a person in the act has increased so monumentally that the very idea of a backroom deal is in trouble.* Anyone can tape the conversation and show it to a potential audience of millions, and they don’t even need to dislike you or want to cause harm. It’s just information sharing—the connotations or potential impact of the information are not always considered (this happens on Facebook all the time: a photo posted in fun in one context is evidence of a promise broken in another, for instance).

The idea that we are losing privacy, and even losing the desire for privacy, has been argued over since technology, and the internet in particular, first began allowing for these new methods of disclosure. The angle I want to focus on is the concurrence of this trend with a rise in atheism. There are plenty of other reasons that the idea of God is not as popular as it once was, and technology and the internet can contribute to the phenomenon in other ways. But there’s also a social, pragmatic level at which God is becoming obsolete, and that could be a factor.

One of the classic reasons to have a concept of God, from society’s point of view, is the same as a reason to have Santa: “he knows when you’ve been bad or good, so be good for goodness’ sake.” From an intellectual standpoint this may not be convincing – Plato, for instance, attempts to show in The Republic why we can’t use God as a referee when discussing the question of ethics. The story of the Ring of Gyges, a ring which allows its wearer to become invisible and thus get away with any sort of immoral behavior she chooses with no repercussions, leads to the argument that even if the wearer is invisible, surely the gods still know and can still judge. The original argument illustrated by the story of the ring is that people only act ethically when they are being watched, and this comeback says, well, you are always being watched by God, so the point is moot. God serves as an external conscience.

But in The Republic this idea is debunked—God is unreliable, and can be appeased by gifts or pleas for forgiveness. If you do something wrong, you can always get back on His good side. In other words, your conscience may know you were unethical this once, but do something extra-nice next week, and you’ll feel it’s been evened out.

In that way, Big Brother is more effective. If a person wants to steal something in a store, but thinks “No, God will know what I’ve done,” they might stop themselves. But they may also imagine that they can bargain with the big guy and promise to never do something like this ever again. On the other hand, if they believe there is a camera pointed at them from every direction, it will be harder to make that kind of deal. Our increasingly panoptic forms of life make it possible to see this particular utility of God being overshadowed, since people with videos are a lot more direct and aggressive.

I am not suggesting that this would consciously affect beliefs, but if the fear of moral oversight were to shift realistically toward peers, one of God’s greatest strengths would be made irrelevant. Sure, no video can see into your heart; but if it becomes widely expected that everything that happens in a public or semi-public space could be broadcast, that knowledge could play the part of an external conscience just as well as religion does.

It’s true that God was famously described as dead over a century ago by Nietzsche, and he too was concerned with moral issues. However, his focus was on the lack of cohesion or agreement in beliefs, whereas I am addressing the much more mundane but perhaps more convincing issue of the cohesion of facts. That is, Nietzsche thought the concept of God was coextensive with the idea of absolute truth, and as that became untenable, religion would die. It’s arguable to what degree that happened, but the issue here is not what is right, but whether the right thing has to be done. God as an externalized conscience becomes less effective when society is doing the job in a more obvious and graspable way (which doesn’t require that God isn’t real, just that His methods are less convincing).

It could easily be coincidence that secularism is on the rise at the same time as surveillance and general recording become the norm, but I’m suggesting that it is part of a larger cultural shift, and that the notion of God just fits less easily into a world where we can already picture a very ordinary kind of “all-seeing, all-knowing” presence. What was once supernatural is now merely artificial.

*I wouldn’t want to imply that therefore people will start being ethical, however. There are always adaptations and ways around – the idea is just that a fear of being seen is becoming much more real.

Carpe Diem and the Longer Now

So here’s the thing: I like utilitarianism. No matter what I do, no matter what I read, I always find that I am stuck in a utility-shaped box. (Here’s one reason: it is hard for me to applaud moral convictions if they treat rights as inviolable even when the future of the right itself is at stake.) But trapped in this box as I am, sometimes I put my ear to the wall and hear what people outside the box are doing. And the voices outside tell me that utilitarianism is alienating and overly demanding.

I’m going to argue that act-utilitarianism is only guilty of these things if fatalism is incorrect. If fatalism is right, then integrity is nothing more than the capacity to make sense of a world when we are possessed with limited information about the consequences of actions. If I am right, then integrity does not have any other role in moral deliberation.


Supposedly, one of the selling points of act-utilitarianism is that it requires us to treat people impartially, by forcing us to examine a situation from some third-person standpoint and apply the principle of utility in a disinterested way. But if it were possible to do a definitive ‘moral calculus’, then we would be left with no legitimate moral choices to make. Independent judgment would be supplanted with each click of the moral abacus. It is almost as if one would need to be a Machiavellian psychopath in order to remain so impartial.

One consequence of being robbed of legitimate moral alternatives is that you might be forced to do a lot of stuff you don’t want to do. For instance, it looks as though detachment from our integrity could force us into the squalor of excessive altruism, where we must give away anything and everything we own and hold dear. Our mission would be to maximize utility by doing good works in a way that keeps our own happiness just above some subsistence minimum and improves the lot of people who are far away. Selfless asceticism would be the order of the day.

In short, it seems like act-utilitarianism is a sanctimonious schoolteacher that not only overrides our capacity for independent moral judgment, but also obliges us to sacrifice almost all of our more immediate interests for interests that are more remote — the people of the future, and the people geographically distant.

The longer now is a harsh mistress.

Friedrich Nietzsche, Samuel Scheffler, Bernard Williams: here are some passionate critics who have argued against utility in the above-stated ways. And hey, they’re not wrong. The desire to protect oneself, one’s friends, and one’s family from harm cannot simply be laughed away. Nietzsche can always be called upon to provide a mendacious quote: “You utilitarians, you, too, love everything useful only as a vehicle for your inclinations; you, too, really find the noise of its wheels insufferable?”

Well, it’s pretty clear that at least one kind of act-utilitarianism has noisy wheels. One might argue that nearer goods must be considered to have the same value as farther goods; today is just as important as tomorrow. When stated as a piece of practical wisdom, this makes sense; grown-ups need to have what Bentham called a ‘firmness of mind’, meaning that they should be able to delay gratification in order to get the most happiness out of life. But a naive utilitarian might take this innocent piece of wisdom and twist it into a pure principle of moral conduct, and hence produce a major practical disaster.

Consider the sheer number of things you need to do in order to make far-away people happy. You need to clamp down on all possible unintended consequences of your actions, and spend the bulk of your time on altruistic projects. Now, consider the limited number of things you can do to make happy the small number of people who are closest to you. You can do your damnedest to seize the day, but presumably, you can only do so much to make your friends and loved ones happy without making yourself miserable in the process. So, all things considered, it would seem as though the naive utilitarian has to insist that we all turn into slaves to worlds other than our own — the table is tilted in favor of the greatest number. We would have to give up on today for the sake of the longer now.

But that’s not to say that the utilitarian has been reduced to such absurdities. Kurt Baier and Henry Sidgwick are two philosophers who have explicitly defended a form of short-term egoism, on the grounds that individuals are better judges of their own desires. Maybe utilitarianism isn’t such an abusive schoolteacher after all.

Nietzscheans one and all.

Why does act-utilitarianism seem so onerous? Well, if you’ve ever taken an introductory ethics class, you’ve heard some variation on the same story. First you’ll be presented with a variety of exotic and implausible scenarios, involving threats to the wellbeing of conspecifics caught in a deadly Rube Goldberg machine (involving trolleys, organ harvesting, fat potholers, ill-fated hobos, etc.). When the issue is act-utilitarianism, the choice will always come down to two options: either you kill one person, or a greater number of others will die. In the thought-experiment, you are possessed with the power to avert disaster, and are by hypothesis acquainted with perfect knowledge of the outcomes of your choices. You’ll then be asked for your intuitions about what counts as the right thing to do. Despite all the delicious variety of these philosophical horror stories, there is always one constant: they tell you that you are absolutely sure that certain consequences will follow if you perform this-or-that action. So, e.g., you know for sure that the trolley will kill the one and save the five, you know for sure that the forced transplant of the hobo’s organs will save the souls in the waiting room (and that the police will never charge you with murder), and so on.

This all sounds pretty ghoulish. And here’s the upshot: it is not intuitively obvious that the right answer in each case is to kill the one to save the five. It seems as though there is a genuine moral choice to be made.

Yet when confronted with such thought-experiments, squadrons of undergraduates have moaned: ‘Life is not like this. Choices are not so clear. We do not know the consequences.’ Sophomores are in a privileged position to see what has gone wrong with academic moralizing, since they are able to view the state of play with fresh eyes. For it is a morally important fact about the human condition that we don’t know much about the future. By imagining ourselves in a perfect state of information, we alienate ourselves from our own moral condition.

Once you see the essential disconnect between yourself and your hypothetical actor in the thought-experiment, blinders ought to fall from your eyes. It is true that I may dislike pulling the switch to change the trolley’s track, but my moral feelings should not necessarily bear on the question of what my more perfect alternate would need to do. Our so-called ‘moral intuitions’ only make a difference to the actual morality of the matter on the assumption that our judgments can reliably track the intuitions of our theoretical alternate — assuming the alternate knows the things they know, right on down to the bone. But then, this assumption is a thing that needs to be argued for, not assumed.

While we know a lot about what human beings need, our most specific knowledge about what people want is limited to our friends and intimates. That knowledge makes the moral path all the more clear. When dealing with strangers, the range of moral options is much wider than the range of options at home; after all, people are diverse in temperament and knowledge, scowl and shoe size. Moral principles arise out of uncertainty about the best means of achieving the act-utilitarian goal. Strike out uncertainty about the situation, and the range of your moral choices whittles down to a nub.

So if we had perfect information, then there would be no doubt that integrity should go by the boards. But then, that’s not the fault of act-utilitarianism. After all, if we knew everything about the past and the future, then any sense of conscious volition would be impossible. This is just what fatalism tells us: free will and the angst of moral choice are byproducts of limited information, and without a sense of volition the very idea of integrity could not even arise.

Perhaps all this fatalism sounds depressing. But here’s the thing — our limited information has bizarrely romantic implications for us, understood as the creatures we are. For if we truly are modest in our ability to know and process information, and the rest of the above holds, then it is absurd to say, as Nietzsche does, that “whatever is done from love always occurs beyond good and evil”. It is hard to conceive of a statement that could be more false. For whatever is done from love, from trust and familiarity, is the clearest expression of both good and evil.


Look. Trolley-style thought-experiments do not show us that act-utilitarianism is demanding. Rather, they show us that increased knowledge entails increased responsibility. Since we are the limited sorts of creatures that we are, we need integrity, personal judgment, and moral rules to help us navigate the wide world of moral choice. If the consequentialist is supposed to be worried about anything, the argument against them ought to be that we need the above-stated things for reasons other than as a salve to heal the gaps in what we know.
