Tag Archives: Ethics

Introduction to Philosophy

The following provides a (mostly) complete Introduction to Philosophy course.

Readings & Notes (PDF)

Class Videos (YouTube)

Part I Introduction

Class #1

Class #2: This is the unedited video for the 5/12/2015 Introduction to Philosophy class. It covers the last branches of philosophy, two common misconceptions about philosophy, and argument basics.

Class #3: This is the unedited video for class three (5/13/2015) of Introduction to Philosophy. It covers analogical argument, argument by example, argument from authority and some historical background for Western philosophy.

Class #4: This is the unedited video for the 5/14/2015 Introduction to Philosophy class. It concludes the background for Socrates, covers the start of the Apology and includes most of the information about the paper.

Class #5: This is the unedited video of the 5/18/2015 Introduction to Philosophy class. It concludes the details of the paper, covers the end of the Apology and begins Part II (Philosophy & Religion).

Part II Philosophy & Religion

Class #6: This is the unedited video for the 5/19/2015 Introduction to Philosophy class. It concludes the introduction to Part II (Philosophy & Religion), covers St. Anselm’s Ontological Argument and some of the background for St. Thomas Aquinas.

Class #7: This is the unedited video from the 5/20/2015 Introduction to Philosophy class. It covers Thomas Aquinas’ Five Ways.

Class #8: This is the unedited video for the eighth Introduction to Philosophy class (5/21/2015). It covers the end of Aquinas, Leibniz’ proofs for God’s existence and his replies to the problem of evil, and the introduction to David Hume.

Class #9: This is the unedited video from the ninth Introduction to Philosophy class on 5/26/2015. This class continues the discussion of David Hume’s philosophy of religion, including his work on the problem of evil. The class also covers the first 2/3 of his discussion of the immortality of the soul.

Class #10: This is the unedited video for the 5/27/2015 Introduction to Philosophy class. It concludes Hume’s discussion of immortality, covers Kant’s critiques of the three arguments for God’s existence, explores Pascal’s Wager and starts Part III (Epistemology & Metaphysics). Best of all, I am wearing a purple shirt.

Part III Epistemology & Metaphysics

Class #11: This is the 11th Introduction to Philosophy class (5/28/2015). The class covers Plato’s theory of knowledge, his metaphysics, the Divided Line and the Allegory of the Cave.

Class #12: This is the unedited video for the 12th Introduction to Philosophy class (6/1/2015). This class covers skepticism and the introduction to Descartes.

Class #13: This is the unedited video for the 13th Introduction to Philosophy class (6/2/2015). The class covers Descartes 1st Meditation, Foundationalism and Coherentism as well as the start to the Metaphysics section.

Class #14: This is the unedited video for the fourteenth Introduction to Philosophy class (6/3/2015). It covers the methodology of metaphysics and roughly the first half of Locke’s theory of personal identity.

Class #15: This is the unedited video of the fifteenth Introduction to Philosophy class (6/4/2015). The class covers the second half of Locke’s theory of personal identity, Hume’s theory of personal identity, Buddha’s no-self doctrine and “Ghosts & Minds.”

Class #16: This is the unedited video for the 16th Introduction to Philosophy class. It covers the problem of universals, the metaphysics of time travel in “Meeting Yourself” and the start of the metaphysics of Taoism.

Part IV Value

Class #17: This is the unedited video for the seventeenth Introduction to Philosophy class (6/9/2015). It begins part IV and covers the introduction to ethics and the start of utilitarianism.

Class #18: This is the unedited video for the eighteenth Introduction to Philosophy class (6/10/2015). It covers utilitarianism and some standard problems with the theory.

Class #19: This is the unedited video for the 19th Introduction to Philosophy class (6/11/2015). It covers Kant’s categorical imperative.

Class #20: This is the unedited video for the twentieth Introduction to Philosophy class (6/15/2015). This class covers the introduction to aesthetics and Wilde’s “The New Aesthetics.” The class also includes the start of political and social philosophy, with the introduction to liberty and fascism.

Class #21: No video.

Class #22: This is the unedited video for the 22nd Introduction to Philosophy class (6/17/2015). It covers Emma Goldman’s anarchism.


The Ethics of Backdoors

In philosophy, one of the classic moral debates has focused on the conflict between liberty and security. While this topic covers many issues, the main problem is determining the extent to which liberty should be sacrificed in order to gain security. There is also the practical question of whether or not the sacrifice actually yields a gain in security.

One of the recent versions of this debate focuses on tech companies being required to include electronic backdoors in certain software and hardware. Put in simple terms, a backdoor of this sort would allow government agencies (such as the police, FBI and NSA) to gain access even to files and hardware protected by encryption. To use an analogy, this would be like requiring that all dwellings be equipped with a special door that could be secretly opened by the government to allow access to the contents of the house.

The main argument in support of mandating such backdoors is a fairly stock one: governments need such access to investigate crimes, to gather military intelligence and (of course) to “fight terrorism.” The concern is that if there is not a backdoor, criminals and terrorists will be able to secure their data and thus prevent state agencies from undertaking surveillance or acquiring evidence.

As is so often the case with such arguments, various awful or nightmare scenarios are often presented in making the case. For example, it might be claimed that the location and shutdown codes for ticking bombs could be on an encrypted iPhone. If the NSA had a key, they could just get that information and save the day. Without the key, New York will be a radioactive crater. As another example, it might be claimed that a clever child pornographer could encrypt all his pornography, making it impossible to build a case against him, thus ensuring he will be free to pursue his misdeeds with impunity.

While this argument is not without merit, there are numerous stock counter-arguments. Many of these are grounded in views of individual liberty and privacy—the basic idea being that an individual has the right to have such security against the state. These arguments are appealing to both liberals (who tend to profess to like privacy rights) and conservatives (who tend to claim to be against the intrusions of big government).

Another moral argument is grounded in the fact that the United States government has shown that it cannot be trusted. To use an analogy, imagine that agents of the state were caught sneaking into the dwellings of all citizens and going through their stuff in clear violation of the law, the constitution and basic moral rights. Then someone developed a lock that could only be opened by the person with the proper key. If the state then demanded that the lock company include a master key function to allow the state to get in whenever it wanted, the obvious response would be that the state has already shown that it cannot be trusted with such access. If the state had behaved responsibly and in accord with the laws, then it could have been trusted. But, like a guest who abused her access to a house, the state cannot and should not be trusted with a key. After all, we already know what they will do.

This argument also applies to other states that have done similar things. In the case of states that are even worse in their spying on and oppression of their citizens, the moral concerns are even greater. Such backdoors would allow the North Korean, Chinese and Iranian governments to gain access to devices, while encryption would provide their citizens with some degree of protection.

The strongest moral and practical argument is grounded in the technical vulnerabilities of integrated backdoors. One way that a built-in backdoor creates vulnerability is by its very existence. To use a somewhat oversimplified analogy, if thieves know that all vaults have a built-in backdoor designed to allow access by the government, they will know that a vulnerability exists that can be exploited.

One counter-argument against this is that the backdoor would not be that sort of vulnerability—that is, it would not be like a weaker secret door into a vault. Rather, it would be analogous to the government having its own combination that would work on all the vaults. The vault itself would be as strong as ever; it is just that the agents of the state would be free to enter the vault when they are allowed to legally do so (or when they feel like doing so).

The obvious moral and practical concern here is that the government’s combination to the vaults (to continue with the analogy) could be stolen and used to allow criminals or enemies easy access to all the vaults. The security of such vaults would be only as good as the security the government used to protect this combination (or combinations—perhaps one for each manufacturer). As such, the security of every user depends on the state’s ability to secure its means of access to hardware and software.
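To make the single point of failure concrete, here is a minimal sketch of a naive key-escrow design. Everything here is a hypothetical illustration, not any actual government proposal; it borrows the Fernet recipe from Python’s cryptography package purely for readability.

```python
# Toy key-escrow ("backdoor") design. Purely illustrative: a real
# system would use hybrid public-key cryptography, but the structural
# weakness sketched here is the same.
from cryptography.fernet import Fernet

ESCROW_KEY = Fernet.generate_key()  # the government's "master combination"

def make_device():
    """Each device gets its own strong key, plus an escrowed copy."""
    device_key = Fernet.generate_key()
    escrowed_copy = Fernet(ESCROW_KEY).encrypt(device_key)
    return device_key, escrowed_copy

def backdoor_decrypt(escrowed_copy, ciphertext):
    """Anyone holding ESCROW_KEY can unlock ANY device's data."""
    device_key = Fernet(ESCROW_KEY).decrypt(escrowed_copy)
    return Fernet(device_key).decrypt(ciphertext)

# A user encrypts her files with her own device key...
key, escrowed = make_device()
secret = Fernet(key).encrypt(b"my private files")

# ...but whoever steals the one escrow key reads everyone's data.
print(backdoor_decrypt(escrowed, secret))  # b'my private files'
```

The point of the sketch is structural: the vault itself remains as strong as ever, but a single stolen ESCROW_KEY opens every vault at once.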

The obvious problem is that governments, such as the United States, have shown that they are not very good at providing such security. From a moral standpoint, it would seem to be wrong to expect people to trust the state with such access, given the fact that the state has shown that it cannot be depended on in such matters. To use an analogy, imagine you have a friend who is very sloppy about securing his credit card numbers, keys, PINs and such—in fact, you know that his information is routinely stolen. Then imagine that this friend insists that he needs your credit card numbers, PINs and such and that he will “keep them safe.” Given his own track record, you have no reason to trust this friend and no obligation to put yourself at risk, regardless of how much he claims that he needs the information.

One obvious counter to this analogy is that this irresponsible friend is not a good analogue to the state. The state has coercive power that the friend lacks, so the state can use its power to force you to hand over this information.

The counter to this is that the mere fact that the state has coercive force does not mean that it is thus responsible—which is the key concern in regards to both the ethical and the practical aspects of the matter. That is, the burden of proof would seem to rest on those who claim there is a moral obligation to provide a clearly irresponsible party with such access.

It might then be argued that the state could improve its security and responsibility, and thus merit being trusted with such access. While this does have some appeal, there is the obvious fact that if hackers and governments knew that the keys to the backdoors existed, they would expend considerable effort to acquire them and would, almost certainly, succeed. I can even picture the sort of headlines that would appear: “U.S. Government Hacked: Backdoor Codes Now on Sale on the Dark Web” or “Hackers Linked to China Hack Backdoor Keys; All Updated Apple and Android Devices Vulnerable!” As such, the state would not seem to have a moral right to insist on having such backdoors, given that the keys will inevitably be stolen.

At this point, the stock opening argument could be brought up again: the state needs backdoor access in order to fight crime and terrorism. There are two easy and obvious replies to this sort of argument.

The first is based on an examination of past spying, such as that done under the auspices of the Patriot Act. The evidence seems to show that this spying was completely ineffective in regards to fighting terrorism. There is no reason to think that backdoor access would change this.

The second is a utilitarian argument (which can be cast as a practical or moral argument) in which the likely harm done by having backdoor access must be weighed against the likely advantages of having such access. The consensus among those who are experts in security is that the vulnerability created by backdoors vastly exceeds the alleged gain to protecting people from criminals and terrorists.

Somewhat ironically, what is alleged to be a critical tool for fighting crime (and terrorism) would simply make cybercrime much easier by building vulnerabilities right into software and devices.

In light of the above discussion, it would seem that baked-in backdoors are morally wrong on many grounds (privacy violations, creation of needless vulnerability, etc.) and lack a practical justification. As such, they should not be required by the state.

 


Robot Love II: Roboslation under the Naked Sun

In his novel The Naked Sun, Isaac Asimov creates the world of Solaria. What distinguishes this world from other human worlds is that it has a strictly regulated population of 20,000 humans and 10,000 robots for each human. What is perhaps the strangest feature of this world is a reversal of what many consider a basic human need: the humans of Solaria are trained to despise in-person contact with other humans, though interaction with human-like robots is acceptable. Each human lives on a huge estate, though some live “with” a spouse. When the Solarians need to communicate, they make use of a holographic telepresence system. Interestingly, they have even developed terminology to distinguish between communicating in person (called “seeing”) and communication via telepresence (“viewing”). For some Solarians the fear of encountering another human in person is so strong that they would rather commit suicide than endure such contact.

While this book was first serialized in 1956, long before the advent of social media and personal robots, it can be seen as prophetic. One reason science fiction writers are often seen as prophetic is that a good science fiction writer is skilled at extrapolating even from hypothetical technological and social changes. Another reason is that science fiction writers have churned out thousands of stories and some of these are bound to get something right. Such stories are then selected as examples of prophetic science fiction while stories that got things wrong are conveniently ignored. But, philosophers do love a good science fiction context for discussion, hence the use of The Naked Sun.

Almost everyone is now familiar with the popular narrative about smartphones and their role in allowing unrelenting access to social media. The main narrative is that people are, somewhat ironically, becoming increasingly isolated in the actual world as they become increasingly networked in the digital world. The defining image of this is a group of people (friends, relatives or even strangers) gathered together physically, yet ignoring each other in favor of gazing into the screens of their lords and masters. There are a multitude of anecdotes about this and many folks have their favorite tales of such events. As a professor, I see students engrossed by their phones—but, to be fair, Plato has nothing on cat videos. Like most people, I have had dates in which the other person was working two smartphones at once. And, of course, I have seen groups of people walking or at a restaurant where no one is talking to anyone else—all eyes are on the smartphones. Since the subject of smartphones has been beaten to a digital death, I will leave this topic in favor of the main focus, namely robots. However, the reader should keep in mind the social isolation created by social media.

While we have been employing robots for quite some time in construction, exploration and other such tasks, what can be called social robots are a relatively new thing. Sure, there have long been “robot” toys and things like Teddy Ruxpin (essentially a tape player embedded in a simple animatronic bear toy). But the creation of reasonably sophisticated social robots is quite recent. In this context, a social robot is one whose primary function is to interact with humans in a way that provides companionship. This can range from pet-like bots (like Sony’s famous Aibo robot dog) to conversational robots to (of course) sex bots.

Tech enthusiasts and the companies that sell or will sell social robots are, unsurprisingly, quite positive about the future of social robots. There are, of course, some good arguments in their favor. Robot pets provide a good choice for people with allergies, who are not responsible enough for living pets, or who live in places that do not permit organic pets (although bans on robotic pets might be a thing in the future).

Robot companions can be advantageous in cases in which a person with special needs (such as someone who is ill, elderly or injured) requires round the clock attention and monitoring that would be expensive, burdensome or difficult for other humans to supply.

Sex bots could reduce the exploitation of human sex workers and perhaps have other benefits as well. I will leave this research to others, though.

Despite the potential positive aspects of social robots and social media, there are also negative aspects. As noted above, concerns are already being raised about the impact of technology on human interaction—people are emotionally shortchanging themselves and those they are physically with in favor of staying relentlessly connected to social media. This, obviously enough, seems to be a taste of what Asimov created in The Naked Sun: people who view, but no longer see one another. Given the apparent importance of human interaction in person, it can be argued that this social change is and will be detrimental to human well-being. To use an analogy, human-human social interactions can be seen as being like good nutrition: one is getting what one needs for healthy living. Interacting primarily through social media can be seen as being like consuming junk food or drugs—it is very addictive, but leaves one ultimately empty…yet always craving more.

It can be argued that this worry is unfounded—that social media is an adjunct to social interaction in the real world and that social interaction via things like Facebook and Twitter can be real and healthy social interactions. One might point to interactions via letters, telegraphs and telephones (voice only) to contend that interaction via technology is neither new nor unhealthy. It might also be pointed out that people used to ignore each other (especially professors) in favor of such things as newspapers.

While this counter does have some appeal, social robots do seem to be a different matter in that they are something new and rather radically different. While humans have had toys, stuffed animals and even simple mechanisms for non-living company, these are quite different from social robots. After all, social robots aim to effectively mimic or simulate animals or humans.

One concern about such robot companions is that they would be to social media what heroin is to marijuana in terms of addiction and destruction.

One reason for this is that social robots would, presumably, be designed to be cooperative, pleasant and compliant—that is, good company. In contrast, humans can often be uncooperative, unpleasant and defiant. This would make robotic companions rather more appealing than human company—at least those robots whose cost is not subsidized by advertising; imagine a companion that interjects a pitch for life insurance or a soft drink every so often.

Social robots could also be programmed to be optimally appealing to a person and presumably the owner/user would be able to make changes to the robot. A person can, quite literally, make a friend with the desired qualities and missing undesired qualities. In the case of sex bots, a person could purchase a Mr. or Ms. Right, at least in terms of some qualities.

Unlike humans, social robots do not have other interests, needs, responsibilities or friends—there is no competition for the attention of a social robot (at least in general, though there might be shared bots), which makes them “better” than human companions in this regard.

Social robots, though they might break down or get hacked, will not leave or betray a person. One does not have to worry that one’s personal sex bot will be unfaithful—just turn it off and lock it down when leaving it alone.

Unlike human companions, robot companions do not impose burdens—they do not expect attention, help or money and they do not judge.

The list of advantages could go on at great length, but it would seem that robotic companions would be superior to humans in most ways—at least in regards to common complaints about companions.

Naturally, there might be some practical issues with the quality of companionship—will the robot get one’s jokes, will it “know” what stories you like to hear, will it be able to converse in a pleasing way about topics you like and so on. However, these seem to be mostly technical problems involving software. Presumably all these could eventually be addressed and satisfactory companions could be created.

Since I have written specifically about sex bots in other essays, I will not discuss those here. Rather, I will discuss two potentially problematic aspects of companion bots.

One point of obvious concern is the potential psychological harm resulting from spending too much time with companion bots and not enough interacting with humans. As mentioned above, people have already expressed concern about the impact of social media and technology (one is reminded of the dire warnings about television). This, of course, rests on the assumption that the companion bots must be lacking in some important ways relative to humans. Going back to the food analogy, this assumes that robot companions are like junk food—superficially appealing but lacking in what is needed for health. However, if the robot companions could provide all that a human needs, then humans would no longer need other humans.

A second point of concern is stolen from the virtue theorists. Thinkers such as Aristotle and Wollstonecraft have argued that a person needs to fulfill certain duties and act in certain ways in order to develop the proper virtues. While Wollstonecraft wrote about the harmful effects of inherited wealth (that having unearned wealth interferes with the development of virtue) and the harmful effects of sexism (that women are denied the opportunity to fully develop their virtues as humans), her points would seem to apply to having only or primarily robot companions as well. These companions would make the social aspects of life too easy and deny people the challenges that are needed to develop the virtues. For example, it is by dealing with the shortcomings of people that we learn such virtues as patience, generosity and self-control. Having social interactions be too easy would be analogous to going without physical exercise or challenges—one becomes emotionally soft and weak. Worse, one would not develop the proper virtues and thus would be lacking in this area.  Even worse, people could easily become spoiled and selfish monsters, accustomed to always having their own way.

Since the virtue theorists argue that being virtuous is what makes people happy, having such “ideal” companions would actually lead to unhappiness. Because of this, one should carefully consider whether or not one wants a social robot for a “friend.”

It could be countered that social robots could be programmed to replicate the relevant human qualities needed to develop the virtues. The easy counter to this is that one might as well just stick with human companions.

As a final point, if intelligent robots are created that are people in the full sense of the term, then it would be fine to be friends with them. After all, a robot friend who will call you on your misdeeds or stupid behavior would be as good as a human friend who would do the same thing for you.

 


Robot Love I: Other Minds

Thanks to improvements in medicine, humans are living longer and can be kept alive well past the point at which they would naturally die. On the plus side, longer life is generally (but not always) good. On the downside, this longer lifespan and medical intervention mean that people will often need extensive care in their old age. This care can be a considerable burden on the caregivers. Not surprisingly, there has been an effort to develop a technological solution to this problem, specifically companion robots that serve as caregivers.

While the technology is currently fairly crude, there is clearly great potential here and there are numerous advantages to effective robot caregivers. The most obvious are that robot caregivers do not get tired, do not get depressed, do not get angry, and do not have any other responsibilities. As such, they can be ideal 24/7/365 caregivers. This makes them superior in many ways to human caregivers who get tired, get depressed, get angry and have many other responsibilities.

There are, of course, some concerns about the use of robot caregivers. Some relate to such matters as their safety and effectiveness while others focus on other concerns. In the case of caregiving robots that are intended to provide companionship and not just things like medical and housekeeping services, there are both practical and moral concerns.

In regards to companion robots, there are at least two practical concerns regarding the companion aspect. The first is whether or not a human will accept a robot as a companion. In general, the answer seems to be that most humans will do so.

The second is whether or not the software will be advanced enough to properly read a human’s emotions and behavior in order to generate a proper emotional response. This response might or might not include conversation—after all, many people find non-talking pets to be good companions. While a talking companion would, presumably, need to eventually be able to pass the Turing Test, it would also need to pass an emotion test—that is, read and respond correctly to human emotions. Since humans often botch this, there would be a fairly broad tolerable margin of error here. These practical concerns can be addressed technologically—it is simply a matter of software and hardware. Building a truly effective companion robot might require making it very much like a living thing—the comfort of companionship might be improved by such things as smell, warmth and texture. That is, the companion would need to appeal to all the senses.

While the practical problems can be solved with the right technology, there are some moral concerns with the use of robot caregiver companions. Some relate to people handing off their moral duties to care for their family members, but these are not specific to robots. After all, a person can hand off the duties to another person and this would raise a similar issue.

In regards to those specific to a companion robot, there are moral concerns about the effectiveness of the care—that is, are the robots good enough that trusting them with the life of an elderly or sick human would be morally responsible? While that question is important, a rather intriguing moral concern is that the robot companions are a deceit.

Roughly put, the idea is that while a companion robot can simulate (fake) human emotions via cleverly written algorithms to respond to what its “emotion recognition software” detects, these responses are not genuine. While a robot companion might say the right things at the right times, it does not feel and does not care. It merely engages in mechanical behavior in accord with its software. As such, a companion robot is a deceit and such a deceit seems to be morally wrong.

One obvious response is that people would realize that the robot does not really experience emotions, yet still gain value from its “fake” companionship. To use an analogy, people often find stuffed animals to be emotionally reassuring even though they are well aware that the stuffed animal is just fabric stuffed with fluff. What matters, it could be argued, is the psychological effect—if someone feels better with a robotic companion around, then that is morally fine. Another obvious analogy is the placebo effect: medicine need not be real in order to be effective.

It might be objected that there is still an important moral concern here: a robot, however well it fakes being a companion, does not suffice to provide the companionship that a person is morally entitled to. Roughly put, people deserve people, even when a robot would behave in ways indistinguishable from a human.

One way to reply to this is to consider what it is about people that people deserve. One reasonable approach is to build on the idea that people have the capacity to actually feel the emotions that they display and to actually understand them. In philosophical terms, humans have (or are) minds and robots (of the sort that will be possible in the near future) do not have minds. They merely create the illusion of having a mind.

Interestingly enough, philosophers (and psychologists) have long dealt with the problem of other minds. The problem is an epistemic one: how does one know if another being has a mind (thoughts, feelings, beliefs and such)? Some thinkers (which is surely the wrong term given their view) claimed that there is no mind, just observable behavior. Very roughly put, being in pain is not a mental state, but a matter of expressed behavior (pain behavior). While such behaviorism has been largely abandoned, it does survive in a variety of jokes and crude references to showing people some “love behavior.”

The usual “solution” to the problem is to go with the obvious: I infer that other people have minds via an argument from analogy. I am aware of my own mental states and my behavior and I engage in analogical reasoning to infer that those who act as I do have similar mental states. For example, I know how I react when I am in pain, so when I see similar behavior in others I infer that they are also in pain.

I cannot, unlike some politicians, feel the pain of others. I can merely make an inference from their observed behavior. Because of this, there is the problem of deception: a person can engage in many and various forms of deceit. For example, a person can fake being in pain or make a claim about love that is untrue. Piercing these deceptions can sometimes be very difficult since humans are often rather good at deceit. However, it is still (generally) believed that even a deceitful human is still thinking and feeling, albeit not in the way he wants people to believe he is thinking and feeling.

In contrast, a companion robot is not thinking or feeling what it is displaying in its behavior, because it does not think or feel. Or so it is believed. The reason that a person would think this seems reasonable: in the case of a robot, we can go in and look at the code and the hardware to see how it all works and we will not see any emotions or thought in there. The robot, however complicated, is just a material machine, incapable of thought or feeling.

Long before robots, there were thinkers who claimed that a human is a material entity and that a suitable understanding of the mechanical workings would reveal that emotions and thoughts are mechanical states of the nervous system. As science progressed, the explanations of the mechanisms became more complex, but the basic idea remained. Put in modern terms, the idea is that eventually we will be able to see the “code” that composes thoughts and emotions and understand the hardware it “runs” on.

Should this goal be achieved, it would seem that humans and suitably complex robots would be on par—both would engage in complex behavior because of their hardware and software. As such, there would be no grounds for claiming that such a robot is engaged in deceit while humans are genuine. The difference would merely be that humans are organic machines and robots are not.

It can, and has, been argued that there is more to a human person than the material body—that there is a mind that cannot be instantiated in a mere machine. The challenge is a very old one: proving that there is such a thing as the mind. If this can be established and it can be shown that robots cannot have such a mind, then robot companions would always be a deceit.

However, they might still be a useful deceit—going back to the placebo analogy, it might not matter whether the robot really thinks or feels. It might suffice that the person thinks it does and this will yield all the benefits of having a human companion.

 


Better to be Nothing?

There is an old legend that king Midas for a long time hunted the wise Silenus, the companion of Dionysus, in the forests, without catching him. When Silenus finally fell into the king’s hands, the king asked what was the best thing of all for men, the very finest. The daemon remained silent, motionless and inflexible, until, compelled by the king, he finally broke out into shrill laughter and said these words, “Suffering creature, born for a day, child of accident and toil, why are you forcing me to say what would give you the greatest pleasure not to hear? The very best thing for you is totally unreachable: not to have been born, not to exist, to be nothing. The second best thing for you, however, is this — to die soon.”

-Nietzsche, The Birth of Tragedy

One rather good metaphysical question is “why is there something rather than nothing?” An interesting question in the realm of value is “is it better to be nothing rather than something?” That is, is it better “not to have been born, not to exist, to be nothing?”

Addressing the question does require sorting out the measure of value that should be used to decide whether it is better to not exist or to exist. One stock approach is to use the crude currencies of pleasure and pain. A somewhat more refined approach is to calculate in terms of happiness and unhappiness. Or one could simply go generic and use the vague categories of positive value and negative value.

What also must be determined are the rules of the decision. For the individual, a sensible approach would be the theory of ethical egoism—that what a person should do is what maximizes the positive value for her. On this view, it would be better if the person did not exist if her existence would generate more negative than positive value for her. It would be better if the person did exist if her existence would generate more positive than negative value for her.

To make an argument in favor of never existing being better than existing, one likely approach is to make use of the classic problem of evil as laid out by David Hume. When discussing this matter, Hume contends that everyone believes that life is miserable and he lays out an impressive catalog of pains and evils. While he concedes that pain may be less frequent than pleasure, he notes that even if this is true, pain “is infinitely more violent and durable.” As such, Hume makes a rather good case that the negative value of existence outweighs its positive value.

If it is true that the negative value outweighs the positive value, and better is measured in terms of maximizing value, then it would thus seem to be better to have never existed. After all, existence will result (if Hume is right) in more pain than pleasure. In contrast, non-existence will have no pain (and no pleasure) for a total of zero. Doing the value math, since zero is greater than a negative value, never existing is better than existing.
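To make the value math explicit, here is a minimal sketch of the egoist’s hedonic ledger. The figures are invented stand-ins; Hume’s claim is only that pains, though perhaps fewer, weigh more than pleasures.

```python
# Toy hedonic ledger. The numbers are hypothetical illustrations;
# Hume's claim is only that pains are more "violent and durable"
# (i.e., weigh more), even if they are less frequent than pleasures.
pleasures = [5, 3, 4, 2]   # more frequent, but milder
pains = [-9, -10]          # fewer, but more violent and durable

existence = sum(pleasures) + sum(pains)  # 14 - 19 = -5
nonexistence = 0                         # no pleasure, no pain

# Since 0 > -5, never existing "wins" on this crude measure.
print("better never to exist" if nonexistence > existence
      else "better to exist")
```

Of course, as the next paragraph argues, it is odd to assign the nonexistent person a score of zero at all.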

There does seem to be something a bit odd about this sort of calculation. After all, if the person does not exist, then her pleasure and pain would not balance to zero. Rather, it would seem that this sum would be an undefined value. It cannot be better for a person that she not exist, since there would (obviously) not be anyone for the nonexistence to be better for.

This can be countered by saying that this is but a semantic trick—the nonexistence would be better than the existence because of the relative balance of pleasure and pain. There is also another approach—to broaden the calculation from the individual to the world.

In this case, the question would not be about whether it would be better for the individual to exist or not, but whether or not a world with the individual would be better than a world without the individual. If a consequentialist approach is assumed, with pain and pleasure as the measure of value, and if the pain outweighs the pleasure in every life, then the world would be better if a person never existed. This is because the absence of an individual would reduce the overall pain. Given these assumptions, a world with no humans at all would be a better world. This could be extended to its logical conclusion: if the suffering outweighs the pleasures in the case of all beings (Hume did argue that the suffering of all creatures exceeds their enjoyments), then it would be better that no feeling creatures existed at all. At this point, one might as well do away with existence altogether and have nothing. Thus, while it might not be known why there is something rather than nothing, this argument would seem to show that it would be better to have nothing rather than something.

Of course, this reasoning rests on many assumptions that can be easily challenged. It can be argued that the measure of value is not to be done solely in terms of pleasures and pains—that is, even if life resulted in more pain than pleasure, the overall positive value could be greater than the negative value. For example, the creation of art and the development of knowledge could provide value that outweighs the pain. It could also be argued that the consequentialist approach is in error—that estimating the worth of life is not just a matter of tallying up the negative and positive. There are, after all, many other moral theories regarding the value of existence. It is also possible to dispute the claim that pain exceeds pleasure (or that unhappiness exceeds happiness).

One could also take a long view—even if pain outweighs pleasure now, humans seem to be making a better world and advancing technology. As such, it is easy to imagine that a better world lies ahead and it depends on our existence. That is, if one looks beyond the pleasure and pain of one’s own life and considers the future of humanity, the overall balance could very well be that the positive outweighs the negative. As such, it would be better for a person to exist—assuming that she has a role in the causal chain leading to that ultimate result.

 


Critical Thinking, Ethics & Science Journalism

As part of my critical thinking class, I cover the usual topics of credibility and experiments/studies. Since people often find critical thinking a dull subject, I regularly look for real-world examples that might be marginally interesting to students. As such, I was intrigued by John Bohannon’s detailed account of how he “fooled millions into thinking chocolate helps weight loss.”

Bohannon’s con provides an excellent cautionary tale for critical thinkers. First, he lays out in detail how easy it is to rig an experiment to get (apparently) significant results. As I point out to my students, a small experiment or study can generate results that seem significant, but really are not. This is why it is important to have an adequate sample size—as a starter. What is also needed is proper control, proper selection of the groups, and so on.

Second, he provides a clear example of a disgraceful stain on academic publishing, namely “pay to publish” journals that do not engage in legitimate peer review. While some bad science does slip through peer review, these journals apparently publish almost anything—provided that the fee is paid. Since the journals have reputable-sounding names and most people do not know which journals are credible and which are not, it is rather easy to generate a credible-seeming journal publication. This is why I cover the importance of checking sources in my class.

Third, he details how various news outlets published or posted the story without making even perfunctory efforts to check its credibility. Not surprisingly, I also cover the media in my class both from the standpoint of being a journalist and being a consumer of news. I stress the importance of confirming credibility before accepting claims—especially when doing so is one’s job.

While Bohannon’s con does provide clear evidence of problems in regards to corrupt journals, uncritical reporting and consumer credulity, the situation does raise some points worth considering. One is that while he might have “fooled millions” of people, he seems to have fooled relatively few journalists (13 out of about 5,000 reporters who subscribe to the Newswise feed Bohannon used), and these seem to be outlets like the Huffington Post and Cosmopolitan rather than what might be regarded as more serious health news sources. While it is not known why the other reporters did not run the story, it is worth considering that some of them did look at it critically and rejected it. In any case, the fact that a small number of reporters fell for a dubious story is hardly shocking. It is, in fact, just what would be expected given the long history of journalism.

Another point of concern is the ethics of engaging in such a con. It is possible to argue that Bohannon acted ethically. One way to do this is to note that using deceit to expose a problem can be justified on utilitarian grounds. For example, it seems morally acceptable for a journalist or police officer to use deceit and go undercover to expose criminal activity. As such, Bohannon could contend that his con was effectively an undercover operation—he and his fellows pretended to be the bad guys to expose a problem and thus his deceit was morally justified by the fact that it exposed problems.

One obvious objection to this is that Bohannon’s deceit did not just expose corrupt journals and incautious reporters. It also misinformed the audience who read or saw the stories. To be fair, the harm would certainly be fairly minimal—at worst, people who believed the story would consume dark chocolate and this is not exactly a health hazard. However, intentionally spreading such misinformation seems morally problematic—especially since story retractions or corrections tend to get far less attention than the original story.

One way to counter this objection is to draw an analogy to the exposure of flaws by hackers. These hackers reveal vulnerabilities in software with the stated intent of forcing companies to address the vulnerabilities. Exposing such vulnerabilities can do some harm by informing the bad guys, but the usual argument is that this is outweighed by the good done when the vulnerability is fixed.

While this does have some appeal, there is the concern that the harm done might not outweigh the good done. In Bohannon’s case it could be argued that he has done more harm than good. After all, it is already well-established that the “pay to publish” journals are corrupt, that there are incautious journalists and credulous consumers. As such, Bohannon has not exposed anything new—he has merely added more misinformation to the pile.

It could be countered that although these problems are well known, it does help to continue to bring them to the attention of the public. Going back to the analogy of software vulnerabilities, it could be argued that if a vulnerability is exposed, but nothing is done to patch it, then the problem should be brought up until it is fixed, “for it is the doom of men that they forget.” Bohannon has certainly brought these problems into the spotlight and this might do more good than harm. If so, then this con would be morally acceptable—at least on utilitarian grounds.

 


Shoot or Don’t Shoot?

The police shooting of unarmed black Americans has raised the question of why such shootings occur. While some have rushed to claim that it is a blend of racism and brutality, the matter deserves careful consideration.

While there are various explanations, the most plausible involves a blend of factors. The first, which does have a connection to racism, is the existence of implicit bias. Studies involving simulators have found that officers are more likely to use force against a black suspect than a white suspect. This has generally been explained in terms of officers having a negative bias in regards to blacks. What is rather interesting is that these studies show that even black and Hispanic officers are more likely to use force against black suspects. Also interesting is that studies have shown that civilians are more likely than officers to use force in the simulators and also show more bias in regards to race.

One reason why an implicit bias can lead to a use of force is that it impacts how a person perceives another’s actions and objects. When a person knows she is in a potentially dangerous situation, she is hypervigilant for threats and is anticipating the possibility of attack. As such, a person’s movements and any object he is wielding will be seen through that “threat filter.”  So, for example, a person reaching rapidly to grab his wallet can easily be seen as grabbing for a weapon. Perceptual errors, of course, occur quite often—think of how people who are afraid of snakes often see every vine or stick as a snake when walking in the woods. These perceptual errors also help explain shootings—a person can honestly think they saw the suspect reaching for a weapon.

Since the main difference between the officers and the civilians is most likely the training police receive, it seems reasonable to conclude that the training is having a positive effect. However, the existence of a race disparity in the use of force does show that there is still a problem to address. One point of concern is that the bias might be so embedded in American culture that training will not eliminate it. That is, as long as there is racial bias in the society, it will also infect the police. As such, eliminating the bias in police would require eliminating it in society as a whole—which goes far beyond policing.

A second often mentioned factor is what some call the “warrior culture.” Visually, this is exemplified by the use of military equipment, such as armored personnel carriers, by the police. However, the warrior culture is not primarily a matter of equipment, but of attitude. While police training does include conflict resolution skill training, there is a significant emphasis on combat skills, especially firearms. On the one hand, this makes sense—people who are going to be using weapons need to be properly trained in their use. On the other hand, there are grounds for being concerned with the fact that there is more focus on combat training relative to the peaceful resolution of conflicts.

Since I have seen absurd and useless “training” in conflict resolution, I do get that there would be concerns about such training. I also understand that conflict resolution is often cast in terms of “holding hands and drinking chamomile tea together” and hence is not always appealing to people who are interested in police work. However, it does seem to be a critical skill. After all, in a crisis people fall back on habit and training—and if people are trained primarily for combat, they will fall back on that. Naturally, there is the worry that too much emphasis on conflict resolution could put officers in danger—so that they keep talking well past the point at which they should have started shooting. However, this is a practical matter of training that can be addressed. A critical part of conflict resolution training is also what Aristotle would regard as moral education: developing the character to know when and how to act correctly. As Aristotle said, it is easy to be angry but it is hard to be angry at the right time for the right reasons, towards the right people and to the right degree. As Aristotle also said, this is very hard and most people are rather bad at this sort of thing, including conflict resolution. This does present a challenge even for a well-trained officer—the person she is dealing with is probably horrible at conflict-resolution. One possible solution is training for citizens—not in terms of just rolling over for the police, but in interacting with the police (and each other). Expecting the full burden of conflict resolution to fall upon the police certainly seems unfair and also not a successful strategy.

The final factor I will consider is the principle of the primacy of officer survival. One of the primary goals of police training and practice is officer survival. It would, obviously, be absurd to claim that police should not be trained in survival or that police practices should not put an emphasis on the survival of officers.  However, there are legitimate concerns about ways of training officers, the practice of law enforcement and the attitude that training and practice create.

Part of the problem, as some see it, links to the warrior mentality. The police, it is claimed, are trained to regard their job as incredibly dangerous and policing as a form of combat mission. This, obviously enough, shapes the reaction of officers to situations they encounter, which ties into the matter of perceptual bias. If a person believes that she is going out into a combat zone, she will perceive people and actions through this “combat zone filter.” As such, people will be regarded as more threatening, actions will be more likely to be interpreted as hostile and objects will be more likely to be seen as weapons. Consequently, it certainly makes sense that approaching officer survival by regarding police work as a combat mission would result in more civilian casualties than would different approaches.

Naturally, it can be argued that officers do not, in general, have this sort of “combat zone” attitude and that academics are presenting the emphasis on survival in the wrong sort of light. It can also be argued that the “combat zone” attitude is real, but is also correct—people do, in fact, target police officers for attack and almost any situation could turn into a battle for survival.  As such, it would be morally irresponsible to not train officers for survival, to instill in them a proper sense of fear, and to engage in practices that focus primarily on officers making it home at the end of the shift—even if this approach results in more civilian deaths, including the deaths of unarmed civilians.

This leads to a rather important moral concern, namely the degree of risk a person is obligated to take in order to minimize the harm to another person. This matter is not just connected to the issue of the use of force by police, but also the broader issue of self-defense.

I do assume that there is a moral right to self-defense and that police officers do not lose this right when acting in their professional capacity. That is, a person has a right to harm another person when legitimately defending her life, liberty or property against an unwarranted attack. Even if such a right is accepted, there is still the question of the degree of force a person is justified in using and to what extent a person should limit her response in order to minimize harm to the attacker.

In terms of the degree of force, the easy and obvious answer is that the force should be proportional to the threat but should also suffice to end the threat. For example, when I was a boy I faced the usual attacks of other boys. Since these attacks just involved fists and grappling, a proportional response was to hit back hard enough to make the other boy stop. Grabbing a rock, a bat or pulling a knife would be disproportional. As another example, if someone is shooting at a police officer, then she would certainly be in the right to use her firearm since that would be a proportional response.

One practical and moral concern about the proportional response is that the attacker might escalate. For example, if Bob swings on Mary and she lands a solid punch to his face, he might pull out a knife and stab her. If Mary had simply shot Bob, she would have not been stabbed because Bob would be badly wounded or dead. As such, some would argue, the response to an attack should be disproportional. In terms of the moral justification, this would rest on the fact that the attacker is engaged in an unjust action and the person attacked has reason to think, as Locke argued, that the person might intend to kill her.

Another practical and moral concern is that if the victim “plays fair” by responding in a proportional manner, she risks losing the encounter. For example, if Bob swings on Sally and Sally sticks with her fists, Bob might be able to beat her. Since dealing with an attacker is not a sporting event, the idea of “fair play” seems absurd—hence the victim has the moral right to respond in a disproportional manner.

However, there is also the counter-concern that a disproportional response would be excessive in the sense of being unnecessary. For example, if Bob swings at Sally and Sally shoots him four times with a twelve gauge, Sally is now safe—but if Sally could have used a Taser to stop Bob, then the use of the shotgun would seem to be wrong—after all, she did not need to kill Bob in order to save herself. As such, it would seem reasonable to hold to the moral principle that the force should be sufficient for defense, but not excessive.

The obvious practical challenge is judging what would be sufficient and what would be excessive. Laws that address self-defense issues usually leave this very vague: a person can use deadly force when facing a “reasonable perceived threat.” That is, the person must have a reasonable belief that there is a threat—there is usually no requirement that the threat be real. To use the stock example, if a man points a realistic looking toy gun at an officer and says he is going to kill her, the officer would have a reasonable belief that there is a threat. Of course, there are problems with threat assessment—as noted above, implicit bias, warrior mentality and survival focus can cause a person to greatly overestimate a threat (or see one where it does not exist).

The challenge of judging sufficient force in response to a perceived threat is directly connected with the moral concern about the degree of risk a person is obligated to face in order to avoid (excessively) harming another person.  After all, a person could “best” ensure her safety by responding to every perceived threat with maximum lethal force. If she responds with less force or delays her response, then she is at ever increasing risk. If she accepts too little risk, she would be acting wrongly towards the person threatening her. If she accepts too much risk, she would be acting wrongly towards herself and anyone she is protecting.

A general and generic approach would be to model the obligation of risk on the proportional response approach. That is, the risk one is obligated to take is proportional to the situation at hand. This then leads to the problem of working out the details of the specific situation—which is to say that the degree of risk would seem to rest heavily on the circumstances.

However, there are general factors that would impact the degree of obligatory risk. One would be the relation between the people. For example, it seems reasonable to hold that people have greater obligations to accept risk to avoid harming people they love or care about. Another factor that seems relevant is the person’s profession. For example, soldiers are expected to take some risks to avoid killing civilians—even when doing so puts them in some danger. To use a specific example, soldiers on patrol could increase their chance of survival by killing any unidentified person (adult or child) that approaches them. However, being a soldier and not a killer requires the soldiers to accept some risk to avoid murdering innocents.

In the case of police officers, it could be argued that their profession obligates them to take greater risks to avoid harming others. Since their professed duty is to serve and protect, it can be argued that the survival of those they are supposed to protect should be given weight equal to the survival of the officer. That is, the focus should be on everyone going home. In terms of implementation, the usual practice would be training and changes to the rules regarding the use of force. Limiting officer use of force can be seen as generating greater risk for the officers, but the goal would be to reduce the harm done to civilians. Since the police are supposed to protect people, they are (it might be argued) under a greater obligation to accept risk than civilians.

One obvious reply to this is that many officers already have this view—they take considerable risks to avoid harming people, even when they would be justified in using force. These officers save many lives—although sometimes at the cost of their own. Another reply is that this sort of view would get officers killed because they would be too concerned about not harming suspects and not concerned enough about their own survival. That is a reasonable concern—there is the challenge of balancing the safety of the public and the safety of officers.


Is the Flash as Bad as the Reverse Flash?

Professor Zoom (Photo credit: Wikipedia)

Spoiler Alert: Details of the season one finale of The Flash are revealed in this post.

Philosophers often make use of fictional examples in order to discuss ethical issues. In some cases, this is because they are discussing hypotheticals and do not have real examples to discuss. For example, discussions of the ethics of utilizing artificial intelligences are currently purely hypothetical (as far as we know). In other cases, this is because a philosopher thinks that a fictional case is especially interesting or simply “cool.” For example, philosophers often enjoy writing about the moral problems in movies, books and TV shows.

The use of fictional examples can, of course, be criticized. One stock criticism is that there are a multitude of real moral examples (and problems) that should be addressed. Putting effort into fictional examples is a waste of time. To use an analogy, it would be like spending time worrying about getting more gold for a World of Warcraft character when one does not have enough real money to pay the bills.

Another standard criticism focuses on the fact that fictional examples are manufactured. Because they are made up rather than “naturally” occurring, there are obvious concerns about how useful such examples are and about the extent to which a scenario is simply created by fiat. For example, when philosophers create convoluted and bizarre moral puzzles, it is quite reasonable to consider whether such a situation is even possible.

Fortunately, a case can be made for the use of fictional examples in discussions about ethics. Examples involving what might be (such as artificial intelligence) can be defended on the practical ground that it is preferable to discuss the matter before the problem arises rather than trying to catch up after the fact. After all, planning ahead is generally a good idea.

The use of fictional examples can also be justified on the same grounds that sports and games are justified—they might not be “useful” in a very limited and joyless sense of the term, but they can be quite fun. If poker, golf, or football can be justified on the basis of enjoyment, then so too can the use of fictional examples.

A third justification for the use of fictional examples is that they can allow the discussion of an issue in a more objective way. Since the example is fictional, it is less likely that a person will have a stake in the made-up example. Fictional examples can also allow the discussion to focus more on the issue as opposed to other factors, such as the emotions associated with an actual event. Of course, people can become emotionally involved in fictional examples. For example, fans of a particular movie character might be quite emotionally attached to that character.

A fourth reason is that a fictional example can be crafted to be an ideal example, to lay out the moral issue (or issues) clearly. Real examples are often less clear (though they do have the advantage of being real).

In light of the above, it seems reasonable to use fictional examples in discussing ethical issues. As such, I will move on to my main focus, which is discussing whether the Flash is morally worse than the Reverse Flash on CW’s show The Flash.

For those not familiar with the characters or the show, the Flash is a superhero whose power is the ability to move incredibly fast. While there have been several versions of the Flash, the Flash on the show is Barry Allen. As a superhero, the Flash has many enemies. One of his classic foes is the Reverse Flash. The Reverse Flash is also a speedster, but he is from the future (relative to the show’s main “present” timeline). Whereas the Flash’s costume is red with yellow markings, the Reverse Flash’s costume is yellow with red markings. While Barry is a good guy, Eobard Thawne (the Reverse Flash) is a super villain.

On the show, the Reverse Flash travels back in time to kill the young Barry before he becomes the Flash, with the intent of winning the battle before it even begins. However, the Flash also travels back in time to thwart the Reverse Flash and saves his past self. Out of anger, the Reverse Flash murders Barry’s mother but finds that he has lost his power. Using some creepy future technology, the Reverse Flash steals the life of the scientist Harrison Wells and takes on his identity. Using this identity, he builds the particle accelerator he needs to return to the future and, ironically, ends up needing to create the Flash in order to get home. The early and middle episodes of the show cover how Barry becomes the Flash, his early crime-fighting career and his often poor decision-making.

In the later episodes, the secret of the Reverse Flash is revealed and Barry ends up defeating him in an epic battle. Before the battle, “Wells” makes the point that he has done nothing more and nothing less than what he has needed to do to get home. Interestingly, while the Reverse Flash is ruthless in achieving his goal of returning to his own time and regaining the friends, family and job he has lost, he is generally true to that claim and only harms people when he regards it as truly necessary. He even expresses what seems to be sincere regret when he decides to harm those he has befriended.

While the details are not made clear, he claims that the future Flash has wronged him terribly and he is acting from revenge, to undo the wrong and to return to his own time. While he does have a temper that drives him to senseless murder, when he is acting rationally he acts consistently with his claim: he does whatever it takes to advance his goals, but does not go beyond that.

While the case of the Reverse Flash is fictional, it does raise a real moral issue: is it morally right to harm people in order to achieve one’s goals? The answer depends, obviously, on such factors as the goals and what harms are inflicted on which people. While the wrong allegedly done to the Reverse Flash has not been revealed, he does seem to be acting selfishly. After all, he got stuck in the past because he came back to kill Barry and then murders people when he thinks he needs to do so to advance his plan of return. Kant would, obviously, regard the Reverse Flash as evil—he regularly treats other rational beings solely as means to achieving his ends. He also seems evil on utilitarian grounds—he ends numerous lives and creates considerable suffering so as to achieve his own happiness. But, this is to be expected: he is a supervillain. However, a case can be made that he is morally superior to the Flash.

In the season one finale, the Reverse Flash tells Barry how to travel back in time to save his mother—this involves using the particle accelerator. There are, however, some potential problems with the plan.

One problem is that if Barry does not run fast enough to open the wormhole to the past, he will die. Risking his own life to save his mother is certainly commendable.

A second problem is that if Barry does go back and succeed (or otherwise change things), then the timeline will be altered. The show has established that a change in the past rewrites history (although the time traveler remembers what occurred)—so going back could change the “present” in rather unpredictable ways. Rewriting the lives of people without their consent certainly seems morally problematic, even if it did not result in people being badly harmed or killed. Laying aside the time-travel aspect, the situation is one in which a person is willing to change, perhaps radically, the lives of many people (potentially everyone on the planet) without their consent just to possibly save one life. On the face of it, that seems morally wrong and rather selfish.

A third problem is that Barry has under two minutes to complete his mission and return, or a singularity will form. This singularity will, at the very least, destroy the entire city and could destroy the entire planet. So, while the Reverse Flash was willing to kill a few people to achieve his goal, the Flash is willing to risk killing everyone on earth to save his mother. On utilitarian grounds, that seems clearly wrong, especially since even if he saved her, the singularity could simply end up killing her when the “present” arrives.

Barry decides to go back to try to save his mother, but his future self directs him not to do so. Instead, he says good-bye to his dying mother and returns to the “present” to fight the Reverse Flash. Unfortunately, something goes wrong and the city begins being sucked up into a glowing hole in the sky. Since skyscrapers are being ripped apart and sucked up, presumably a lot of people are dying.

While the episode ends with the Flash trying to close the hole, it should be clear that he is at least as bad as the Reverse Flash, if not worse: he was willing to change, without their consent, the lives of many others and he was willing to risk killing everyone and everything on earth. This is hardly heroic. So, the Flash would seem to be rather evil—or at least horrible at making moral decisions.


Bulk Data Collection

 

A federal appeals court ruled in May 2015 that the NSA’s bulk collection of domestic calling data is illegal. While such bulk data collection would strike many as blatantly unconstitutional, the constitutional question has not yet been addressed by the courts, though that is perhaps just a matter of time. My intent is to address the general issue of bulk domestic data collection by the state in a principled way.

When it comes to the state (or, more accurately, the people who compose the state) using its coercive force against its citizens, there are three main areas of concern: practicality, morality and legality. I will be addressing this matter within the context of the state using its power to impose on the rights and liberties of the citizens for the purported purpose of protecting them. This is, of course, the stock problem of liberty versus security.

In the case of practicality, the main question is whether or not the law, policy or process is effective in achieving its goals. This, obviously, needs to be balanced against the practical costs in terms of such things as time and resources (such as money).

In the United States, this illegal bulk data collection has been going on for years. To date, there seems to be but one public claim of success involving the program, which certainly indicates that the program is not effective. When the cost of the program is considered, the level of failure is appalling.

In defense of the program, some proponents have claimed that there have been many successes, but these cannot be reported because they must be kept secret. In fairness, it is certainly worth considering that there have been such secret successes that must remain secret for security reasons. However, this defense can easily be countered.

In order to accept this alleged secret evidence, those making the claim that it exists would need to be trustworthy. However, those making the claim have a vested interest in this matter, which certainly lowers their credibility. To use an analogy, if I were receiving huge sums of money for a special teaching program and could show only one success, but said there were many secret successes, you would certainly be wise to be skeptical of my claims. There is also the fact that, thanks to Snowden, it is known that the people involved have no compunctions about lying about this matter, which further lowers their credibility.

One obvious solution would be for credible, trusted people with security clearance to be provided with the secret evidence. These people could then speak in defense of the bulk data collection without mentioning the secret specifics. Of course, given that everyone knows about the bulk data collection, it is not clear what relevant secrets could remain that the public simply cannot know about (except, perhaps, the secret that the program does not work).

Given the available evidence, the reasonable conclusion is that the bulk data collection is ineffective. While it is possible that there is some secret evidence, there is no compelling reason to believe this claim, given the lack of credibility on the part of those making this claim. This alone would suffice as grounds for ceasing this wasteful and ineffective approach.

In the case of morality, there are two main stock approaches. The first is a utilitarian approach in which the harms of achieving the security are weighed against the benefits provided by the security. The basic idea is that the state is warranted in infringing on the rights and liberties of the citizens on the condition that the imposition is outweighed by the well-being gained by the citizens, either in terms of positive gains or harms avoided. This principle applies beyond matters of security. For example, people justify such things as government-mandated health care and limits on soda sizes on the same grounds that others justify domestic spying: these things are supposed to protect citizens.

Bulk data collection is, obviously enough, an imposition on the moral right to privacy—though it could be argued that this harm is fairly minimal. There are, of course, also the practical costs in terms of resources that could be used elsewhere, such as in health care or other security programs. Weighing the one alleged success against these costs, it seems evident that the bulk data collection is immoral on utilitarian grounds—it does not do enough good to outweigh its moral cost.

Another stock approach to such matters is to forgo utilitarianism and argue the ethics in another manner, such as appealing to rights. In the case of bulk data collection, it can be argued that it violates the right to privacy and is thus wrong—its success or failure in practical terms is irrelevant. In the United States people often argue this way when it comes to gun rights—the right outweighs utilitarian considerations about the well-being of the public.

Rights are, of course, not absolute; everyone knows the example of how the right to free expression does not warrant slander or yelling “fire” in a crowded theater when there is no fire. So, it could be argued that the right to privacy can be imposed upon. Many stock arguments exist to justify such impositions, and these typically rest either on utilitarian grounds or on arguments showing that the right to privacy does not apply. For example, it is commonly argued that criminals lack a right to privacy in regards to their wicked deeds; that is, there is no moral right to secrecy in order to conceal immoral deeds. While these arguments can be used to morally justify collecting data from specific suspects, they do not seem to justify bulk data collection, unless it can be shown that all Americans have forfeited their right to privacy.

It would thus seem that the bulk data collection cannot be justified on moral grounds. As a general rule, I favor the view that there is a presumption in favor of the citizen: the state needs a moral justification to impose on the citizen and it should not be assumed the state has a right to act unless the citizen can prove differently. This is, obviously enough, analogous to the presumption of innocence in the American legal system.

In regards to the legality of the matter, the specific law in question has already been addressed by the appeals court. In terms of bulk data collection in general, the answer seems quite obvious. While I am obviously not a constitutional scholar, bulk data collection seems to be a clear and egregious violation of the 4th Amendment: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”

The easy and obvious counter is to point out that I, as I said, am not a constitutional scholar or even a lawyer. As such, my assessment of the 4th Amendment is lacking the needed professional authority. This is, of course, true—which is why this matter needs to be addressed by the Supreme Court.

In sum, there seems to be no practical, moral or legal justification for such bulk data collection by the state and hence it should not be permitted. This is my position as a philosopher and the 2016 Uncandidate.


Law Enforcement as Revenue Stream

After the financial class melted down the world economy, local governments faced an obvious reduction in their revenues. As the economy recovered under a Democratic president, the Republicans held onto or gained power in many state governments, such as my own adopted state of Florida. With laudable consistency with their professed ideology, Republicans routinely cut taxes for businesses, the well-off and sometimes even almost everyone. While the theory seems to be that cutting taxes will increase revenue for state and local governments, shockingly the opposite seems to happen: state and local governments find themselves running short of the funds needed to meet the expenses of actually operating a civilization.

Being resourceful, local leaders seek other revenue streams in order to pay the bills. While cities like Ferguson provide well-known examples of a common “solution,” many cities and towns have embraced the practice of law enforcement as a revenue stream. While the general practice of getting revenue from law enforcement is nothing new, the extent to which some local governments rely on it is rather shocking. How the system works is also often shocking: it frequently amounts to a shakedown system one would expect to see in a corrupt country unfamiliar with the rule of law or the rights of citizens.

Since Ferguson, where Michael Brown was shot on August 9, 2014, has been the subject of extensive study, I will use the statistics from that town. Unfortunately, Ferguson does not appear to be unique or even unusual.

In 2013, Ferguson’s court dealt with 12,108 cases and 24,532 warrants. This works out to an average of 1.5 cases and 3 warrants per household in Ferguson. The fines and court fees that year totaled $2,635,400, making the municipal court the city’s second-largest revenue stream.
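As a quick back-of-the-envelope check of those averages, consider a minimal sketch; the household count here is my own assumption (the totals imply roughly 8,000 households, a figure not stated in the court data):

# Rough check of the per-household averages; the household count is an
# assumed figure implied by the totals, not part of the reported data.
cases = 12108
warrants = 24532
households = 8000  # assumption: roughly 8,000 households in Ferguson

print(round(cases / households, 1))     # 1.5 cases per household
print(round(warrants / households, 1))  # 3.1 warrants per household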

It would certainly be one thing if these numbers were the result of the legitimate workings of the machinery of justice: that is, if the cases and warrants were proportional to the actual crimes being committed and justice was being dispensed fairly. In short, it would be one thing if the justice was just.

One point of concern that has been widely addressed in the national media is that the legal system seems to disproportionately target blacks. In Ferguson, as in many places, the majority of the cases handled by the court arise from car stops. Ferguson is 29% white, but whites make up only 12.7% of those stopped. When stopped, a black citizen is searched 12.1% of the time, while a white citizen is searched 6.9% of the time. In terms of arrests, a black citizen is arrested 10.4% of the time and a white citizen 5.2% of the time.

One stock reply to such figures is the claim that blacks commit more crimes than whites. If blacks were being arrested in proportion to the rate at which they were committing crimes, then this would be (on the face of it) fair. However, this does not seem to be the case. Interestingly, even though blacks were more likely to be searched, searches of black citizens turned up contraband only 21.7% of the time, while searches of white citizens turned up contraband 34.0% of the time. Also, 93% of those arrested in Ferguson were black. While certainly not impossible, it seems somewhat odd that 93% of the crime committed in a city that is only 29% white was committed by black citizens.

Naturally, these numbers can be talked around or even explained away. It could be argued that blacks are not being targeted as a specific source of revenue and the arrest rates are proportional and just. This still leaves the matter of how the legal system operates in terms of being focused on revenue.

Laying aside all talk of race, Ferguson stands out as an example of how law enforcement can turn into a collection system. One key component is, of course, having a system of high fines. For example, Ferguson had a $531 fine for high grass and weeds, $792 for Failure to Obey, $527 for Failure to Comply, $427 for a Peace Disturbance violation, and so on.

If a person can pay, then she is not arrested. But if a person cannot afford the fine, an arrest warrant is issued; this is the second part of the system. The city issued 32,975 arrest warrants for minor offenses in 2013, and the city has a population of only 21,000 people: more than one and a half warrants per resident.

After a person is arrested, she faces even more fees, such as the obvious court fees, and these can quickly pile up. For example, a person might get a $150 parking ticket that she cannot pay. She is then arrested and subject to more fees and more charges. This initial ticket might grow into a debt of almost $1,000 to the city. Given that the people who tend to be targeted are poor, it is likely they will not be able to pay the initial ticket. They will then be arrested, which could cost them their job, thus making them unable to pay their court fees. This could easily spiral into a court-inflicted cycle of poverty and debt. This, obviously enough, is not what the legal system is supposed to do.
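To make the spiral concrete, here is a minimal sketch of how such a debt can compound; every amount beyond the initial $150 ticket is a hypothetical figure of my own, since the specific fee schedule is not given here:

# Hypothetical illustration of a fee spiral; all amounts except the
# initial $150 ticket are invented for illustration.
debt = 150.00                # the unpaid parking ticket
debt += 125.00               # hypothetical failure-to-appear fee
debt += 100.00               # hypothetical warrant fee
for month in range(4):       # a few months of nonpayment
    debt += 50.00            # hypothetical monthly late penalty
    debt += 75.00            # hypothetical court costs per appearance
print(f"${debt:.2f}")        # $875.00, approaching the near-$1,000 figure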

From a moral standpoint, one main problem with using this sort of law enforcement as a revenue stream is the damage it does to the citizens who cannot afford the fines and fees. As noted in the example above, a person could find her life ruined by a single parking ticket. The point of law enforcement in a just society is to protect the citizens from harm, not ruin them.

A second point of moral concern is that this sort of system is racketeering: it puts forth a threat of arrest and court fees, and then offers “protection” from that threat in return for a fee. That is, citizens must buy their way out of a greater harm under threat. This is hardly justice. If it were practiced by anyone else, it would be criminal racketeering and a protection scheme.

A third point of moral concern is that the system of exploiting the citizens by force and threat of force damages the fundamental relation between the citizen and the democratic state. In feudal states and in the domains of warlords, one expects the thugs of the warlords to shake down the peasants. However, that sort of thing is contrary to the nature of a democratic state. As happened during the revolts against feudalism and warlords, people will rise up against such oppression—and this is to be expected. Robin Hood is, after all, the hero and the Sheriff of Nottingham is the villain.

This is not to say that there should not be fines, penalties and punishments. However, they should be proportional to the offenses, fairly applied, and aimed at protecting the citizens, not filling the coffers of the kingdom. As a final point, we should certainly not be cutting the taxes of the well-off and then slamming the poor with the cost of doing so. That is certainly unjust and will, intended or not, result in dire social consequences.
