Tag Archives: Ethics

Autonomous Weapons I: The Letter

On July 28, 2015, the Future of Life Institute released an open letter expressing opposition to the development of autonomous weapons. Although the name of the organization sounds like one I would use as a cover for an evil, world-ending cult in a Call of Cthulhu campaign, I am willing to accept that this group is sincere in its professed values. While I do respect their position on the issue, I believe that they are mistaken. I will assess and reply to the arguments in the letter.

As the letter notes, an autonomous weapon is capable of selecting and engaging targets without human intervention. An excellent science fiction example of such a weapon is the claw of Philip K. Dick’s classic “Second Variety” (a must-read for anyone interested in the robopocalypse). A real-world example of such a weapon, albeit a stupid one, is the land mine—it is placed and then engages automatically.

The first main argument presented in the letter is essentially a proliferation argument. If a major power pushes the development of AI weapons, the other powers will also do so, creating an arms race. This will lead to the development of cheap, easy-to-mass-produce AI weapons. These weapons, it is claimed, will end up being acquired by terrorists, warlords, and dictators, who will use them for assassinations, destabilization, oppression and ethnic cleansing. That is, for what these evil people already use existing weapons to do quite effectively. This raises the obvious concern about whether autonomous weapons would actually have a significant impact in these areas.

The authors of the letter do have a reasonable point: as science fiction stories have long pointed out, killer robots tend to simply obey orders and they can (at least in fiction) be extremely effective. However, history has shown that terrorists, warlords, and dictators rarely have trouble finding humans who are willing to commit acts of incredible evil. Humans are also quite good at this sort of thing and, although killer robots are awesomely competent in fiction, it remains to be seen whether they will be better than humans in the real world, especially the cheap, mass-produced weapons in question.

That said, it is reasonable to be concerned that a small group or individual could buy a cheap robot army when they would otherwise not be able to put together a human force. These “Walmart” warlords could be a real threat in the future—although small groups and individuals can already do considerable damage with existing technology, such as homemade bombs. They can also easily create weaponized versions of non-combat technology, such as civilian drones and autonomous cars—so even if robotic weapons are not manufactured, enterprising terrorists and warlords will build their own. Think, for example, of a self-driving car equipped with machine guns or just loaded with explosives.

A reasonable reply is that the warlords, terrorists and dictators would have a harder time of it without cheap, off-the-shelf robotic weapons. This, it could be argued, would make the proposed ban on autonomous weapons worthwhile on utilitarian grounds: it would result in fewer deaths and less oppression.

The authors then claim that just as chemists and biologists are generally not in favor of creating chemical or biological weapons, most researchers in AI do not want to design AI weapons. They do argue that the creation of AI weapons could create a backlash against AI in general, which has the potential to do considerable good (although there are those who are convinced that even non-weapon AIs will wipe out humanity).

The authors do have a reasonable point here—members of the public do often panic over technology in ways that can impede the public good. One example is the anti-vaccination movement; another is the panic over GMOs, which has had some negative impact on the development of improved crops. But, as these two examples show, backlash against technology is not limited to weapons, so an AI backlash could arise from any AI technology and for no rational reason. A movement might arise, for example, against autonomous cars. Interestingly, military use of technology seems to rarely create backlash from the public—people do not refuse to fly in planes because the military uses them to kill people. Most people also love GPS, which was developed for military use.

The authors note that chemists, biologists and physicists have supported bans on weapons in their fields. This might be aimed at attempting to establish an analogy between AI researchers and other researchers, perhaps to try to show these researchers that it is a common practice to be in favor of bans against weapons in one’s area of study. Or, as some have suggested, the letter might be making an analogy between autonomous weapons and weapons of mass destruction (biological, chemical and nuclear weapons).

One clear problem with the analogy is that biological, chemical and nuclear weapons tend to be the opposite of robotic smart weapons: they “target” everyone without any discrimination. Nerve gas, for example, injures or kills everyone. A nuclear bomb also kills or wounds everyone in the area of effect. While AI weapons could carry nuclear, biological or chemical payloads and they could be set to simply kill everyone, this lack of discrimination and WMD nature is not inherent to autonomous weapons. In contrast, most proposed autonomous weapons seem intended to be very precise and discriminating in their killing. After all, if the goal is mass destruction, there is already the well-established arsenal of biological, chemical and nuclear weapons. Terrorists, warlords and dictators often have no problems using WMDs already and AI weapons would not seem to significantly increase their capabilities.

In my next essay on this subject, I will argue in favor of AI weapons.

 


ISIS & Rape

Looked at in the abstract, ISIS seems to be another experiment in the limits of human evil, addressing the question of how bad people can become before they are unable to function as social beings. While ISIS is well known for its theologically justified murder and destruction, it has now become known for its theologically justified slavery and rape.

While I am not a scholar of religion, it is quite evident that scriptural justifications of slavery and rape exist and require little in the way of interpretation. In this, Islamic scripture is similar to the Bible, which also contains rules about the practice of slavery and guidelines regarding the proper practice of rape. Not surprisingly, mainstream religious scholars of Islam and Christianity tend to argue that these aspects of scripture no longer apply or that they can be interpreted in ways that do not warrant slavery or rape. Opponents of these faiths tend to argue that the mainstream scholars are mistaken and that the wicked behavior enjoined in such specific passages expresses the true principles of the faith.

Disputes over specific passages lead to the broader debate about the true tenets of a faith and what it is to be a true member of that faith. To use a current example, opponents of Islam often claim that Islam is inherently violent and that the terrorists exemplify the true members of Islam. Likewise, some who are hostile to Christianity claim that it is a hateful religion and point to Christian extremists, such as God Hates Fags, as exemplars of true Christianity. This is a rather difficult and controversial matter and one I have addressed in other essays.

A reasonable case can be made that slavery and rape are not in accord with Islam, just as a reasonable case can be made that slavery and rape are not in accord with Christianity. As noted above, it can be argued that times have changed, that the texts do not truly justify the practices and so on. However, these passages remain and can be pointed to as theological evidence in favor of the religious legitimacy of these practices. The practice of being selective about scripture is indeed a common one and people routinely focus on passages they like while ignoring passages that they do not like. This selectivity is, not surprisingly, most often used to “justify” prejudice, hatred and misdeeds. Horribly, ISIS does indeed have textual support, however controversial it might be with mainstream Islamic thinkers. That, I think, cannot be disputed.

ISIS members not only claim that slavery and rape are acceptable, they go so far as to claim that rape is pleasing to God. According to Rukmini Callimachi’s article in the New York Times, ISIS rapists pray before the rape, commit the rape, and then pray afterwards. They are not praying for forgiveness—the rape is part of the religious ritual that is supposed to please God.

The vast majority of monotheists would certainly be horrified by this and would assert that God is not pleased by rape (despite textual support to the contrary). Being in favor of rape is certainly inconsistent with the philosophical conception of God as an all-good being. However, there is the general problem of sorting out what God finds pleasing and what He condemns. In the case of human authorities, it is generally easy to sort out what pleases them and what they condemn: they act to support and encourage what pleases them and act to discourage, prevent and punish what they condemn. If God exists, He certainly is allowing ISIS to do as it will—He never acts to stop them or even to send a clear sign that He condemns their deeds. But, of course, God now seems to follow the same policy as Starfleet’s Prime Directive: He never interferes or makes His presence known.

The ISIS horror is yet another series of examples in the long-standing problem of evil—if God is all-powerful, all-knowing and good, then there should be no evil. But, since ISIS is freely doing what it does, it would seem to follow that God is lacking in some respect, that He does not exist, or that He, as ISIS claims, is pleased by the rape of children.

Not surprisingly, religion is not particularly helpful here—while scripture and interpretations of scripture can be used to condemn ISIS, scripture can also be used to support them in their wickedness. God, as usual, is not getting involved, so we do not know what He really thinks. So, it would seem to be up to human morality to settle this matter.

While there is considerable dispute about morality, the evils of rape and slavery certainly seem to be well established. It can be noted that moral arguments have been advanced in favor of slavery, usually on the grounds of alleged superiority. However, these moral arguments certainly seem to have been adequately refuted. There are far fewer moral arguments in defense of rape, which is hardly surprising. However, these also seem to have been effectively refuted. In any case, I would contend that the burden of proof rests on those who would claim that slavery or rape is morally acceptable and invite readers to advance such arguments for due consideration.

Moving away from morality, there are also practical matters. ISIS does have a clear reason to embrace its theology of rape: as was argued by Rukmini Callimachi, it is a powerful recruiting tool. ISIS offers men a group in which killing, destruction and rape are not only tolerated but praised as being pleasing to God—the ultimate endorsement. While there are people who do not feel any need to justify their evil, even very wicked people often still want to believe that their terrible crimes are warranted or even laudable. As such, ISIS has considerable attraction to those who wish to do evil.

Accepting this theology of slavery and rape is not without negative consequences for recruiting—while there are many who find it appealing, there are certainly many more who find it appalling. Some ISIS supporters have endeavored to deny that ISIS has embraced this theology of rape and slavery—even they recognize some moral limits. Other supporters have not been dismayed by these revelations and perhaps even approve. Whether this theology of rape and slavery benefits ISIS more than it harms it will depend largely on the moral character of its potential recruits and supporters. I certainly hope that this is a line that many are not willing to cross, thus cutting into ISIS’ potential manpower and financial support. What impact this has will certainly reveal much about the character of ISIS’ supporters—do they have some moral limits?

 


The Lion, the HitchBOT and the Fetus

After Cecil the Lion was shot, the internet erupted in righteous fury against the killer. Not everyone was part of this eruption and some folks argued against feeling bad for Cecil—some accusing the mourners of being phonies and pointing out that lions kill people. What really caught my attention, however, was the use of a common tactic—to “refute” those condemning the killing of Cecil by asserting that these “lion lovers” do not get equally upset about the fetuses killed in abortions.

When HitchBOT was destroyed, a similar sort of response was made—in fact, when I have written about ethics and robots (or robot-like things) I have been subject to criticism on the same grounds: it is claimed that I value robots more than fetuses and presumably I have thus made some sort of error in my arguments about robots.

Since I find this tactic interesting and have been its target, I thought it would be worth my while to examine it in a reasonable and (hopefully) fair way.

One way to look at this approach is to take it as the use of the Consistent Application method, which is as follows. A moral principle is consistently applied when it is applied in the same way to similar beings in similar circumstances. Inconsistent application is a problem because it violates three commonly accepted moral assumptions: equality, impartiality and relevant difference.

Equality is the assumption that those that are moral equals must be treated as such. It also requires that those that are not morally equal be treated differently.

Impartiality is the assumption that moral principles must not be applied with partiality. Inconsistent application would involve non-impartial application.

Relevant difference is a common moral assumption. It is the view that different treatment must be justified by relevant differences. What counts as a relevant difference in particular cases can be a matter of great controversy. For example, while many people do not think that gender is a relevant difference in terms of how people should be treated, other people think it is very important. This assumption requires that principles be applied consistently.

The method of Consistent Application involves showing that a principle or standard has been applied differently in situations that are not relevantly different. This allows one to conclude that the application is inconsistent, which is generally regarded as a problem. The general form is as follows:

Step 1: Show that a principle/standard has been applied differently in situations that are not adequately different.

Step 2: Conclude that the principle has been applied inconsistently.

Step 3 (Optional): Require that the principle be applied consistently.

Applying this method often requires determining the principle the person/group is using. Unfortunately, people are not often clear in regards to what principle they are actually using. In general, people tend to just make moral assertions and leave it to others to guess what their principles might be. In some cases, it is likely that people are not even aware of the principles they are appealing to when making moral claims.

Turning now to the cases of the lion, the HitchBOT and the fetus, Consistent Application could be applied as follows:

Step 1: Those who are outraged at the killing of the lion are using the principle that the killing of living things is wrong. Those outraged at the destruction of HitchBOT are using the principle that helpless things should not be destroyed. These people are not outraged by abortions in general or by the Planned Parenthood abortions in particular.

Step 2: The lion and HitchBOT mourners are not being consistent in their application of the principle since fetuses are helpless (like HitchBOT) and living things (like Cecil the lion).

Step 3 (Optional): Those mourning for Cecil and HitchBOT should mourn for the fetuses and oppose abortion in general and Planned Parenthood in particular.

This sort of use of Consistent Application is quite appealing and I routinely use the method myself. For example, I have argued (in a reverse of this situation) that people who are anti-abortion should also be anti-hunting and that people who are fine with hunting should also be morally okay with abortion.

As with any method of arguing, there are counter methods. In the case of this method, there are three reasonable responses. The first is to admit the inconsistency and stop applying the principle in an inconsistent manner. This obviously does not defend against the charge but can be an honest reply. People, as might be imagined, rarely take this option.

A second way to reply and one that is an actual defense is to dissolve the inconsistency by showing that the alleged inconsistency is merely apparent. The primary way to do this is by showing that there is a relevant difference in the situation. For example, someone who wants to be morally opposed to the shooting of Cecil while being morally tolerant of abortions could argue that the adult lion has a moral status different from the fetus—one common approach is to note the relation of the fetus to the woman and how a lion is an independent entity. The challenge lies in making a case for the relevance of the difference.

A third way to reply is to reject the attributed principle. In the situation at hand, the assumption is that a person is against killing the lion simply because it is alive. However, that might not be the principle the person is, in fact, using. His principle might be based on the suffering of a conscious being and not on mere life. In this case, the person would be consistent in his application.

Naturally enough, the “new” principle is still subject to evaluation. For example, it could be argued the suffering principle is wrong and that the life principle should be accepted instead. In any case, this method is not an automatic “win.”

An alternative interpretation of this tactic is to regard it as an ad Hominem: an ad Hominem is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of or the person presenting the claim or argument. Typically, this fallacy involves two steps. First, an attack against the character of the person making the claim, her circumstances, or her actions is made (or the character, circumstances, or actions of the person reporting the claim). Second, this attack is taken to be evidence against the claim or argument the person in question is making (or presenting). This type of “argument” has the following form:

  1. Person A makes claim X.
  2. Person B makes an attack on person A.
  3. Therefore A’s claim is false.

The reason why an ad Hominem (of any kind) is a fallacy is that the character, circumstances, or actions of a person do not (in most cases) have a bearing on the truth or falsity of the claim being made (or the quality of the argument being made).

In the case of the lion, the HitchBOT and the fetus, the reasoning can be seen as follows:

  1. Person A claims that killing Cecil was wrong or that destroying HitchBOT was wrong.
  2. Person B notes that A does not condemn abortions in general or Planned Parenthood’s abortions in particular.
  3. Therefore A is wrong about Cecil or HitchBOT.

Obviously enough, a person’s view of abortion does not prove or disprove her view about the ethics of the killing of Cecil or HitchBOT (although a person can, of course, be engaged in inconsistency or other errors—but these are rather different matters).

A third alternative is that the remarks are not meant as an argument at all, neither the reasonable application of a Consistent Application criticism nor the unreasonable attack of an ad Hominem. In this case, the point is to assert that the lion lovers and bot buddies are awful people or, at best, misguided.

The gist of the tactic is, presumably, to make these people seem bad by presenting a contrast: these lion lovers and bot buddies are broken up about lions and trashcans, but do not care about fetuses—what awful people they are.

One clear point of concern is that moral concern is not a zero-sum game. That is, regarding the killing of Cecil as wrong and being upset about it does not entail that a person thus cares less (or not at all) about fetuses. After all, people do not just get a few “moral tokens” to place such that being concerned about one misdeed entails they must be unable to be concerned about another. Put directly, a person can condemn the killing of Cecil and also condemn abortion.

The obvious response is that there are people who are known to condemn the killing of Cecil or the destruction of HitchBOT and also known to be pro-choice. These people, it can be claimed, are morally awful. The equally obvious counter is that while it is easy to claim such people are morally awful, the challenge lies in showing that they are actually awful; that is, showing that their position on abortion is morally wrong. Noting that they are against lion killing or bot bashing and pro-choice does not show they are in error—although, as noted above, they could be challenged on the grounds of consistency. But this requires laying out an argument rather than merely juxtaposing their views on these issues. This version of the tactic simply amounts to asserting or implying that there is something wrong with the person because one disagrees with that person. But a person’s thinking that hunting lions or bashing bots is okay and that abortion is wrong does not prove that the opposing view is in error. It just states the disagreement.

Since the principle of charity requires reconstructing and interpreting arguments in the best possible way, I endeavor to cast this sort of criticism as a Consistent Application attack rather than the other two. This approach is respectful and, most importantly, helps avoid creating a straw man of the opposition.


HitchBOT & Kant

Dr. Frauke Zeller and Dr. David Smith created HitchBOT (essentially a solar-powered iPhone in an anthropomorphic shell) and sent him on a trip to explore the USA on July 17, 2015. HitchBOT had previously journeyed successfully across Canada and Germany. The experiment was aimed at seeing how humans would interact with the “robot.” He lasted about two weeks in the United States, meeting his end in Philadelphia. The exact details of his destruction (and the theft of the iPhone) are not currently known, although the last people known to be with HitchBOT posted what seems to be a faked “surveillance camera” video of HitchBOT’s demise. This serves to support the plausible claim that the internet eventually ruins everything it touches.

The experiment was certainly both innovative and interesting. It also generated questions about what the fate of HitchBOT says about us. We do, of course, already know a great deal about us: we do awful things to each other, so it is hardly surprising that someone would do something awful to the HitchBOT. People are killed every day in the United States, vandalism occurs regularly and the theft of technology is routine—thus it is no surprise that HitchBOT came to a bad end. In some ways, it was impressive that he made it as far as he did.

While HitchBOT seems to have met his untimely doom at the hands of someone awful, what is most interesting is how well HitchBOT was treated. After all, he was essentially an iPhone in a shell that was being transported about by random people.

One reason that HitchBOT was well treated and transported about by people is no doubt because it fits into the travelling gnome tradition. For those not familiar with the travelling gnome prank, it involves “stealing” a lawn gnome and then sending the owner photographs of the gnome from various places. The gnome is then returned (at least by nice pranksters). HitchBOT is a rather more elaborate version of the travelling gnome and, obviously, differs from the classic travelling gnome in that the owners sent HitchBOT on his fatal adventure. People, perhaps, responded negatively to the destruction of HitchBOT because it broke the rules of the travelling gnome game—the gnome is supposed to roam and make its way safely back home.

A second reason for HitchBOT’s positive adventures (and perhaps also his negative adventure) is that he became a minor internet celebrity. Since celebrity status, like moth dust, can rub off onto those who have close contact, it is not surprising that people wanted to spend time with HitchBOT and post photos and videos of their adventures with the iPhone in a trash can. On the dark side, destroying something like HitchBOT is also a way to gain some fame.

A third reason, which is probably more debatable, is that HitchBOT was given a human shape, a cute name and a non-threatening appearance, and these tend to incline people to react positively. Natural selection has probably favored humans who are generally friendly to other humans, and this presumably extends to things that resemble humans. There is probably also some hardwiring for liking cute things, which inclines humans to like young creatures and cute stuffed animals. HitchBOT was also given a social media personality by those conducting the experiment, which probably led people to feel that it had a personality of its own—even though they knew better.

Seeing a busted-up HitchBOT, which has an anthropomorphic form, presumably triggers a response similar to (but rather weaker than) what a sane human would have to seeing the busted-up remains of a fellow human.

While some people were rather upset by the destruction of HitchBOT, others have claimed that it was literally “a pile of trash that got what it deserved.” A more moderate position is that while it was unfortunate that HitchBOT was busted up, it is unreasonable to be overly concerned by this act of vandalism because HitchBOT was just an iPhone in a fairly cheap shell. As such, while it is fine to condemn the destruction as vandalism, theft and the wrecking of a fun experiment, it is unreasonable to see the matter as actually being important. After all, there are far more horrible things to be concerned about, such as the usual murdering of actual humans.

My view is that the moderate position is quite reasonable: it is too bad HitchBOT was vandalized, but it was just an iPhone in a shell. As such, its destruction is not a matter of great concern. That said, the way HitchBOT was treated is still morally significant. In support of this, I turn to what has become my stock argument in regards to the ethics of treating entities that lack moral status. This argument is stolen from Kant and is a modification of his argument regarding the treatment of animals.

Kant argues that we should treat animals well despite his view that animals have the same moral status as objects. Here is how he does it (or tries to do it).

While Kant is not willing to accept that we have any direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing X would obligate us to that human, then an animal doing X would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in his old age.

While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to shoot the dog?

Kant’s answer seems to be rather consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will likely be damaged. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.

Interestingly enough, Kant discusses how people develop cruelty—they often begin with animals and then work up to harming human beings. As I point out to my students, Kant seems to have anticipated the psychological devolution of serial killers.

Kant goes beyond merely enjoining us to not be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being rather gentle with a worm he found. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are essentially practice for us: how we treat them is training for how we will treat human beings.

Being an iPhone in a cheap shell, HitchBOT obviously had the moral status of an object and not that of a person. He did not feel or think and the positive feelings people had towards it were due to its appearance (cute and vaguely human) and the way those running the experiment served as its personality via social media. It was, in many ways, a virtual person—or at least the manufactured illusion of a person.

Given the manufactured pseudo-personhood of HitchBOT, it could be taken as being comparable to an animal, at least in Kant’s view. After all, animals are mere objects and have no moral status of their own. Likewise for HitchBOT. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well. Thus, a key matter to settle is whether HitchBOT was more like an animal or more like a stone—at least in regards to the matter at hand.

If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how such treatment affects the behavior of the person engaging in said behavior. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog.  This should also extend to HitchBOT. For example, if engaging in certain activities with a HitchBOT would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with HitchBOT would make a person more inclined to be kind to other rational beings, then the person should engage in that behavior.

While the result of interactions with the HitchBOT would need to be properly studied, it makes intuitive sense that being “nice” to the HitchBOT would help incline people to be somewhat nicer to others (much along the lines of how children are encouraged to play nicely with their stuffed animals). It also makes intuitive sense that being “mean” to HitchBOT would incline people to be somewhat less nice to others. Naturally, people would also tend to respond to HitchBOT based on whether they already tend to be nice or not. As such, it is actually reasonable to praise nice behavior towards HitchBOT and condemn bad behavior—after all, it was a surrogate for a person. But, obviously, not a person.

 


Introduction to Philosophy

The following provides a (mostly) complete Introduction to Philosophy course.

Readings & Notes (PDF)

Class Videos (YouTube)

Part I Introduction

Class #1

Class #2: This is the unedited video for the 5/12/2015 Introduction to Philosophy class. It covers the last branches of philosophy, two common misconceptions about philosophy, and argument basics.

Class #3: This is the unedited video for class three (5/13/2015) of Introduction to Philosophy. It covers analogical argument, argument by example, argument from authority and some historical background for Western philosophy.

Class #4: This is the unedited video for the 5/14/2015 Introduction to Philosophy class. It concludes the background for Socrates, covers the start of the Apology and includes most of the information about the paper.

Class #5: This is the unedited video of the 5/18/2015 Introduction to Philosophy class. It concludes the details of the paper, covers the end of the Apology and begins part II (Philosophy & Religion).

Part II Philosophy & Religion

Class #6: This is the unedited video for the 5/19/2015 Introduction to Philosophy class. It concludes the introduction to Part II (Philosophy & Religion), covers St. Anselm’s Ontological Argument and some of the background for St. Thomas Aquinas.

Class #7: This is the unedited video from the 5/20/2015 Introduction to Philosophy class. It covers Thomas Aquinas’ Five Ways.

Class #8: This is the unedited video for the eighth Introduction to Philosophy class (5/21/2015). It covers the end of Aquinas, Leibniz’ proofs for God’s existence and his replies to the problem of evil, and the introduction to David Hume.

Class #9: This is the unedited video from the ninth Introduction to Philosophy class on 5/26/2015. This class continues the discussion of David Hume’s philosophy of religion, including his work on the problem of evil. The class also covers the first 2/3 of his discussion of the immortality of the soul.

Class #10: This is the unedited video for the 5/27/2015 Introduction to Philosophy class. It concludes Hume’s discussion of immortality, covers Kant’s critiques of the three arguments for God’s existence, explores Pascal’s Wager and starts Part III (Epistemology & Metaphysics). Best of all, I am wearing a purple shirt.

Part III Epistemology & Metaphysics

Class #11: This is the 11th Introduction to Philosophy class (5/28/2015). The course covers Plato’s theory of knowledge, his metaphysics, the Line and the Allegory of the Cave.

Class #12: This is the unedited video for the 12th Introduction to Philosophy class (6/1/2015). This class covers skepticism and the introduction to Descartes.

Class #13: This is the unedited video for the 13th Introduction to Philosophy class (6/2/2015). The class covers Descartes’ 1st Meditation, Foundationalism and Coherentism as well as the start of the Metaphysics section.

Class #14: This is the unedited video for the fourteenth Introduction to Philosophy class (6/3/2015). It covers the methodology of metaphysics and roughly the first half of Locke’s theory of personal identity.

Class #15: This is the unedited video of the fifteenth Introduction to Philosophy class (6/4/2015). The class covers the 2nd half of Locke’s theory of personal identity, Hume’s theory of personal identity, Buddha’s no-self doctrine and “Ghosts & Minds.”

Class #16: This is the unedited video for the 16th Introduction to Philosophy class. It covers the problem of universals, the metaphysics of time travel in “Meeting Yourself” and the start of the metaphysics of Taoism.

Part IV Value

Class #17: This is the unedited video for the seventeenth Introduction to Philosophy class (6/9/2015). It begins part IV and covers the introduction to ethics and the start of utilitarianism.

Class #18: This is the unedited video for the eighteenth Introduction to Philosophy class (6/10/2015). It covers utilitarianism and some standard problems with the theory.

Class #19: This is the unedited video for the 19th Introduction to Philosophy class (6/11/2015). It covers Kant’s categorical imperative.

Class #20: This is the unedited video for the twentieth Introduction to Philosophy class (6/15/2015). This class covers the introduction to aesthetics and Wilde’s “The New Aesthetics.” The class also includes the start of political and social philosophy, with the introduction to liberty and fascism.

Class #21: No video.

Class #22: This is the unedited video for the 22nd Introduction to Philosophy class (6/17/2015). It covers Emma Goldman’s anarchism.


The Ethics of Backdoors

In philosophy, one of the classic moral debates has focused on the conflict between liberty and security. While this topic covers many issues, the main problem is determining the extent to which liberty should be sacrificed in order to gain security. There is also the practical question of whether the sacrifice actually yields the promised security.

One of the recent versions of this debate focuses on tech companies being required to include electronic backdoors in certain software and hardware. Put in simple terms, a backdoor of this sort would allow government agencies (such as the police, FBI and NSA) to gain access even to files and hardware protected by encryption. To use an analogy, this would be like requiring that all dwellings be equipped with a special door that could be secretly opened by the government to allow access to the contents of the house.

The main argument in support of mandating such backdoors is a fairly stock one: governments need such access for criminal investigations, for gathering military intelligence and (of course) for “fighting terrorism.” The concern is that if there is no backdoor, criminals and terrorists will be able to secure their data and thus prevent state agencies from undertaking surveillance or acquiring evidence.

As is so often the case with such arguments, various awful or nightmare scenarios are often presented in making the case. For example, it might be claimed that the location and shutdown codes for a ticking bomb could be on an encrypted iPhone. If the NSA had a key, they could just get that information and save the day. Without the key, New York will be a radioactive crater. As another example, it might be claimed that a clever child pornographer could encrypt all his pornography, making it impossible to build a case against him, thus ensuring that he will be free to pursue his misdeeds with impunity.

While this argument is not without merit, there are numerous stock counter arguments. Many of these are grounded in views of individual liberty and privacy—the basic idea being that an individual has the right to have such security against the state. These arguments are appealing to both liberals (who tend to profess to like privacy rights) and conservatives (who tend to claim to be against the intrusions of big government).

Another moral argument is grounded in the fact that the United States government has shown that it cannot be trusted. To use an analogy, imagine that agents of the state were caught sneaking into the dwellings of all citizens and going through their stuff in clear violation of the law, the constitution and basic moral rights. Then someone developed a lock that could only be opened by the person with the proper key. If the state then demanded that the lock company include a master key function to allow the state to get in whenever it wanted, the obvious response would be that the state has already shown that it cannot be trusted with such access. If the state had behaved responsibly and in accord with the laws, then it could have been trusted. But, like a guest who abused her access to a house, the state cannot and should not be trusted with a key. After all, we already know what it will do.

This argument also applies to other states that have done similar things. In the case of states that are even worse in their spying on and oppression of their citizens, the moral concerns are even greater. Such backdoors would allow the North Korean, Chinese and Iranian governments to gain access to devices, while encryption would provide their citizens with some degree of protection.

The strongest moral and practical argument is grounded in the technical vulnerabilities of integrated backdoors. One way that a built-in backdoor creates vulnerability is through its very existence. To use a somewhat oversimplified analogy, if thieves know that all vaults have a built-in backdoor designed to allow access by the government, they will know that a vulnerability exists that can be exploited.

One counter-argument against this is that the backdoor would not be that sort of vulnerability—that is, it would not be like a weaker secret door into a vault. Rather, it would be analogous to the government having its own combination that would work on all the vaults. The vault itself would be as strong as ever; it is just that the agents of the state would be free to enter the vault when they are allowed to legally do so (or when they feel like doing so).

The obvious moral and practical concern here is that the government’s combination to the vaults (to continue with the analogy) could be stolen and used to allow criminals or enemies easy access to all the vaults. The security of such vaults would be only as good as the security the government used to protect this combination (or combinations—perhaps one for each manufacturer). As such, the security of every user depends on the state’s ability to secure its means of access to hardware and software.
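To make the vault analogy concrete, here is a minimal sketch in Python (using the third-party cryptography package; the key-escrow scheme shown is purely illustrative, not any real vendor’s or government’s design) of why a single built-in key concentrates risk: the owner’s encryption is as strong as ever, but whoever obtains the one escrowed key can read everything.

```python
# Illustrative sketch only: a toy "key escrow" backdoor, not a real protocol.
from cryptography.fernet import Fernet

escrow_key = Fernet.generate_key()   # the single master key held by the state (the "backdoor")
user_key = Fernet.generate_key()     # the device owner's own key

message = b"meet at noon"

# The device stores the message twice: once for the owner, once under the escrowed key.
for_owner = Fernet(user_key).encrypt(message)
for_escrow = Fernet(escrow_key).encrypt(message)

# The owner decrypts normally with her own key...
assert Fernet(user_key).decrypt(for_owner) == message

# ...but anyone who steals the one escrow key can decrypt every user's traffic,
# which is why the security of all users collapses to the security of that key.
assert Fernet(escrow_key).decrypt(for_escrow) == message
```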

The obvious problem is that governments, such as the United States, have shown that they are not very good at providing such security. From a moral standpoint, it would seem to be wrong to expect people to trust the state with such access, given the fact that the state has shown that it cannot be depended on in such matters. To use an analogy, imagine you have a friend who is very sloppy about securing his credit card numbers, keys, PINs and such—in fact, you know that his information is routinely stolen. Then imagine that this friend insists that he needs your credit card numbers, PINs and such and that he will “keep them safe.” Given his own track record, you have no reason to trust this friend nor any obligation to put yourself at risk, regardless of how much he claims that he needs the information.

One obvious counter to this analogy is that this irresponsible friend is not a good analogue to the state. The state has compulsive power that the friend lacks, so the state can use its power to force you to hand over this information.

The counter to this is that the mere fact that the state does have compulsive force does not mean that it is thus responsible—which is the key concern in regards to both the ethics and the practicalities of the matter. That is, the burden of proof would seem to rest on those who claim there is a moral obligation to provide a clearly irresponsible party with such access.

It might then be argued that the state could improve its security and responsibility, and thus merit being trusted with such access. While this does have some appeal, there is the obvious fact that if hackers and governments knew that the keys to the backdoors existed, they would expend considerable effort to acquire them and would, almost certainly, succeed. I can even picture the sort of headlines that would appear: “U.S. Government Hacked: Backdoor Codes Now on Sale on the Dark Web” or “Hackers Linked to China Hack Backdoor Keys; All Updated Apple and Android Devices Vulnerable!” As such, the state would not seem to have a moral right to insist on having such backdoors, given that the keys will inevitably be stolen.

At this point, the stock opening argument could be brought up again: the state needs backdoor access in order to fight crime and terrorism. There are two easy and obvious replies to this sort of argument.

The first is based on an examination of past spying, such as that done under the auspices of the Patriot Act. The evidence seems to show that this spying was completely ineffective in regards to fighting terrorism. There is no reason to think that backdoor access would change this.

The second is a utilitarian argument (which can be cast as a practical or moral argument) in which the likely harm done by having backdoor access must be weighed against the likely advantages of having such access. The consensus among those who are experts in security is that the vulnerability created by backdoors vastly exceeds the alleged gain to protecting people from criminals and terrorists.

Somewhat ironically, what is alleged to be a critical tool for fighting crime (and terrorism) would simply make cybercrime much easier by building vulnerabilities right into software and devices.

In light of the above discussion, it would seem that baked-in backdoors are morally wrong on many grounds (privacy violations, creation of needless vulnerability, etc.) and lack a practical justification. As such, they should not be required by the state.

 


Robot Love II: Roboslation under the Naked Sun

In his novel The Naked Sun, Isaac Asimov creates the world of Solaria. What distinguishes this world from other human worlds is that it has a strictly regulated population of 20,000 humans and 10,000 robots for each human. What is perhaps the strangest feature of this world is a reversal of what many consider a basic human need: the humans of Solaria are trained to despise in-person contact with other humans, though interaction with human-like robots is acceptable. Each human lives on a huge estate, though some live “with” a spouse. When the Solarians need to communicate, they make use of a holographic telepresence system. Interestingly, they have even developed terminology to distinguish between communicating in person (called “seeing”) and communication via telepresence (“viewing”). For some Solarians the fear of encountering another human in person is so strong that they would rather commit suicide than endure such contact.

While this book was first serialized in 1956, long before the advent of social media and personal robots, it can be seen as prophetic. One reason science fiction writers are often seen as prophetic is that a good science fiction writer is skilled at extrapolating even from hypothetical technological and social changes. Another reason is that science fiction writers have churned out thousands of stories and some of these are bound to get something right. Such stories are then selected as examples of prophetic science fiction while stories that got things wrong are conveniently ignored. But, philosophers do love a good science fiction context for discussion, hence the use of The Naked Sun.

Almost everyone is now familiar with the popular narrative about smart phones and their role in allowing unrelenting access to social media. The main narrative is that people are, somewhat ironically, becoming increasingly isolated in the actual world as they become increasingly networked in the digital world. The defining image of this is a group of people (friends, relatives or even strangers) gathered together physically, yet ignoring each other in favor of gazing into the screens of their lords and masters. There are a multitude of anecdotes about this and many folks have their favorite tales of such events. As a professor, I see students engrossed by their phones—but, to be fair, Plato has nothing on cat videos. Like most people, I have had dates in which the other person was working two smartphones at once. And, of course, I have seen groups of people walking or at a restaurant where no one is talking to anyone else—all eyes are on the smartphones. Since the subject of smart phones has been beaten to a digital death, I will leave this topic in favor of the main focus, namely robots. However, the reader should keep in mind the social isolation created by social media.

While we have been employing robots for quite some time in construction, exploration and other such tasks, what can be called social robots are a relatively new thing. Sure, there have long been “robot” toys and things like Teddy Ruxpin (essentially a tape player embedded in a simple animatronic bear toy). But the creation of reasonably sophisticated social robots is quite recent. In this context, a social robot is one whose primary function is to interact with humans in a way that provides companionship. This can range from pet-like bots (like Sony’s famous robot dog) to conversational robots to (of course) sex bots.

Tech enthusiasts and the companies that are and will sell social robots are, unsurprisingly, quite positive about the future of social robots. There are, of course, some good arguments in their favor. Robot pets provide a good choice for people with allergies, who are not responsible enough for living pets, or who live in places that do not permit organic pets (although bans on robotic pets might be a thing in the future).

Robot companions can be advantageous in cases in which a person with special needs (such as someone who is ill, elderly or injured) requires round the clock attention and monitoring that would be expensive, burdensome or difficult for other humans to supply.

Sex bots could reduce the exploitation of human sex workers and perhaps have other benefits as well. I will leave this research to others, though.

Despite the potential positive aspects of social robots and social media, there are also negative aspects. As noted above, concerns are already being raised about the impact of technology on human interaction—people are emotionally shortchanging themselves and those they are physically with in favor of staying relentlessly connected to social media. This, obviously enough, seems to be a taste of what Asimov created in The Naked Sun: people who view, but no longer see one another. Given the apparent importance of human interaction in person, it can be argued that this social change is and will be detrimental to human well-being. To use an analogy, human-human social interactions can be seen as being like good nutrition: one is getting what one needs for healthy living. Interacting primarily through social media can be seen as being like consuming junk food or drugs—it is very addictive, but leaves one ultimately empty…yet always craving more.

It can be argued that this worry is unfounded—that social media is an adjunct to social interaction in the real world and that social interaction via things like Facebook and Twitter can be real and healthy social interactions. One might point to interactions via letters, telegraphs and telephones (voice only) to contend that interaction via technology is neither new nor unhealthy. It might also be pointed out that people used to ignore each other (especially professors) in favor of such things as newspapers.

While this counter does have some appeal, social robots do seem to be a different matter in that they are something new and rather radically different. While humans have had toys, stuffed animals and even simple mechanisms for non-living company, these are quite different from social robots. After all, social robots aim to effectively mimic or simulate animals or humans.

One concern about such robot companions is that they would be to social media what heroin is to marijuana in terms of addiction and destruction.

One reason for this is that social robots would, presumably, be designed to be cooperative, pleasant and compliant—that is, good company. In contrast, humans can often be uncooperative, unpleasant and defiant. This would make robotic companions rather more appealing than human company, at least those robots whose cost is not subsidized by advertising—imagine a companion who pops in a discussion of life insurance or pitches a soft drink every so often.

Social robots could also be programmed to be optimally appealing to a person and presumably the owner/user would be able to make changes to the robot. A person can, quite literally, make a friend with the desired qualities and without the undesired ones. In the case of sex bots, a person could purchase a Mr. or Ms. Right, at least in terms of some qualities.

Unlike humans, social robots do not have other interests, needs, responsibilities or friends—there is no competition for the attention of a social robot (at least in general, though there might be shared bots) which makes them “better” than human companions in this regard.

Social robots, though they might break down or get hacked, will not leave or betray a person. One does not have to worry that one’s personal sex bot will be unfaithful—just turn it off and lock it down when leaving it alone.

Unlike human companions, robot companions do not impose burdens—they do not expect attention, help or money and they do not judge.

The list of advantages could go on at great length, but it would seem that robotic companions would be superior to humans in most ways—at least in regards to common complaints about companions.

Naturally, there might be some practical issues with the quality of companionship—will the robot get one’s jokes, will it “know” what stories you like to hear, will it be able to converse in a pleasing way about topics you like and so on. However, these seem to be mostly technical problems involving software. Presumably all these could eventually be addressed and satisfactory companions could be created.

Since I have written specifically about sexbots in other essays, I will not discuss those here. Rather, I will discuss two potentially problematic aspects of companion bots.

One point of obvious concern is the potential psychological harm resulting from spending too much time with companion bots and not enough interacting with humans. As mentioned above, people have already expressed concern about the impact of social media and technology (one is reminded of the dire warnings about television). This, of course, rests on the assumption that the companion bots must be lacking in some important ways relative to humans. Going back to the food analogy, this assumes that robot companions are like junk food—superficially appealing but lacking in what is needed for health. However, if the robot companions could provide all that a human needs, then humans would no longer need other humans.

A second point of concern is stolen from the virtue theorists. Thinkers such as Aristotle and Wollstonecraft have argued that a person needs to fulfill certain duties and act in certain ways in order to develop the proper virtues. While Wollstonecraft wrote about the harmful effects of inherited wealth (that having unearned wealth interferes with the development of virtue) and the harmful effects of sexism (that women are denied the opportunity to fully develop their virtues as humans), her points would seem to apply to having only or primarily robot companions as well. These companions would make the social aspects of life too easy and deny people the challenges that are needed to develop the virtues. For example, it is by dealing with the shortcomings of people that we learn such virtues as patience, generosity and self-control. Having social interactions be too easy would be analogous to going without physical exercise or challenges—one becomes emotionally soft and weak. Worse, one would not develop the proper virtues and thus would be lacking in this area.  Even worse, people could easily become spoiled and selfish monsters, accustomed to always having their own way.

Since the virtue theorists argue that being virtuous is what makes people happy, having such “ideal” companions would actually lead to unhappiness. Because of this, one should carefully consider whether or not one wants a social robot for a “friend.”

It could be countered that social robots could be programmed to replicate the relevant human qualities needed to develop the virtues. The easy counter to this is that one might as well just stick with human companions.

As a final point, if intelligent robots are created that are people in the full sense of the term, then it would be fine to be friends with them. After all, a robot friend who will call you on your misdeeds or stupid behavior would be as good as a human friend who would do the same thing for you.

 


Robot Love I: Other Minds

Thanks to improvements in medicine, humans are living longer and can be kept alive well past the point at which they would naturally die. On the plus side, longer life is generally (but not always) good. On the downside, this longer lifespan and medical intervention mean that people will often need extensive care in their old age. This care can be a considerable burden on the caregivers. Not surprisingly, there has been an effort to develop a technological solution to this problem, specifically companion robots that serve as caregivers.

While the technology is currently fairly crude, there is clearly great potential here and there are numerous advantages to effective robot caregivers. The most obvious are that robot caregivers do not get tired, do not get depressed, do not get angry, and do not have any other responsibilities. As such, they can be ideal 24/7/365 caregivers. This makes them superior in many ways to human caregivers who get tired, get depressed, get angry and have many other responsibilities.

There are, of course, some concerns about the use of robot caregivers. Some relate to such matters as their safety and effectiveness, while others focus on the companionship they are meant to provide. In the case of caregiving robots that are intended to provide companionship and not just services such as medical care and housekeeping, there are both practical and moral concerns.

In regard to companion robots, there are at least two practical concerns about the companionship itself. The first is whether or not a human will accept a robot as a companion. In general, the answer seems to be that most humans will.

The second is whether or not the software will be advanced enough to properly read a human’s emotions and behavior in order to generate a proper emotional response. This response might or might not include conversation—after all, many people find non-talking pets to be good companions. While a talking companion would, presumably, need to eventually be able to pass the Turing Test, it would also need to pass an emotion test—that is, to read and respond correctly to human emotions. Since humans often botch this themselves, there would be a fairly broad tolerable margin of error here. These practical concerns can be addressed technologically—it is simply a matter of software and hardware. Building a truly effective companion robot might require making it very much like a living thing—the comfort of companionship might be improved by such things as smell, warmth and texture. That is, by making the companion appeal to all the senses.

While the practical problems can be solved with the right technology, there are some moral concerns with the use of robot caregiver companions. Some relate to people handing off their moral duties to care for their family members, but these are not specific to robots. After all, a person can hand off the duties to another person and this would raise a similar issue.

In regard to concerns specific to a companion robot, there is the moral question of the effectiveness of the care—that is, are the robots good enough that entrusting the life of an elderly or sick human to them would be morally responsible? While that question is important, a rather intriguing moral concern is that robot companions are a deceit.

Roughly put, the idea is that while a companion robot can simulate (fake) human emotions via cleverly written algorithms that respond to what its “emotion recognition software” detects, these responses are not genuine. While a robot companion might say the right things at the right times, it does not feel and does not care. It merely engages in mechanical behavior in accord with its software. As such, a companion robot is a deceit, and such a deceit seems to be morally wrong.

One obvious response is that people would realize that the robot does not really experience emotions, yet still gain value from its “fake” companionship. To use an analogy, people often find stuffed animals to be emotionally reassuring even though they are well aware that the stuffed animal is just fabric stuffed with fluff. What matters, it could be argued, is the psychological effect—if someone feels better with a robotic companion around, then that is morally fine. Another obvious analogy is the placebo effect: medicine need not be real in order to be effective.

It might be objected that there is still an important moral concern here: a robot, however well it fakes being a companion, does not suffice to provide the companionship that a person is morally entitled to. Roughly put, people deserve people, even when a robot would behave in ways indistinguishable from a human.

One way to reply to this is to consider what it is about people that makes human companionship something a person deserves. One reasonable approach is to build on the idea that people have the capacity to actually feel the emotions they display and to actually understand others. In philosophical terms, humans have (or are) minds, and robots (of the sort that will be possible in the near future) do not have minds. They merely create the illusion of having a mind.

Interestingly enough, philosophers (and psychologists) have long dealt with the problem of other minds. The problem is an epistemic one: how does one know if another being has a mind (thoughts, feelings, beliefs and such)? Some thinkers (which is surely the wrong term given their view) claimed that there is no mind, just observable behavior. Very roughly put, being in pain is not a mental state, but a matter of expressed behavior (pain behavior). While such behaviorism has been largely abandoned, it does survive in a variety of jokes and crude references to showing people some “love behavior.”

The usual “solution” to the problem is to go with the obvious: I conclude that other people have minds by an argument from analogy. I am aware of my own mental states and my behavior, and I engage in analogical reasoning to infer that those who act as I do have similar mental states. For example, I know how I react when I am in pain, so when I see similar behavior in others I infer that they are also in pain.

I cannot, unlike some politicians, feel the pain of others. I can merely make an inference from their observed behavior. Because of this, there is the problem of deception: a person can engage in many and various forms of deceit. For example, a person can fake being in pain or make a claim about love that is untrue. Piercing these deceptions can sometimes be very difficult since humans are often rather good at deceit. However, it is still (generally) believed that even a deceitful human is still thinking and feeling, albeit not in the way he wants people to believe he is thinking and feeling.

In contrast, a companion robot is not thinking or feeling what it is displaying in its behavior, because it does not think or feel. Or so it is believed. The reason that a person would think this seems reasonable: in the case of a robot, we can go in and look at the code and the hardware to see how it all works and we will not see any emotions or thought in there. The robot, however complicated, is just a material machine, incapable of thought or feeling.

Long before robots, there were thinkers who claimed that a human is a material entity and that a suitable understanding of the mechanical workings would reveal that emotions and thoughts are mechanical states of the nervous system. As science progressed, the explanations of the mechanisms became more complex, but the basic idea remained. Put in modern terms, the idea is that eventually we will be able to see the “code” that composes thoughts and emotions and understand the hardware it “runs” on.

Should this goal be achieved, it would seem that humans and suitably complex robots would be on par—both would engage in complex behavior because of their hardware and software. As such, there would be no grounds for claiming that such a robot engages in deceit while humans are genuine. The difference would merely be that humans are organic machines and robots are not.

It can, and has, been argued that there is more to a human person than the material body—that there is a mind that cannot be instantiated in a mere machine. The challenge is a very old one: proving that there is such a thing as the mind. If this can be established and it can be shown that robots cannot have such a mind, then robot companions would always be a deceit.

However, they might still be a useful deceit—going back to the placebo analogy, it might not matter whether the robot really thinks or feels. It might suffice that the person thinks it does and this will yield all the benefits of having a human companion.

 


Better to be Nothing?

There is an old legend that king Midas for a long time hunted the wise Silenus, the companion of Dionysus, in the forests, without catching him. When Silenus finally fell into the king’s hands, the king asked what was the best thing of all for men, the very finest. The daemon remained silent, motionless and inflexible, until, compelled by the king, he finally broke out into shrill laughter and said these words, “Suffering creature, born for a day, child of accident and toil, why are you forcing me to say what would give you the greatest pleasure not to hear? The very best thing for you is totally unreachable: not to have been born, not to exist, to be nothing. The second best thing for you, however, is this — to die soon.”

-Nietzsche, The Birth of Tragedy

One rather good metaphysical question is “why is there something rather than nothing?” An interesting question in the realm of value is “is it better to be nothing rather than something?” That is, is it better “not to have been born, not to exist, to be nothing?”

Addressing the question does require sorting out the measure of value that should be used to decide whether it is better to not exist or to exist. One stock approach is to use the crude currencies of pleasure and pain. A somewhat more refined approach is to calculate in terms of happiness and unhappiness. Or one could simply go generic and use the vague categories of positive value and negative value.

What also must be determined are the rules of the decision. For the individual, a sensible approach would be the theory of ethical egoism—that what a person should do is what maximizes the positive value for her. On this view, it would be better if the person did not exist if her existence would generate more negative than positive value for her. It would be better if the person did exist if her existence would generate more positive than negative value for her.

To make an argument that never existing is better than existing, one likely approach is to make use of the classic problem of evil as laid out by David Hume. When discussing this matter, Hume contends that everyone believes that life is miserable, and he lays out an impressive catalog of pains and evils. While he grants that pain may be less frequent than pleasure, he notes that even if this is true, pain “is infinitely more violent and durable.” As such, Hume makes a rather good case that the negative value of existence outweighs its positive value.

If it is true that the negative value outweighs the positive value, and better is measured in terms of maximizing value, then it would thus seem to be better to have never existed. After all, existence will result (if Hume is right) in more pain than pleasure. In contrast, non-existence will have no pain (and no pleasure) for a total of zero. Doing the value math, since zero is greater than a negative value, never existing is better than existing.
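To make the tally explicit, here is a minimal sketch of that value math in Python; the specific numbers are invented purely for illustration (nothing in the argument depends on them, only on the sign of the balance):

```python
# Toy "value math" for the argument above. The numbers are made up for
# illustration; the argument only needs the balance of a life to be negative.
pleasure = 40   # hypothetical total positive value of a life
pain = 70       # hypothetical total negative value (Hume: pain is "more violent and durable")

value_of_existing = pleasure - pain   # -30: a net-negative life, per Hume's pessimism
value_of_never_existing = 0           # no pleasure, no pain

print(value_of_never_existing > value_of_existing)  # True: on this tally, nothing beats something
```

Nothing in the sketch goes beyond the arithmetic already stated in the text; the substantive questions are whether a life’s balance really is negative and whether tallying is the right way to count.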

There does seem to be something a bit odd about this sort of calculation. After all, if the person does not exist, then her pleasure and pain would not balance to zero. Rather it would seem that this sum would be an undefined value. It cannot be better for a person that she not exist, since there would (obviously) not be anyone for the nonexistence to be better for.

This can be countered by saying that this is but a semantic trick—the nonexistence would be better than the existence because of the relative balance of pleasure and pain. There is also another approach—to broaden the calculation from the individual to the world.

In this case, the question would not be about whether it would be better for the individual to exist or not, but whether a world with the individual would be better than a world without the individual. If a consequentialist approach is assumed, if pain and pleasure are taken as the measure of value, and if pain outweighs pleasure in every life, then the world would be better if a person never existed. This is because the absence of an individual would reduce the overall pain.

Given these assumptions, a world with no humans at all would be a better world. This could be extended to its logical conclusion: if suffering outweighs pleasure for all beings (Hume did argue that the suffering of all creatures exceeds their enjoyments), then it would be better if no feeling creatures existed at all. At this point, one might as well do away with existence altogether and have nothing. Thus, while it might not be known why there is something rather than nothing, this argument would seem to show that it would be better to have nothing rather than something.

Of course, this reasoning rests on many assumptions that can easily be challenged. It can be argued that value should not be measured solely in terms of pleasures and pains—that is, even if life resulted in more pain than pleasure, the overall positive value could still be greater than the negative value. For example, the creation of art and the development of knowledge could provide value that outweighs the pain. It could also be argued that the consequentialist approach is in error—that estimating the worth of life is not just a matter of tallying up the negative and positive. There are, after all, many other moral theories regarding the value of existence. It is also possible to dispute the claim that pain exceeds pleasure (or that unhappiness exceeds happiness).

One could also take a long view—even if pain outweighs pleasure now, humans seem to be making a better world and advancing technology. As such, it is easy to imagine that a better world lies ahead, one that depends on our existence. That is, if one looks beyond the pleasure and pain of one’s own life and considers the future of humanity, the overall balance could very well be that the positive outweighs the negative. If so, it would be better for a person to exist, assuming that she has a role in the causal chain leading to that ultimate result.

 


Critical Thinking, Ethics & Science Journalism

As part of my critical thinking class, I cover the usual topics of credibility and experiments/studies. Since people often find critical thinking a dull subject, I regularly look for real-world examples that might be marginally interesting to students. As such, I was intrigued by John Bohannon’s detailed account of how he “fooled millions into thinking chocolate helps weight loss.”

Bohannon’s con provides an excellent cautionary tale for critical thinkers. First, he lays out in detail how easy it is to rig an experiment to get (apparently) significant results. As I point out to my students, a small experiment or study can generate results that seem significant, but really are not. This is why it is important to have an adequate sample size—as a starter. What is also needed is proper control, proper selection of the groups, and so on.
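To make the sample-size point concrete, here is a minimal simulation in Python (a sketch of the general mechanism, not Bohannon’s actual study or data): two small groups are drawn from the same population and compared on many unrelated outcomes, and more often than not at least one outcome comes out “significant” purely by chance. The group size and number of outcomes below are assumptions chosen for illustration.

```python
import random
import statistics

def sham_trial(n_per_group=8, n_outcomes=18):
    """One fake trial: two groups drawn from the SAME distribution (no real
    effect), compared on many unrelated outcomes with a crude z-style test.
    Returns True if any outcome looks 'significant'."""
    for _ in range(n_outcomes):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        diff = statistics.mean(a) - statistics.mean(b)
        se = (statistics.pvariance(a) / n_per_group +
              statistics.pvariance(b) / n_per_group) ** 0.5
        if abs(diff) > 1.96 * se:  # roughly the p < 0.05 cutoff
            return True
    return False

trials = 1000
hits = sum(sham_trial() for _ in range(trials))
print(f"{hits / trials:.0%} of sham trials produced a 'significant' finding")
# Typically well over half, even though both groups are identical by construction.
```

The point is not the exact percentage but the mechanism: with small groups and enough measured outcomes, a “significant” finding is nearly guaranteed even when nothing real is going on.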

Second, he provides a clear example of a disgraceful stain on academic publishing, namely “pay to publish” journals that do not engage in legitimate peer review. While some bad science does slip through peer review, these journals apparently publish almost anything—provided that the fee is paid. Since the journals have reputable sounding names and most people do not know which journals are credible and which are not, it is rather easy to generate a credible seeming journal publication. This is why I cover the importance of checking sources in my class.

Third, he details how various news outlets published or posted the story without making even perfunctory efforts to check its credibility. Not surprisingly, I also cover the media in my class both from the standpoint of being a journalist and being a consumer of news. I stress the importance of confirming credibility before accepting claims—especially when doing so is one’s job.

While Bohannon’s con does provide clear evidence of problems regarding corrupt journals, uncritical reporting and consumer credulity, the situation does raise some points worth considering. One is that while he might have “fooled millions” of people, he seems to have fooled relatively few journalists (13 out of the roughly 5,000 reporters who subscribe to the Newswise feed Bohannon used), and those who ran the story tended to be outlets like the Huffington Post and Cosmopolitan rather than what might be regarded as more serious health news sources. While it is not known why the other reporters did not run the story, it is worth considering that some of them did look at it critically and rejected it. In any case, the fact that a small number of reporters fell for a dubious story is hardly shocking. It is, in fact, just what would be expected given the long history of journalism.

Another point of concern is the ethics of engaging in such a con. It is possible to argue that Bohannon acted ethically. One way to do this is to note that using deceit to expose a problem can be justified on utilitarian grounds. For example, it seems morally acceptable for a journalist or police officer to use deceit and go undercover to expose criminal activity. As such, Bohannon could contend that his con was effectively an undercover operation—he and his fellows pretended to be the bad guys to expose a problem and thus his deceit was morally justified by the fact that it exposed problems.

One obvious objection to this is that Bohannon’s deceit did not just expose corrupt journals and incautious reporters. It also misinformed the audience who read or saw the stories. To be fair, the harm would certainly be fairly minimal—at worst, people who believed the story would consume dark chocolate and this is not exactly a health hazard. However, intentionally spreading such misinformation seems morally problematic—especially since story retractions or corrections tend to get far less attention than the original story.

One way to counter this objection is to draw an analogy to the exposure of flaws by hackers. These hackers reveal vulnerabilities in software with the stated intent of forcing companies to address the vulnerabilities. Exposing such vulnerabilities can do some harm by informing the bad guys, but the usual argument is that this is outweighed by the good done when the vulnerability is fixed.

While this analogy does have some appeal, there is the concern that the harm done might not outweigh the good done. In Bohannon’s case it could be argued that he has done more harm than good. After all, it is already well established that “pay to publish” journals are corrupt, that some journalists are incautious, and that some consumers are credulous. As such, Bohannon has not exposed anything new—he has merely added more misinformation to the pile.

It could be countered that although these problems are well known, it does help to continue to bring them to the attention of the public. Going back to the analogy of software vulnerabilities, it could be argued that if a vulnerability is exposed, but nothing is done to patch it, then the problem should be brought up until it is fixed, “for it is the doom of men that they forget.” Bohannon has certainly brought these problems into the spotlight and this might do more good than harm. If so, then this con would be morally acceptable—at least on utilitarian grounds.

 
