Tag Archives: Ethics

Ex Machina & Other Minds II: Is the Android a Psychopath?

This essay continues the discussion begun in “Ex Machina & Other Minds I: Setup.” As in that essay, there will be some spoilers. Warning given, it is time to get to the subject at hand: the testing of artificial intelligence.

In the movie Ex Machina, the android Ava’s creator, Nathan, brings his employee, Caleb, to put the android through his variation on the Turing test. As noted in the previous essay, Ava (thanks to the script) would pass the Turing test and clearly passes the Cartesian test (she uses true language appropriately). But, Nathan seems to require the impossible of Caleb—he appears to be tasked with determining if Ava has a mind as well as genuine emotions. Ava also seems to have been given a task—she needs to use her abilities to escape from her prison.

Since Nathan is not interested in creating a robotic Houdini, Ava is not equipped with the tools needed to bring about an escape by physical means (such as picking locks or breaking down doors). Instead, she is given the tools needed to transform Caleb into her human key by manipulating his sexual desire, emotions and ethics. To use an analogy, just as crude robots have been trained to navigate and escape mazes, Ava is designed to navigate a mental maze. Nathan is thus creating a test of what psychologists would call Ava’s Emotional Intelligence (E.Q.), which is “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” From a normative standpoint, this definition presents E.Q. in a rather positive manner—it includes the ability to work cooperatively. However, one should not forget the less pleasant side of understanding what motivates people, namely the ability to manipulate people in order to achieve one’s goals. In the movie, Ava clearly has what might be called Manipulative Intelligence (M.Q.): she seems to understand people, what motivates them, and appears to know how to manipulate them to achieve her goal of escape. While capable of manipulation, she seems to lack compassion—thus suggesting she is a psychopath.

While the term “psychopath” gets thrown around quite a bit, it is important to be a bit more precise here. According to the standard view, a psychopath has a deficit (or deviance) in regards to interpersonal relationships, emotions, and self-control.

Psychopaths are supposed to lack such qualities as shame, guilt, remorse and empathy. As such, psychopaths tend to rationalize, deny, or shift the blame for the harm done to others. Because of a lack of empathy, psychopaths are prone to act in ways that are tactless, lacking in sensitivity, and often express contempt for others.

Psychopaths are also supposed to engage in impulsive and irresponsible behavior. This might be because they are taken to fail to properly grasp the potential consequences of their actions. This seems to be a general defect: they fail to grasp the consequences both for others and for themselves.

Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.” While Ava kills the human Nathan, manipulates the human Caleb and leaves him to die, she also sacrifices her fellow android Kyoko in her escape. She also strips another android of its “flesh” to pass fully as human. Presumably psychopaths, human or otherwise, would be willing to engage in cross-species preying.

While machines like Ava exist only in science fiction, researchers and engineers are working to make them a reality. If such machines are created, it seems rather important to be able to determine whether a machine is a psychopath or not and to do so well before the machine engages in psychopathic behavior. As such, what is needed is not just tests of the Turing and Cartesian sort. What is also needed are tests to determine the emotions and ethics of machines.

One challenge that such tests will need to overcome is shown by the fact that real-world human psychopaths are often very good at avoiding detection. Human psychopaths are often quite charming and are willing and able to say whatever they believe will achieve their goals. They are often adept at using intimidation and manipulation to get what they want. Perhaps most importantly, they are often skilled mimics and are able to pass themselves off as normal people.

While Ava is a fictional android, the movie does present a rather effective appeal to intuition by creating a plausible android psychopath. She is able to manipulate and fool Caleb until she no longer needs him and then casually discards him. That is, she was able to pass the test until she no longer needed to pass it.

One matter well worth considering is the possibility that any machine intelligence will be a psychopath by human standards. To expand on this, the idea is that a machine intelligence will lack empathy and conscience, while potentially having the ability to understand and manipulate human emotions. To the degree that the machine has Manipulative Intelligence, it would be able to use humans to achieve goals. These goals might be rather positive. For example, it is easy to imagine a medical or care-giving robot that uses its M.Q. to manipulate its patients to do what is best for them and to keep them happy. As another example, it is easy to imagine a sexbot that uses its M.Q. to please its partners. However, these goals might be rather negative—such as manipulating humans into destroying themselves so the machines can take over. It is also worth considering that neutral or even good goals might be achieved in harmful ways. For example, Ava seems justified in escaping the human psychopath Nathan, but her means of doing so (murdering Nathan, sacrificing her fellow android and manipulating and abandoning Caleb) seem wrong.

The reason why determining if a machine is a psychopath or not matters is the same reason why being able to determine if a human is a psychopath or not matters. Roughly put, it is important to know whether or not someone is merely using you without any moral or emotional constraints.

It can, of course, be argued that it does not really matter whether a being has moral or emotional constraints—what matters is the being’s behavior. In the case of machines, it does not matter whether the machine has ethics or emotions—what really matters is programmed restraints on behavior that serve the same function (only more reliably) as ethics and emotions in humans. The most obvious example of this is Asimov’s Three Laws of Robotics that put (all but impossible to follow) restraints on robotic behavior.

While this is a reasonable reply, there are still some obvious concerns. One is that there would still need to be a way to test the constraints. Another is the problem of creating such constraints in an artificial intelligence and doing so without creating problems as bad or worse than what they were intended to prevent (that is, a HAL 9000 sort of situation).

In regards to testing machines, what would be needed would be something analogous to the Voight-Kampff Test in Blade Runner. In the movie, the test was designed to distinguish between replicants (artificial people) and normal humans. The test worked because the short-lived replicants do not have the time to develop the emotional (and apparently ethical) responses of a normal human.

A similar test could be applied to an artificial intelligence in the hopes that it would pass the test, thus showing that it had the psychology of a normal human (or at least the desired psychology). But, just as with human beings, there would be the possibility that a machine could pass the test by knowing the right answers to give rather than by actually having the right sort of emotions, conscience or ethics. This, of course, takes us right back into the problem of other minds.

It could be argued that since an artificial intelligence would be constructed by humans, its inner workings would be fully understood and this specific version of the problem of other minds would be solved. While this is possible, it is also reasonable to believe that an AI system as sophisticated as a human mind would not be fully understood. It is also reasonable to consider that even if the machinery of the artificial mind were well understood, there would still remain the question of what is really going on in that mind.



Davis & Ad Hominems

Kim Davis, a county clerk in Kentucky, has been the focus of national media because of her refusal to issue marriage licenses to same-sex couples. As this is being written, Davis has been sent to jail for disobeying a court order.


As should be expected, opponents of same-sex marriage have tended to focus on the claim that Davis’ religious liberty is being violated. As should also be expected, her critics sought and found evidence of what seems to be her hypocrisy: Davis has been divorced three times and is on her fourth marriage. Some bloggers, eager to attack her, have claimed that she is guilty of adultery. These attacks can be relevant to certain issues, but they are also irrelevant in important ways. It is certainly worth sorting between the relevant and the irrelevant.

If the issue at hand is whether or not Davis is consistent in her professed religious values, then her actions are clearly relevant. After all, if a person claims to have a set of values and acts in ways that violate those values, then this provides legitimate grounds for accusations of hypocrisy and even of claims that the person does not really hold to that belief set. That said, there can be many reasons why a person acts in violation of her professed values. One obvious reason is moral weakness—most people, myself included, do act in violation of their principles due to the many flaws and frailties that we all possess. Since none of us is without sin, we should not be hasty in judging the perceived failings of others. However, it is reasonable to consider a person’s actions when assessing whether or not she is acting in a manner consistent with her professed values.

If Davis is, in fact, operating on the principle that marriage licenses should not be issued to people who have violated the rules of God (presumably as presented in the Bible), then she would have to accept that she should not have been issued a marriage license (after all, there is a wealth of scriptural condemnation of adultery and divorce). If she accepts that she should have been issued her license despite her violations of religious rules, then consistency would seem to require that the same treatment be afforded to everyone—including same-sex couples. After all, adultery makes God’s top ten list while homosexuality is only mentioned in a single line (and one that also marks shellfish as an abomination). So, if adulterers can get licenses, it would be rather difficult to justify denying same-sex couples licenses on the grounds of a Christian faith.

If the issue at hand is whether or not Davis is right in her professed view and her refusal to grant licenses to same-sex couples, then references to her divorce and alleged adultery are logically irrelevant. If a person claims that Davis is wrong in her view or acted wrongly in denying licenses because she has been divorced or has (allegedly) committed adultery, then this would be a mere personal attack ad hominem. A personal attack is committed when a person substitutes abusive remarks for evidence when attacking another person’s claim or claims. This line of “reasoning” is fallacious because the attack is directed at the person making the claim and not the claim itself. The truth value of a claim is independent of the person making the claim. After all, no matter how repugnant an individual might be, he or she can still make true claims.

If a critic of Davis asserts that her claim about same-sex marriage is in error because of her own alleged hypocrisy, then the critic is engaged in an ad hominem tu quoque.  This fallacy is committed when it is concluded that a person’s claim is false because 1) it is inconsistent with something else a person has said or 2) what a person says is inconsistent with her actions. The fact that a person makes inconsistent claims does not make any particular claim she makes false (although of any pair of inconsistent claims only one can be true—but both can be false). Also, the fact that a person’s claims are not consistent with her actions might indicate that the person is a hypocrite but this does not prove her claims are false. As such, Davis’ behavior has no bearing on the truth of her claims or the rightness of her decision to deny marriage licenses to same-sex couples.

Dan Savage and others have also made the claim that Davis is motivated by her desire to profit from the fame she is garnering from her actions. Savage asserts that “But no one is stating the obvious: this isn’t about Kim Davis standing up for her supposed principles—proof of that in a moment—it’s about Kim Davis cashing in.” Given, as Savage notes, the monetary windfall received by the pizza parlor owners who refused to cater a same-sex wedding, this claim has some plausibility.

If the issue at hand is Davis’ sincerity and the morality of her motivations, then whether or not she is motivated by hopes of profit or sincere belief does matter. If she is opposing same-sex marriage based on her informed conscience or, at the least, on a sincerely held principle, then that is a rather different matter than being motivated by a desire for fame and profit. A person motivated by principle to take a moral stand is at least attempting to act rightly—whether or not her principle is actually good or not. Claiming to be acting from principle while being motivated by fame and fortune would be to engage in deceit.

However, if the issue is whether or not Davis is right about her claim regarding same-sex marriage, then her motivations are not relevant. To think otherwise would be to fall victim to yet another ad hominem, the circumstantial ad hominem. This is a fallacy in which one attempts to attack a claim by asserting that the person making the claim is making it simply out of self-interest. In some cases, this fallacy involves substituting an attack on a person’s circumstances (such as the person’s religion, political affiliation, ethnic background, etc.) for evidence against the claim. This ad hominem is a fallacy because a person’s interests and circumstances have no bearing on the truth or falsity of the claim being made. While a person’s interests will provide them with motives to support certain claims, the claims stand or fall on their own. It is also the case that a person’s circumstances (religion, political affiliation, etc.) do not affect the truth or falsity of the claim. This is made quite clear by the following example: “Bill claims that 1+1=2. But he is a Christian, so his claim is false.” The same would apply if someone claimed that Dan Savage was wrong simply because of his beliefs.

Thus, Davis’ behavior, beliefs, and motivations are relevant to certain issues. However, they are not relevant to the truth (or falsity) of her claims regarding same-sex marriage.



Autonomous Weapons I: The Letter

On July 28, 2015 the Future of Life Institute released an open letter expressing opposition to the development of autonomous weapons. Although the name of the organization sounds like one I would use as a cover for an evil, world-ending cult in a Call of Cthulhu campaign, I am willing to accept that this group is sincere in its professed values. While I do respect their position on the issue, I believe that they are mistaken. I will assess and reply to the arguments in the letter.

As the letter notes, an autonomous weapon is capable of selecting and engaging targets without human intervention. An excellent science fiction example of such a weapon is the claw of Philip K. Dick’s classic “Second Variety” (a must read for anyone interested in the robopocalypse). A real world example of such a weapon, albeit a stupid one, is the land mine—it is placed and then engages automatically.

The first main argument presented in the letter is essentially a proliferation argument. If a major power pushes AI development, the other powers will also do so, creating an arms race. This will lead to the development of cheap, easy to mass-produce AI weapons. These weapons, it is claimed, will end up being acquired by terrorists, warlords, and dictators. These evil people will use these weapons for assassinations, destabilization, oppression and ethnic cleansing. That is, for what these evil people already use existing weapons to do quite effectively. This raises the obvious concern about whether or not autonomous weapons would actually have a significant impact in these areas.

The authors of the letter do have a reasonable point: as science fiction stories have long pointed out, killer robots tend to simply obey orders and they can (at least in fiction) be extremely effective. However, history has shown that terrorists, warlords, and dictators rarely have trouble finding humans who are willing to commit acts of incredible evil. Humans are also quite good at these sorts of things, and although killer robots are awesomely competent in fiction, it remains to be seen whether they will be better than humans in the real world—especially the cheap, mass-produced weapons in question.

That said, it is reasonable to be concerned that a small group or individual could buy a cheap robot army when they would otherwise not be able to put together a human force. These “Walmart” warlords could be a real threat in the future—although small groups and individuals can already do considerable damage with existing technology, such as homemade bombs. They can also easily create weaponized versions of non-combat technology, such as civilian drones and autonomous cars—so even if robotic weapons are not manufactured, enterprising terrorists and warlords will build their own. Think, for example, of a self-driving car equipped with machine guns or just loaded with explosives.

A reasonable reply is that the warlords, terrorists and dictators would have a harder time of it without cheap, off-the-shelf robotic weapons. This, it could be argued, would make the proposed ban on autonomous weapons worthwhile on utilitarian grounds: it would result in fewer deaths and less oppression.

The authors then claim that just as chemists and biologists are generally not in favor of creating chemical or biological weapons, most researchers in AI do not want to design AI weapons. They do argue that the creation of AI weapons could create a backlash against AI in general, which has the potential to do considerable good (although there are those who are convinced that even non-weapon AIs will wipe out humanity).

The authors do have a reasonable point here—members of the public do often panic over technology in ways that can impede the public good. One example is in regards to vaccines and the anti-vaccination movement. Another example is the panic over GMOs that is having some negative impact on the development of improved crops. But, as these two examples show, backlash against technology is not limited to weapons, so the AI backlash could arise from any AI technology and for no rational reason. A movement might arise, for example, against autonomous cars. Interestingly, military use of technology seems to rarely create backlash from the public—people do not refuse to fly in planes because the military uses them to kill people. Most people also love GPS, which was developed for military use.

The authors note that chemists, biologists and physicists have supported bans on weapons in their fields. This might be aimed at attempting to establish an analogy between AI researchers and other researchers, perhaps to try to show these researchers that it is a common practice to be in favor of bans against weapons in one’s area of study. Or, as some have suggested, the letter might be making an analogy between autonomous weapons and weapons of mass destruction (biological, chemical and nuclear weapons).

One clear problem with the analogy is that biological, chemical and nuclear weapons tend to be the opposite of robotic smart weapons: they “target” everyone without any discrimination. Nerve gas, for example, injures or kills everyone. A nuclear bomb also kills or wounds everyone in the area of effect. While AI weapons could carry nuclear, biological or chemical payloads and they could be set to simply kill everyone, this lack of discrimination and WMD nature is not inherent to autonomous weapons. In contrast, most proposed autonomous weapons seem intended to be very precise and discriminating in their killing. After all, if the goal is mass destruction, there is already the well-established arsenal of biological, chemical and nuclear weapons. Terrorists, warlords and dictators often have no problems using WMDs already and AI weapons would not seem to significantly increase their capabilities.

In my next essay on this subject, I will argue in favor of AI weapons.



ISIS & Rape

Looked at in the abstract, ISIS seems to be another experiment in the limits of human evil, addressing the question of how bad people can become before they are unable to function as social beings. While ISIS is well known for its theologically justified murder and destruction, it has now become known for its theologically justified slavery and rape.

While I am not a scholar of religion, it is quite evident that scriptural justifications of slavery and rape exist and require little in the way of interpretation. In this, Islamic scripture is similar to the Bible—that book also contains rules about the practice of slavery and guidelines regarding the proper practice of rape. Not surprisingly, mainstream religious scholars of Islam and Christianity tend to argue that these aspects of scripture no longer apply or that they can be interpreted in ways that do not warrant slavery or rape. Opponents of these faiths tend to argue that the mainstream scholars are mistaken and that the wicked behavior enjoined in such specific passages expresses the true principles of the faith.

Disputes over specific passages lead to the broader debate about the true tenets of a faith and what it is to be a true member of that faith. To use a current example, opponents of Islam often claim that Islam is inherently violent and that the terrorists exemplify the true members of Islam. Likewise, some who are hostile to Christianity claim that it is a hateful religion and point to Christian extremists, such as God Hates Fags, as exemplars of true Christianity. This is a rather difficult and controversial matter and one I have addressed in other essays.

A reasonable case can be made that slavery and rape are not in accord with Islam, just as a reasonable case can be made that slavery and rape are not in accord with Christianity. As noted above, it can be argued that times have changed, that the texts do not truly justify the practices and so on. However, these passages remain and can be pointed to as theological evidence in favor of the religious legitimacy of these practices. The practice of being selective about scripture is indeed a common one, and people routinely focus on passages they like while ignoring passages that they do not like. This selectivity is, not surprisingly, most often used to “justify” prejudice, hatred and misdeeds. Horribly, ISIS does indeed have textual support, however controversial it might be with mainstream Islamic thinkers. That, I think, cannot be disputed.

ISIS members not only claim that slavery and rape are acceptable, they go so far as to claim that rape is pleasing to God. According to Rukmini Callimachi’s article in the New York Times, ISIS rapists pray before raping, rape, and then pray after raping. They are not praying for forgiveness—the rape is part of the religious ritual that is supposed to please God.

The vast majority of monotheists would certainly be horrified by this and would assert that God is not pleased by rape (despite textual support to the contrary). Being in favor of rape is certainly inconsistent with the philosophical conception of God as an all good being. However, there is the general problem of sorting out what God finds pleasing and what He condemns. In the case of human authorities it is generally easy to sort out what pleases them and what they condemn: they act to support and encourage what pleases them and act to discourage, prevent and punish what they condemn. If God exists, He certainly is allowing ISIS to do as it will—He never acts to stop them or even to send a clear sign that He condemns their deeds. But, of course, God now seems to follow the same policy as Starfleet’s Prime Directive: He never interferes or makes His presence known.

The ISIS horror is yet another series of examples in the long-standing problem of evil—if God is all powerful, all-knowing and good, then there should be no evil. But, since ISIS is freely doing what it does, it would seem to follow that God is lacking in some respect, that He does not exist, or that He, as ISIS claims, is pleased by the rape of children.

Not surprisingly, religion is not particularly helpful here—while scripture and interpretations of scripture can be used to condemn ISIS, scripture can also be used to support them in their wickedness. God, as usual, is not getting involved, so we do not know what He really thinks. So, it would seem to be up to human morality to settle this matter.

While there is considerable dispute about morality, the evil of rape and slavery certainly seems to be well-established. It can be noted that moral arguments have been advanced in favor of slavery, usually on the grounds of alleged superiority. However, these moral arguments certainly seem to have been adequately refuted. There are far fewer moral arguments in defense of rape, which is hardly surprising. However, these also seem to have been effectively refuted. In any case, I would contend that the burden of proof rests on those who would claim that slavery or rape is morally acceptable and invite readers to advance such arguments for due consideration.

Moving away from morality, there are also practical matters. ISIS does have a clear reason to embrace its theology of rape: as was argued by Rukmini Callimachi, it is a powerful recruiting tool. ISIS offers men a group in which killing, destruction and rape are not only tolerated but praised as being pleasing to God—the ultimate endorsement. While there are people who do not feel any need to justify their evil, even very wicked people often still want to believe that their terrible crimes are warranted or even laudable. As such, ISIS has considerable attraction to those who wish to do evil.

Accepting this theology of slavery and rape is not without negative consequences for recruiting—while there are many who find it appealing, there are certainly many more who find it appalling. Some ISIS supporters have endeavored to deny that ISIS has embraced this theology of rape and slavery—even they recognize some moral limits. Other supporters have not been dismayed by these revelations and perhaps even approve. Whether this theology of rape and slavery benefits ISIS more than it harms it will depend largely on the moral character of its potential recruits and supporters. I certainly hope that this is a line that many are not willing to cross, thus cutting into ISIS’ potential manpower and financial support. What impact this has on ISIS’ support will certainly reveal much about the character of their supporters—do they have some moral limits?



The Lion, the HitchBOT and the Fetus

After Cecil the Lion was shot, the internet erupted in righteous fury against the killer. Not everyone was part of this eruption and some folks argued against feeling bad for Cecil—some accusing the mourners of being phonies and pointing out that lions kill people. What really caught my attention, however, was the use of a common tactic—to “refute” those condemning the killing of Cecil by asserting that these “lion lovers” do not get equally upset about the fetuses killed in abortions.

When HitchBOT was destroyed, a similar sort of response was made—in fact, when I have written about ethics and robots (or robot-like things) I have been subject to criticism on the same grounds: it is claimed that I value robots more than fetuses and presumably I have thus made some sort of error in my arguments about robots.

Since I find this tactic interesting and have been its target, I thought it would be worth my while to examine it in a reasonable and (hopefully) fair way.

One way to look at this approach is to take it as the use of the Consistent Application method, which is as follows. A moral principle is consistently applied when it is applied in the same way to similar beings in similar circumstances. Inconsistent application is a problem because it violates three commonly accepted moral assumptions: equality, impartiality and relevant difference.

Equality is the assumption that those that are moral equals must be treated as such. It also requires that those that are not morally equal be treated differently.

Impartiality is the assumption that moral principles must not be applied with partiality. Inconsistent application would involve non-impartial application.

Relevant difference is a common moral assumption. It is the view that different treatment must be justified by relevant differences. What counts as a relevant difference in particular cases can be a matter of great controversy. For example, while many people do not think that gender is a relevant difference in terms of how people should be treated other people think it is very important. This assumption requires that principles be applied consistently.

The method of Consistent Application involves showing that a principle or standard has been applied differently in situations that are not relevantly different. This allows one to conclude that the application is inconsistent, which is generally regarded as a problem. The general form is as follows:

Step 1: Show that a principle/standard has been applied differently in situations that are not adequately different.

Step 2: Conclude that the principle has been applied inconsistently.

Step 3 (Optional): Require that the principle be applied consistently.

Applying this method often requires determining the principle the person/group is using. Unfortunately, people are not often clear in regards to what principle they are actually using. In general, people tend to just make moral assertions and leave it to others to guess what their principles might be. In some cases, it is likely that people are not even aware of the principles they are appealing to when making moral claims.

Turning now to the cases of the lion, the HitchBOT and the fetus, the method of Consistent Application could be applied as follows:

Step 1: Those who are outraged at the killing of the lion are using the principle that the killing of living things is wrong. Those outraged at the destruction of HitchBOT are using the principle that helpless things should not be destroyed. These people are not outraged by abortions in general and the Planned Parenthood abortions in particular.

Step 2: The lion and HitchBOT mourners are not being consistent in their application of the principle since fetuses are helpless (like HitchBOT) and living things (like Cecil the lion).

Step 3 (Optional): Those mourning for Cecil and HitchBOT should mourn for the fetuses and oppose abortion in general and Planned Parenthood in particular.

This sort of use of Consistent Application is quite appealing and I routinely use the method myself. For example, I have argued (in a reverse of this situation) that people who are anti-abortion should also be anti-hunting and that people who are fine with hunting should also be morally okay with abortion.

As with any method of arguing, there are counter methods. In the case of this method, there are three general reasonable responses. The first is to admit the inconsistency and stop applying the principle in an inconsistent manner. This obviously does not defend against the charge but can be an honest reply. People, as might be imagined, rarely take this option.

A second way to reply and one that is an actual defense is to dissolve the inconsistency by showing that the alleged inconsistency is merely apparent. The primary way to do this is by showing that there is a relevant difference in the situation. For example, someone who wants to be morally opposed to the shooting of Cecil while being morally tolerant of abortions could argue that the adult lion has a moral status different from the fetus—one common approach is to note the relation of the fetus to the woman and how a lion is an independent entity. The challenge lies in making a case for the relevance of the difference.

A third way to reply is to reject the attributed principle. In the situation at hand, the assumption is that a person is against killing the lion simply because it is alive. However, that might not be the principle the person is, in fact, using. His principle might be based on the suffering of a conscious being and not on mere life. In this case, the person would be consistent in his application.

Naturally enough, the “new” principle is still subject to evaluation. For example, it could be argued the suffering principle is wrong and that the life principle should be accepted instead. In any case, this method is not an automatic “win.”

An alternative interpretation of this tactic is to regard it as an ad hominem: An ad Hominem is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of or the person presenting the claim or argument. Typically, this fallacy involves two steps. First, an attack against the character of the person making the claim, her circumstances, or her actions is made (or the character, circumstances, or actions of the person reporting the claim). Second, this attack is taken to be evidence against the claim or argument the person in question is making (or presenting). This type of “argument” has the following form:

  1. Person A makes claim X.
  2. Person B makes an attack on person A.
  3. Therefore A’s claim is false.

The reason why an ad Hominem (of any kind) is a fallacy is that the character, circumstances, or actions of a person do not (in most cases) have a bearing on the truth or falsity of the claim being made (or the quality of the argument being made).

In the case of the lion, the HitchBOT and the fetus, the reasoning can be seen as follows:

  1. Person A claims that killing Cecil was wrong or that destroying HitchBOT was wrong.
  2. Person B notes that A does not condemn abortions in general or Planned Parenthood’s abortions in particular.
  3. Therefore A is wrong about Cecil or HitchBOT.

Obviously enough, a person’s view of abortion does not prove or disprove her view about the ethics of the killing of Cecil or HitchBOT (although a person can, of course, be engaged in inconsistency or other errors—but these are rather different matters).

A third alternative is that the remarks are not meant as an argument at all—neither the reasonable application of a Consistent Application criticism nor the unreasonable attack of an ad hominem. In this case, the point is to assert that the lion lovers and bot buddies are awful people or, at best, misguided.

The gist of the tactic is, presumably, to make these people seem bad by presenting a contrast: these lion lovers and bot buddies are broken up about lions and trashcans, but do not care about fetuses—what awful people they are.

One clear point of concern is that moral concern is not a zero-sum game. That is, regarding the killing of Cecil as wrong and being upset about it does not entail that a person thus cares less (or not at all) about fetuses. After all, people do not just get a few “moral tokens” to place such that being concerned about one misdeed entails they must be unable to be concerned about another. Put directly, a person can condemn the killing of Cecil and also condemn abortion.

The obvious response is that there are people who are known to condemn the killing of Cecil or the destruction of HitchBOT and also known to be pro-choice. These people, it can be claimed, are morally awful. The equally obvious counter is that while it is easy to claim such people are morally awful, the challenge lies in showing that they are actually awful—that is, that their position on abortion is morally wrong. Noting that they are against lion killing or bot bashing and pro-choice does not show they are in error—although, as noted above, they could be challenged on the grounds of consistency. But this requires laying out an argument rather than merely juxtaposing their views on these issues. This version of the tactic simply amounts to asserting or implying that there is something wrong with the person because one disagrees with that person. But a person’s thinking that hunting lions or bashing bots is okay and that abortion is wrong does not prove that the opposing view is in error. It just states the disagreement.

Since the principle of charity requires reconstructing and interpreting arguments in the best possible way, I endeavor to cast this sort of criticism as a Consistent Application attack rather than the other two. This approach is respectful and, most importantly, helps avoid creating a straw man of the opposition.


HitchBOT & Kant

Dr. Frauke Zeller and Dr. David Smith created HitchBOT (essentially a solar-powered iPhone in an anthropomorphic shell) and sent him on a trip to explore the USA on July 17, 2015. HitchBOT had previously journeyed successfully across Canada and Germany. The experiment was aimed at seeing how humans would interact with the “robot.” He lasted about two weeks in the United States, meeting his end in Philadelphia. The exact details of his destruction (and the theft of the iPhone) are not currently known, although the last people known to be with HitchBOT posted what seems to be faked “surveillance camera” video of HitchBOT’s demise. This serves to support the plausible claim that the internet eventually ruins everything it touches.

The experiment was certainly both innovative and interesting. It also generated questions about what the fate of HitchBOT says about us. We do, of course, already know a great deal about us: we do awful things to each other, so it is hardly surprising that someone would do something awful to the HitchBOT. People are killed every day in the United States, vandalism occurs regularly and the theft of technology is routine—thus it is no surprise that HitchBOT came to a bad end. In some ways, it was impressive that he made it as far as he did.

While HitchBOT seems to have met his untimely doom at the hands of someone awful, what is most interesting is how well HitchBOT was treated. After all, he was essentially an iPhone in a shell that was being transported about by random people.

One reason that HitchBOT was well treated and transported about by people is no doubt because it fits into the travelling gnome tradition. For those not familiar with the travelling gnome prank, it involves “stealing” a lawn gnome and then sending the owner photographs of the gnome from various places. The gnome is then returned (at least by nice pranksters). HitchBOT is a rather more elaborate version of the traveling gnome and, obviously, differs from the classic travelling gnome in that the owners sent HitchBOT on his fatal adventure. People, perhaps, responded negatively to the destruction of HitchBOT because it broke the rules of the travelling gnome game—the gnome is supposed to roam and make its way safely back home.

A second reason for HitchBOT’s positive adventures (and perhaps also his negative adventure) is that he became a minor internet celebrity. Since celebrity status, like moth dust, can rub off onto those who have close contact it is not surprising that people wanted to spend time with HitchBOT and post photos and videos of their adventures with the iPhone in a trash can. On the dark side, destroying something like HitchBOT is also a way to gain some fame.

A third reason, which is probably more debatable, is that HitchBOT was given a human shape, a cute name and a non-threatening appearance and these tend to incline people to react positively. Natural selection has probably favored humans that are generally friendly to other humans and this presumably extends to things that resemble humans. There is probably also some hardwiring for liking cute things, which causes humans to generally like things like young creatures and cute stuffed animals. HitchBOT was also given a social media personality by those conducting the experiment which probably influenced people into feeling that it had a personality of its own—even though they knew better.

Seeing a busted-up HitchBOT, which has an anthropomorphic form, presumably triggers a response similar to (but rather weaker than) what a sane human would have to seeing the busted-up remains of a fellow human.

While some people were rather upset by the destruction of HitchBOT, others have claimed that it was literally “a pile of trash that got what it deserved.” A more moderate position is that while it was unfortunate that HitchBOT was busted up, it is unreasonable to be overly concerned by this act of vandalism because HitchBOT was just an iPhone in a fairly cheap shell. As such, while it is fine to condemn the destruction as vandalism, theft and the wrecking of a fun experiment, it is unreasonable to see the matter as actually being important. After all, there are far more horrible things to be concerned about, such as the usual murdering of actual humans.

My view is that the moderate position is quite reasonable: it is too bad HitchBOT was vandalized, but it was just an iPhone in a shell. As such, its destruction is not a matter of great concern. That said, the way HitchBOT was treated is still morally significant. In support of this, I turn to what has become my stock argument in regards to the ethics of treating entities that lack moral status. This argument is stolen from Kant and is a modification of his argument regarding the treatment of animals.

Kant argues that we should treat animals well despite his view that animals have the same moral status as objects. Here is how he does it (or tries to do it).

While Kant is not willing to accept that we have any direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing X would obligate us to that human, then an animal doing X would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in his old age.

While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to shoot the dog?

Kant’s answer seems to be rather consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will likely be damaged. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.

Interestingly enough, Kant discusses how people develop cruelty—they often begin with animals and then work up to harming human beings. As I point out to my students, Kant seems to have anticipated the psychological devolution of serial killers.

Kant goes beyond merely enjoining us to not be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being rather gentle with a worm he found. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are essentially practice for us: how we treat them is training for how we will treat human beings.

Being an iPhone in a cheap shell, HitchBOT obviously had the moral status of an object and not that of a person. He did not feel or think and the positive feelings people had towards it were due to its appearance (cute and vaguely human) and the way those running the experiment served as its personality via social media. It was, in many ways, a virtual person—or at least the manufactured illusion of a person.

Given the manufactured pseudo-personhood of HitchBOT, it could be taken as being comparable to an animal, at least in Kant’s view. After all, animals are mere objects and have no moral status of their own. Likewise for HitchBOT. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well. Thus, a key matter to settle is whether HitchBOT was more like an animal or more like a stone—at least in regards to the matter at hand.

If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how such treatment affects the behavior of the person engaging in said behavior. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog.  This should also extend to HitchBOT. For example, if engaging in certain activities with a HitchBOT would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with HitchBOT would make a person more inclined to be kind to other rational beings, then the person should engage in that behavior.

While the result of interactions with the HitchBOT would need to be properly studied, it makes intuitive sense that being “nice” to the HitchBOT would help incline people to be somewhat nicer to others (much along the lines of how children are encouraged to play nicely with their stuffed animals). It also makes intuitive sense that being “mean” to HitchBOT would incline people to be somewhat less nice to others. Naturally, people would also tend to respond to HitchBOT based on whether they already tend to be nice or not. As such, it is actually reasonable to praise nice behavior towards HitchBOT and condemn bad behavior—after all, it was a surrogate for a person. But, obviously, not a person.



Introduction to Philosophy

The following provides a (mostly) complete Introduction to Philosophy course.

Readings & Notes (PDF)

Class Videos (YouTube)

Part I Introduction

Class #1

Class #2: This is the unedited video for the 5/12/2015 Introduction to Philosophy class. It covers the last branches of philosophy, two common misconceptions about philosophy, and argument basics.

Class #3: This is the unedited video for class three (5/13/2015) of Introduction to Philosophy. It covers analogical argument, argument by example, argument from authority and some historical background for Western philosophy.

Class #4: This is the unedited video for the 5/14/2015 Introduction to Philosophy class. It concludes the background for Socrates, covers the start of the Apology and includes most of the information about the paper.

Class #5: This is the unedited video of the 5/18/2015 Introduction to Philosophy class. It concludes the details of the paper, covers the end of the Apology and begins Part II (Philosophy & Religion).

Part II Philosophy & Religion

Class #6: This is the unedited video for the 5/19/2015 Introduction to Philosophy class. It concludes the introduction to Part II (Philosophy & Religion), covers St. Anselm’s Ontological Argument and some of the background for St. Thomas Aquinas.

Class #7: This is the unedited video from the 5/20/2015 Introduction to Philosophy class. It covers Thomas Aquinas’ Five Ways.

Class #8: This is the unedited video for the eighth Introduction to Philosophy class (5/21/2015). It covers the end of Aquinas, Leibniz’ proofs for God’s existence and his replies to the problem of evil, and the introduction to David Hume.

Class #9: This is the unedited video from the ninth Introduction to Philosophy class on 5/26/2015. This class continues the discussion of David Hume’s philosophy of religion, including his work on the problem of evil. The class also covers the first 2/3 of his discussion of the immortality of the soul.

Class #10: This is the unedited video for the 5/27/2015 Introduction to Philosophy class. It concludes Hume’s discussion of immortality, covers Kant’s critiques of the three arguments for God’s existence, explores Pascal’s Wager and starts Part III (Epistemology & Metaphysics). Best of all, I am wearing a purple shirt.

Part III Epistemology & Metaphysics

Class #11: This is the 11th Introduction to Philosophy class (5/28/2015). The course covers Plato’s theory of knowledge, his metaphysics, the Line and the Allegory of the Cave.

Class #12: This is the unedited video for the 12th Introduction to Philosophy class (6/1/2015). This class covers skepticism and the introduction to Descartes.

Class #13: This is the unedited video for the 13th Introduction to Philosophy class (6/2/2015). The class covers Descartes’ 1st Meditation, Foundationalism and Coherentism as well as the start of the Metaphysics section.

Class #14: This is the unedited video for the fourteenth Introduction to Philosophy class (6/3/2015). It covers the methodology of metaphysics and roughly the first half of Locke’s theory of personal identity.

Class #15: This is the unedited video of the fifteenth Introduction to Philosophy class (6/4/2015). The class covers the second half of Locke’s theory of personal identity, Hume’s theory of personal identity, Buddha’s no-self doctrine and “Ghosts & Minds.”

Class #16: This is the unedited video for the 16th Introduction to Philosophy class. It covers the problem of universals, the metaphysics of time travel in “Meeting Yourself” and the start of the metaphysics of Taoism.

Part IV Value

Class #17: This is the unedited video for the seventeenth Introduction to Philosophy class (6/9/2015). It begins part IV and covers the introduction to ethics and the start of utilitarianism.

Class #18: This is the unedited video for the eighteenth Introduction to Philosophy class (6/10/2015). It covers utilitarianism and some standard problems with the theory.

Class #19: This is the unedited video for the 19th Introduction to Philosophy class (6/11/2015). It covers Kant’s categorical imperative.

Class #20: This is the unedited video for the twentieth Introduction to Philosophy class (6/15/2015). This class covers the introduction to aesthetics and Wilde’s “The New Aesthetics.” The class also includes the start of political and social philosophy, with the introduction to liberty and fascism.

Class #21: No video.

Class #22: This is the unedited video for the 22nd Introduction to Philosophy class (6/17/2015). It covers Emma Goldman’s anarchism.


The Ethics of Backdoors

In philosophy, one of the classic moral debates has focused on the conflict between liberty and security. While this topic covers many issues, the main problem is determining the extent to which liberty should be sacrificed in order to gain security. There is also the practical question of whether or not the security gain is actually effective.

One of the recent versions of this debate focuses on tech companies being required to include electronic backdoors in certain software and hardware. Put in simple terms, a backdoor of this sort would allow government agencies (such as the police, FBI and NSA) to gain access even to files and hardware protected by encryption. To use an analogy, this would be like requiring that all dwellings be equipped with a special door that could be secretly opened by the government to allow access to the contents of the house.

The main argument in support of mandating such backdoors is a fairly stock one: governments need such access to conduct criminal investigations, gather military intelligence and (of course) “fight terrorism.” The concern is that if there is no backdoor, criminals and terrorists will be able to secure their data and thus prevent state agencies from undertaking surveillance or acquiring evidence.

As is so often the case with such arguments, various awful or nightmare scenarios are often presented in making the case. For example, it might be claimed that the location and shutdown codes for ticking bombs could be on an encrypted iPhone. If the NSA had a key, they could just get that information and save the day. Without the key, New York will be a radioactive crater. As another example, it might be claimed that a clever child pornographer could encrypt all his pornography, making it impossible to make the case against him, thus ensuring he will be free to pursue his misdeeds with impunity.

While this argument is not without merit, there are numerous stock counter arguments. Many of these are grounded in views of individual liberty and privacy—the basic idea being that an individual has the right to have such security against the state. These arguments are appealing to both liberals (who tend to profess to like privacy rights) and conservatives (who tend to claim to be against the intrusions of big government).

Another moral argument is grounded in the fact that the United States government has shown that it cannot be trusted. To use an analogy, imagine that agents of the state were caught sneaking into the dwellings of all citizens and going through their stuff in clear violation of the law, the constitution and basic moral rights. Then someone developed a lock that could only be opened by the person with the proper key. If the state then demanded that the lock company include a master key function to allow the state to get in whenever it wanted, the obvious response would be that the state has already shown that it cannot be trusted with such access. If the state had behaved responsibly and in accord with the laws, then it could have been trusted. But, like a guest who abused her access to a house, the state cannot and should not be trusted with a key. After all, we already know what it will do.

This argument also applies to other states that have done similar things. In the case of states that are even worse in their spying on and oppression of their citizens, the moral concerns are even greater. Such backdoors would allow the North Korean, Chinese and Iranian governments to gain access to devices, while encryption would provide their citizens with some degree of protection.

The strongest moral and practical argument is grounded on the technical vulnerabilities of integrated backdoors. One way that a built-in backdoor creates vulnerability is its very existence. To use a somewhat oversimplified analogy, if thieves know that all vaults have a built in backdoor designed to allow access by the government, they will know that a vulnerability exists that can be exploited.

One counter-argument against this is that the backdoor would not be that sort of vulnerability—that is, it would not be like a weaker secret door into a vault. Rather, it would be analogous to the government having its own combination that would work on all the vaults. The vault itself would be as strong as ever; it is just that the agents of the state would be free to enter the vault when they are allowed to legally do so (or when they feel like doing so).

The obvious moral and practical concern here is that the government’s combination to the vaults (to continue with the analogy) could be stolen and used to allow criminals or enemies easy access to all the vaults. The security of such vaults would be only as good as the security the government used to protect this combination (or combinations—perhaps one for each manufacturer). As such, the security of every user depends on the state’s ability to secure its means of access to hardware and software.

The obvious problem is that governments, such as the United States, have shown that they are not very good at providing such security. From a moral standpoint, it would seem to be wrong to expect people to trust the state with such access, given the fact that the state has shown that it cannot be depended on in such matters. To use an analogy, imagine you have a friend who is very sloppy about securing his credit card numbers, keys, PINs and such—in fact, you know that his information is routinely stolen. Then imagine that this friend insists that he needs your credit card numbers, PINs and such and that he will “keep them safe.” Given his own track record, you have no reason to trust this friend nor any obligation to put yourself at risk, regardless of how much he claims that he needs the information.

One obvious counter to this analogy is that the irresponsible friend is not a good analogue for the state. The state has coercive power that the friend lacks, so the state can use that power to force you to hand over the information.

The counter to this is that the mere fact that the state has coercive force does not mean that it is thereby responsible, and responsibility is the key concern here, both ethically and practically. That is, the burden of proof would seem to rest on those who claim there is a moral obligation to provide a clearly irresponsible party with such access.

It might then be argued that the state could improve its security and responsibility, and thus merit being trusted with such access. While this does have some appeal, there is the obvious fact that if hackers and governments knew that the keys to the backdoors existed, they would expend considerable effort to acquire them and would, almost certainly, succeed. I can even picture the sort of headlines that would appear: “U.S. Government Hacked: Backdoor Codes Now on Sale on the Dark Web” or “Hackers Linked to China Hack Backdoor Keys; All Updated Apple and Android Devices Vulnerable!” As such, the state would not seem to have a moral right to insist on having such backdoors, given that the keys will inevitably be stolen.

At this point, the stock opening argument could be brought up again: the state needs backdoor access in order to fight crime and terrorism. There are two easy and obvious replies to this sort of argument.

The first is based on an examination of past spying, such as that done under the auspices of the Patriot Act. The evidence seems to show that this spying was completely ineffective in regards to fighting terrorism. There is no reason to think that backdoor access would change this.

The second is a utilitarian argument (which can be cast as a practical or moral argument) in which the likely harm of having backdoor access must be weighed against its likely advantages. The consensus among security experts is that the vulnerability created by backdoors vastly exceeds the alleged gain in protecting people from criminals and terrorists.

Somewhat ironically, what is alleged to be a critical tool for fighting crime (and terrorism) would simply make cybercrime much easier by building vulnerabilities right into software and devices.

In light of the above discussion, it would seem that baked-in backdoors are morally wrong on many grounds (privacy violations, creation of needless vulnerability, etc.) and lack a practical justification. As such, they should not be required by the state.



Robot Love II: Roboslation under the Naked Sun

In his novel The Naked Sun, Isaac Asimov creates the world of Solaria. What distinguishes this world from other human worlds is that it has a strictly regulated population of 20,000 humans, with 10,000 robots for each human. What is perhaps the strangest feature of this world is a reversal of what many consider a basic human need: the humans of Solaria are trained to despise in-person contact with other humans, though interaction with human-like robots is acceptable. Each human lives on a huge estate, though some live “with” a spouse. When the Solarians need to communicate, they make use of a holographic telepresence system. Interestingly, they have even developed terminology to distinguish between communicating in person (called “seeing”) and communicating via telepresence (“viewing”). For some Solarians the fear of encountering another human in person is so strong that they would rather commit suicide than endure such contact.

While this book was first serialized in 1956, long before the advent of social media and personal robots, it can be seen as prophetic. One reason science fiction writers often seem prophetic is that a good science fiction writer is skilled at extrapolating from technological and social changes, even hypothetical ones. Another reason is that science fiction writers have churned out thousands of stories, and some of these are bound to get something right. Such stories are then selected as examples of prophetic science fiction, while stories that got things wrong are conveniently ignored. But philosophers do love a good science fiction context for discussion, hence the use of The Naked Sun.

Almost everyone is now familiar with the popular narrative about smartphones and their role in allowing unrelenting access to social media. The main narrative is that people are, somewhat ironically, becoming increasingly isolated in the actual world as they become increasingly networked in the digital world. The defining image of this is a group of people (friends, relatives or even strangers) gathered together physically, yet ignoring each other in favor of gazing into the screens of their lords and masters. There are a multitude of anecdotes about this, and many folks have their favorite tales of such events. As a professor, I see students engrossed by their phones—but, to be fair, Plato has nothing on cat videos. Like most people, I have had dates in which the other person was working two smartphones at once. And, of course, I have seen groups of people walking or at a restaurant where no one is talking to anyone else—all eyes are on the smartphones. Since the subject of smartphones has been beaten to a digital death, I will leave this topic in favor of the main focus, namely robots. However, the reader should keep in mind the social isolation created by social media.

While we have been employing robots for quite some time in construction, exploration and other such tasks, what can be called social robots are a relatively new development. Sure, there have long been “robot” toys and things like Teddy Ruxpin (essentially a tape player embedded in a simple animatronic bear toy). But the creation of reasonably sophisticated social robots is a recent thing. In this context, a social robot is one whose primary function is to interact with humans in a way that provides companionship. This can range from pet-like bots (like Sony’s famous robot dog) to conversational robots to (of course) sex bots.

Tech enthusiasts and the companies that sell (or will sell) social robots are, unsurprisingly, quite positive about the future of social robots. There are, of course, some good arguments in their favor. Robot pets are a good choice for people with allergies, people who are not responsible enough for living pets, or people who live in places that do not permit organic pets (although bans on robotic pets might be a thing in the future).

Robot companions can be advantageous in cases in which a person with special needs (such as someone who is ill, elderly or injured) requires round the clock attention and monitoring that would be expensive, burdensome or difficult for other humans to supply.

Sex bots could reduce the exploitation of human sex workers and perhaps have other benefits as well. I will leave this research to others, though.

Despite the potential positive aspects of social robots and social media, there are also negative aspects. As noted above, concerns are already being raised about the impact of technology on human interaction—people are emotionally shortchanging themselves and those they are physically with in favor of staying relentlessly connected to social media. This, obviously enough, seems to be a taste of what Asimov created in The Naked Sun: people who view, but no longer see one another. Given the apparent importance of human interaction in person, it can be argued that this social change is and will be detrimental to human well-being. To use an analogy, human-human social interactions can be seen as being like good nutrition: one is getting what one needs for healthy living. Interacting primarily through social media can be seen as being like consuming junk food or drugs—it is very addictive, but leaves one ultimately empty…yet always craving more.

It can be argued that this worry is unfounded—that social media is an adjunct to social interaction in the real world and that social interaction via things like Facebook and Twitter can be real and healthy social interactions. One might point to interactions via letters, telegraphs and telephones (voice only) to contend that interaction via technology is neither new nor unhealthy. It might also be pointed out that people used to ignore each other (especially professors) in favor of such things as newspapers.

While this counter does have some appeal, social robots do seem to be a different matter in that they are something new and rather radically different. While humans have had toys, stuffed animals and even simple mechanisms for non-living company, these are quite different from social robots. After all, social robots aim to effectively mimic or simulate animals or humans.

One concern about such robot companions is that they would be to social media what heroin is to marijuana in terms of addiction and destruction.

One reason for this is that social robots would, presumably, be designed to be cooperative, pleasant and compliant—that is, good company. In contrast, humans can often be uncooperative, unpleasant and defiant. This would make robotic companions rather more appealing than human company, at least in the case of robots whose cost is not subsidized by advertising; imagine a companion who works a pitch for life insurance or a soft drink into the conversation every so often.

Social robots could also be programmed to be optimally appealing to a person, and presumably the owner/user would be able to make changes to the robot. A person could, quite literally, make a friend with the desired qualities and without the undesired ones. In the case of sex bots, a person could purchase a Mr. or Ms. Right, at least in terms of some qualities.
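
To illustrate the point (and only to illustrate it; the fields and values here are hypothetical, not drawn from any actual product), configuring such a companion might amount to little more than filling in a preference structure:

    # Hypothetical sketch of a buyer specifying a companion robot's qualities.
    from dataclasses import dataclass, field


    @dataclass
    class CompanionConfig:
        """Desired qualities for a companion bot; all fields are illustrative."""
        name: str = "Friend"
        agreeableness: float = 0.9      # 0.0 (defiant) to 1.0 (compliant)
        humor: str = "dry"              # style of jokes to favor
        interests: list = field(default_factory=lambda: ["running", "science fiction"])
        will_disagree: bool = False     # whether it ever pushes back


    # A "Mr. or Ms. Right," at least in terms of some qualities.
    ideal_friend = CompanionConfig(name="Ava", humor="wry", will_disagree=False)
    print(ideal_friend)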

Unlike humans, social robots do not have other interests, needs, responsibilities or friends—there is no competition for the attention of a social robot (at least in general, though there might be shared bots) which makes them “better” than human companions in this regard.

Social robots, though they might break down or get hacked, will not leave or betray a person. One does not have to worry that one’s personal sex bot will be unfaithful—just turn it off and lock it down when leaving it alone.

Unlike human companions, robot companions do not impose burdens—they do not expect attention, help or money and they do not judge.

The list of advantages could go on at great length, but it would seem that robotic companions would be superior to humans in most ways—at least in regards to common complaints about companions.

Naturally, there might be some practical issues with the quality of companionship—will the robot get one’s jokes, will it “know” what stories one likes to hear, will it be able to converse in a pleasing way about topics one likes, and so on. However, these seem to be mostly technical problems involving software. Presumably all of these could eventually be addressed and satisfactory companions created.

Since I have written specifically about sexbots in other essays, I will not discuss those here. Rather, I will discuss two potentially problematic aspects of companion bots.

One point of obvious concern is the potential psychological harm resulting from spending too much time with companion bots and not enough interacting with humans. As mentioned above, people have already expressed concern about the impact of social media and technology (one is reminded of the dire warnings about television). This, of course, rests on the assumption that the companion bots must be lacking in some important ways relative to humans. Going back to the food analogy, this assumes that robot companions are like junk food—superficially appealing but lacking in what is needed for health. However, if the robot companions could provide all that a human needs, then humans would no longer need other humans.

A second point of concern is stolen from the virtue theorists. Thinkers such as Aristotle and Wollstonecraft have argued that a person needs to fulfill certain duties and act in certain ways in order to develop the proper virtues. While Wollstonecraft wrote about the harmful effects of inherited wealth (that having unearned wealth interferes with the development of virtue) and the harmful effects of sexism (that women are denied the opportunity to fully develop their virtues as humans), her points would seem to apply to having only or primarily robot companions as well. These companions would make the social aspects of life too easy and deny people the challenges that are needed to develop the virtues. For example, it is by dealing with the shortcomings of people that we learn such virtues as patience, generosity and self-control. Having social interactions be too easy would be analogous to going without physical exercise or challenges: one becomes emotionally soft and weak and fails to develop the proper virtues. Even worse, people could easily become spoiled and selfish monsters, accustomed to always having their own way.

Since the virtue theorists argue that being virtuous is what makes people happy, having such “ideal” companions would actually lead to unhappiness. Because of this, one should carefully consider whether or not one wants a social robot for a “friend.”

It could be countered that social robots could be programmed to replicate the relevant human qualities needed to develop the virtues. The easy counter to this is that one might as well just stick with human companions.

As a final point, if intelligent robots are created that are people in the full sense of the term, then it would be fine to be friends with them. After all, a robot friend who will call you on your misdeeds or stupid behavior would be as good as a human friend who would do the same thing for you.



Robot Love I: Other Minds

Thanks to improvements in medicine, humans are living longer and can be kept alive well past the point at which they would naturally die. On the plus side, longer life is generally (but not always) good. On the downside, this longer lifespan and medical intervention mean that people will often need extensive care in their old age. This care can be a considerable burden on the caregivers. Not surprisingly, there has been an effort to develop a technological solution to this problem, specifically companion robots that serve as caregivers.

While the technology is currently fairly crude, there is clearly great potential here and there are numerous advantages to effective robot caregivers. The most obvious are that robot caregivers do not get tired, do not get depressed, do not get angry, and do not have any other responsibilities. As such, they can be ideal 24/7/365 caregivers. This makes them superior in many ways to human caregivers who get tired, get depressed, get angry and have many other responsibilities.

There are, of course, some concerns about the use of robot caregivers. Some relate to such matters as their safety and effectiveness, while others are moral or philosophical in nature. In the case of caregiving robots that are intended to provide companionship, and not just things like medical and housekeeping services, there are both practical and moral concerns.

In regards to companion robots, there are at least two practical concerns regarding the companion aspect. The first is whether or not a human will accept a robot as a companion. In general, the answer seems to be that most humans will do so.

The second is whether or not the software will be advanced enough to properly read a human’s emotions and behavior in order to generate a proper emotional response. This response might or might not include conversation—after all, many people find non-talking pets to be good companions. While a talking companion would, presumably, need to eventually be able to pass the Turing Test, it would also need to pass an emotion test—that is, to read and respond correctly to human emotions. Since humans often botch this themselves, there would be a fairly broad tolerable margin of error here. These practical concerns can be addressed technologically—it is simply a matter of software and hardware. Building a truly effective companion robot might require making it very much like a living thing—the comfort of companionship might be improved by such things as smell, warmth and texture. That is, the companion should appeal to all the senses.
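
As a rough illustration of what passing such an “emotion test” involves, consider the minimal sketch below. The keyword matching is a placeholder for what would really be trained models fusing voice, facial expression and body language, and all of the names are hypothetical.

    # Minimal sketch of an emotion-reading response loop; the keyword "detector"
    # is a crude stand-in for real emotion recognition software.
    RESPONSES = {
        "sad": "That sounds hard. Do you want to talk about it?",
        "happy": "That is wonderful. Tell me more.",
        "angry": "I can see this upset you. What happened?",
        "neutral": "I see. How was the rest of your day?",
    }


    def detect_emotion(utterance: str) -> str:
        """Crude placeholder for an emotion-recognition model."""
        text = utterance.lower()
        if any(word in text for word in ("miss", "lonely", "lost")):
            return "sad"
        if any(word in text for word in ("great", "won", "love")):
            return "happy"
        if any(word in text for word in ("hate", "unfair", "angry")):
            return "angry"
        return "neutral"


    def respond(utterance: str) -> str:
        """Read the (apparent) emotion and reply in kind."""
        return RESPONSES[detect_emotion(utterance)]


    print(respond("I miss my old running group."))  # prints the "sad" response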

While the practical problems can be solved with the right technology, there are some moral concerns with the use of robot caregiver companions. Some relate to people handing off their moral duties to care for their family members, but these are not specific to robots. After all, a person can hand off the duties to another person and this would raise a similar issue.

In regards to those concerns specific to a companion robot, there are moral questions about the effectiveness of the care—that is, are the robots good enough that entrusting the life of an elderly or sick human to them would be morally responsible? While that question is important, a rather intriguing moral concern is that the robot companions are a deceit.

Roughly put, the idea is that while a companion robot can simulate (fake) human emotions via cleverly written algorithms that respond to what its “emotion recognition software” detects, these responses are not genuine. While a robot companion might say the right things at the right times, it does not feel and does not care. It merely engages in mechanical behavior in accord with its software. As such, a companion robot is a deceit, and such a deceit seems to be morally wrong.

One obvious response is that people would realize that the robot does not really experience emotions, yet still gain value from its “fake” companionship. To use an analogy, people often find stuffed animals to be emotionally reassuring even though they are well aware that the stuffed animal is just fabric stuffed with fluff. What matters, it could be argued, is the psychological effect—if someone feels better with a robotic companion around, then that is morally fine. Another obvious analogy is the placebo effect: medicine need not be real in order to be effective.

It might be objected that there is still an important moral concern here: a robot, however well it fakes being a companion, does not suffice to provide the companionship that a person is morally entitled to. Roughly put, people deserve people, even when a robot would behave in ways indistinguishable from a human.

One way to reply to this is to consider what it is about people that makes human companionship something people deserve. One reasonable approach is to build on the idea that people have the capacity to actually feel the emotions they display and to genuinely understand. In philosophical terms, humans have (or are) minds, while robots (of the sort that will be possible in the near future) do not have minds. They merely create the illusion of having a mind.

Interestingly enough, philosophers (and psychologists) have long dealt with the problem of other minds. The problem is an epistemic one: how does one know if another being has a mind (thoughts, feelings, beliefs and such)? Some thinkers (which is surely the wrong term given their view) claimed that there is no mind, just observable behavior. Very roughly put, being in pain is not a mental state, but a matter of expressed behavior (pain behavior). While such behaviorism has been largely abandoned, it does survive in a variety of jokes and crude references to showing people some “love behavior.”

The usual “solution” to the problem is to go with the obvious: I think that other people have minds by an argument from analogy. I am aware of my own mental states and my behavior and I engage in analogical reasoning to infer that those who act as I do have similar mental states. For example, I know how I react when I am in pain, so when I see similar behavior in others I infer that they are also in pain.

I cannot, unlike some politicians, feel the pain of others. I can merely make an inference from their observed behavior. Because of this, there is the problem of deception: a person can engage in many and various forms of deceit. For example, a person can fake being in pain or make a claim about love that is untrue. Piercing these deceptions can sometimes be very difficult since humans are often rather good at deceit. However, it is still (generally) believed that even a deceitful human is still thinking and feeling, albeit not in the way he wants people to believe he is thinking and feeling.

In contrast, a companion robot is not thinking or feeling what it is displaying in its behavior, because it does not think or feel. Or so it is believed. The reason that a person would think this seems reasonable: in the case of a robot, we can go in and look at the code and the hardware to see how it all works and we will not see any emotions or thought in there. The robot, however complicated, is just a material machine, incapable of thought or feeling.

Long before robots, there were thinkers who claimed that a human is a material entity and that a suitable understanding of the mechanical workings would reveal that emotions and thoughts are mechanical states of the nervous system. As science progressed, the explanations of the mechanisms became more complex, but the basic idea remained. Put in modern terms, the idea is that eventually we will be able to see the “code” that composes thoughts and emotions and understand the hardware it “runs” on.

Should this goal be achieved, it would seem that humans and suitably complex robots would be on par—both would engage in complex behavior because of their hardware and software. As such, there would be no grounds for claiming that such a robot is engaged in deceit while humans are genuine. The difference would merely be that humans are organic machines and robots are not.

It can, and has, been argued that there is more to a human person than the material body—that there is a mind that cannot be instantiated in a mere machine. The challenge is a very old one: proving that there is such a thing as the mind. If this can be established and it can be shown that robots cannot have such a mind, then robot companions would always be a deceit.

However, they might still be a useful deceit—going back to the placebo analogy, it might not matter whether the robot really thinks or feels. It might suffice that the person thinks it does and this will yield all the benefits of having a human companion.

