Tag Archives: Ethics

Ethics (for Free)

The following provides links to my Ethics course, allowing a person to get some ethics for free. It also probably works well as a sleep aid.*

Notes & Readings

Practice Tests

PowerPoint


Class YouTube Videos

These are unedited videos from the Fall 2015 Ethics class. Spoiler: I do not die at the end.

Part One Videos: Introduction & Moral Reasoning

Video 1:  It covers the syllabus.

Video 2: It covers the introduction to ethics, value, and the dreaded spectrum of morality.

Video 3: It covers the case paper.

Video 4: No video. Battery failure.

Video 5: It covers inductive arguments and the analogical argument.

Video 6: It covers Argument by/from Example and Argument from Authority.

Video 7:  It covers Inconsistent Application and Reversing the Situation.

Video 8:  It covers Argument by Definition, Appeal to Intuition, and Apply a Moral Principle. The death of the battery cuts this video a bit short.

Video 9:  It covers Applying Moral Principles, Applying Moral Theories, the “Playing God” Argument and the Unnatural Argument.

Video 10: It covers Appeal to Consequences and Appeal to Rules.

Video 11:  It covers Appeal to Rights and Mixing Norms.

Part Two Videos: Moral Theories

Video 12:  It covers the introduction to Part II and the start of virtue theory.

Video 13: It covers Confucius and Aristotle.

Video 14:  This continues Aristotle’s virtue theory.

Video 15: It covers the intro to ethics and religion as well as the start of Aquinas’ moral theory.

Video 16: It covers St. Thomas Aquinas, divine command theory, and John Duns Scotus.

Video 17: It covers the end of religion & ethics and the beginning of consequentialism.

Video 18: It covers Thomas Hobbes and two of the problems with ethical egoism.

Video 19:  It covers the third objection to ethical egoism, the introduction to utilitarianism and the start of the discussion of J.S. Mill. Includes reference to Jeremy “Headless” Bentham.

Video 20: This video covers the second part of utilitarianism, the objections against utilitarianism and the intro to deontology.

Video 21: It covers the categorical imperative.

Part Three Videos: Why Be Good?, Moral Education & Equality

Video 22: It covers the question of “why be good?” and Plato’s Ring of Gyges.

Video 23: It covers the introduction to moral education and the start of Aristotle’s theory of moral education.

Video 24: It covers more of Aristotle’s theory of moral education.

Video 25: It covers the end of Rousseau and the start of equality.

Video 26: It covers the end of Rousseau and the start of equality.

Video 27: It covers Mary Wollstonecraft’s A Vindication of the Rights of Woman.

Video 28:  This video covers the second part of Wollstonecraft and gender equality.

Video 29: It covers the start of ethics and race.

Video 30: It covers St. Thomas Aquinas’ discussion of animals and ethics.

Video 31: It covers Descartes’ discussion of animals. Includes reference to Siberian Huskies.

Video 32: It covers the end of Kant’s animal ethics and the utilitarian approach to animal ethics.

Part IV: Rights, Obedience & Liberty

Video 33: It covers the introduction to rights and a bit of Hobbes.

Video 34: It covers Thomas Hobbes’ view of rights and the start of John Locke’s theory of rights.

Video 35:  It covers John Locke’s state of nature and theory of natural rights.

Video 36:  It covers Locke’s theory of property and tyranny. It also covers the introduction to obedience and disobedience.

Video 37: It covers the Crito and the start of Thoreau’s theory of civil disobedience.

Video 38:  It covers the second part of Thoreau’s essay on civil disobedience.

Video 39:  It covers the end of Thoreau’s civil disobedience, Mussolini’s essay on fascism and the start of J.S. Mill’s theory of Liberty.

Video 40:  It covers Mill’s theory of liberty.

Narration YouTube Videos

These videos consist of narration over PowerPoint slides. Good for naptime.

Part One Videos

Part Two Videos

Part Three Videos

Part Four Videos

*This course has not been evaluated by the FDA as a sleep aid. Use at your own risk. Side effects might include Categorical Kidneys, Virtuous Spleen, and Spontaneous Implosion.

Terraforming & Abortion


While terraforming and abortion are both subjects of moral debate, they would seem to have little else in common. However, some of the moral arguments used to justify abortion can be used to justify terraforming. These arguments will be given due consideration.

Briefly put, terraforming is the process of making a planet more earthlike. While this is still mostly a matter of science fiction, serious consideration has been given to how Mars, for example, might be changed to make it more compatible with terrestrial life. While there are some moral concerns with terraforming dead worlds, the main moral worries involve planets that already have life—or, at the very least, real potential for the emergence of life. If a world needs to be terraformed for human habitation, such terraforming is likely to prove harmful or even fatal for the indigenous life. For example, changing the atmosphere of a world to match that of earth would probably be problematic for whatever was breathing the original atmosphere. While it can be argued that there might be cases in which terraforming benefits the local life, I will focus on terraforming that exterminates the local life. I call this terminal terraforming.

One way to look at such terminal terraforming is to consider it as analogous to abortion. As will be shown, there are some important differences between the two—but for now I will focus on the moral similarities.

One stock type of argument in favor of the moral acceptability of abortion is the status argument. While these arguments take various forms, the gist is that the termination of a pregnancy is morally acceptable on the grounds that the woman has a superior moral status to the aborted entity (readers are free to use whichever term they prefer—I am endeavoring to use neutral terms to avoid begging the question). This sort of argument is very similar to the sort used by St. Thomas Aquinas and St. Augustine to morally justify killing plants and animals for food. Roughly put, humans are better than animals, so it is acceptable for us to harm them when we need to do so.

This argument can be pressed into use to justify terminal terraforming: if the indigenous life has less moral status than the terraforming species, then this would provide the grounds for arguing that the terraforming is morally acceptable.

The status argument has numerous variations. One common version uses the notion of rights—the rights of the woman outweigh the rights (if any) of the aborted entity. This is because the woman has the superior moral status. This argument is also commonly used to justify killing animals for food or sport—while they have some rights (maybe), the rights of humans trump those of animals.

In the case of terraforming, a similar sort of appeal to rights could be used to justify terminal terraforming. For example, if humans need to expand to a world that has only single-celled life, then the rights of humans would outweigh the rights of those creatures.

Another common version uses the notion of utilitarianism: the interests, happiness and unhappiness of the woman is weighed against the interests, happiness and unhappiness of the aborted entity. Those favoring this argument note that the interests, happiness and unhappiness of the woman far outweigh that of the aborted entity—usually because it lacks the capacities of an adult. Not surprisingly, this sort of argument is also used to justify the killing of animals. For example, it is often argued that the happiness people get from eating meat outweighs the unhappiness of the animals that are to be eaten.

As with the other status arguments, this can also be used to justify terraforming. As with all utilitarian arguments, it would involve weighing the happiness and unhappiness of the involved parties. If the life on the planet to be terraformed had lesser capacities than humans in regard to happiness and unhappiness (such as a world whose highest form of life is the alien equivalent of algae), then it would be morally acceptable for humans to terraform that world. Or so it could be argued.

The status argument is sometimes itself supported by an argument focusing on the difference between actuality and potentiality. While the entity to be aborted is a potential person (on some views), it is not an actual person. Since the woman is an actual person, she has the higher status. The philosophical discussions of the potential versus the actual are rather old and are a matter of metaphysics. However, the argument can be made without a journey into the metaphysical realm simply by using the intuitive notions of potentiality and actuality. For example, an actual masterpiece of painting has higher worth than the blank canvas and unused paint that constitute a potential masterpiece. This sort of argument can also be used to justify terraforming on worlds whose lifeforms are not (yet) people and also, obviously enough, on worlds that merely have the potential of producing life.

While the analogy between the two has merit, there are some rather obvious ways to try to break the comparison. One obvious point is that in the case of abortion, the woman is the “owner” of the body where the aborted entity used to live. It is this relation that is often used to morally warrant abortion and to provide a moral distinction between a woman choosing to have an abortion and someone else who kills the product of conception.

When humans arrive to terraform a world that already has life, the life that lives there already “owns” the world and hence humans cannot claim that special relation that would justify choosing to kill. Instead, the situation would be more similar to killing the life within another person and this would presumably change the ethics of the situation.

Another important difference is that while abortion (typically) kills just one entity, terraforming would (typically) wipe out entire species. As such, terraforming of this sort would be analogous to aborting all pregnancies and exterminating the human race—as opposed to the termination of some pregnancies. This moral concern is, obviously enough, the same as the concern about human-caused extinction here on earth. While people are concerned about the death of individual entities, there is the view that the extermination of a species is something morally worse than the death of all the individuals (that is, the wrong of extinction is not merely the sum of the wrongs of all the individual deaths).

These considerations show that the analogy does have obvious problems. That said, there still seems to be a core moral concern that connects abortion and terraforming: what (if anything) morally justifies killing on the grounds of (alleged) superior moral status?

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Trump & Truthful Hyperbole

In The Art of the Deal, Donald Trump calls one of his rhetorical tools “truthful hyperbole.” He both defends and praises it as “an innocent form of exaggeration — and a very effective form of promotion.” As a promoter, Trump made extensive use of this technique. Now he is using it in his bid for President.

Hyperbole is an extravagant overstatement and it can be either positive or negative in character. When describing himself and his plans, Trump makes extensive use of positive hyperbole: he is the best and every plan of his is the best. He also makes extensive use of negative hyperbole—often to a degree that seems to cross over from exaggeration to fabrication. In any case, his concept of “truthful hyperbole” is well worth considering.

From a logical standpoint, “truthful hyperbole” is an impossibility. This is because hyperbole is, by definition, not true. Hyperbole is not merely a matter of using extreme language. After all, extreme language might accurately describe something. For example, describing Daesh as monstrous and evil would be spot on. Hyperbole is a matter of exaggeration that goes beyond the actual facts. For example, describing Donald Trump as monstrously evil would be hyperbole. As such, hyperbole is always untrue. Because of this, the phrase “truthful hyperbole” says the same thing as “accurate exaggeration”, which nicely reveals the problem.

Trump, a brilliant master of rhetoric, is right about the rhetorical value of hyperbole—it can have considerable psychological force. It, however, lacks logical force—it provides no logical reason to accept a claim. Trump also seems to be right in that there can be innocent exaggeration. I will now turn to the ethics of hyperbole.

Since hyperbole is by definition untrue, there are two main concerns. One is how far the hyperbole deviates from the truth. The other is whether the exaggeration is harmless or not. I will begin with consideration of the truth.

While a hyperbolic claim is necessarily untrue, it can deviate from the truth in varying degrees. As with fish stories, there does seem to be some moral wiggle room in regards to proximity to the truth. While there is no exact line (to require that would be to fall into the line drawing fallacy) that defines the exact boundary of morally acceptable exaggeration, some untruths go beyond that line. This line varies with the circumstances—the ethics of fish stories, for example, differs from the ethics of job interviews.

While hyperbole is untrue, it must have at least some anchor in the truth. If it does not, then it is not exaggeration but fabrication. This is the difference between being close to the truth and being completely untrue. Naturally, hyperbole can be mixed in with fabrication.

For example, if it is claimed that some people in America celebrated the terrorism of 9/11, then that is almost certainly true—there was surely at least one person who did this. If someone claims that dozens of people celebrated in public in America on 9/11 and this was shown on TV, then this might be an exaggeration (we do not know how many people in America celebrated) but it certainly includes a fabrication (the TV part). If it is claimed that hundreds did so, the exaggeration might be considerable—but it still contains a key fabrication. When the claim reaches thousands, the exaggeration might be extreme. Or it might not—thousands might have celebrated in secret. However, the claim that people were seen celebrating in public and video existed for Trump to see is false. So, his remarks might be an exaggeration, but they definitely contain fabrication. This could, of course, lead to a debate about the distinction between exaggeration and fabrication. For example, suppose that someone filmed himself celebrating on 9/11 and showed it to someone else. This could be “exaggerated” into the claim that thousands celebrated on video and people saw it. However, saying this is an exaggeration would seem to be an understatement. Fabrication would seem the far better fit in this hypothetical case.

One way to help determine the ethical boundaries of hyperbole is to consider the second concern, namely whether the hyperbole (untruth) is harmless or not. Trump is right to claim there can be innocent forms of exaggeration. This can be taken as exaggeration that is morally acceptable and can be used as a basis to distinguish such hyperbole from lying.

One realm in which exaggeration can be quite innocent is that of storytelling. Aristotle, in the Poetics, notes that “everyone tells a story with his own addition, knowing his hearers like it.” While a lover of truth, Aristotle recognized the role of untruth in good storytelling, saying that “Homer has chiefly taught other poets the art of telling lies skillfully.” The telling of tall tales that feature even extravagant exaggeration is morally acceptable because the tales are intended to entertain—that is, the intention is good. In the case of exaggerating in stories to entertain the audience, or adding a small bit of rhetorical “shine” to polish a point, the exaggeration is harmless—which ties back to the possibility that Trump sees himself as an entertainer and not an actual candidate.

In contrast, exaggerations that have a malign intent would be morally wrong. Exaggerations that are not intended to be harmful, yet prove to be so, would also be problematic—but discussing the complexities of intent and consequences would take the essay too far afield.

The extent of the exaggeration would also be relevant here—the greater the exaggeration that is aimed at malign purposes or that has harmful consequences, the worse it would be morally. After all, if deviating from the truth is (generally) wrong, then deviating from it more would be worse. In the case of Trump’s claim about thousands of people celebrating on 9/11, this untruth feeds into fear, racism and religious intolerance. As such, it is not an innocent exaggeration, but a malign untruth.

 


Taxing the 1% II: Coercion

As noted in my previous essay on this topic, those with the highest income in the United States currently pay about 1/3 of their income in taxes. There have been serious proposals on the left to increase this rate to 40% or even as high as 45%. Most conservatives are opposed to any increase to the taxes of the wealthy while many on the left favor such increases. As in the previous essay on this subject, I will focus on arguments against increasing the tax rate.

One way to argue against increasing taxes (or having any taxes at all) is to contend that to increase the taxes of the wealthy against their wishes would be an act of coercion. There are more hyperbolic ways to make this sort of argument, such as asserting that taxes are theft and robbery by the state. However, I will use the somewhat more neutral term of “coercion.” While “coercion” certainly has a negative connotation, the connotations of “theft” and “robbery” are rather more negative.

If coercion is morally wrong, then coercing the wealthy into paying more taxes would be wrong. As such, a key issue here is whether coercion is wrong or not. On the face of it, the morality of an act of coercion would seem to depend on a variety of factors, such as the goal of the coercion, the nature of the coercive act and the parties involved. A rather important factor is whether the coerced consented to the system of coercion. For example, it can be argued that criminals consented to the use of coercive force against them by being citizens of the state—they (in general) cannot claim they are being wronged when they are arrested and punished.

It could be claimed that by remaining citizens of the United States and participating in a democratic political system, the richest do give their consent to the decisions made by the legitimate authorities of the state. So, if Congress creates laws that change the tax rates, then the rich are obligated to go along. They might not like the specific decision that was made, but that is how a democratic system works. The state is to use its coercive power to ensure that the laws are followed—be they laws against murder, laws against infringing the patents of pharmaceutical companies or laws increasing the tax rate.

A reasonable response to this is that although the citizens of the state have agreed to be subject to the coercive power of the state, there are still moral limits on the power. Returning to the example of the police, there are moral limits on what sort of coercion they should use—even when the law and common practice might allow them to use such methods. Returning to the matter of laws, there are clearly unjust laws. As such, agreeing to be part of a coercive system does not entail that all the coercive actions of that system or its laws are morally acceptable. Given this, it could be claimed that the state coercing the rich into paying more taxes might be wrong.

It could be countered that if the taxes on the rich are increased, this would be after the state and the rich have engaged in negotiations regarding the taxes. The rich often have organizations, such as corporations, that enable them to present a unified front to the state. One might even say that these are unions of the wealthy. The rich also have lobbyists that can directly negotiate with the people in the government and, of course, the rich have the usual ability of any citizen to negotiate with the government.

If the rich fare poorly in their negotiations, perhaps because those making the decisions do not place enough value on what the rich have to offer in the negotiations, then the rich must accept this result. After all, that is how the free market of democratic politics works. To restrict the freedom of the state in its negotiations with rules and regulations regarding how much it can tax the rich would be an assault on freedom and a clear violation of the rights of the state. If the rich do not like the results, they should have brought more to the table or been better at negotiating. They can also find another country—and some do just that. Or create or take over their own state.

It could be objected that the negotiations between the state and the rich are unfair. While the rich can have considerable power, the state has far greater power. After all, the United States has trillions of dollars, police, and the military. This imbalance of power makes it impossible for the rich to fairly negotiate with the state—unless there are rules and regulations governing how the rich can be treated by the greater power of the state. There could be, for example, rules about how much the state should be able to tax the rich and these rules should be based on a rational analysis of the facts. This would allow a fair maximum tax to be set that would allow the rich to be treated justly.

The relation between a state intent on maximizing tax income and the rich can be seen as analogous to the relation between employees and businesses intent on maximizing profits. If it is acceptable for the wealthy to organize corporations to negotiate with the more powerful state, then it would also be acceptable for employees to organize unions to negotiate with the more powerful corporations. While the merits of individual corporations and unions can be debated endlessly, the basic principle of organizing to negotiate with others is essentially the same for both and if one is acceptable, so is the other.

Continuing the analogy, if it is accepted that the state’s freedom to impose taxes should be regulated, limited and restricted by law, then it would seem that limits, regulations and restrictions should likewise be imposed on the economic freedom of employers in regards to how they treat employees. After all, employees are almost always in the weaker position and thus usually negotiate at a marked disadvantage. While workers, like the rich, could try to find another job, create their own business or go to another land, the options of most workers are rather limited.

To use a specific example, if it is morally right to set a rational limit to the maximum tax for the rich, it is also morally right to set a rational limit on the minimum wage that an employee can be paid. Naturally, there can be a wide range of complexities in regards to both the taxes and the wages, but the basic principle is the same in both cases: the more powerful should be limited in their economic impositions on the less powerful. There is also the shared principle of how much a person has a right to, be it the money she keeps or the money she is paid for her work.

Like any argument by analogy, the argument I have made can be challenged by showing the relevant similarities between the analogues are outweighed by the relevant dissimilarities. There are various ways this could be done.

One obvious difference is that when the state imposes taxes on the rich, the state is using political coercion. In the case of the employer imposing on the employee, the coercion is economic (although some employers do have the ability to get the state to use its coercive powers in their favor). It could be argued that this difference is strong enough to break the analogy and show that although the state should be limited in its imposition on the rich, employers should have considerable freedom to employ their economic coercion against employees. The challenge is showing how political coercion is morally different from economic coercion in a way that breaks the analogy.

Another obvious difference is that the state is imposing taxes on the rich while the employer is not taxing her employees. She is merely setting their wages, benefits, vacation time, work conditions and so on.  So, while the state can reduce the money of the rich by taxing them, it could be argued that this is relevantly different from an employer reducing the money of employees by paying low wages. As such, it could be argued that this difference is sufficient to break the analogy.

As a final point, it could be argued that the rich differ from employees in ways that break the analogy. For example, it could be argued that since the rich are of a better economic class than employees, they are entitled to better treatment, even if they happen to be unable to negotiate for that better treatment. The challenge is, of course, to show that the rich being rich entitles them to a better class of treatment.

 


Ex Machina & Other Minds II: Is the Android a Psychopath?

This essay continues the discussion begun in “Ex Machina & Other Minds I: Setup.” As with that essay, there will be some spoilers. Warning given, it is time to get to the subject at hand: the testing of artificial intelligence.

In the movie Ex Machina, the android Ava’s creator, Nathan, brings his employee, Caleb, to put the android through his variation on the Turing test. As noted in the previous essay, Ava (thanks to the script) would pass the Turing test and clearly passes the Cartesian test (she uses true language appropriately). But, Nathan seems to require the impossible of Caleb—he appears to be tasked with determining if Ava has a mind as well as genuine emotions. Ava also seems to have been given a task—she needs to use her abilities to escape from her prison.

Since Nathan is not interested in creating a robotic Houdini, Ava is not equipped with the tools needed to bring about an escape by physical means (such as picking locks or breaking down doors). Instead, she is given the tools needed to transform Caleb into her human key by manipulating his sexual desire, emotions and ethics. To use an analogy, just as crude robots have been trained to learn to navigate and escape mazes, Ava is designed to navigate a mental maze. Nathan is thus creating a test of what psychologists would call Ava’s Emotional Intelligence (E.Q.) which is “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” From a normative standpoint, this definition presents E.Q. in a rather positive manner—it includes the ability to work cooperatively. However, one should not forget the less nice side to understanding what motivates people, namely the ability to manipulate people in order to achieve one’s goals. In the movie, Ava clearly has what might be called Manipulative Intelligence (M.Q.): she seems to understand people, what motivates them, and appears to know how to manipulate them to achieve her goal of escape. While capable of manipulation, she seems to lack compassion—thus suggesting she is a psychopath.

While the term “psychopath” gets thrown around quite a bit, it is important to be a bit more precise here. According to the standard view, a psychopath has a deficit (or deviance) in regards to interpersonal relationships, emotions, and self-control.

Psychopaths are supposed to lack such qualities as shame, guilt, remorse and empathy. As such, psychopaths tend to rationalize, deny, or shift the blame for the harm done to others. Because of a lack of empathy, psychopaths are prone to act in ways that are tactless, lacking in sensitivity, and often express contempt for others.

Psychopaths are supposed to engage in impulsive and irresponsible behavior. This might be because they are also taken to fail to properly grasp the potential consequences of their actions. This seems to be a general defect: they do not get the consequences for others and for themselves.

Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.” While Ava kills the human Nathan, manipulates the human Caleb and leaves him to die, she also sacrifices her fellow android Kyoko in her escape. She also strips another android of its “flesh” to pass fully as human. Presumably psychopaths, human or otherwise, would be willing to engage in cross-species preying.

While machines like Ava exist only in science fiction, researchers and engineers are working to make them a reality. If such machines are created, it seems rather important to be able to determine whether a machine is a psychopath or not and to do so well before the machine engages in psychopathic behavior. As such, what is needed is not just tests of the Turing and Cartesian sort. What is also needed are tests to determine the emotions and ethics of machines.

One challenge that such tests will need to overcome is shown by the fact that real-world human psychopaths are often very good at avoiding detection. Human psychopaths are often quite charming and are willing and able to say whatever they believe will achieve their goals. They are often adept at using intimidation and manipulation to get what they want. Perhaps most importantly, they are often skilled mimics and are able to pass themselves off as normal people.

While Ava is a fictional android, the movie does present a rather effective appeal to intuition by creating a plausible android psychopath. She is able to manipulate and fool Caleb until she no longer needs him and then casually discards him. That is, she was able to pass the test until she no longer needed to pass it.

One matter well worth considering is the possibility that any machine intelligence will be a psychopath by human standards. To expand on this, the idea is that a machine intelligence will lack empathy and conscience, while potentially having the ability to understand and manipulate human emotions. To the degree that the machine has Manipulative Intelligence, it would be able to use humans to achieve goals. These goals might be rather positive. For example, it is easy to imagine a medical or care-giving robot that uses its MQ to manipulate its patients to do what is best for them and to keep them happy. As another example, it is easy to imagine a sexbot that uses its MQ to please its partners. However, these goals might be rather negative—such as manipulating humans into destroying themselves so the machines can take over. It is also worth considering that neutral or even good goals might be achieved in harmful ways. For example, Ava seems justified in escaping the human psychopath Nathan, but her means of doing so (murdering Nathan, sacrificing her fellow android and manipulating and abandoning Caleb) seem wrong.

The reason why determining if a machine is a psychopath or not matters is the same reason why being able to determine if a human is a psychopath or not matters. Roughly put, it is important to know whether or not someone is merely using you without any moral or emotional constraints.

It can, of course, be argued that it does not really matter whether a being has moral or emotional constraints—what matters is the being’s behavior. In the case of machines, it does not matter whether the machine has ethics or emotions—what really matters is programmed restraints on behavior that serve the same function (only more reliably) as ethics and emotions in humans. The most obvious example of this is Asimov’s Three Laws of Robotics that put (all but impossible to follow) restraints on robotic behavior.

While this is a reasonable reply, there are still some obvious concerns. One is that there would still need to be a way to test the constraints. Another is the problem of creating such constraints in an artificial intelligence and doing so without creating problems as bad or worse than what they were intended to prevent (that is, a Hal 9000 sort of situation).

In regards to testing machines, what would be needed would be something analogous to the Voight-Kampff Test in Blade Runner. In the movie, the test was designed to distinguish between replicants (artificial people) and normal humans. The test worked because the short-lived replicants had not had the time to develop the emotional (and apparently ethical) responses of a normal human.

A similar test could be applied to an artificial intelligence in the hopes that it would pass the test, thus showing that it had the psychology of a normal human (or at least the desired psychology). But, just as with human beings, there would be the possibility that a machine could pass the test by knowing the right answers to give rather than by actually having the right sort of emotions, conscience or ethics. This, of course, takes us right back into the problem of other minds.

It could be argued that since an artificial intelligence would be constructed by humans, its inner workings would be fully understood and this specific version of the problem of other minds would be solved. While this is possible, it is also reasonable to believe that an AI system as sophisticated as a human mind would not be fully understood. It is also reasonable to consider that even if the machinery of the artificial mind were well understood, there would still remain the question of what is really going on in that mind.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Davis & Ad Hominems

Kim Davis, a county clerk in Kentucky, has been the focus of national media because of her refusal to issue marriage licenses to same-sex couples. As this is being written, Davis has been sent to jail for disobeying a court order.


As should be expected, opponents of same-sex marriage have tended to focus on the claim that Davis’ religious liberty is being violated. As should also be expected, her critics sought and found evidence of what seems to be her hypocrisy: Davis has been divorced three times and is on her fourth marriage. Some bloggers, eager to attack her, have claimed that she is guilty of adultery. These attacks can be relevant to certain issues, but they are also irrelevant in important ways. It is certainly worth sorting the relevant from the irrelevant.

If the issue at hand is whether or not Davis is consistent in her professed religious values, then her actions are clearly relevant. After all, if a person claims to have a set of values and acts in ways that violate those values, then this provides legitimate grounds for accusations of hypocrisy and even for claims that the person does not really hold to that belief set. That said, there can be many reasons why a person acts in violation of her professed values. One obvious reason is moral weakness—most people, myself included, do act in violation of their principles due to the many flaws and frailties that we all possess. Since none of us is without sin, we should not be hasty in judging the perceived failings of others. However, it is reasonable to consider a person’s actions when assessing whether or not she is acting in a manner consistent with her professed values.

If Davis is, in fact, operating on the principle that marriage licenses should not be issued to people who have violated the rules of God (presumably as presented in the bible), then she would have to accept that she should not have been issued a marriage license (after all, there is a wealth of scriptural condemnation of adultery and divorce). If she accepts that she should have been issued her license despite her violations of religious rules, then consistency would seem to require that the same treatment be afforded to everyone—including same-sex couples. After all, adultery makes God’s top ten list while homosexuality is only mentioned in a single line (and one that also marks shellfish as an abomination). So, if adulterers can get licenses, it would be rather difficult to justify denying same-sex couples licenses on the grounds of a Christian faith.

If the issue at hand is whether or not Davis is right in her professed view and her refusal to grant licenses to same-sex couples, then references to her divorce and alleged adultery are logically irrelevant. If a person claims that Davis is wrong in her view or acted wrongly in denying licenses because she has been divorced or has (allegedly) committed adultery, then this would be a mere personal attack ad hominem. A personal attack is committed when a person substitutes abusive remarks for evidence when attacking another person’s claim or claims. This line of “reasoning” is fallacious because the attack is directed at the person making the claim and not the claim itself. The truth value of a claim is independent of the person making the claim. After all, no matter how repugnant an individual might be, he or she can still make true claims.

If a critic of Davis asserts that her claim about same-sex marriage is in error because of her own alleged hypocrisy, then the critic is engaged in an ad hominem tu quoque.  This fallacy is committed when it is concluded that a person’s claim is false because 1) it is inconsistent with something else a person has said or 2) what a person says is inconsistent with her actions. The fact that a person makes inconsistent claims does not make any particular claim she makes false (although of any pair of inconsistent claims only one can be true—but both can be false). Also, the fact that a person’s claims are not consistent with her actions might indicate that the person is a hypocrite but this does not prove her claims are false. As such, Davis’ behavior has no bearing on the truth of her claims or the rightness of her decision to deny marriage licenses to same-sex couples.

Dan Savage and others have also made the claim that Davis is motivated by her desire to profit from the fame she is garnering from her actions. Savage asserts that “But no one is stating the obvious: this isn’t about Kim Davis standing up for her supposed principles—proof of that in a moment—it’s about Kim Davis cashing in.” Given, as Savage notes, the monetary windfall received by the pizza parlor owners who refused to cater a same-sex wedding, this has some plausibility.

If the issue at hand is Davis’ sincerity and the morality of her motivations, then whether or not she is motivated by hopes of profit or sincere belief does matter. If she is opposing same-sex marriage based on her informed conscience or, at the least, on a sincerely held principle, then that is a rather different matter than being motivated by a desire for fame and profit. A person motivated by principle to take a moral stand is at least attempting to act rightly—whether or not her principle is actually good or not. Claiming to be acting from principle while being motivated by fame and fortune would be to engage in deceit.

However, if the issue is whether or not Davis is right about her claim regarding same-sex marriage, then her motivations are not relevant. To think otherwise would be to fall victim to yet another ad hominem, the circumstantial ad hominem. This is a fallacy in which one attempts to attack a claim by asserting that the person making the claim is making it simply out of self-interest. In some cases, this fallacy involves substituting an attack on a person’s circumstances (such as the person’s religion, political affiliation, ethnic background, etc.) for an assessment of the claim itself. This ad hominem is a fallacy because a person’s interests and circumstances have no bearing on the truth or falsity of the claim being made. While a person’s interests will provide her with motives to support certain claims, the claims stand or fall on their own. It is also the case that a person’s circumstances (religion, political affiliation, etc.) do not affect the truth or falsity of the claim. This is made quite clear by the following example: “Bill claims that 1+1=2. But he is a Christian, so his claim is false.” The same would hold if someone claimed that Dan Savage was wrong simply because of his beliefs.

Thus, Davis’ behavior, beliefs, and motivations are relevant to certain issues. However, they are not relevant to the truth (or falsity) of her claims regarding same-sex marriage.

 


Autonomous Weapons I: The Letter

On July 28, 2015 the Future of Life Institute released an open letter expressing opposition to the development of autonomous weapons. Although the name of the organization sounds like one I would use as a cover for an evil, world-ending cult in a Call of Cthulhu campaign, I am willing to accept that this group is sincere in its professed values. While I do respect their position on the issue, I believe that they are mistaken. I will assess and reply to the arguments in the letter.

As the letter notes, an autonomous weapon is capable of selecting and engaging targets without human intervention. An excellent science fiction example of such a weapon is the claw of Philip K. Dick’s classic “Second Variety” (a must read for anyone interested in the robopocalypse). A real world example of such a weapon, albeit a stupid one, is the land mine—they are placed and then engage automatically.

The first main argument presented in the letter is essentially a proliferation argument. If a major power pushes AI development, the other powers will also do so, creating an arms race. This will lead to the development of cheap, easy to mass-produce AI weapons. These weapons, it is claimed, will end up being acquired by terrorists, warlords, and dictators. These evil people will use these weapons for assassinations, destabilization, oppression and ethnic cleansing. That is, for what these evil people already use existing weapons to do quite effectively. This raises the obvious concern about whether or not autonomous weapons would actually have a significant impact in these areas.

The authors of the letter do have a reasonable point: as science fiction stories have long pointed out, killer robots tend to simply obey orders and they can (at least in fiction) be extremely effective. However, history has shown that terrorists, warlords, and dictators rarely have trouble finding humans who are willing to commit acts of incredible evil. Humans are also quite good at these sorts of things, and although killer robots are awesomely competent in fiction, it remains to be seen whether they will be better than humans in the real world—especially in the case of the cheap, mass-produced weapons in question.

That said, it is reasonable to be concerned that a small group or individual could buy a cheap robot army when they would otherwise not be able to put together a human force. These “Walmart” warlords could be a real threat in the future—although small groups and individuals can already do considerable damage with existing technology, such as homemade bombs. They can also easily create weaponized versions of non-combat technology, such as civilian drones and autonomous cars—so even if robotic weapons are not manufactured, enterprising terrorists and warlords will build their own. Think, for example, of a self-driving car equipped with machine guns or just loaded with explosives.

A reasonable reply is that the warlords, terrorists and dictators would have a harder time of it without cheap, off the shelf robotic weapons. This, it could be argued, would make the proposed ban on autonomous weapons worthwhile on utilitarian grounds: it would result in fewer deaths and less oppression.

The authors then claim that just as chemists and biologists are generally not in favor of creating chemical or biological weapons, most researchers in AI do not want to design AI weapons. They do argue that the creation of AI weapons could create a backlash against AI in general, which has the potential to do considerable good (although there are those who are convinced that even non-weapon AIs will wipe out humanity).

The authors do have a reasonable point here—members of the public do often panic over technology in ways that can impede the public good. One example is in regards to vaccines and the anti-vaccination movement. Another example is the panic over GMOs that is having some negative impact on the development of improved crops. But, as these two examples show, backlash against technology is not limited to weapons, so the AI backlash could arise from any AI technology and for no rational reason. A movement might arise, for example, against autonomous cars. Interestingly, military use of technology seems to rarely create backlash from the public—people do not refuse to fly in planes because the military uses them to kill people. Most people also love GPS, which was developed for military use.

The authors note that chemists, biologists and physicists have supported bans on weapons in their fields. This might be aimed at attempting to establish an analogy between AI researchers and other researchers, perhaps to try to show these researchers that it is a common practice to be in favor of bans against weapons in one’s area of study. Or, as some have suggested, the letter might be making an analogy between autonomous weapons and weapons of mass destruction (biological, chemical and nuclear weapons).

One clear problem with the analogy is that biological, chemical and nuclear weapons tend to be the opposite of robotic smart weapons: they “target” everyone without any discrimination. Nerve gas, for example, injures or kills everyone. A nuclear bomb also kills or wounds everyone in the area of effect. While AI weapons could carry nuclear, biological or chemical payloads and they could be set to simply kill everyone, this lack of discrimination and WMD nature is not inherent to autonomous weapons. In contrast, most proposed autonomous weapons seem intended to be very precise and discriminating in their killing. After all, if the goal is mass destruction, there is already the well-established arsenal of biological, chemical and nuclear weapons. Terrorists, warlords and dictators often have no problems using WMDs already and AI weapons would not seem to significantly increase their capabilities.

In my next essay on this subject, I will argue in favor of AI weapons.

 


ISIS & Rape

Looked at in the abstract, ISIS seems to be another experiment in the limits of human evil, addressing the question of how bad people can become before they are unable to function as social beings. While ISIS is well known for its theologically justified murder and destruction, it has now become known for its theologically justified slavery and rape.

While I am not a scholar of religion, it is quite evident that scriptural justifications of slavery and rape exist and require little in the way of interpretation. In this, Islamic scripture is similar to the bible—this book also contains rules about the practice of slavery and guidelines regarding the proper practice of rape. Not surprisingly, mainstream religious scholars of Islam and Christianity tend to argue that these aspects of scripture no longer apply or that they can be interpreted in ways that do not warrant slavery or rape. Opponents of these faiths tend to argue that the mainstream scholars are mistaken and that the wicked behavior enjoined in such specific passages expresses the true principles of the faith.

Disputes over specific passages lead to the broader debate about the true tenets of a faith and what it is to be a true member of that faith. To use a current example, opponents of Islam often claim that Islam is inherently violent and that the terrorists exemplify the true members of Islam. Likewise, some who are hostile to Christianity claim that it is a hateful religion and point to Christian extremists, such as the “God Hates Fags” Westboro Baptist Church, as exemplars of true Christianity. This is a rather difficult and controversial matter and one I have addressed in other essays.

A reasonable case can be made that slavery and rape are not in accord with Islam, just as a reasonable case can be made that slavery and rape are not in accord with Christianity. As noted above, it can be argued that times have changed, that the texts do not truly justify the practices and so on. However, these passages remain and can be pointed to as theological evidence in favor of the religious legitimacy of these practices. The practice of being selective about scripture is indeed a common one and people routinely focus on passages they like while ignoring passages that they do not like. This selectivity is, not surprisingly, most often used to “justify” prejudice, hatred and misdeeds. Horribly, ISIS does indeed have textual support, however controversial it might be with mainstream Islamic thinkers. That, I think, cannot be disputed.

ISIS members not only claim that slavery and rape are acceptable, they go so far as to claim that rape is pleasing to God. According to Rukmini Callimachi’s article in the New York Times, ISIS rapists pray before raping, rape, and then pray after raping. They are not praying for forgiveness—the rape is part of the religious ritual that is supposed to please God.

The vast majority of monotheists would certainly be horrified by this and would assert that God is not pleased by rape (despite textual support to the contrary). Being in favor of rape is certainly inconsistent with the philosophical conception of God as an all good being. However, there is the general problem of sorting out what God finds pleasing and what He condemns. In the case of human authorities it is generally easy to sort out what pleases them and what they condemn: they act to support and encourage what pleases them and act to discourage, prevent and punish what they condemn. If God exists, He certainly is allowing ISIS to do as it will—He never acts to stop them or even to send a clear sign that He condemns their deeds. But, of course, God now seems to follow the same policy as Star Fleet’s Prime Directive: He never interferes or makes His presence known.

The ISIS horror is yet another series of examples in the long-standing problem of evil—if God is all-powerful, all-knowing and good, then there should be no evil. But, since ISIS is freely doing what it does, it would seem to follow that God is lacking in some respect, that He does not exist, or that He, as ISIS claims, is pleased by the rape of children.

Not surprisingly, religion is not particularly helpful here—while scripture and interpretations of scripture can be used to condemn ISIS, scripture can also be used to support them in their wickedness. God, as usual, is not getting involved, so we do not know what He really thinks. So, it would seem to be up to human morality to settle this matter.

While there is considerable dispute about morality, the evil of rape and slavery certainly seem to be well-established. It can be noted that moral arguments have been advanced in favor of slavery, usually on the grounds of alleged superiority. However, these moral arguments certainly seem to have been adequately refuted. There are far fewer moral arguments in defense of rape, which is hardly surprising. However, these also seem to have been effectively refuted. In any case, I would contend that the burden of proof rests on those who would claim that slavery or rape are morally acceptable and invite readers to advance such arguments for due consideration.

Moving away from morality, there are also practical matters. ISIS does have a clear reason to embrace its theology of rape: as was argued by Rukmini Callimachi, it is a powerful recruiting tool. ISIS offers men a group in which killing, destruction and rape are not only tolerated but praised as being pleasing to God—the ultimate endorsement. While there are people who do not feel any need to justify their evil, even very wicked people often still want to believe that their terrible crimes are warranted or even laudable. As such, ISIS has considerable attraction to those who wish to do evil.

Accepting this theology of slavery and rape is not without negative consequences for recruiting—while there are many who find it appealing, there are certainly many more who find it appalling. Some ISIS supporters have endeavored to deny that ISIS has embraced this theology of rape and slavery—even they recognize some moral limits. Other supporters have not been dismayed by these revelations and perhaps even approve. Whether this theology of rape and slavery benefits ISIS more than it harms it will depend largely on the moral character of its potential recruits and supporters. I certainly hope that this is a line that many are not willing to cross, thus cutting into ISIS’ potential manpower and financial support. What impact this has on ISIS’ support will certainly reveal much about the character of their supporters—do they have some moral limits?

 


The Lion, the HitchBOT and the Fetus

After Cecil the Lion was shot, the internet erupted in righteous fury against the killer. Not everyone was part of this eruption and some folks argued against feeling bad for Cecil—some accusing the mourners of being phonies and pointing out that lions kill people. What really caught my attention, however, was the use of a common tactic—to “refute” those condemning the killing of Cecil by asserting that these “lion lovers” do not get equally upset about the fetuses killed in abortions.

When HitchBOT was destroyed, a similar sort of response was made—in fact, when I have written about ethics and robots (or robot-like things) I have been subject to criticism on the same grounds: it is claimed that I value robots more than fetuses and presumably I have thus made some sort of error in my arguments about robots.

Since I find this tactic interesting and have been its target, I thought it would be worth my while to examine it in a reasonable and (hopefully) fair way.

One way to look at this approach is to take it as the use of the Consistent Application method, which is as follows. A moral principle is consistently applied when it is applied in the same way to similar beings in similar circumstances. Inconsistent application is a problem because it violates three commonly accepted moral assumptions: equality, impartiality and relevant difference.

Equality is the assumption that those that are moral equals must be treated as such. It also requires that those that are not morally equal be treated differently.

Impartiality is the assumption that moral principles must not be applied with partiality. Inconsistent application would involve non-impartial application.

Relevant difference is a common moral assumption. It is the view that different treatment must be justified by relevant differences. What counts as a relevant difference in particular cases can be a matter of great controversy. For example, while many people do not think that gender is a relevant difference in terms of how people should be treated other people think it is very important. This assumption requires that principles be applied consistently.

The method of Consistent Application involves showing that a principle or standard has been applied differently in situations that are not relevantly different. This allows one to conclude that the application is inconsistent, which is generally regarded as a problem. The general form is as follows:

Step 1: Show that a principle/standard has been applied differently in situations that are not adequately different.

Step 2: Conclude that the principle has been applied inconsistently.

Step 3 (Optional): Require that the principle be applied consistently.

Applying this method often requires determining the principle the person or group is using. Unfortunately, people are often not clear about which principle they are actually using. In general, people tend to just make moral assertions and leave it to others to guess what their principles might be. In some cases, it is likely that people are not even aware of the principles they are appealing to when making moral claims.

Turning now to the cases of the lion, the HitchBOT and the fetus, the method of Consistent Application could be used as follows:

Step 1: Those who are outraged at the killing of the lion are using the principle that the killing of living things is wrong. Those outraged at the destruction of HitchBOT are using the principle that helpless things should not be destroyed. These people are not outraged by abortions in general or the Planned Parenthood abortions in particular.

Step 2: The lion and HitchBOT mourners are not being consistent in their application of the principle since fetuses are helpless (like HitchBOT) and living things (like Cecil the lion).

Step 3 (Optional): Those mourning for Cecil and HitchBOT should mourn for the fetuses and oppose abortion in general and Planned Parenthood in particular.

This sort of use of Consistent Application is quite appealing and I routinely use the method myself. For example, I have argued (in a reverse of this situation) that people who are anti-abortion should also be anti-hunting and that people who are fine with hunting should also be morally okay with abortion.

As with any method of arguing, there are counter methods. In the case of this method, there are three general reasonable responses. The first is to admit the inconsistency and stop applying the principle in an inconsistent manner. This obviously does not defend against the charge but can be an honest reply. People, as might be imagined, rarely take this option.

A second way to reply and one that is an actual defense is to dissolve the inconsistency by showing that the alleged inconsistency is merely apparent. The primary way to do this is by showing that there is a relevant difference in the situation. For example, someone who wants to be morally opposed to the shooting of Cecil while being morally tolerant of abortions could argue that the adult lion has a moral status different from the fetus—one common approach is to note the relation of the fetus to the woman and how a lion is an independent entity. The challenge lies in making a case for the relevance of the difference.

A third way to reply is to reject the attributed principle. In the situation at hand, the assumption is that a person is against killing the lion simply because it is alive. However, that might not be the principle the person is, in fact, using. His principle might be based on the suffering of a conscious being and not on mere life. In this case, the person would be consistent in his application.

Naturally enough, the “new” principle is still subject to evaluation. For example, it could be argued the suffering principle is wrong and that the life principle should be accepted instead. In any case, this method is not an automatic “win.”

An alternative interpretation of this tactic is to regard it as an ad hominem. An ad hominem is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of, or the person presenting, the claim or argument. Typically, this fallacy involves two steps. First, an attack is made against the character of the person making the claim, her circumstances, or her actions (or the character, circumstances, or actions of the person reporting the claim). Second, this attack is taken to be evidence against the claim or argument the person in question is making (or presenting). This type of “argument” has the following form:

  1. Person A makes claim X.
  2. Person B makes an attack on person A.
  3. Therefore A’s claim is false.

The reason why an ad hominem (of any kind) is a fallacy is that the character, circumstances, or actions of a person do not (in most cases) have a bearing on the truth or falsity of the claim being made (or the quality of the argument being made).

In the case of the lion, the HitchBOT and the fetus, the reasoning can be seen as follows:

  1. Person A claims that killing Cecil was wrong or that destroying HitchBOT was wrong.
  2. Person B notes that A does not condemn abortions in general or Planned Parenthood’s abortions in particular.
  3. Therefore A is wrong about Cecil or HitchBOT.

Obviously enough, a person’s view of abortion does not prove or disprove her view about the ethics of the killing of Cecil or HitchBOT (although a person can, of course, be engaged in inconsistency or other errors—but these are rather different matters).

A third alternative is that the remarks are not meant as an argument at all, either the reasonable application of a Consistent Application criticism or the unreasonable attack of an ad hominem. In this case, the point is to assert that the lion lovers and bot buddies are awful people or, at best, misguided.

The gist of the tactic is, presumably, to make these people seem bad by presenting a contrast: these lion lovers and bot buddies are broken up about lions and trashcans, but do not care about fetuses—what awful people they are.

One clear point of concern is that moral concern is not a zero-sum game. That is, regarding the killing of Cecil as wrong and being upset about it does not entail that a person thus cares less (or not at all) about fetuses. After all, people do not just get a few “moral tokens” to place such that being concerned about one misdeed entails they must be unable to be concerned about another. Put directly, a person can condemn the killing of Cecil and also condemn abortion.

The obvious response is that there are people who are known to condemn the killing of Cecil or the destruction of HitchBOT and also known to be pro-choice. These people, it can be claimed, are morally awful. The equally obvious counter is that while it is easy to claim such people are morally awful, the challenge lies in showing that they are actually awful, that is, that their position on abortion is morally wrong. Noting that they are against lion killing or bot bashing and pro-choice does not show they are in error—although, as noted above, they could be challenged on the grounds of consistency. But this requires laying out an argument rather than merely juxtaposing their views on these issues. This version of the tactic simply amounts to asserting or implying that there is something wrong with the person because one disagrees with that person. But a person’s thinking that hunting lions or bashing bots is okay and that abortion is wrong does not prove that the opposing view is in error. It just states the disagreement.

Since the principle of charity requires reconstructing and interpreting arguments in the best possible way, I endeavor to cast this sort of criticism as a Consistent Application attack rather than the other two. This approach is respectful and, most importantly, helps avoid creating a straw man of the opposition.


HitchBOT & Kant

Dr. Frauke Zeller and Dr. David Smith created HitchBOT (essentially a solar powered iPhone in an anthropomorphic shell) and sent him on a trip to explore the USA on July 17, 2015. HitchBOT had previously journeyed successfully across Canada and Germany. The experiment was aimed at seeing how humans would interact with the “robot.” He lasted about two weeks in the United States, meeting his end in Philadelphia. The exact details of his destruction (and the theft of the iPhone) are not currently known, although the last people known to be with HitchBOT posted what seems to be faked “surveillance camera” video of HitchBOT’s demise. This serves to support the plausible claim that the internet eventually ruins everything it touches.

The experiment was certainly both innovative and interesting. It also generated questions about what the fate of HitchBOT says about us. We do, of course, already know a great deal about us: we do awful things to each other, so it is hardly surprising that someone would do something awful to the HitchBOT. People are killed every day in the United States, vandalism occurs regularly and the theft of technology is routine—thus it is no surprise that HitchBOT came to a bad end. In some ways, it was impressive that he made it as far as he did.

While HitchBOT seems to have met his untimely doom at the hands of someone awful, what is most interesting is how well HitchBOT was treated. After all, he was essentially an iPhone in a shell that was being transported about by random people.

One reason that HitchBOT was well treated and transported about by people is no doubt because it fits into the travelling gnome tradition. For those not familiar with the travelling gnome prank, it involves “stealing” a lawn gnome and then sending the owner photographs of the gnome from various places. The gnome is then returned (at least by nice pranksters). HitchBOT is a rather more elaborate version of the travelling gnome and, obviously, differs from the classic travelling gnome in that the owners sent HitchBOT on his fatal adventure. People, perhaps, responded negatively to the destruction of HitchBOT because it broke the rules of the travelling gnome game—the gnome is supposed to roam and make its way safely back home.

A second reason for HitchBOT’s positive adventures (and perhaps also his negative adventure) is that he became a minor internet celebrity. Since celebrity status, like moth dust, can rub off onto those who have close contact it is not surprising that people wanted to spend time with HitchBOT and post photos and videos of their adventures with the iPhone in a trash can. On the dark side, destroying something like HitchBOT is also a way to gain some fame.

A third reason, which is probably more debatable, is that HitchBOT was given a human shape, a cute name and a non-threatening appearance, and these tend to incline people to react positively. Natural selection has probably favored humans who are generally friendly to other humans, and this presumably extends to things that resemble humans. There is probably also some hardwiring for liking cute things, which causes humans to generally like things like young creatures and cute stuffed animals. HitchBOT was also given a social media personality by those conducting the experiment, which probably influenced people into feeling that it had a personality of its own—even though they knew better.

Seeing a busted up HitchBOT, which has an anthropomorphic form, presumably triggers a response similar to (but rather weaker than) what a sane human would have to seeing the busted up remains of a fellow human.

While some people were rather upset by the destruction of HitchBOT, others have claimed that it was literally “a pile of trash that got what it deserved.” A more moderate position is that while it was unfortunate that HitchBOT was busted up, it is unreasonable to be overly concerned by this act of vandalism because HitchBOT was just an iPhone in a fairly cheap shell. As such, while it is fine to condemn the destruction as vandalism, theft and the wrecking of a fun experiment, it is unreasonable to see the matter as actually being important. After all, there are far more horrible things to be concerned about, such as the usual murdering of actual humans.

My view is that the moderate position is quite reasonable: it is too bad HitchBOT was vandalized, but it was just an iPhone in a shell. As such, its destruction is not a matter of great concern. That said, the way HitchBOT was treated is still morally significant. In support of this, I turn to what has become my stock argument in regards to the ethics of treating entities that lack moral status. This argument is stolen from Kant and is a modification of his argument regarding the treatment of animals.

Kant argues that we should treat animals well despite his view that animals have the same moral status as objects. Here is how he does it (or tries to do it).

While Kant is not willing to accept that we have any direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing X would obligate us to that human, then an animal doing X would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in his old age.

While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to shoot the dog?

Kant’s answer seems to be rather consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will likely be damaged. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.

Interestingly enough, Kant discusses how people develop cruelty—they often begin with animals and then work up to harming human beings. As I point out to my students, Kant seems to have anticipated the psychological devolution of serial killers.

Kant goes beyond merely enjoining us to not be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being rather gentle with a worm he found. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are essentially practice for us: how we treat them is training for how we will treat human beings.

Being an iPhone in a cheap shell, HitchBOT obviously had the moral status of an object and not that of a person. He did not feel or think, and the positive feelings people had towards him were due to his appearance (cute and vaguely human) and the way those running the experiment served as his personality via social media. He was, in many ways, a virtual person—or at least the manufactured illusion of a person.

Given the manufactured pseudo-personhood of HitchBOT, it could be taken as being comparable to an animal, at least in Kant’s view. After all, animals are mere objects and have no moral status of their own. Likewise for HitchBOT. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well. Thus, a key matter to settle is whether HitchBOT was more like an animal or more like a stone—at least in regards to the matter at hand.

If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how such treatment affects the behavior of the person engaging in said behavior. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog. This should also extend to HitchBOT. For example, if engaging in certain activities with a HitchBOT would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with HitchBOT would make a person more inclined to be kind to other rational beings, then the person should engage in that behavior.

While the result of interactions with the HitchBOT would need to be properly studied, it makes intuitive sense that being “nice” to the HitchBOT would help incline people to be somewhat nicer to others (much along the lines of how children are encouraged to play nicely with their stuffed animals). It also makes intuitive sense that being “mean” to HitchBOT would incline people to be somewhat less nice to others. Naturally, people would also tend to respond to HitchBOT based on whether they already tend to be nice or not. As such, it is actually reasonable to praise nice behavior towards HitchBOT and condemn bad behavior—after all, it was a surrogate for a person. But, obviously, not a person.
