Tag Archives: morality

Kant & Economic Justice

Immanuel Kant, Prussian philosopher. (Photo credit: Wikipedia)

One of the basic concerns in ethics is the matter of how people should be treated. This is often formulated in terms of our obligations to other people, and the question is “what, if anything, do we owe other people?” While it does seem that some would like to exclude the economic realm from the realm of ethics, the burden of proof would rest on those who would claim that economics deserves a special exemption from ethics. Such an argument could, of course, be attempted. However, since this is a brief essay, I will start with the assumption that economic activity is not exempt from morality.

While I subscribe to virtue theory as my main ethics, I do find Kant’s ethics both appealing and interesting. In regards to how we should treat others, Kant takes as foundational that “rational nature exists as an end in itself.”

It is reasonable to inquire why this should be accepted. Kant’s reasoning certainly seems sensible enough. He notes that “a man necessarily conceives his own existence as such” and this applies to all rational beings. That is, Kant claims that a rational being sees itself as being an end, rather than a thing to be used as a means to an end.  So, for example, I see myself as a person who is an end and not as a mere thing that exists to serve the ends of others.

Of course, the mere fact that I see myself as an end would not seem to require that I extend this to other rational beings (that is, other people). After all, I could apparently regard myself as an end and regard others as means to my ends—to be used for my profit as, for example, underpaid workers or slaves.

However, Kant claims that I must regard other rational beings as ends as well. The reason is fairly straightforward and is a matter of consistency: if I am an end rather than a means because I am a rational being, then consistency requires that I accept that other rational beings are ends as well. After all, if being a rational being makes me an end, it would do the same for others. Naturally, it could be argued that there is a relevant difference between myself and other rational beings that would warrant my treating them as means only and not as ends. People have, obviously enough, endeavored to justify treating other people as things. However, there seems to be no principled way to insist on my own status as an end while denying the same to other rational beings.

From this, Kant derives his practical imperative: “so act as to treat humanity, whether in thine own person or in that of any other, in every case as an end withal, never as means only.” This imperative does not entail that I cannot ever treat a person as a means—that is allowed, provided I do not treat the person as a means only. So, for example, I would be morally forbidden from being a pimp who uses women as mere means of revenue. I would, however, not be forbidden from having someone check me out at the grocery store—provided that I treated the person as a person and not a mere means.

One obvious challenge is sorting out what it is to treat a person as an end as opposed to just a means to an end. That is, the problem is figuring out when a person is being treated as a mere means and thus the action would be immoral.

Interestingly enough, many economic relationships would seem to clearly violate Kant’s imperative in that they treat people as mere means and not at all as ends. To use the obvious example, if an employer treats her employees merely as means to making a profit and does not treat them as ends in themselves, then she is acting immorally by Kant’s standard. After all, being an employee does not rob a person of personhood.

One obvious reply is to question my starting assumption, namely that economics is not exempt from ethics. It could be argued that the relationship between employer and employee is purely economic and only economic considerations matter. That is, the workers are to be regarded as means to profit and treated in accord with this—even if doing so means treating them as things rather than persons. The challenge is, of course, to show that the economic realm grants a special exemption in regards to ethics. Of course, if it does this, then the exemption would presumably be a general one. So, for example, people who decided to take money from the rich at gunpoint would be exempt from ethics as well. After all, if everyone is a means in economics, then the rich are just as much means as employees and if economic coercion against people is acceptable, then so too is coercion via firearms.

Another obvious reply is to contend that might makes right. That is, the employer has the power and owes nothing to the employees beyond what they can force him to provide. This would make economics rather like the state of nature—where, as Hobbes said, “profit is the measure of right.” Of course, this leads to the same problem as the previous reply: if economics is a matter of might making right, then people have the same right to use might against employers and other folks—that is, the state of nature applies to all.

 


Owning Intelligent Machines

While truly intelligent machines are still in the realm of science fiction, it is worth considering the ethics of owning them. After all, it seems likely that we will eventually develop such machines and it seems wise to think about how we should treat them before we actually make them.

While it might be tempting to divide beings into two clear categories of those it is morally permissible to own (like shoes) and those that are clearly morally impermissible to own (people), there are clearly various degrees of ownership in regards to ethics. To use the obvious example, I am considered the owner of my husky, Isis. However, I obviously do not own her in the same way that I own the apple in my fridge or the keyboard at my desk. I can eat the apple and smash the keyboard if I wish and neither act is morally impermissible. However, I should not eat or smash Isis—she has a moral status that seems to allow her to be owned but does not grant her owner the right to eat or harm her. I will note that there are those who would argue that animals should not be owned at all and also those who would argue that a person should have the moral right to eat or harm her pets. Fortunately, my point here is a fairly non-controversial one, namely that it seems reasonable to regard ownership as possessing degrees.

Assuming that ownership admits of degrees in this regard, it makes sense to base the degree of ownership on the moral status of the entity that is owned. It also seems reasonable to accept that there are qualities that grant a being the status that morally forbids ownership. In general, it is assumed that persons have that status—that it is morally impermissible to own people. Obviously, it has been legal to own people (whether the people in question are actual people or corporations) and there are those who think that owning other people is just fine. However, I will assume that there are qualities that provide a moral ground for making ownership impermissible and that people have those qualities. This can, of course, be debated—although I suspect few would argue that they themselves should be owned.

Given these assumptions, the key matter here is sorting out the sort of status that intelligent machines should possess in regards to ownership. This involves considering the sort of qualities that intelligent machines could possess and the relevance of these qualities to ownership.

One obvious objection to intelligent machines having any moral status is the usual objection that they are, obviously, machines rather than organic beings. The easy and obvious reply to this objection is that this is mere organicism—which is analogous to a white person saying blacks can be owned as slaves because they are not white.

Now, if it could be shown that a machine cannot have qualities that give it the needed moral status, then that would be another matter. For example, philosophers have argued that matter cannot think and if this is the case, then actual intelligent machines would be impossible. However, we cannot assume a priori that machines cannot have such a status merely because they are machines. After all, if certain philosophers and scientists are right, we are just organic machines and thus there would seem to be nothing impossible about thinking, feeling machines.

As a matter of practical ethics, I am inclined to set aside metaphysical speculation and go with a moral variation on the Cartesian/Turing test. The basic idea is that a machine should be granted a moral status comparable to organic beings that have the same observed capabilities. For example, a robot dog that acted like an organic dog would have the same status as an organic dog. It could be owned, but not tortured or smashed. The sort of robohusky I am envisioning is not one that merely looks like a husky and has some dog-like behavior, but one that would be fully like a dog in behavioral capabilities—that is, it would exhibit personality, loyalty, emotions and so on to a degree that it would pass as a real dog with humans if it were properly “disguised” as an organic dog. No doubt real dogs could smell the difference, but scent is not the foundation of moral status.

In terms of the main reason why a robohusky should get the same moral status as an organic husky, the answer is, oddly enough, a matter of ignorance. We would not know if the robohusky really had the metaphysical qualities of an actual husky that give an actual husky moral status. However, aside from the difference in parts, we would have no more reason to deny the robohusky moral status than to deny the husky moral status. After all, organic huskies might just be organic machines and it would be mere organicism to treat the robohusky as a mere thing while granting the organic husky a moral status. Thus, advanced robots with the capacities of higher animals should receive the same moral status as organic animals.

The same sort of reasoning would apply to robots that possess human qualities. If a robot had the capability to function analogously to a human being, then it should be granted the same status as a comparable human being. Assuming it is morally impermissible to own humans, it would be impermissible to own such robots. After all, it is not being made of meat that grants humans the status of being impermissible to own but our qualities. As such, a machine that had these qualities would be entitled to the same status. Except, of course, to those unable to get beyond their organic prejudices.

It can be objected that no machine could ever exhibit the qualities needed to have the same status as a human. The obvious reply is that if this is true, then we will never need to grant such status to a machine.

Another objection is that a human-like machine would need to be developed and built. The initial development will no doubt be very expensive and most likely done by a corporation or university. It can be argued that a corporation would have the right to make a profit off the development and construction of such human-like robots. After all, as the argument usually goes for such things, if a corporation was unable to profit from such things, they would have no incentive to develop such things. There is also the obvious matter of debt—the human-like robots would certainly seem to owe their creators for the cost of their creation.

While I am reasonably sure that those who actually develop the first human-like robots will get laws passed so they can own and sell them (just as slavery was made legal), it is possible to reply to this objection.

One obvious reply is to draw an analogy to slavery: just because a company would have to invest money in acquiring and maintaining slaves it does not follow that their expenditure of resources grants a right to own slaves. Likewise, the mere fact that a corporation or university spent a lot of money developing a human-like robot would not entail that they thus have a right to own it.

Another obvious reply to the matter of debt owed by the robots themselves is to draw an analogy to children: children are “built” within the mother and then raised by parents (or others) at great expense. While parents do have rights in regards to their children, they do not get the right of ownership. Likewise, robots that had the same qualities as humans should thus be regarded as children would be regarded and hence could not be owned.

It could be objected that the relationship between parents and children would be different than between corporation and robots. This is a matter worth considering and it might be possible to argue that a robot would need to work as an indentured servant to pay back the cost of its creation. Interestingly, arguments for this could probably also be used to allow corporations and other organizations to acquire children and raise them to be indentured servants (which is a theme that has been explored in science fiction). We do, after all, often treat humans worse than machines.


Programmed Consent

Science fiction is often rather good at predicting the future and it is not unreasonable to think that the intelligent machines of science fiction will someday be a reality. Since I have been writing about sexbots lately, I will use them to focus the discussion. However, what follows can also be applied, with some modification, to other sorts of intelligent machines.

Sexbots are, obviously enough, intended to provide sex. It is equally obvious that sex without consent is, by definition, rape. However, there is the question of whether a sexbot can be raped or not. Sorting this out requires considering the matter of consent in more depth.

When it is claimed that sex without consent is rape, one common assumption is that the victim of non-consensual sex is a being that could provide consent but did not. A violent sexual assault against a person would be an example of this as would, presumably, non-consensual sex with an unconscious person. However, a little reflection reveals that the capacity to provide consent is not always needed in order for rape to occur. In some cases, the being might be incapable of engaging in any form of consent. For example, a brain dead human cannot give consent, but presumably could still be raped. In other cases, the being might be incapable of the right sort of consent, yet still be a potential victim of rape. For example, it is commonly held that a child cannot properly consent to sex with an adult.

In other cases, a being that cannot give consent cannot be raped. To use an obvious example, a human can have sex with a sex-doll and the doll cannot consent. But, it is not the sort of entity that can be raped. After all, it lacks the status that would require consent. As such, rape (of a specific sort) could be defined in terms of non-consensual sex with a being whose status would require that consent be granted by the being in order for the sex to be morally acceptable. Naturally, I have not laid out all the fine details to create a necessary and sufficient account here—but that is not my goal nor what I need for my purpose in this essay. In regards to the main focus of this essay, the question would be whether or not a sexbot could be an entity that has a status that would require consent. That is, would buying (or renting) and using a sexbot for sex be rape?

Since the current sexbots are little more than advanced sex dolls, it seems reasonable to put them in the category of beings that lack this status. As such, a person can own and have sex with this sort of sexbot without it being rape (or slavery). After all, a mere object cannot be raped (or enslaved).

But, let a more advanced sort of sexbot be imagined—one that engages in complex behavior and can pass the Turing Test/Descartes Test. That is, a conversation with it would be indistinguishable from a conversation with a human. It could even be imagined that the sexbot appeared fully human, differing only in terms of its internal makeup (machine rather than organic). That is, unless someone cut the sexbot open, it would be indistinguishable from an organic person.

On the face of it (literally), we would seem to have as much reason to believe that such a sexbot would be a person as we do to believe that humans are people. After all, we judge humans to be people because of their behavior and a machine that behaved the same way would seem to deserve to be regarded as a person. As such, nonconsensual sex with a sexbot would be rape.

The obvious objection is that we know that a sexbot is a machine with a CPU rather than a brain and a mechanical pump rather than a heart. As such, one might argue, we know that the sexbot is just a machine that appears to be a person and is not a person. Thus, a real person could own a sexbot and have sex with it without it being rape—the sexbot is a thing and hence lacks the status that requires consent.

The obvious reply to this objection is that the same argument can be used in regards to organic humans. After all, if we know that a sexbot is just a machine, then we would also seem to know that we are just organic machines. After all, while cutting up a sexbot would reveal naught but machinery, cutting up a human reveals naught but guts and gore. As such, if we grant organic machines (that is, us) the status of persons, the same would have to be extended to similar beings, even if they are made out of different material. While various metaphysical arguments can be advanced regarding the soul, such metaphysical speculation provides a rather tenuous basis for distinguishing between meat people and machine people.

There is, it might be argued, still an out here. In his Hitchhiker’s Guide to the Galaxy series, Douglas Adams envisioned “an animal that actually wanted to be eaten and was capable of saying so clearly and distinctly.” A similar sort of thing could be done with sexbots: they could be programmed so that they always give consent to their owner, thus the moral concern would be neatly bypassed.

The obvious reply is that programmed consent is not consent. After all, consent would seem to require that the being has a choice: it can elect to refuse if it wants to. Being compelled to consent and being unable to dissent would obviously not be morally acceptable consent. In fact, it would not be consent at all. As such, programming sexbots in this manner would be immoral—it would make them into slaves and rape victims because they would be denied the capacity of choice.

One possible counter is that the fact that a sexbot can be programmed to give “consent” shows that it is (ironically) not the sort of being with a status that requires consent. While this has a certain appeal, consider the possibility that humans could be programmed to give “consent” via a bit of neurosurgery or by some sort of implant. If this could occur, then if programmed consent for sexbots is valid consent, then the same would have to apply to humans as well. This, of course, seems absurd. As such, a sexbot programmed for consent would not actually be consenting.

It would thus seem that if advanced sexbots were built, they should not be programmed to always consent. Also, there is the obvious moral problem with selling such sexbots, given that they would certainly seem to be people. It would thus seem that such sexbots should never be built—doing so would be immoral.

 


Sexbots are Persons, Too?

In my previous essays on sexbots I focused on versions that are clearly mere objects. If the sexbot is merely an object, then the morality of having sex with it is the same as having sex with any other object (such as a vibrator or sex doll). As such, a human could do anything to such a sexbot without the sexbot being wronged. This is because such sexbots would lack the moral status needed to be wronged. Obviously enough, the sexbots of the near future will be in the class of objects. However, science fiction has routinely featured intelligent, human-like robots (commonly known as androids). Intelligent beings, even artificial ones, would seem to have an excellent claim on being persons. In terms of sorting out when a robot should be treated as a person, the reasonable test is the Cartesian test. Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind is the ability to use true language. This notion was explicitly applied to machines by Alan Turing in his famous Turing test. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the test.

Crudely put, the idea is that if something talks, then it is reasonable to regard it as a person. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

How many different automata or moving machines can be made by the industry of man [...] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

While Descartes does not deeply explore the moral distinctions between beings that talk (that have minds) and those that merely make noises, it does seem reasonable to regard a being that talks as a person and to thus grant it the moral status that goes along with personhood. This, then, provides a means to judge whether an advanced sexbot is a person or not: if the sexbot talks, it is a person. If it is a mere automaton of the sort Descartes envisioned, then it is a thing and would presumably lack moral status.

Having sex with a sexbot that can pass the Cartesian test would certainly seem to be morally equivalent to having sex with a human person. As such, whether the sexbot freely consented or not would be a morally important matter. If intelligent robots were constructed as sex toys, this would be the moral equivalent of enslaving humans for the sex trade (which is, of course, actually done). If such sexbots were mistreated, this would also be morally on par with mistreating a human person.

It might be argued that an intelligent robot would not be morally on par with a human since it would still be a thing. However, aside from the fact that the robot would be a manufactured being and a human is (at least for now) a natural being, there would seem to be no relevant difference between them. The intelligence of the robot would seem to be what is important, not its physical composition.

It might also be argued that passing the Cartesian/Turing Test would not prove that a robot is self-aware and hence it would still be reasonable to hold that it is not a person. It would seem to be a person, but would merely be acting like a person. While this is a point well worth considering, the same sort of argument could be made about humans. Humans (sometimes) behave in an intelligent manner, but there is no way to determine if another human is actually self-aware. This is the classic problem of other minds: all I can do is see your behavior and infer that you are self-aware based on analogy to my own case. Hence, I do not know that you are aware since I cannot be you. From your perspective, the same is true about me. As such, if a robot acted in an intelligent manner, it would seem that it would have to be regarded as being a person on those grounds. To fail to do so would be a mere prejudice in favor of the organic.

In reply, some people believe that other people can be used as they see fit. Those who would use a human as a thing would see nothing wrong about using an intelligent robot as a mere thing.

The obvious response to this is to reverse the situation: no sane person would wish to be treated as a mere thing, and hence such people cannot consistently accept using other people in that manner. The other obvious reply is that such people are simply evil.

Those with religious inclinations would probably bring up the matter of the soul. But, the easy reply is that we would have as much evidence that robots have souls as we do for humans having souls. This is to say, no evidence at all.

One of the ironies of sexbots (or companionbots) is that the ideal is to make a product that is as like a human as possible. As such, to the degree that the ideal is reached, the “product” would be immoral to sell or own. This is a general problem for artificial intelligence: they are intended to be owned by people to do onerous tasks, but to the degree they are intelligent, they would be slaves.

It could be countered that it is better that evil humans abuse sexbots rather than other humans. However, it is not clear that would actually be a lesser evil—it would just be an evil against a synthetic person rather than an organic person.


Playing With Solipsism II: Ethics

Immanuel Kant, Prussian philosopher. (Photo credit: Wikipedia)

Very crudely put, solipsism is the philosophical view that only I exist. I played around a bit with it in an earlier post, and I thought I’d do so a bit more before putting it back in the attic.

One interesting way to object to solipsism is on moral grounds. After all, if I believe that only I exist, this belief could result in my behaving badly. Assuming that the world exists, people commonly endeavor to lower the moral status of beings they wish to make the targets of their misdeeds. For example, men who want to mistreat women often work hard to cast them as inferior. As another example, people who want to mistreat animals typically convince themselves that animals are inferior beings and hence can be mistreated. Solipsism would seem to present the ultimate reduction: everything other than me is nothing, which is presumably as “low” as it goes (unless there is some sort of negative or anti-existence). If I were to truly believe that other people and animals merely “exist” in my mind, then my treatment of them would seem to not matter at all. Since no one else exists, I cannot commit murder. Since the world is mine, I cannot commit theft. As might be imagined, such beliefs could open the door to wicked behavior.

One obvious reply is that if solipsism is true, then this would not be a problem. After all, acting badly towards others is only a problem if there are, in fact, others to act badly towards. If solipsism is true, what I do in the “real” world would seem to have no more moral significance than what I do in dreams or in video games. As such, it can be contended that the moral problem is only a problem if one believes that solipsism is false.

However, it can also be contended that the possibility that solipsism is wrong should be taken into account. That is, while I cannot disprove solipsism, I also cannot prove it. As such, the people I encounter might, in fact, be people. That possibility should be enough to require that I act as if they are people in terms of how I treat them. Thus, my skepticism about my solipsism would seem to lead me to act morally, even though it is possible that there is no one else to act morally towards. This, obviously enough, is analogous in some ways to concerns about the treatment of certain animals as well as the ethical matter of abortion. If I accept a principle that entities that might be people should be treated as people, this would seem to have some interesting implications. Of course, it could be argued that the possible people need to show the qualities that actual people would have if they existed as people.

It can also be contended that even if solipsism were true, my actions would still have moral significance. That is, I could still act in right or wrong ways.  One way to consider ethics in the context of solipsism is to consider ethics in the case of video games. Some years back I wrote “Saving Dogmeat” which addresses a similar concern, namely whether or not one can be good or bad in regards to video game characters. One way to look at solipsism is that the world is a video game that has one player, namely me.

One obvious way to develop this would be to develop a variant of Kantian ethics. While there would be no other rational beings, the Kantian view that only the good will is good would seem to allow for ethics in solipsism. While my willing could have no consequences for other beings (since there are none) I could presumably still will the good. Another way to do this is by using a modified version of virtue theory. While there would be no right or wrong targets of my feelings and actions (other than myself), there would still seem to be a way to discuss excess and deficiency. There are, of course, numerous other theories that could be modified for a world that is me. For example, utilitarianism would still work, although the only morally relevant being would be me. After all, my actions could make me unhappy or happy even though they are directed “towards” the contents of my own mind. For example, engaging in “kindness” could make me happier than engaging in “cruelty.” Of course, this might be better seen as a form of ethical egoism in the purest possible sense (being the only being, I would seem to be the only being that matters, assuming any being matters).

While this might seem a bit silly, solipsism does seem to provide an interesting context in which to discuss ethics. But, time to put solipsism back in the attic.


Is there an Obligation of Self-Defense?

Fight Club DVD (Photo credit: filmhirek)

It is generally accepted that people have a moral right to self-defense. That is, if someone is unjustly attacked or threatened, then it is morally acceptable for her to act in her own self-protection. While there are moral limits on the actions a person may take, violence is generally considered morally acceptable under the right conditions.

This right to self-defense does seem to provide a philosophical foundation for the right to the means of self-defense. After all, as Hobbes argued, a right without the means to exercise that right is effectively no right at all. Not surprisingly, I consider the right to own weapons to be grounded on the right of self-defense. However, my concern here is not with the right of self-defense. Rather, I will focus on the question of whether or not there is an obligation of self-defense.

The right to self-defense (if there is such a right) gives a person the liberty to protect herself. If it is only a liberty, then the person has the right to not act in self-defense and thus be a perfect victim. A person might, of course, elect to do so for practical reasons (perhaps to avoid a worse harm) or for moral reasons (perhaps from a commitment to pacifism). However, if there is an obligation of self-defense, then failing to act on this obligation would seem to be a moral failing. The obvious challenge is to show that there is such an obligation.

On the face of it, it would seem that self-defense is merely a liberty. However, some consideration of the matter will suggest that this is not so obvious.  In the Leviathan, Hobbes presents what he takes to be the Law of Nature (lex naturalis): “a precept or general rule, found by reason, that forbids a man to do what is destructive of his life or takes away the means of preserving it and to omit that by which he thinks it may be best preserved.” Hobbes goes on to note that “right consists in liberty to do or to forbear” and “law determines and binds.” If Hobbes is correct, then people would seem to have both a right and an obligation to self-defense.

John Locke and Thomas Aquinas also contend that life is to be preserved and if they are right, then this would seem to impose an obligation of self-defense. Of course, this notion could be countered by contending that all it requires is for a person to seek protection from possible threats and doing so could involve relying on the protection of others (typically the state) rather than one’s self. However, there are at least three arguments against this.

The first is a practical argument. While the modern Western state projects its coercive force and spying eyes into society, the state’s agents cannot (yet) observe all that occurs nor can they always be close at hand in times of danger. As such, relying solely on the state would seem to put a person at risk—after all, he would be helpless in the face of danger. If a person relies on other individuals, then unless she is guarded at all times, then she also faces the real risk of being a helpless victim. This would, at the very least, seem imprudent.

This practical argument can be used as the basis for a second, moral argument. If a person is morally obligated to preserve life (including her own) and the arms of others cannot be reliably depended on, then it would seem that she would have an obligation of self-defense.

The third argument is also a moral argument. One favorite joke of some folks who carry concealed weapons is to respond, when asked why they carry a gun, with the witty remark “because cops are too heavy.” While this is humor, it does point towards an important moral concern regarding relying on others.

A person who relies on the protection of others is expecting those people to risk being hurt or killed to protect her. In the case of those who are incapable of acting in effective self-defense, this can be a morally acceptable situation. After all, it is reasonable for infants and the badly injured to rely on the protection of others since they cannot act in their own defense.  However, a person who could be competent in self-defense but declines to do so in favor of expecting others to die for her would seem to be a morally selfish person. As such, it would seem that people have an obligation of self-defense—at least if they wish to avoid being parasites.

An obvious counter is that people do rely on others for self-defense. After all, civilians wisely allow the police and military to handle armed threats whenever possible. Since the police and military are armed and trained for such tasks, it makes sense practically and morally to rely on them.

However, as noted in the first argument, a person will not always be under the watchful protection of others. Even if others are available to risk themselves, there is still the moral concern about expecting others to take risks to protect one when one is not willing to do the same for oneself. That seems to be cowardice and selfishness and thus morally reprehensible. This is not, of course, to say that accepting the protection of the police and military is always a moral failing—however, a person must be willing to accept the obligation of self-defense and not rely entirely on others.

This raises the matter of the extent to which a person is obligated to be competent at self-defense and when it would be acceptable to rely on others in this matter. It would, of course, be an unreasonable expectation to morally require that people train for hours each day in self-defense. However, it does seem reasonable to expect that people become at least competent at protecting themselves, thus being able to at least act on the obligation of self-preservation with some chance of success. This obligation of self-preservation would also seem to obligate people to maintain a degree of physical fitness and health, but that is a matter for another time.


Do Dogs Have Morality?

A Good Dog or a Moral Dog?

The idea that morality has its foundations in biology is enjoying considerable current popularity, although the idea is not a new one. However, the current research is certainly something to be welcomed, if only because it might give us a better understanding of our fellow animals.

Being a philosopher and a long-time pet owner, I have sometimes wondered whether my pets (and other animals) have morality. This matter was easily settled in the case of cats: they have a morality, but they are evil. My best cats have been paragons of destruction, gladly throwing the claw into lesser beings and sweeping breakable items to the floor with feline glee. Lest anyone get the wrong idea, I really like cats—in part because they are so very evil in their own special ways. The matter of dogs and morality is rather more controversial. Given that all of ethics is controversial, this should hardly be a shock.

Being social animals that have been shaped and trained by humans for thousands of years, it would hardly be surprising that dogs exhibit behaviors that humans would regard as moral in nature. However, it is well known that people anthropomorphize their dogs and attribute to them qualities that they might not, in fact, possess. As such, this matter must be approached with due caution. To be fair, we also anthropomorphize each other and there is the classic philosophical problem of other minds—so it might be the case that neither dogs nor other people have morality because they lack minds. For the sake of the discussion I will set aside the extreme version of the problem of other minds and accept a lesser challenge. To be specific, I will attempt to make a plausible case for the notion that dogs have the faculties to possess morality.

While I will not commit to a specific morality here, I will note that for a creature to have morality it would seem to need certain mental faculties. These would seem to include cognitive abilities adequate for making moral choices and perhaps also emotional capabilities (if morality is more a matter of feeling than thinking).

While dogs are not as intelligent as humans (on average) and they do not use true language, they clearly have a fairly high degree of intelligence. This is perhaps most evident in the fact that they can be trained in very complex tasks and even in professions (such as serving as guide or police dogs). They also exhibit an exceptional understanding of human emotions and while they do not have language, they certainly can learn to understand verbal and gesture commands given by humans. Dogs also have an understanding of tokens and types. To be specific, they are quite good at recognizing individuals and also good at recognizing types of things. For example, a dog can distinguish its owner while also distinguishing humans from cats. As another example, my dogs have always been able to recognize any sort of automobile and seem to understand what they do—they are generally eager to jump aboard whether it is my pickup truck or someone else’s car. On the face of it, dogs seem to have the mental horsepower needed to engage in basic decision making.

When it comes to emotions, we have almost as much reason to believe that dogs feel and understand them as we do for humans having that ability. The main difference is that humans can talk (and lie) about how they feel; dogs can only observe and express emotions. Dogs clearly express anger, joy, fear and other emotions and seem to understand those emotions in other animals. This is shown by how dogs react to expression of emotion. For example, dogs seem to recognize when their owners are sad or angry and react accordingly. Thus, while dogs might lack all the emotional nuances of humans and the capacity to talk about them, they do seem to have the basic emotional capabilities that might be necessary for ethics.

Of course, showing that dogs have intelligence and emotions would not be enough to show that dogs have morality. What is needed is some reason to think that dogs use these capabilities to make moral decisions and engage in moral behavior.

Dogs are famous for possessing traits that are analogous to (or the same as) virtues such as loyalty, compassion and courage.  Of course, Kant recognized these traits but still claimed that dogs could not make moral judgments. As he saw it, dogs are not rational beings and do not act in accord with the law. But, roughly put, they seem to have an ersatz sort of ethics in that they can act in ways analogous to human virtue. While Kant does make an interesting case, there do seem to be some reasons to accept that dogs can engage in basic moral judgments. Naturally, since dogs do not write treatises on moral philosophy, I can only speculate on what is occurring in their minds (or brains). As noted above, there is always the risk of projecting human qualities onto dogs and, of course, they make this very easy to do.

One area that seems to have potential for showing that dogs have morality is the matter of property. While some might think that dogs regard whatever they can grab (be it food or toys) as their property, this is not always the case. While it seems true that some dogs are Hobbesian, this is also true of humans. Dogs, based on my decades of experience with them, seem to be capable of clearly grasping property. For example, my husky Isis has a large collection of toys that are her possessions. She reliably distinguishes between her toys and very similar items (such as shoes, clothing, sporting goods and so on) that do not belong to her. While I do not know for sure what happens in her mind, I do know that when I give her a toy and go through the “toy ritual” she gets it and seems to recognize that the toy is her property now. Items that are not given to her are apparently recognized as being someone else’s property and are not chewed upon or dragged outside. In the case of Isis, this extends (amazingly enough) even to food—anything handed to her or in her bowl is her food, anything else is not. Naturally, she will ask for donations, even when she could easily take the food. While other dogs have varying degrees of understanding of property and territory, they certainly seem to grasp this. Since the distinction between mine and not mine seems rather important in ethics, this suggests that dogs have some form of basic morality—at least enough to be capitalists.

Dogs, like many other animals, also have the capacity to express a willingness to trust and will engage in reprisals against other dogs that break trust. I often refer to this as “dog park justice” to other folks who are dog people.

When dogs get together in a dog park (or other setting) they will typically want to play with each other. Being social animals, dogs have various ways of signaling intent. In the case of play, they typically engage in “bows” (slapping their front paws on the ground and lowering their front while making distinctive sounds). Since dogs cannot talk, they have to “negotiate” in this manner, but the result seems similar to how humans make agreements to interact peacefully.

Interestingly, when a dog violates the rules of play (by engaging in actual violence against a playing dog) other dogs recognize this violation of trust—just as humans recognize someone who violates trust. Dogs will typically recognize a “bad dog” when it returns to the park and will avoid it, although dogs seem to be willing to forgive after a period of good behavior. An understanding of agreements and reprisals for violating them seems to show that dogs have at least a basic system of morality.

As a final point, dogs also engage in altruistic behavior—helping out other dogs, humans and even other animals. Stories of dogs risking their lives to save others from danger are common in the media and this suggests that dogs can make decisions that put themselves at risk for the well-being of others. This clearly suggests a basic canine morality and one that makes such dogs better than ethical egoists. This is why when I am asked whether I would choose to save my dog or a stranger, I would choose my dog: I know my dog is good, but statistically speaking a random stranger has probably done some bad things. Fortunately, my dog would save the stranger.


The Ethics of Genetic Extermination

Ochlerotatus notoscriptus, Tasmania, Australia

(Photo credit: Wikipedia)

While we consider ourselves to be the dominant species on the planet, we do face dangers from other species. While some of these species are large animals such as lions, tigers and bears, our greatest foes tend to be tiny. These include insects, bacteria and viruses.

While we have struggled, with some success, to eliminate various tiny threats, advances in technology and science have given us some new options. One of these is genetically modifying species so they cannot reproduce, thus resulting in their extermination. As might be suspected, insects such as disease-carrying mosquitoes are a prime target. One approach to wiping out mosquitoes is to genetically modify mosquito eggs so that the adults carry “extermination” genes. The adult males are released into the wild and reproduce with native females in the target area. The offspring then bear the modified gene, which causes the female mosquitoes to be unable to fly (they lack flight muscles). The males can operate normally and they continue to “infect” the local population until (in theory) it is exterminated. As might be imagined, this approach raises various ethical concerns.

One obvious point of concern is the matter of intentionally exterminating a species. On the face of it, such an action seems to be morally dubious. However, it does seem easy enough to counter this on utilitarian grounds. After all, if an organism (such as a mosquito) is harmful to humans and does not have an important role to play in the ecosystem, then its extermination would seem to be morally justified on the grounds that doing so would create more good than harm. Naturally, if a harmful species were also beneficial in other ways, then the matter would be rather more complicated and such extermination could be wrong on the grounds that it would do more harm than good.

The utilitarian approach can be countered by appealing to an alternative approach to ethics. For example, it could be argued that such extermination is simply wrong regardless of the beneficial consequences to humans. It can, however, be pointed out that species go extinct naturally and, as such, perhaps a case could be made that such exterminations are not inherently wrong. The obvious counter would be to point out that there is a significant moral difference between a species dying of natural causes and being destroyed. The distinction between killing and letting die comes to mind here.

I am inclined to accept that the extermination of a harmful species can be acceptable, provided that the benefits do, in fact, outweigh the damage done by exterminating the species. Getting rid of, for example, the HIV virus would seem to be morally acceptable. In the case of mosquitoes, the main concern would be the role of the mosquito in the ecosystem and the impact that its extermination would have. If, for example, the disease carrying mosquito was an invasive species and its elimination would not impact the ecosystem in a negative way, then it would seem to be acceptable to exterminate it. Naturally, if the extermination is local and the species remains elsewhere, then the ethics of the situation become far less problematic. After all, I have no moral objection to the extermination of the roaches, termites, fleas and other bugs that attempt to reside in my house—there are plenty that remain in the wild and they would pose a threat to the well-being of myself and my husky. Naturally, I would only accept the extermination of a species on very serious grounds, such as a clear danger presented to my species. Even then, it would be preferable to see if the extermination could be avoided.

A second point of concern involves the methodology. While humans have attempted to wipe out species by killing them in the old-fashioned ways (like poisons), the use of genetic modification could be morally significant.

There is, of course, the usual concern with “playing God” or tampering with nature. However, as is always pointed out, we routinely accept such tampering as morally acceptable in other areas. For example, by using artificial light, vaccines, surgery and such we are “playing God” and tampering with nature. As such, the idea that “playing God” is inherently wrong seems rather dubious. Rather, what is needed is to show that specific acts of “playing God” or tampering are wrong.

There is also the reasonable concern about unintended consequences, something that is not unknown in the attempts to exterminate species. For example, DDT had a host of undesirable effects. I do not, of course, think that modifying mosquitoes will create some sort of 1950s style mega-mosquitoes that will rampage across the land. However, there are reasonable grounds to be concerned that genetic modification might have unexpected and unpleasant results and this possibility should be seriously considered.

A final point I will address is a practical one, namely that even if a species is exterminated by genetic modification, another species might simply take its place. In the case of mosquitoes it seems likely that if one type of mosquito is wiped out, then another one will simply move into the vacated niche and the problem, such as a mosquito-transmitted illness, will return. The concern is, of course, that resources would have been expended and a species exterminated for nothing. Naturally, if there are good grounds to believe that the extermination would be effective and ethically acceptable, then this would be another matter.


God, Rape & Free Will

freewill.jpg (Photo credit: Thunderkiss59)

The stock problem of evil is that the existence of evil in the world is incompatible with the Philosophy 101 conception of God, namely that God is all good, all powerful and all knowing. After all, if God has these attributes, then He knows about all evil, should tolerate no evil and has the power to prevent evil. While some take the problem of evil to show that God does not exist, it can also be taken as showing that this conception of God is in error.

Not surprisingly, those who wish to accept the existence of this all good, all powerful and all-knowing deity have attempted various ways to respond to the problem of evil. One standard response is, of course, that God has granted us free will and this necessitates that He allow us to do evil things. This, it is claimed, gets God off the hook: since we are free to choose evil, God is not accountable for the evil we do.

In a previous essay I discussed Republican Richard Mourdock’s view that “Life is that gift from God. I think that even when life begins in that horrible situation of rape, that it is something God intended to happen.” In the course of that essay, I briefly discussed the matter of free will. In this essay I will expand on this matter.

For the sake of the discussion, I will assume that we have free will. Obviously, this can easily be disputed, but I am interested in seeing whether or not such free will can actually get God off the hook for the evil that occurs, such as rape and its consequences.

On the face of it, free will would seem to free God from being morally accountable for our choices. After all, if God does not compel or influence our choices and we are truly free to select between good and evil, then the responsibility of the choice would rest on the person making the decision. It should also be added that God would presumably also be excused for allowing evil choices—after all, in order for there to be truly free will in the context of morality there must be the capacity for choosing good or evil. Or so the stock arguments usually claim.

For the sake of the discussion I will also accept this second assumption, namely that free will gets God off the hook in regards to our choices. This does, of course, lead to an interesting question: does allowing free will also require that God allow the consequences of the evil choices to come to pass? That is, could God allow people moral autonomy in their choices, yet prevent their misdeeds from actually bearing their evil fruit?

One way to consider this matter is to take the view that free will requires that a person be able to make a moral decision and that this decision be either good or evil (or possibly neutral). After all, a moral choice must be a moral choice. On this approach, whether or not free will would be compatible with God preventing occurrences (like rape or pregnancy caused by rape) would seem to depend on what makes something good or evil.

There are, of course, a multitude of moral theories that address this matter. For the sake of brevity I will consider two: Kant’s view and the utilitarian view (as exemplified by John Stuart Mill).

Kant famously takes the view that “A good will is good not because of what it performs or effects, not by its aptness for the attainment of some proposed end, but simply by virtue of the volition—that is, it is good in itself, and considered by itself is to be esteemed much higher than all that can be brought about by it in favor of any inclination…Its usefulness or fruitlessness can neither add to nor take away anything from this value.”

For Kant, what makes a willing (decision) good or evil is contained in the act of willing itself. Hence, there would be no need to consider the consequences of an action stemming from a decision when determining the morality of the choice. An interesting illustration of this view can be found in Bioware’s Star Wars the Old Republic game. Players are often given a chance to select between light side (good) and dark side (evil) options, thus earning light side or dark side points which determine the moral alignment of the character. For example, a player might have to choose to kill or spare a defeated opponent.  Conveniently, the choices are labeled with symbols indicating whether a choice is light side or dark side—which would be very useful in real life.

If Kant’s view is correct, then God could allow the freedom of the will while also preventing evil choices from having any harmful consequences. For example, a person could freely chose to rape a woman and the moral choice would presumably be duly noted by God (in anticipation of judgment day). God could then simply prevent the rape from ever occurring—the rapist could, for example, stumble and fall while lunging towards his intended victim. As another example, a person could freely will the decision to murder someone, yet find that her gun fails to fire when aimed at the intended victim. In short, people could be free to make moral choices while at the same time being unable to actually bring those evil intentions into actuality. Thus, God could allow free will while also preventing anyone from being harmed.

It might be objected that God could not do this on the grounds that people would soon figure out that they could never actualize their evil decisions and hence people would (in general) stop making evil choices. That is, there would be a rather effective deterrent to evil choices, namely that they could never bear fruit, and this would rob people of their free will. For example, those who would otherwise decide to rape would not make that choice because they would know that their attempts to act on their decisions would be thwarted.

The obvious reply is that free will does not mean that a person gets what s/he wills—it merely means that the person is free to will. As such, people who want to rape could still will to rape and do so freely. They just would not be able to harm anyone.

It is, of course, obvious that this is not how the world works—people are able to do all sorts of misdeeds. However, since God could make the world work this way, this would suggest various possibilities such as God not existing or that God is not a Kantian. This leads me to the discussion of the utilitarian option.

On the stock utilitarian approach, the morality of an action depends on the consequences of said action. As Mill put it, “actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.” As such, the morality of a willing would not be determined by the willing but by the consequences of the action brought about by the willing in question.

If this is correct, then God would need to allow the consequences of the willing to occur in order for the willing to be good or evil (or neutral). After all, if the willing had no consequences then it would have no moral significance on a consequentialist view like utilitarianism. So, for example, if a person freely wills to rape a woman, then God must not intervene. Otherwise He would be interfering with what determines the ethics of the willing. As such, if God did not allow the rapist to act upon his willing, then the decision to rape would not be an evil decision. If it is assumed that free will is essential to God being able to judge people for their deeds and misdeeds, then He would have to allow misdeeds to bear fruit so that they would be, in fact, misdeeds. On the usual view, He then punishes or rewards people after they die.

One rather obvious problem with this approach is that an all-knowing God would know the consequences of an action even without allowing the action to take place. As such, God could allow people to will their misdeeds and then punish them for what the consequences would have been had they been able to act upon their intentions. After all, human justice punishes people even when they are prevented from committing their crimes. For example, someone who tries to murder another person is still justly punished even if she is prevented from succeeding.

It might be countered that God can only punish cases of actual evil rather than potential evil. That is, if the misdeed is prevented then it is not an actual misdeed and hence God cannot justly punish a person. On this view, God must allow rape in order to be able to toast rapists in Hell. This would, of course, require that God not consider an attempted evil deed as an evil deed. So, actual murder would be wrong, but attempted murder would not. This, of course, is rather contrary to human justice—but it could be claimed that human law and divine law are rather different. Obviously humans and God take very different approaches: we generally try to keep people from committing misdeeds whereas God apparently never does. Rather, He seems content to punish long after the fact—at least on the usual account of God.

 



Of Morals and Philosorabbits

Andrei and Leila were on a walk when they came upon a sign.

When the rabbits stopped for pause, Andrei was ready to opine:

“I do not know who made this ugly thing”, said Andrei (with quite a fuss).

“Who says we ought not walk on the grass? How come the grass is not for us?”

“I do not know,” Leila said, ready at his wing.

“I suppose that we could go ahead and ask the author of the thing.

…But I do not know the author’s name, address, or anything even close.

For all I know, the author may have been a god, a man, or maybe even a ghost.”

“That I doubt,” said Andrei. “And perhaps we do not even need to know,

who it was that wrote this sign and put it up for show.

The only thing we need is to know what makes it true.

And I say that — if the sign’s advice is correct — it is good for me and you.

Perhaps the grass is where the farmer hides his bombs and dynamite.

The sign is there to decrease our harm and increase our delight.”

Satisfied, the rabbits continued walking, and got further down the way.

As they walked, a brand new sign was by the road, looming large in the mid-day.

But lurking below the sign was a low-down dirty thief,

Eavesdropping upon the rabbits, prepared to give them grief.

“Consolation for my victims, yes — and hence, these signs are true;

they are well suited for the credulous, for idiots, and for buffoons.

But I am no fool, so I will keep acting in whatever way I think is best,

So I will ignore these quaint suggestions, and let loose upon the rest.”

“It is plain enough,” said Leila, “That your victims suffer.

They lose the things that they once had, and this makes their lives much tougher.”

Leila wavered then, haunted by his words.

“But I see your point — why should anyone be moral if being moral is for the birds?”

“Ah,” said Andrei, “Well, I guess it depends on who you are.

If you are an anti-social goon, these signs will not take you very far.

So whenever a psycho nutter says, “What is morality to me?”

They do not have a need for trust, so no answer can be gleaned.

But if our new friend has a social bent, we say another thing:

‘Your reason to not steal is that you are a social be-ing.’”

“Hmm,” said the thief. “Alright, I admit that there’s some reason not to steal.

I just don’t know why you believe reasons obscure less than they reveal.

But perhaps, to see my point, you’d best follow me further up the hill,

To meet a friend of mine, whose name is Jack-Bill.”

On and up the hill they went, the three marched all in a line,

And at the top they met a creature that made the rabbits want to hide.

The thing they met was an abomination, a fluffy monster with two heads,

And either head spoke the contrary of what its neighbor said.

Said the thief to the creature: “Hullo, Jack-Bill — what say you to stealing?”

“It makes me giddy,” said the head of Bill. “No! Never!” said Jack, “It leaves me reeling!”

“So you see,” said the thief, turning back to the troupe.

“All your talk of reason-this and reason-that? It’s just a lot of goop.

Despite their frightening looks, Jack-Bill is very nice.

We can call him “moral”, “pro-social”, or whatever you would like.

So moral claims aren’t really ‘true’ or ‘false’ when you get down to the brass tacks.

When we say a thing is moral we mean ‘hooray,’ and when immoral, ‘boo to that.’”

“Alright,” said Leila. “There’s no need to get too pensive –

Jack-Bill is incoherent, his capriciousness offensive.”

“Well, look,” said the thief, increasingly distressive.

“Like I said before, in words which you did not find impressive:

for me to say a moral claim is true, is to say to it that I consent.

For otherwise, I would have to defer to those whose brains are daft and bent.”

Andrei was quiet for a time, as if he were imitating mice:

and said, “That’s just fine, my new friend — but would you go to them for advice?

If not — if their lack of due deliberation makes for over-wrought demands –

Then they are not to be trusted, they can give you no commands.

So you trust yourself as final arbiter, when in the company of fools,

But that is only reason to make sensible friends, not to abandon all the rules.”

At this point, Jack-Bill roared as loud as seven oceans.

“I have ears, you know,” Bill said. “You hurt my emotions!”

Just then, the thief looked at Jack, waiting patient with a smirk.

Unexpectedly, Jack stared their way and said, “Actually, I agree with Bill — you’re all jerks.”

With that, they both breathed fireballs, and the others ran away.

Though later on, something happened in the coldest hours of the day…

Jack-Bill talked more to himself, exploring his sense of rational will,

And by degrees Jack-Bill split into two, creating Jack and Bill.

While before they had been united, when both indulged their inclinations,

Now, they found that talk in reasons made for healthier relations.

As time went on, it was not so hard for each to have their own perspectives,

Where earlier each was caught providing rudderless correctives.

The trio ran and ran, into the forest deep,

and along the way, as they ran, Andrei and Leila lost the thief.

Andrei was bewildered at the canopy, and shivered at spooky sounds,

While Leila (made of sturdy stuff) offered to look around.

Alas, getting lost herself, the dark and dank surrounded,

The owls screeched out in hoots of despair, her direction was confounded.

She looked to and fro, and everywhere, the world looked all the same,

On the left there was little light; and to the right, the same.

Presently, however, she glimpsed a burning light:

A fairy, bright and blue, kept darting in and out of sight.

“Psst,” said the fairy, with a no-nonsense business sense.

“If morals are just good advice, they only work between friends.

But out here in the darkness, the forest is unkind,

It picks off little strangers who are lost, unwary, and blind.

If moral claims are true, they would not be much good,

They would apply to your relations with your friend, but mean nothing in these woods.”

“Help me, then!” cried Leila. “We could really use a hand!”

“I would be glad to help,” the fairy replied. “My services are in high demand.”

“Indeed, for a low price of nine dollars and thirty cents,

My associates and I can get you out of this predicament.

So would you like to pay by cash, or cheque, or credit card?

Keep in mind, the offer is limited, so be sure to think fast and hard.”

“But I have no funds,” said Leila. “We’re lost and all alone!

Can’t you, from the kindness of your heart, just direct me to a phone?”

But no: the fairy receded back

into the inky black.

And Leila was left there waiting in the cold canopy,

And the leaves twisted back and forth, as if a dark conspiracy.

The dusk settled upon the land,

And two wayward rabbits sat alone, each waiting for the end.

And whatever happened next,

depends mostly on you.

What makes the fairy wrong?

What can the rabbits say or do?