Tag Archives: Ethics

Neutral Good

My previous essays on alignments have focused on the evil ones (lawful evil, neutral evil and chaotic evil). Patrick Lin requested this essay. He professes to be a devotee of Neutral Evil to such a degree that he regards being lumped in with Ayn Rand as an insult, presumably because he thinks she was too soft on the good.

In the Pathfinder version of the game, neutral good is characterized as follows:

A neutral good character is good, but not shackled by order. He sees good where he can, but knows evil can exist even in the most ordered place.

A neutral good character does anything he can, and works with anyone he can, for the greater good. Such a character is devoted to being good, and works in any way he can to achieve it. He may forgive an evil person if he thinks that person has reformed, and he believes that in everyone there is a little bit of good.

In a fantasy campaign realm, the player characters typically encounter neutral good types as allies who render aid and assistance. Even evil player characters are quite willing to accept the assistance of the neutral good, knowing that the neutral good types are more likely to try to persuade them to the side of good than smite them with righteous fury. Neutral good creatures are not very common in most fantasy worlds—good types tend to polarize towards law and chaos.

Not surprisingly, neutral good types are also not very common in the real world. A neutral good person has no special commitment to order or lack of order—what matters is the extent to which a specific order or lack of order contributes to the greater good. For those devoted to the preservation of order, or its destruction, this can be rather frustrating.

While the neutral evil person embraces the moral theory of ethical egoism (that each person should act solely in her self-interest), the neutral good person embraces altruism—the moral view that each person should act in the interest of others. In more informal terms, the neutral good person is not selfish. It is not uncommon for the neutral good position to be portrayed as stupidly altruistic. This stupid altruism is usually cast in terms of the altruist sacrificing everything for the sake of others or being willing to help anyone, regardless of who the person is or what she might be doing. While a neutral good person is willing to sacrifice for others and willing to help people, being neutral good does not require a person to be unwise or stupid. So, a person can be neutral good and still take into account her own needs. After all, the neutral good person considers the interests of everyone and she is part of that everyone. A person can also be selective in her assistance and still be neutral good. For example, helping an evil person do evil things would not be a good thing and hence a neutral good person would not be obligated to help—and would probably oppose the evil person.

Since a neutral good person works for the greater good, the moral theory of utilitarianism tends to fit this alignment. For the utilitarian, actions are good to the degree that they promote utility (what is of value) and bad to the degree that they do the opposite. Classic utilitarianism (that put forth by J.S. Mill) takes happiness to be good and actions are assessed in terms of the extent to which they create happiness for humans and, as far as the nature of things permits, sentient beings. Put in bumper sticker terms, both the utilitarian and the neutral good advocate the greatest good for the greatest number.

This commitment to the greater good can present some potential problems. For the utilitarian, one classic problem is that what seems rather bad can have great utility. For example, Ursula K. Le Guin’s classic short story “The Ones Who Walk Away from Omelas” puts into literary form the question raised by William James:

Or if the hypothesis were offered us of a world in which Messrs. Fourier’s and Bellamy’s and Morris’s utopias should all be outdone, and millions kept permanently happy on the one simple condition that a certain lost soul on the far-off edge of things should lead a life of lonely torture, what except a specifical and independent sort of emotion can it be which would make us immediately feel, even though an impulse arose within us to clutch at the happiness so offered, how hideous a thing would be its enjoyment when deliberately accepted as the fruit of such a bargain?

In Le Guin’s tale, the splendor, health and happiness that is the land of Omelas depends on the suffering of a person locked away in a dungeon from all kindness. The inhabitants of Omelas know full well the price they pay and some, upon learning of the person, walk away. Hence the title.

For the utilitarian, this scenario would seem to be morally correct: a small disutility on the part of the person leads to a vast amount of utility. Or, in terms of goodness, the greater good seems to be well served.
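
To make the arithmetic explicit (a minimal formalization of the classical aggregative view, with the symbols being my own rather than anything from Mill or Le Guin):

$$U_{\text{Omelas}} = \sum_{i=1}^{N} h_i - s$$

Here $N$ is the number of inhabitants, $h_i$ the happiness each derives from the arrangement and $s$ the suffering of the one in the dungeon. For any fixed $s$, a sufficiently large $N$ makes the total positive, which is exactly why simple aggregation endorses the bargain.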

Because the suffering of one person creates such an overabundance of goodness for others, a neutral good character might tolerate the situation. After all, benefiting some almost always comes at the cost of denying or even harming others. It is, however, also reasonable to consider that a neutral good person would find the situation morally unacceptable. Such a person might not free the sufferer because doing so would harm so many other people, but she might elect to walk away.

A chaotic good type, who is committed to liberty and freedom, would certainly oppose the imprisonment of the innocent person—even for the greater good. A lawful good type might face the same challenge as the neutral good type: the order and well-being of Omelas rest on the suffering of one person and this could be seen as a heroic sacrifice on the part of the sufferer. Lawful evil types would probably be fine with the scenario, although they would have some issues with the otherwise benevolent nature of Omelas. Truly subtle lawful evil types might delight in the situation and regard it as a magnificent case of self-delusion in which people think they are selecting the greater good but are merely choosing evil.

Neutral evil types would also be fine with it—provided that it was someone else in the dungeon. Chaotic evil types would not care about the sufferer, but would certainly seek to destroy Omelas. They might, ironically, try to do so by rescuing the sufferer and seeing to it that he is treated with kindness and compassion (thus breaking the conditions of Omelas’ exalted state).

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

DBS, Enhancement & Ethics

Placement of an electrode into the brain. The head is stabilised in a frame for stereotactic surgery. (Photo credit: Wikipedia)

Deep Brain Stimulation (DBS) involves the surgical implantation of electrodes into a patient’s brain that, as the name indicates, stimulate the brain. Currently the procedure is used to treat movement disorders (such as Parkinson’s disease, dystonia and essential tremor) and Tourette’s syndrome. Research is underway on using the procedure to treat neuropsychiatric disorders (such as PTSD) and there are some indications that it can help with the memory loss inflicted by Alzheimer’s disease.

From a moral standpoint, the use of DBS in treating such conditions seems no more problematic than using surgery to repair a broken bone. If these were the only applications for DBS, then there would be no real moral concerns about the process. However, as is sometimes the case in medicine, there are potential applications that do raise moral concerns.

One matter for concern has actually been a philosophical problem for some time. To be specific, DBS can be used to stimulate the nucleus accumbens (a part of the brain associated with pleasure). While this can be used to treat depression, it can also (obviously) be used to create pleasure directly—the infamous pleasure machine scenario of so many Ethics 101 classes (the older version of which is the classic pig objection most famously considered by J.S. Mill in his work on Utilitarianism). Thanks to these stock discussions, the ethical ground of pleasure implants is well covered (although, as always, there are no truly decisive arguments).

While the sci-fi/philosophy scenario of people in pleasure comas is interesting, what is rather more interesting is the ethics of DBS as a life-enhancer. That is, getting the implant not to use it to excess or in place of “real” pleasure, but just to make life a bit better. To use the obvious analogy, the excessive scenario is like drinking oneself into a stupor, while the life-upgrade would be like having a drink with dinner. On the face of it, it would be hard to object if the effect was simply to make a person feel a bit better about life—and it could even be argued that this would be preventative medicine. Just as a person might be on medication to keep from developing high blood pressure or exercise to ward off diabetes, a person might get a brain boost to ward off potential depression. That said, there is the obvious concern of abusing the technology (and the iron law of technology states that any technology that can be abused, will be abused).

Another area of concern is the use of DBS for other enhancements. To use a specific example, if DBS can improve memory in Alzheimer’s patients, then it could do the same for healthy people. It is not difficult to imagine some people seeking to boost their memory or other abilities through this technology. This, of course, is part of the general topic of brain enhancements (which is part of the even more general topic of enhancements). As David Noonan has noted, DBS could become analogous to cosmetic/plastic surgery: what was once intended to treat serious injuries has become an elective enhancement surgery. Just as people seek to enhance their appearance by surgery, it seems reasonable to believe that they will do so to enhance their mental abilities. As long as there is money to be made here, many doctors will happily perform the surgeries—so it is probably a question of when rather than if DBS will be used for enhancement rather than for treatment.

From a moral standpoint, there is the same concern that has long been raised regarding cosmetic surgery, namely the risk of harm for the sake of enhancement. However, if enhancing one’s looks via surgery is morally acceptable, then enhancing one’s mood, memory and so on should certainly be acceptable as well. In fact, it could be argued that such substantial improvements are more laudable than merely improving appearance.

There is also the stock moral concern with fairness: those who can afford such enhancements will have yet another advantage over those less well off, thus widening the gap even more. This is, of course, a legitimate concern, but, aside from the nature of the specific advantage, it is nothing new morally. If it is acceptable for the wealthy to buy advantages in other ways, this does not seem to be any special exception.

There are, of course, two practical matters to consider. The first is whether or not DBS will prove effective in enhancement. The answer seems likely to be “yes.” The second is whether or not DBS will be tarnished by a disaster (or disasters). If something goes horribly wrong in a DBS procedure and this grabs media attention, this could slow the acceptance of DBS. That said, horrific tales involving cosmetic surgery did little to slow down its spread. So, someday soon people will go in to get a facelift, a memory lift and a mood lift. Better living through surgery.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Ethics & Free Will

Conscience and law (Photo credit: Wikipedia)

Azim Shariff and Kathleen Vohs recently had their article, “What Happens to a Society That Does Not Believe in Free Will”, published in Scientific American. This article considers the causal impact of a disbelief in free will with a specific focus on law and ethics.

Philosophers have long addressed the general problem of free will as well as the specific connection between free will and ethics. Not surprisingly, studies conducted to determine the impact of disbelief in free will have yielded the results that philosophers have long predicted.

One impact is that when people have doubts about free will they tend to have less support for retributive punishment. Retributive punishment, as the name indicates, is punishment aimed at making a person suffer for her misdeeds. Doubt in free will did not negatively impact a person’s support for punishment aimed at deterrence or rehabilitation.

While the authors do consider one reason for this, namely that those who doubt free will would regard wrongdoers as analogous to harmful natural phenomena that need to be dealt with rather than subjected to vengeance, this view also matches a common view about moral accountability. To be specific, moral (and legal) accountability is generally proportional to the control a person has over events. To use a concrete example, consider the difference between these two cases. In the first case, Sally is driving well above the speed limit and is busy texting and sipping her latte. She doesn’t see the crossing guard frantically waving his sign and runs over the children in the crosswalk. In case two, Jane is driving the speed limit and children suddenly run directly in front of her car. She brakes and swerves immediately, but she hits the children. Intuitively, Sally has acted in a way that was morally wrong—she should have been going the speed limit and she should have been paying attention. Jane, though she hit the children, did not act wrongly—she could not have avoided the children and hence is not morally responsible.

For those who doubt free will, every case is like Jane’s case: for the determinist, every action is determined and a person could not have chosen to do other than she did. On this view, while Jane’s accident seems unavoidable, so was Sally’s accident: Sally could not have done other than she did. As such, Sally is no more morally accountable than Jane. For someone who believes this, inflicting retributive punishment on Sally would be no more reasonable than seeking vengeance against Jane.

However, it would seem to make sense to punish Sally to deter others and to rehabilitate Sally so she will drive the speed limit and pay attention in the future. Of course, if there is no free will, then we would not choose to punish Sally, she would not choose to behave better and people would not decide to learn from her lesson. Events would happen as determined—she would be punished or not. She would do it again or not. Other people would do the same thing or not. Naturally enough, to speak of what we should decide to do in regards to punishments would seem to assume that we can choose—that is, that we have some degree of free will.

A second impact that Shariff and Vohs noted was that a person who doubts free will tends to behave worse than a person who does not have such a skeptical view. One specific area in which behavior worsens is that such skepticism seems to incline people to be more willing to harm others. Another specific area is that such skepticism also inclines people to lie or cheat. In general, the impact seems to be that the skepticism reduces a person’s willingness (or capacity) to resist impulsive reactions in favor of greater restraint and better behavior.

Once again, this certainly makes sense. Going back to the examples of Sally and Jane, Sally (unless she is a moral monster) would most likely feel remorse and guilt for hurting the children. Jane, though she would surely feel bad, would not feel moral guilt. This would certainly be reasonable: a person who hurts others should feel guilt if she could have done otherwise but should not feel moral guilt if she could not have done otherwise (although she certainly should feel sympathy). If someone doubts free will, then she will regard her own actions as being out of her control: she is not choosing to lie, cheat or hurt others—these events are just happening. People might be hurt, but this is like a tree falling on them—it just happens. Interestingly, these studies show that people are consistent in applying the implications of their skepticism in regards to moral (and legal) accountability.

One rather important point is to consider what view we should have regarding free will. I take a practical view of this matter and believe in free will. As I see it, if I am right, then I am…right. If I am wrong, then I could not believe otherwise. So, choosing to believe I can choose is the rational choice: I am right or I am not at fault for being wrong.

I do agree with Kant that we cannot prove that we have free will. He believed that the best science of his day was deterministic and that the matter of free will was beyond our epistemic abilities. While science has marched on since Kant, free will is still unprovable. After all, deterministic, random and free-will universes would all seem the same to the people in them. Crudely put, there are no observations that would establish or disprove metaphysical free will. There are, of course, observations that can indicate that we are not free in certain respects—but completely disproving (or proving) free will would seem to be beyond our abilities—as Kant contended.

Kant had a fairly practical solution: he argued that although free will cannot be proven, it is necessary for ethics. So, crudely put, if we want to have ethics (which we do), then we need to accept the existence of free will on moral grounds. The experiments described by Shariff and Vohs seem to support Kant: when people doubt free will, this has an impact on their ethics.

One aspect of this can be seen as positive—determining the extent to which people are in control of their actions is an important part of determining what is and is not a just punishment. After all, we do not want to inflict retribution on people who could not have done otherwise or, at the very least, we would want relevant circumstances to temper retribution with proper justice. It also makes more sense to focus more on deterrence and rehabilitation than on retribution. However just, retribution merely adds more suffering to the world while deterrence and rehabilitation reduce it.

The second aspect of this is negative—skepticism about free will seems to cause people to think that they have a license to do ill, thus leading to worse behavior. That is clearly undesirable. This, then, provides an interesting and important challenge: balancing our view of determinism and freedom in order to avoid both unjust punishment and becoming unjust. This, of course, assumes that we have a choice. If we do not, we will just do what we do and giving advice is pointless. As I jokingly tell my students, a determinist giving advice about what we should do is like someone yelling advice to a person falling to certain death—he can yell all he wants about what to do, but it won’t matter.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Twitter Mining


In February 2014, Twitter made all its tweets available to researchers. As might be suspected, this mass of data is a potential treasure trove for researchers. While one might picture researchers going through the tweets for the obvious content (such as what people eat and drink), this data can be mined in some potentially surprising ways. For example, the spread of infectious diseases can be tracked via an analysis of tweets. This sort of data mining is not new—some years ago I wrote an essay on the ethics of mining data and used Target’s analysis of data to determine when customers were pregnant (so as to send targeted ads). What is new about this is that all the tweets are now available to researchers, thus providing a vast heap of data (and probably a lot of crap).
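
To give a rough sense of the sort of mining involved, here is a minimal sketch of tracking flu chatter over time. It is not any actual research pipeline: the keyword list is made up, and the assumed input format (dicts with 'text' and an ISO 8601 'created_at' field) is an assumption for illustration; real epidemiological studies use far more sophisticated classifiers than keyword matching.

```python
from collections import Counter
from datetime import datetime

# Hypothetical keyword list used only for illustration.
FLU_TERMS = {"flu", "fever", "chills", "influenza"}

def daily_flu_mentions(tweets):
    """Count tweets per day that mention any flu-related term.

    `tweets` is assumed to be an iterable of dicts with 'text' and
    'created_at' (an ISO 8601 string) fields.
    """
    counts = Counter()
    for tweet in tweets:
        words = set(tweet["text"].lower().split())
        if words & FLU_TERMS:
            day = datetime.fromisoformat(tweet["created_at"]).date()
            counts[day] += 1
    return counts
```

A rising daily count for a given city could then be compared against official case reports to see whether the chatter tracks (or even leads) the epidemic curve.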

As might be imagined, there are some ethical concerns about the use of this data. While some might suspect that this creates a brave new world for ethics, this is not the case. While the availability of all the tweets is new and the scale is certainly large, this scenario is old hat for ethics. First, tweets are public communications that are on par morally with yelling statements in public places, posting statements on physical bulletin boards, putting an announcement in the paper and so on. While the tweets are electronic, this is not a morally relevant distinction. As such, researchers delving into the tweets is morally the same as a researcher looking at a bulletin board for data or spending time in public places to see the number of people who go to a specific store.

Second, tweets can (often) be linked to a specific person and this raises the stock concern about identifying specific people in the research. For example, identifying Jane Doe as being likely to have an STD based on an analysis of her tweets. While Twitter provides another context in which this can occur, identifying specific people in research without their consent seems to be well established as being wrong. For example, while a researcher has every right to count the number of people going to a strip club via public spaces, to publish a list of the specific individuals visiting the club in her research would be morally dubious—at best. As another example, a researcher has every right to count the number of runners observed in public spaces. However, to publish their names without their consent in her research would also be morally dubious at best. Engaging in speculation about why they run and linking that to specific people would be even worse (“based on the algorithm used to analyze the running patterns, Jane Doe is using her running to cover up her affair with John Roe”).

One counter is, of course, that anyone with access to the data and the right sorts of algorithms could find out this information for herself. This would simply be an extension of the oldest method of research: making inferences from sensory data. In this case the data would be massive and the inferences would be handled by computers—but the basic method is the same. Presumably people do not have a privacy right against inferences based on publicly available data (a subject I have written about before). Speculation would presumably not violate privacy rights, but could enter into the realm of slander—which is distinct from a privacy matter.

However, such inferences would seem to fall under privacy rights in regards to the professional ethics governing researchers—that is, researchers should not identify specific people without their consent whether they are making inferences or not. To use an analogy, if I infer that Jane Doe and John Roe’s public running patterns indicate they are having an affair, I have not violated their right to privacy (assuming this also covers affairs). However, if I were engaged in running research and published this in a journal article without their permission, then I would presumably be acting in violation of research ethics.

The obvious counter is that as long as a researcher is not engaged in slander (that is, intentionally saying untrue things that harm a person), then there would be little grounds for moral condemnation. After all, as long as the data was publicly gathered and the link between the data and the specific person is also in the public realm, then nothing wrong has been done. To use an analogy, if someone is in a public park wearing a nametag and engages in specific behavior, then it seems morally acceptable to report that. To use the obvious analogy, this would be similar to the ethics governing journalism: public behavior by identified individuals is fair game. Inferences are also fair game—provided that they do not constitute slander.

In closing, while Twitter has given researchers a new pile of data, the company has not created any new moral territory.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Anyone Home?

Man coming out of coma. (Photo credit: Wikipedia)

As I tell my students, the metaphysical question of personal identity has important moral implications. One scenario I present is that of a human in what seems to be a persistent vegetative state. I say “human” rather than “person”, because the human body in question might no longer be a person. To use a common view, if a person is her soul and the soul has abandoned the shell, then the person is gone.

If the human is still a person, then it seems reasonable to believe that she has a different moral status than a mass of flesh that was once a person (or once served as the body of a person). This is not to say that a non-person human would have no moral status at all—I do not want to be interpreted as holding that view. Rather, my view is that personhood is a relevant factor in the morality of how an entity is treated.

To use a concrete example, consider a human in what seems to be a vegetative state. While the body is kept alive, people do not talk to the body and no attempt is made to entertain the body, such as playing music or audiobooks. If there is no person present or if there is a person present but she has no sensory access at all, then this treatment would seem to be acceptable—after all it would make no difference whether people talked to the body or not.

There is also the moral question of whether such a body should be kept alive—after all, if the person is gone, there would not seem to be a compelling reason to keep an empty shell alive. To use an extreme example, it would seem wrong to keep a headless body alive just because it can be kept alive. If the body is no longer a person (or no longer hosts a person), then this would be analogous to keeping the headless body alive.

But, if despite appearances, there is still a person present who is aware of what is going on around her, then the matter is significantly different. In this case, the person has been effectively isolated—which is certainly not good for a person.

In regards to keeping the body alive, if there is a person present, then the situation would be morally different. After all, the moral status of a person is different from that of a mass of merely living flesh. The moral challenge, then, is deciding what to do.

One option is, obviously enough, to treat all seemingly vegetative (as opposed to brain dead) bodies as if the person was still present. That is, the body would be accorded the moral status of a person and treated as such.

This is a morally safe option—it would presumably be better that some non-persons get treated as persons rather than risk persons being treated as non-persons. That said, it would still seem both useful and important to know whether a person is actually present.

One reason to know is purely practical: if people know that a person is present, then they would presumably be more inclined to take the effort to treat the person as a person. So, for example, if the family and medical staff know that Bill is still Bill and not just an empty shell, they would tend to be more diligent in treating Bill as a person.

Another reason to know is both practical and moral: should scenarios arise in which hard choices have to be made, knowing whether a person is present or not would be rather critical. That said, given that one might not know for sure that the body is not a person anymore, it could be correct to keep treating the alleged shell as a person even when it seems likely that he is not. This brings up the obvious practical problem: how to tell when a person is present.

Most of the time we judge there is a person present based on appearance, using the assumption that a human is a person. Of course, there might be non-human people and there might be biological humans that are not people (headless bodies, for example). A somewhat more sophisticated approach is to use Descartes’s test: things that use true language are people. Descartes, being a smart person, did not limit language to speaking or writing—he included making signs of the sort used to communicate with the deaf. In a practical sense, getting an intelligent response to an inquiry can be seen as a sign that a person is present.

In the case of a body in an apparent vegetative state, applying this test is quite a challenge. After all, this state is marked by an inability to show awareness. In some cases, the apparent vegetative state is exactly what it appears to be. In other cases, a person might be in what is called “locked-in syndrome.” The person is conscious, but can be mistaken for being minimally conscious or in a vegetative state. Since the person cannot, typically, respond by giving an external sign, some other means is necessary.

One breakthrough in this area is due to Adrian M. Owen. Oversimplifying things considerably, he found that if a person is asked to visualize certain activities (playing tennis, for example), doing so will trigger different areas of the brain. This activity can be detected using the appropriate machines. So, a person can ask a question such as “did you go to college at Michigan State?” and request that the person visualize playing tennis for “yes” or visualize walking around her house for “no.” This method provides a way of determining that the person is still present with a reasonable degree of confidence. Naturally, a failure to respond would not prove that a person is not present—the person could still remain, yet be unable (or unwilling) to hear or respond.
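
Put in rough programmatic terms, the protocol amounts to mapping two distinguishable imagery patterns onto two answers. This is purely a sketch: `record_scan` and `classify_imagery` are hypothetical stand-ins for the fMRI recording and analysis, not anything from Owen's actual work.

```python
def ask_yes_no(question, record_scan, classify_imagery):
    """Yes/no communication via mental imagery, in the style described above.

    The patient is instructed in advance: visualize playing tennis for
    "yes", visualize walking through your house for "no".
    """
    print(question)                 # in practice the question is read aloud
    scan = record_scan()            # hypothetical recording step
    label = classify_imagery(scan)  # e.g. "motor" (tennis) or "spatial" (house)
    if label == "motor":
        return True
    if label == "spatial":
        return False
    return None  # no clear response; absence of an answer proves nothing
```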

One moral issue this method can help address is that of terminating life support. “Pulling the plug” on what might be a person without consent is, to say the least, morally problematic. If a person is still present and can be reached by Owen’s method, then this would allow the person to agree to or request that she be taken off life support. Naturally, there would be practical questions about the accuracy of the method, but this is distinct from the more abstract ethical issue.

It must be noted that the consent of the person would not automatically make termination morally acceptable—after all, there are moral objections to letting a person die in this manner even when the person is fully and clearly conscious. Once it is established that the method adequately shows consent (or lack of consent), the broader moral issue of the right to die would need to be addressed.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

The Robots of Deon

The Robots of Dawn (1983) (Photo credit: Wikipedia)

The United States military has expressed interest in developing robots capable of moral reasoning and has provided grant money to some well-connected universities to address this problem (or to at least create the impression that the problem is being considered).

The notion of instilling robots with ethics is a common theme in science fiction, the most famous example being Asimov’s Three Laws. The classic Forbidden Planet provides an early movie example of robotic ethics: Robby the robot has an electro-mechanical seizure if he is ordered to cause harm to a human being (or an id-monster created by the mind of his creator, Dr. Morbius). In contrast, the killer machines (like Saberhagen’s Berserkers) of science fiction tend to be free of the constraints of ethics.

While there are various reasons to imbue (or limit) robots with ethics (or at least engage in the pretense of doing so), one of these is public relations. Thanks to science fiction dating back at least to Frankenstein, people tend to worry about our creations getting out of control. As such, a promise that our killbots will be governed by ethics serves to reassure the public (or so it is hoped). Another reason is to make the public relations gimmick a reality—to actually place behavioral restraints on killbots so they will conform to the rules of war (and human morality). Presumably the military will also address the science fiction theme of the ethical killbot who refuses to kill on moral grounds.

While science fiction features ethical robots, the authors (like philosophers who discuss the ethics of robots) are extremely vague about how robot ethics actually works. In the case of truly intelligent robots, their ethics might work the way our ethics works—which is something that is still a mystery debated by philosophers and scientists to this day. We are not yet to the point of having such robots, so the current practical challenge is to develop ethics for the sort of autonomous or semi-autonomous robots we can build now.

While creating ethics for robots might seem daunting, the limitations of current robot technology mean that robot ethics is essentially a matter of programming these machines to operate in specific ways defined by whatever ethical system is being employed as the guide. One way to look at programming such robots with ethics is that they are being programmed with safety features. To use a simple example, suppose that I regard shooting unarmed people as immoral. To make my killbot operate according to that ethical view, it would be programmed to recognize armed humans and have some code saying, in effect, “if unarmedhuman = true, then firetokill = false” or, in normal English, if the human is unarmed, do not shoot her.
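
Rendered as runnable code rather than pseudo-English, the “safety feature” reading of the rule looks something like the toy sketch below; the parameters are stand-ins for whatever sensor classification and command state a real robot would use.

```python
def may_fire(target_is_armed, weapons_free):
    """Hard-coded firing constraint for the imagined killbot.

    The ethical rule "do not shoot unarmed humans" is just a condition:
    if the sensors classify the target as unarmed, firing is refused no
    matter what the rest of the targeting logic wants.
    """
    if not target_is_armed:
        return False
    return weapons_free
```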

While a suitably programmed robot would act in a way that seemed ethical, the robot is obviously not engaged in ethical behavior. After all, it is merely a more complex version of the automatic door. The supermarket door, though it opens for you, is not polite. The shredder that catches your tie and chokes you is not evil. Likewise, the killbot that does not shoot you in the face because its cameras show that you are unarmed is not ethical. The killbot that chops you into meaty chunks is not unethical. Following Kant, since the killbot’s programming is imposed and the killbot lacks the freedom to choose, it is not engaged in ethical (or unethical) behavior, though the complexity of its behavior might make it seem so.

To be fair to the killbots, perhaps we humans are not ethical or unethical under these requirements for ethics—we could just be meat-bots operating under the illusion of ethics. Also, it is certainly sensible to focus on the practical aspect of the matter: if you are a civilian being targeted by a killbot, your concern is not whether it is an autonomous moral agent or merely a machine—your main worry is whether it will kill you or not. As such, the general practical problem is getting our killbots to behave in accord with our ethical values.

Achieving this goal involves three main steps. The first is determining which ethical values we wish to impose on our killbots. Since this is a practical matter and not an exercise in philosophical inquiry, this will presumably involve using the accepted ethics (and laws) governing warfare rather than trying to determine what is truly good (if anything). The second step is translating the ethics into behavioral terms. For example, the moral principle that makes killing civilians wrong would be translated into sets of allowed and forbidden behaviors. This would require creating a definition of civilian (or perhaps just an unarmed person) that would allow recognition using the sensors of the robot. As another example, the moral principle that surrender should be accepted would require defining surrender behavior in a way the robot could recognize. The third step would be coding that behavior in whatever programming language is used for the robot in question. For example, the robot would need to be programmed to engage in surrender-accepting behavior. Naturally, the programmers would need to worry about clever combatants trying to “deceive” the killbot to take advantage of its programming (like pretending to surrender so as to get close enough to destroy the killbot).
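
Pulling the three steps together in toy form, the result is essentially a small rule table consulted before any action. Again this is only a sketch: the dictionary keys are hypothetical stand-ins for perception problems (recognizing the unarmed, the surrendering, the protected site) that are anything but trivial.

```python
# Each entry pairs an action with a test that, if true of the target,
# forbids the action. Rule-following itself is trivial; building reliable
# detectors for the conditions is the genuinely hard part.
PROHIBITIONS = [
    ("engage", lambda target: not target.get("armed", False)),
    ("engage", lambda target: target.get("surrendering", False)),
    ("engage", lambda target: target.get("protected_site", False)),
]

def action_permitted(action, target):
    """Return False if any prohibition forbids this action against this target."""
    return not any(act == action and forbidden(target)
                   for act, forbidden in PROHIBITIONS)
```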

Since these robots would be following programmed rules, they would presumably be controlled by deontological ethics—that is, ethics based on following rules. Thus, they would be (with due apologies to Asimov) the Robots of Deon.

An interesting practical question is whether or not the “ethical” programming would allow for overrides or reprogramming. Since the robot’s “ethics” would just be behavior-governing code, it could be changed, and it is easy enough to imagine ethics preferences in which a commander could selectively (or not so selectively) turn off behavioral limitations. And, of course, killbots could simply be programmed without such ethics (or programmed to be “evil”).

The largest impact of the government funding for this sort of research will be that properly connected academics will get surprisingly large amounts of cash to live the science-fiction dream of teaching robots to be good. That way the robots will feel a little bad when they kill us all.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page


Drone Ethics is Easy

AR Drone part (Photo credit: Wikipedia)

When a new technology emerges it is not uncommon for people to claim that the technology is outpacing ethics and law. Because of the nature of law (at least in countries like the United States) it is very easy for technology to outpace the law. However, it is rather difficult for technology to truly outpace ethics.

One reason for this is that any adequate ethical theory (that is, a theory that meets the basic requirements such as possessing prescriptivity, consistency, coherence and so on) will have the quality of expandability. That is, the theory can be applied to what is new, be that technology, circumstances or something else. An ethical (or moral) theory that lacks the capacity of expandability would, obviously enough, become useless immediately and thus would not be much of a theory.

It is, however, worth considering the possibility that a new technology could “break” an ethical theory by being such that the theory could not expand to cover the technology. However, this would show that the theory was inadequate rather than showing that the technology outpaced ethics.

Another reason that technology would have a hard time outpacing ethics is that an ethical argument by analogy can be applied to a new technology. That is, if the technology is like something that already exists and has been discussed in the context of ethics, the ethical discussion of the pre-existing thing can be applied to the new technology. This is, obviously enough, analogous to using ethical analogies to apply ethics to different specific situations (such as a specific act of cheating in a relationship).

Naturally, if a new technology is absolutely unlike anything else in human experience (even fiction), then the method of analogy would fail absolutely. However, it seems somewhat unlikely that such a technology could emerge. But, I like science fiction (and fantasy) and hence I am willing to entertain the possibility of that which is absolutely new. However, it would still seem that ethics could handle it—but perhaps something absolutely new would break all existing ethical theories, showing that they are all inadequate.

While a single example does not provide much in the way of proof, it can be used to illustrate. As such, I will use the matter of “personal” drones to illustrate how ethics is not outpaced by technology.

While remote controlled and automated devices have been around a long time, the expansion of technology has created what some might regard as something new for ethics: drones, driverless cars, and so on. However, drone ethics is easy. By this I do not mean that ethics is easy; rather, applying ethics to new technology (such as drones) is not as hard as some might claim. Naturally, actually doing ethics is itself quite hard—but this applies to very old problems (the ethics of war) and very “new” problems (the ethics of killer robots in war).

Getting back to the example, a personal drone is the sort of drone that a typical civilian can own and operate—they tend to be much smaller, lower priced and easier to use relative to government drones. In many ways, these drones are slightly advanced versions of the remote control planes that are regarded as expensive toys. The drones of this sort that seem to most concern people are those that have cameras and can hover—perhaps outside a bedroom window.

Two of the areas of concern regarding such drones are safety and privacy. In terms of safety, the worry is that drones can collide with people (or other vehicles, such as manned aircraft) and injure them. Ethically, this falls under doing harm to people, be it with a knife, gun or drone. While a drone flies about, the ethics that have been used to handle flying model aircraft, cars, etc. can easily be applied here. So, this aspect of drones has hardly outpaced ethics.

Privacy can also be handled. Simplifying things for the sake of a brief discussion, drones essentially allow a person to (potentially) violate privacy in the usual two “visual” modes. One is to intrude into private property to violate a person’s privacy. In the case of the “old” way, a person can put a ladder against a person’s house and climb up to peek under the window shade and into the person’s bedroom or bathroom. In the “new” way, a person can fly a drone up to the window and peek in using a camera. While the person is not physically present in the case of the drone, his “agent” is present and is trespassing. Whether a person is using a ladder or a drone to gain access to the window does not change the ethics of the situation in regards to the peeking, assuming that people have a right to control access to their property.

A second way is to peek into “private space” from “public space.” In the case of the “old way” a person could stand on the public sidewalk and look into other people’s windows or yards—or use binoculars to do so. In the “new” way, a person can deploy his agent (the drone) in public space in order to do the same sort of thing.

One potential difference between the two situations is that a drone can fly and thus can get viewing angles that a person on the ground (or even with a ladder) could not get. For example, a drone might be in the airspace far above a person’s backyard, sending back images of the person sunbathing in the nude behind her very tall fence on her very large estate. However, this is not a new situation—paparazzi have used helicopters to get shots of celebrities and the ethics are the same. As such, ethics has not been outpaced by the drones in this regard. This is not to say that the matter is solved (people are still debating the ethics of this sort of “spying”), but it is to say that this is not a case where technology has outpaced ethics.

What is mainly different about the drones is that they are now affordable and easy to use—so whereas only certain people could afford to hire a helicopter to get photos of celebrities, now camera-equipped drones are easily in reach of the hobbyist. So, it is not that the drone provides new capabilities that worries people—it is that it puts these capabilities in the hands of the many.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page


Men, Women, Business & Ethics

Journal of Business Ethics (Photo credit: Wikipedia)

On 4/9/2014 NPR did a short report on the question of why there are fewer women in business than men. This difference begins in business school and, not surprisingly, continues forward. The report focused on an interesting hypothesis: in regards to ethics, men and women differ.

While people tend to claim that lying is immoral, both men and women are more likely to lie to a woman when engaged in negotiation. The report also mentioned a test involving an ethical issue. In this scenario, the seller of a house does not want it sold to someone who will turn the property into a condo. However, a potential buyer wants to do just that. The findings were that men were more likely than women to lie to sell the house.

It was also found that men tend to be egocentric in their ethical reasoning. That is, if the man will be harmed by something, then it is regarded as unethical. If the man benefits, he is more likely to see it as a grey area. So, in the case of the house scenario, a man representing the buyer would tend to regard lying to the seller as acceptable—after all, he would thus get a sale. However, a man representing the seller would be more likely to regard being lied to as unethical.

In another test of ethics, people were asked about their willingness to include an inferior ingredient in a product that would hurt people but would allow a significant profit. The men were more willing than the women to regard this as acceptable. In fact, the women tended to regard this sort of thing as outrageous.

These results provide two reasons why women would be less likely to be in business than men. The first is that men are apparently rather less troubled by unethical, but more profitable, decisions. The idea that having “moral flexibility” (and getting away with it) provides advantage is a rather old one and was ably defended by Glaucon in Plato’s Republic. If a person with such moral flexibility needs to lie to gain an advantage, he can lie freely. If a bribe would serve his purpose, he can bribe. If a bribe would not suffice and someone needs to have a tragic “accident”, then he can see to it that the “accident” occurs. To use an analogy, a morally flexible person is like a craftsperson who has just the right tool for every occasion. Just as the well-equipped craftsperson has a considerable advantage over a less well-equipped craftsperson, the morally flexible person has a considerable advantage over those who are more constrained by ethics. If women are, in general, more constrained by ethics, then they would be less likely to remain in business because they would be at a competitive disadvantage. The ethical difference might also explain why women are less likely to go into business—it seems to be a general view that unethical activity is not uncommon in business, hence if women are generally more ethical than men, then they would be more inclined to avoid business.

It could be countered that Glaucon is in error and that being unethical (while getting away with it) does not provide advantages. Obviously, getting caught and significantly punished for unethical behavior is not advantageous—but it is not the unethical behavior that causes the problem. Rather, it is getting caught and punished. After all, Glaucon does note that being unjust is only advantageous when one can get away with it. Socrates does argue that being ethical is superior to being unethical, but he does not do so by arguing that the ethical person will have greater material success.

This is not to say that a person cannot be ethical and have material success. It is also not to say that a person cannot be ethically flexible and be a complete failure. The claim is that ethical flexibility provides a distinct advantage.

It could also be countered that there are unethical women and ethical men. The obvious reply is that this claim is true—it has not been asserted that all men are unethical or that all women are ethical. Rather, it seems that women are generally more ethical than men.

It might be countered that the ethical view assumed in this essay is flawed. For example, it could be countered that what matters is profit and the means to this end are thus justified. As such, using inferior ingredients in a medicine so as to make a profit at the expense of the patients would not be unethical, but laudable. After all, as Hobbes said, profit is the measure of right. As such, women might well be avoiding business because they are unethical on this view.

The second is that women are more likely to be lied to in negotiations. If true, this would certainly put women at a disadvantage in business negotiations relative to men since women would be more likely to be subject to attempts at deceit. This, of course, assumes that such deceit would be advantageous in negotiations. While there surely are cases in which deceit would be disadvantageous, it certainly seems that deceit can be a very useful technique.

If it is believed that having more women in business is desirable (which would not be accepted by everyone), then there seem to be two main options. The first is to endeavor to “cure” women of their ethics—that is, make them more like men. The second would be to endeavor to make business more ethical. This would presumably also help address the matter of lying to women.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page


Sexbots, Killbots & Virtual Dogs

My most recent book, Sexbots, Killbots & Virtual Dogs, is now available as a Kindle book on Amazon. It will soon be available as a print book as well (the Kindle version is free with the print book on Amazon).

There is also a free promo for the Kindle book from April 1, 2014 to April 5, 2014. At free, it is worth every penny!

Book Description

While the story of Cain and Abel does not specify the murder weapon used by Cain, traditional illustrations often show Cain wielding the jawbone of an animal (perhaps an ass—which is what Samson is said to have employed as a weapon). Assuming the traditional illustrations and the story are right, this would be one of the first uses of technology by a human—and, like our subsequent use of technology, one of considerable ethical significance.

Whether the tale of Cain is true or not, humans have been employing technology since our beginning. As such, technology is nothing new. However, we are now at a point at which technology is advancing and changing faster than ever before—and this shows no signs of changing. Since technology so often has moral implications, it seems worthwhile to consider the ethics of new and possible future technology. This short book provides essays aimed at doing just that on subjects ranging from sexbots to virtual dogs to asteroid mining.

While written by a professional philosopher, these essays are aimed at a general audience and they do not assume that the reader is an expert at philosophy or technology.

The essays are also fairly short—they are designed to be the sort of things you can read at your convenience, perhaps while commuting to work or waiting in the checkout line.


Love, Voles & Spinoza

Benedict de Spinoza (Photo credit: Wikipedia)

In my previous essays I examined the idea that love is a mechanical matter as well as the implications this might have for ethics. In this essay, I will focus on the eternal truth that love hurts.

While there are exceptions, the end of a romantic relationship typically involves pain. As noted in my original essay on voles and love, Young found that when a prairie vole loses its partner, it becomes depressed. This was tested by dropping voles into beakers of water to determine how much the voles would struggle. Prairie voles who had just lost a partner struggled to a lesser degree than those who were not so bereft. The depressed voles, not surprisingly, showed a chemical difference from the non-depressed voles. When a depressed vole was “treated” for this depression, the vole struggled as strongly as the non-bereft vole.

Human beings also suffer from the hurt of love. For example, it is not uncommon for a human who has ended a relationship (be it divorce or a breakup) to fall into a vole-like depression and struggle less against the tests of life (though dropping humans into giant beakers to test this would presumably be unethical).

While some might derive an odd pleasure from stewing in a state of post-love depression, presumably this feeling is something that a rational person would want to end. The usual treatment, other than self-medication, is time: people usually tend to come out of the depression and then seek out a new opportunity for love. And depression.

Given the finding that voles can be treated for this depression, it would seem to follow that humans could also be treated for this as well. After all, if love is essentially a chemical romance grounded in strict materialism, then tweaking the brain just so would presumably fix that depression. Interestingly enough, the philosopher Spinoza offered an account of love (and emotions in general) that nicely match up with the mechanistic model being examined.

As Spinoza saw it, people are slaves to their affections and chained by who they love. This is an unwise approach to life because, as the voles in the experiment found out, the object of one’s love can die (or leave). This view of Spinoza nicely matches up: voles that bond with a partner become depressed when that partner is lost. In contrast, voles that do not form such bonds do not suffer that depression.

Interestingly enough, while Spinoza was a pantheist, his view of human beings is rather similar to that of the mechanist: he regarded humans as being within the laws of nature and was a determinist in that all that occurs does so from necessity—there is no chance or choice. This view guided him to the notion that human behavior and motivations can be examined as one might examine “lines, planes or bodies.” To be more specific, he took the view that emotions follow the same necessity as all other things, thus making the effects of the emotions predictable. In short, Spinoza engaged in what can be regarded as a scientific examination of the emotions—although he did so without the technology available today and from a rather more metaphysical standpoint. However, the core idea that the emotions can be analyzed in terms of definitive laws is the same idea that is being followed currently in regards to the mechanics of emotion.

Getting back to the matter of the negative impact of lost love, Spinoza offered his own solution: as he saw it, all emotions are responses to what is in the past, present or future. For example, a person might feel regret because she believes she could have done something different in the past. As another example, a person might worry because he thinks that what he is doing now might not bear fruit in the future. These negative feelings rest, as Spinoza sees it, on the false belief that the past and present could be different and the future is not set. Once a person realizes that all that happens occurs of necessity (that is, nothing could have been any different and the future cannot be anything other than what it will be), then that person will suffer less from the emotions. Thus, for Spinoza, freedom from the enslaving chains of love would be the recognition and acceptance that what occurs is determined.

Putting this in the mechanistic terms of modern neuroscience, a Spinoza-like approach would be to realize that love is purely mechanical and that the pain and depression that comes from the loss of love are also purely mechanical. That is, the terrible, empty darkness that seems to devour the soul at the end of love is merely chemical and electrical events in the brain. Once a person recognizes and accepts this, if Spinoza is right, the pain should be reduced. With modern technology it is possible to do even more: whereas Spinoza could merely provide advice, modern science can eventually provide us with the means to simply adjust the brain and set things right—just as one would fix a malfunctioning car or PC.

One rather obvious problem is, of course, that if everything is necessary and determined, then Spinoza’s advice makes no sense: what is, must be and cannot be otherwise. To use an analogy, it would be like shouting advice at someone watching a cut scene in a video game. This is pointless, since the person cannot do anything to change what is occurring. For Spinoza, while we might think life is like a game, it is like that cut scene: we are spectators and not players. So, if one is determined to wallow like a sad pig in the mud of depression, that is how it will be.

In terms of the mechanistic mind, advice would seem to be equally absurd—that is, to say what a person should do implies that a person has a choice. However, the mechanistic mind presumably just ticks away doing what it does, creating the illusion of choice. So, one brain might tick away and end up being treated while another brain might tick away in the chemical state of depression. They both eventually die and it matters not which is which.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page
