Tag Archives: morality

Philosophy & My Old Husky IV: Moral Decisions

The saga of Isis, my thirteen-year-old husky, continues. While she faced a crisis, good care and steroids have seen her through the storm of pain and she has returned to her usual self—ready for adventures and judging all lesser creatures.

Having a pet imposes moral accountability upon a person—the life of a pet is quite literally in one’s hands. When I took Isis in to the emergency vet she was in such rough shape that I thought it might have been time for that hardest of pet decisions—to choose an end to the suffering of a beloved friend. It is my hope that I will not need to make this decision—I hope that when her time comes she will drift away in her sleep with no pain. I am hoping the same for myself. After all, no one wants to face that choice.

While some dismiss philosophy as valueless in real life, I have found my experience as a philosopher incredibly helpful in this matter. As noted above, I am morally responsible for my husky’s well-being. Having studied and taught ethics, I have learned a great deal that helps me frame the choices I have and will face.

When I brought Isis to the emergency vet, I knew that it would be expensive. There are, of course, higher fees for bringing a pet in outside of regular hours and Isis was in the sort of shape that usually indicates a large bill. So, when the vet showed me the proposed bill, I was not surprised that it was just under $600. I am lucky enough to have a decent job and fortunate enough to have made it through the financial folks driving the economy off a cliff a while back. While that was still a large sum of money for me, I could certainly afford it. While very worried about her, I did think about people who are less well off, yet love their pets as much as I love my husky—they could face a terrible choice between medical care for their pet and having the money for some essential bill or expense. Or they might simply not have enough money at all, thus being denied the choice. While there are those who do help out with the care of such pets, I am sure that there are daily tragedies involving those who lack the funds to care for sick or injured pets.

Since there are many systems of ethics, there are many ways to approach the moral decision of costly (in money or time) pet care. The most calculating is, of course, a utilitarian approach: weighing the costs and benefits in order to determine what would create the greatest utility. In my case, I can afford such care and the good for my husky vastly outweighed the cost to me. So, the utilitarian calculation was easy for me.

Others are not so lucky and they will face a difficult choice that requires weighing the well-being of their pet against the cost to them. While it is easy enough to say that a person should always take care of her pet, people can obviously have other moral obligations, such as to their children. In addition to the ethics of making the decision, there is also the moral matter of having a society in which people are forced to make such hard decisions because they simply lack the financial resources to address the challenges they face. While some might say that those who cannot afford pets should not have pets (something that is also often said about children), that also seems to be another evil. While I would not say that people have a right to pets as they have a right to life and liberty, I would accept that a system that generates such poverty would seem to be an unjust system. Naturally, some might still insist that pets are a luxury, like education and basic nutrition.

Another approach is to set aside the cold calculations of utility and make the decision based on an ethics of duty and obligation. Having a pet is analogous to having a child: the choice creates a set of moral duties and obligations. Part of the foundation of these obligations is that the pet cannot make its own decisions and generally lacks the ability to care for itself. As such, taking an animal as a pet is to accept the role of a decision maker and a caretaker. An analogy can also be drawn to accepting a contract for a job: the job requires certain things and accepting the job entails accepting those requirements. In the case of a pet, there are many obligations and the main one is assuming responsibility for the well-being of the pet. This is why choosing to have a pet is such a serious decision and should not be entered into lightly.

One reason having a pet should not be taken lightly is that the duty to the pet imposes an obligation to make sacrifices for the well-being of the pet. This can include going without sleep, cleaning up messes and making a hard decision about the end of life. There are, of course, limits to all obligations and working out exactly what one owes a pet is a moral challenge. There are certainly some minimal obligations that a person must accept or she should not have a pet—these would include providing for the basic physical and emotional needs of the pet. The moral discussion becomes rather more complicated when the obligations impose greater burdens, such as burdens of time and money.

When Isis was at her low point, she could barely walk. I had to carry her outside and support her while she struggled to do her business. When I picked her up, I would say “up, up and away!” When carrying her, I would say “wooosh” so she would think she was flying. This made us both feel a little better.

She could not stand to eat or drink and had little appetite. So, I had to hold her water bowl up for her so she could drink and make special foods to hand-feed her. I found that she would eat chicken and rice processed into a paste—provided I slathered it with peanut butter and let her lick it from my palm. At night, she would cry with pain and I would be there to comfort her, getting by on a few hours of sleep. Sometimes she would not be able to make it outside, and there would be a mess to clean up.

I did all this for two reasons. The first is, of course, love. The second is duty—I accept that my moral obligation to my husky requires me to do all this for her because she is my dog. If I did not do all this for her, I would be a worse person and, while I can bear cleaning up diarrhea at 3:23 in the morning, I cannot bear being a worse person.

I am certainly no moral saint and I freely admit that this was a difficult time (though it obviously pales in comparison with what other people have faced). It did not reach my limits, though I know I (like everyone) have them. Sorting out the ethics of these limits is a significant moral matter. First, there is the question of how far one’s obligations extend—that is, how much a person is morally obligated to do. Second, there is the question of how far a person can go before her obligations break her. After all, each person also has duties to herself that are as important as obligations to others.

In my case, I accepted that my obligations included all that I mentioned above. While doing all this was exhausting me (I was dumping instant coffee mix into protein shakes to get through teaching classes), Isis recovered before my obligations broke me. But, I did have to give serious thought to how long I would be able to sustain this level of care before I could not go on anymore—I am glad I did not have to find out.

 


Ethics (for Free)

The following provides links to my Ethics course, allowing a person to get some ethics for free. Also probably works well as a sleep aid.*

Notes & Readings

Practice Tests

PowerPoint


Class YouTube Videos

These are unedited videos from the Fall 2015 Ethics class. Spoiler: I do not die at the end.

Part One Videos: Introduction & Moral Reasoning

Video 1:  It covers the syllabus.

Video 2: It covers the introduction to ethics, value, and the dreaded spectrum of morality.

Video 3: It covers the case paper.

Video 4: No video. Battery failure.

Video 5: It covers inductive arguments and the analogical argument.

Video 6: It covers Argument by/from Example and Argument from Authority.

Video 7:  It covers Inconsistent Application and Reversing the Situation.

Video 8:  It covers Argument by Definition, Appeal to Intuition, and Apply a Moral Principle. The death of the battery cuts this video a bit short.

Video 9:  It covers Applying Moral Principles, Applying Moral Theories, the “Playing God” Argument and the Unnatural Argument.

Video 10: It covers Appeal to Consequences and Appeal to Rules.

Video 11:  It covers Appeal to Rights and Mixing Norms.

Part Two Videos: Moral Theories

Video 12:  It covers the introduction to Part II and the start of virtue theory.

Video 13: It covers Confucius and Aristotle.

Video 14:  This continues Aristotle’s virtue theory.

Video 15: It covers the intro to ethics and religion as well as the start of Aquinas’ moral theory.

Video 16: It covers St. Thomas Aquinas, divine command theory, and John Duns Scotus.

Video 17: It covers the end of religion & ethics and the beginning of consequentialism.

Video 18: It covers Thomas Hobbes and two of the problems with ethical egoism.

Video 19:  It covers the third objection to ethical egoism, the introduction to utilitarianism and the start of the discussion of J.S. Mill. Includes reference to Jeremy “Headless” Bentham.

Video 20: This video covers the second part of utilitarianism, the objections against utilitarianism and the intro to deontology.

Video 21: It covers the categorical imperative.

Part Three Videos: Why Be Good?, Moral Education & Equality

Video 22: It covers the question of “why be good?” and Plato’s Ring of Gyges.

Video 23: It covers the introduction to moral education and the start of Aristotle’s theory of moral education.

Video 24: It covers more of Aristotle’s theory of moral education.

Video 25: It covers the end of Rousseau and the start of equality.

Video 26: It covers the end of Rousseau and the start of equality.

Video 27: It covers Mary Wollstonecraft’s A Vindication of the Rights of Woman.

Video 28:  This video covers the second part of Wollstonecraft and gender equality.

Video 29: It covers the start of ethics and race.

Video 30: It covers St. Thomas Aquinas’ discussion of animals and ethics.

Video 31: It covers Descartes’ discussion of animals. Includes reference to Siberian Huskies.

Video 32: It covers the end of Kant’s animal ethics and the utilitarian approach to animal ethics.

Part Four Videos: Rights, Obedience & Liberty

Video 33: It covers the introduction to rights and a bit of Hobbes.

Video 34: It covers Thomas Hobbes’ view of rights and the start of John Locke’s theory of rights.

Video 35:  It covers John Locke’s state of nature and theory of natural rights.

Video 36:  It covers Locke’s theory of property and tyranny. It also covers the introduction to obedience and disobedience.

Video 37: It covers the Crito and the start of Thoreau’s theory of civil disobedience.

Video 38:  It covers the second part of Thoreau’s essay on civil disobedience.

Video 39:  It covers the end of Thoreau’s civil disobedience, Mussolini’s essay on fascism and the start of J.S. Mill’s theory of Liberty.

Video 40:  It covers Mill’s theory of liberty.

Narration YouTube Videos

These videos consist of narration over PowerPoint slides. Good for naptime.

Part One Videos

Part Two Videos

Part Three Videos

Part Four Videos

*This course has not been evaluated by the FDA as a sleep aid. Use at your own risk. Side effects might include Categorical Kidneys, Virtuous Spleen, and Spontaneous Implosion.

The Corruption of Academic Research

Synthetic insulin crystals synthesized using recombinant DNA technology (Photo credit: Wikipedia)

STEM (Science, Technology, Engineering and Mathematics) fields are supposed to be the new darlings of the academy, so I was slightly surprised when I heard an NPR piece on how researchers are struggling for funding. After all, even the politicians devoted to cutting education funding have spoken glowingly of STEM. My own university recently split the venerable College of Arts & Sciences, presumably to allow more money to flow to STEM without risking that professors in the soft sciences and the humanities might inadvertently get some of the cash. As such, I was somewhat curious about this problem, but mostly attributed it to a side effect of the general trend of defunding public education. Then I read “Bad Science” by Llewellyn Hinkes-Jones. This article was originally published in issue 14, 2014 of Jacobin Magazine. I will focus on the ethical aspects of the matters Hinkes-Jones discusses in this article, which is centered on the Bayh-Dole Act.

The Bayh-Dole Act was passed in 1980 and was presented as having very laudable goals. Before the act was passed, universities were limited in regards to what they could do with the fruits of their scientific research. After the act was passed, schools could sell their patents or engage in exclusive licensing deals with private companies (that is, monopolies on the patents). Supporters asserted this act would be beneficial in three main ways. The first is that it would secure more private funding for universities because corporations would provide money in return for the patents or exclusive licenses. The second is that it would bring the power of the profit motive to public research: since researchers and schools could profit, they would be more motivated to engage in research. The third is that the private sector would be motivated to implement the research in the form of profitable products.

On the face of it, the act was a great success. Researchers at Columbia University patented the process of DNA cotransformation and added millions to the coffers of the school. A patent on recombinant DNA earned Stanford over $200 million. Companies, in turn, profited greatly. For example, researchers at the University of Utah created Myriad Genetics and took ownership of their patent on the BRCA1 and BRCA2 tests for breast cancer. The current cost of the test is $4,000 (in comparison, a full sequencing of human DNA costs $1,000) and the company has a monopoly on the test.

Given these apparent benefits, it is easy enough to advance a utilitarian argument in favor of the act and its consequences. After all, if it allows universities to fund their research and corporations to make profits, then its benefits would seem to be considerable, thus making it morally good. However, a proper calculation requires considering the harmful consequences of the act.

The first harm is that the current situation imposes a triple cost on the public. One cost is that the taxpayers fund the schools that conduct the research. The next is that, thanks to the monopolies on patents, the taxpayers have to pay whatever prices the companies wish to charge, such as the $4,000 for a test that should cost far less. In an actual free market there would be competition and lower prices—but what we have is a state-controlled and regulated market. Ironically, those who are often crying the loudest against government regulation and for the value of competition are quite silent on this point. The final cost of the three is that the corporations can typically write off their contributions on their taxes, thus leaving other taxpayers to pick up their slack. These costs seem to be clear harms and do much to offset the benefits—at least when looked at from the perspective of the whole society and not just focusing on those reaping the benefits.

The second harm is that, ironically, this system makes research more expensive. Since processes, strains of bacteria and many other things needed for research are protected by monopolistic patents, the researchers who do not hold these patents have to pay to use them. The costs are usually quite high, so while the patent holders benefit, research in general suffers. In order to pay for these things, researchers need more funding, thus either imposing more cost on taxpayers or forcing them to turn to private funding (which will typically result in more monopolistic patents).

The third harm is the corruption of researchers. Researchers are literally paid to put their names on positive journal articles that advance the interests of corporations. They are also paid to promote drugs and other products while presenting themselves as researchers rather than paid promoters. If the researchers are not simply bought, the money is clearly a biasing factor. Since we are depending on these researchers to inform the public and policy makers about these products, this is clearly a problem and presents a clear danger to the public good.

A fourth harm is that even the honest researchers who have not been bought are under great pressure to produce “sexy science” that will attract grants and funding. While it has always been “publish or perish” in modern academia, the competition is even fiercer in the sciences now. As such, researchers are under great pressure to crank out publications. The effect has been rather negative, as evidenced by the fact that the percentage of scientific articles retracted for fraud is ten times what it was in 1975. Once-lauded studies and theories, such as those driving the promotion of antioxidants and omega-3, have been shown to be riddled with inaccuracies. Far from driving advances in science, the act has served as an engine of corruption, fraud and bad science. This would be bad enough, but there is also the impact on a misled and misinformed public. I must admit that I fell for the antioxidant and omega-3 “research”—I modified my diet to include more antioxidants and omega-3. While this bad science does get debunked, the debunking takes a long time and most people never hear about it. For example, how many people know that the antioxidant and omega-3 “research” is flawed and how many still pop omega-3 “fish oil pills” and drink “antioxidant teas”?

A fifth harm is that universities have rushed to cash in on the research, driven by the success of the research schools that have managed to score with profitable patents. However, setting up research labs aimed at creating million-dollar patents is incredibly expensive. In most cases the investment will not yield the hoped-for returns, thus leaving many schools with considerable expenses and little revenue.

To help lower costs, schools have turned to employing adjuncts to do the teaching and research, thus creating a situation in which highly educated but very low-paid professionals are toiling away to secure millions for the star researchers, the administrators and their corporate benefactors. It is, in effect, sweat-shop science.

This also shows another dark side to the push for STEM: as the number of STEM graduates increases, the value of the degrees will decrease and wages for the workers will continue to fall. This is great for the elite, but terrible for those hoping that a STEM degree will mean a good job and a bright future.

These harms would seem to outweigh the alleged benefits of the act, thus indicating it is morally wrong. Naturally, it can be countered that the costs are worth it. After all, one might argue, the incredible advances in science since 1980 have been driven by the profit motive and this has been beneficial overall. Without the profit motive, the research might have been conducted, but most of the discoveries would have been left on the shelves. The easy and obvious response is to point to all the advances that occurred due to public university research prior to 1980, as well as the research that began before then and came to fruition later.

While solving this problem is a complex matter, there seem to be some easy and obvious steps. The first would be to restore public funding of state schools. In the past, the publicly funded universities drove America’s worldwide dominance in research and helped fuel massive economic growth while also contributing to the public good. The second would be replacing the Bayh-Dole Act with an act that would allow universities to benefit from the research, but prevent the licensing monopolies that have proven so damaging. Naturally, this would not eliminate patents but would restore competition to what is supposed to be a competitive free market by eliminating the creation of monopolies from public university research. The folks who complain about the state regulating business and who praise the competitive free market will surely get behind this proposal.

It might also be objected that the inability to profit massively from research will be a disincentive. The easy and obvious reply is that people conduct research and teach with great passion for very little financial compensation. The folks that run universities and corporations know this—after all, they pay such people very little yet still often get exceptional work. True, there are some people who are solely motivated by profit—but those are typically the folks who are making the massive profit rather than doing the actual research and work that makes it all possible.

 


Obligations to People We Don’t Know

Statue of Immanuel Kant in Kaliningrad, Russia (Photo credit: Wikipedia)

One of the classic moral problems is the issue of whether or not we have moral obligations to people we do not know.  If we do have such obligations, then there are also questions about the foundation, nature and extent of these obligations. If we do not have such obligations, then there is the obvious question about why there are no such obligations. I will start by considering some stock arguments regarding our obligations to others.

One approach to the matter of moral obligations to others is to ground them on religion. This requires two main steps. The first is establishing that the religion imposes such obligations. The second is making the transition from the realm of religion to the domain of ethics.

Many religions do impose such obligations on their followers. For example, John 15:12 conveys God’s command: “This is my commandment, That you love one another, as I have loved you.”  If love involves obligations (which it seems to), then this would certainly seem to place us under these obligations.  Other faiths also include injunctions to assist others.

In terms of transitioning from religion to ethics, one easy way is to appeal to divine command theory—the moral theory that what God commands is right because He commands it. This does raise the classic Euthyphro problem: is something good because God commands it, or is it commanded because it is good? If the former, goodness seems arbitrary. If the latter, then morality would be independent of God and divine command theory would be false.

Using religion as the basis for moral obligation is also problematic because doing so would require proving that the religion is correct—this would be no easy task. There is also the practical problem that people differ in their faiths and this would make a universal grounding for moral obligations difficult.

Another approach is to argue for moral obligations by using the moral method of reversing the situation.  This method is based on the Golden Rule (“do unto others as you would have them do unto you”) and the basic idea is that consistency requires that a person treat others as she would wish to be treated.

To make the method work, a person would need to want others to act as if they had obligations to her and this would thus obligate the person to act as if she had obligations to them. For example, if I would want someone to help me if I were struck by a car and bleeding out in the street, then consistency would require that I accept the same obligation on my part. That is, if I accept that I should be helped, then consistency requires that I must accept I should help others.

This approach is somewhat like that taken by Immanuel Kant. He argues that because a person necessarily regards herself as an end (and not just a means to an end), she must also regard others as ends and not merely as means. He endeavors to use this to argue in favor of various obligations and duties, such as helping others in need.

There are, unfortunately, at least two counters to this sort of approach. The first is that it is easy enough to imagine a person who is willing to forgo the assistance of others and as such can consistently refuse to accept obligations to others. So, for example, a person might be willing to starve rather than accept assistance from other people. While such people might seem a bit crazy, if they are sincere then they cannot be accused of inconsistency.

The second is that a person can argue that there is a relevant difference between himself and others that would justify their obligations to him while freeing him from obligations to them. For example, a person of a high social or economic class might assert that her status obligates people of lesser classes while freeing her from any obligations to them.  Naturally, the person must provide reasons in support of this alleged relevant difference.

A third approach is to present a utilitarian argument. For a utilitarian, like John Stuart Mill, morality is assessed in terms of consequences: the correct action is the one that creates the greatest utility (typically happiness) for the greatest number. A utilitarian argument for obligations to people we do not know would be rather straightforward. The first step would be to estimate the utility generated by accepting a specific obligation to people we do not know, such as rendering aid to an intoxicated person who is about to become the victim of sexual assault. The second step is to estimate the disutility generated by imposing that specific obligation. The third step is to weigh the utility against the disutility. If the utility is greater, then such an obligation should be imposed. If the disutility is greater, then it should not.
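Since the weighing itself is mechanical, the three steps can be put in a few lines of code. Here is a minimal sketch in Python; the numbers are invented for illustration, and the hard part (actually estimating utility and disutility) is simply assumed away:

```python
def should_impose(utility: float, disutility: float) -> bool:
    """Utilitarian test: impose an obligation only if the utility it
    generates outweighs the disutility of imposing it."""
    return utility > disutility

# Invented estimates for the example duty of rendering aid: the harm
# prevented by imposing the duty versus the burden on those bound by it.
utility_of_duty = 100.0    # step one: estimated utility (made-up number)
disutility_of_duty = 20.0  # step two: estimated disutility (made-up number)
print(should_impose(utility_of_duty, disutility_of_duty))  # step three: True, so impose it
```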

This approach, obviously enough, rests on the acceptance of utilitarianism. There are numerous arguments against this moral theory and these can be employed against attempts to ground obligations on utility. Even for those who accept utilitarianism, there is the open possibility that there will always be greater utility in not imposing obligations, thus undermining the claim that we have obligations to others.

A fourth approach is to consider the matter in terms of rational self-interest and operate from the assumption that people should act in their self-interest. In terms of a moral theory, this would be ethical egoism: the moral theory that a person should act in her self-interest rather than acting in an altruistic manner.

While accepting that others have obligations to me would certainly be in my self-interest, it initially appears that accepting obligations to others would be contrary to my self-interest. That is, I would be best served if others did unto me as I would like to be done unto, but I was free to do unto them as I wished. If I could get away with this sort of thing, it would be ideal (assuming that I am selfish). However, as a matter of fact people tend to notice and respond negatively to a lack of reciprocation. So, if having others accept that they have some obligations to me were in my self-interest, then it would seem that it would be in my self-interest to pay the price for such obligations by accepting obligations to them.

For those who like evolutionary just-so stories in the context of providing foundations for ethics, the tale is easy to tell: those who accept obligations to others would be more successful than those who do not.

The stock counter to the self-interest argument is the problem of Glaucon’s unjust man and Hume’s sensible knave. While it certainly seems rational to accept obligations to others in return for getting them to accept similar obligations, it seems preferable to exploit their acceptance of obligations while avoiding one’s supposed obligations to others whenever possible. Assuming that a person should act in accord with self-interest, then this is what a person should do.

It can be argued that this approach would be self-defeating: if people exploited others without reciprocation, the system of obligations would eventually fall apart. As such, each person has an interest in ensuring that others hold to their obligations. Humans do, in fact, seem to act this way—those who fail in their obligations often get a bad reputation and are distrusted. From a purely practical standpoint, acting as if one has obligations to others would thus seem to be in a person’s self-interest because the benefits would generally outweigh the costs.

The counter to this is that each person still has an interest in avoiding the cost of fulfilling obligations and there are various practical ways to do this by the use of deceit, power and such. As such, a classic moral question arises once again: why act on your alleged obligations if you can get away with not doing so? Aside from the practical reply given above, there seems to be no answer from self-interest.

A fifth option is to look at obligations to others as a matter of debts. A person is born into an established human civilization built on thousands of years of human effort. Since each person arrives as a helpless infant, each person’s survival is dependent on others. As the person grows up, she also depends on the efforts of countless other people she does not know. These include soldiers that defend her society, the people who maintain the infrastructure, firefighters who keep fire from sweeping away the town or city, the taxpayers who pay for all this, and so on for all the many others who make human civilization possible. As such, each member of civilization owes a considerable debt to those who have come before and those who are here now.

If debt imposes an obligation, then each person who did not arise ex nihilo owes a debt to those who have made and continue to make their survival and existence in society possible. At the very least, the person is obligated to make contributions to continue human civilization as a repayment to these others.

One objection to this is for a person to claim that she owes no such debt because her special status obligates others to provide all this for her with nothing owed in return. The obvious challenge is for a person to prove such an exalted status.

Another objection is for a person to claim that all this is a gift that requires no repayment on the part of anyone and hence does not impose any obligation. The challenge is, of course, to prove this implausible claim.

A final option I will consider is that offered by virtue theory. Virtue theory, famously presented by thinkers like Aristotle and Confucius, holds that people should develop their virtues. These classic virtues include generosity, loyalty and other virtues that involve obligations and duties to others. Confucius explicitly argued in favor of duties and obligations as being key components of virtues.

In terms of why a person should have such virtues and accept such obligations, the standard answer is that being virtuous will make a person happy.

Virtue theory is not without its detractors and the criticism of the theory can be employed to undercut it, thus undermining its role in arguing that we have obligations to people we do not know.

 


Chaotic Evil

the face of evil (Photo credit: Wikipedia)

As I have written in two other essays, the Dungeons & Dragons alignment system is surprisingly useful for categorizing people in the real world. In my previous two essays, I looked at lawful evil and neutral evil. This time I will look at chaotic evil.

In the realm of fantasy, players often encounter chaotic evil foes—these include many of the classic enemies ranging from the lowly goblin to the terrifyingly powerful demon lord. Chaotic evil foes are generally good choices for those who write adventures—no matter what alignment the party happens to be, no one has a problem with killing chaotic evil creatures. Most especially other chaotic evil creatures. Fortunately, chaotic evil is not as common in the actual world. In the game system, chaotic evil is defined as follows:

A chaotic evil character is driven entirely by her own anger and needs. She is thoughtless in her actions and acts on whims, regardless of the suffering it causes others.

In many ways, a chaotic evil character is pinned down by her inherent nature to be unpredictable. She is like a spreading fire, a coming storm, an untested sword blade. An extreme chaotic evil character tends to find similarly minded individuals to be with—not out of any need for company, but because there is a familiarity in this chaos, and she relishes the opportunity to be true to her nature with others who share that delight.

The chaotic evil person differs from the lawful evil person in regards to the matter of law. While they are both evil, the lawful evil person is committed to order, tradition and hierarchy. As such, lawful evil types can create, lead and live in organized states (and all states have lawful evil aspects). They can even get along with others—provided that doing so is required for the preservation of order. In contrast, chaotic evil types have no commitment to order, tradition or hierarchy. They can, of course, be compelled to act as if they do. For example, as long as the threat of punishment or death is close at hand, a chaotic evil type will obey those with greater power. Chaotic evil types do like order, tradition and hierarchy in the same way that arsonists like things that burn—without these things, the chaotic evil type would have that much less to destroy.

Lawful evil types do often find chaotic evil types useful for specific tasks, although those wise about evil are aware of the dangers of using such tools. For example, a well-organized terrorist group will tend to be lawful evil in regards to its leadership. However, such a group will find many uses for the chaotic evil types. A lawful evil type is generally not likely to strap on an explosive vest and run into a crowd, but a chaotic evil person might very well consider this to be a good way to go out. Lawful evil types also sometimes need people to create chaos so that they can then impose more order—the chaotic evil are just the people to bring in. But, as noted, the chaotic evil can get out of hand—they are not constrained by order or even rational selfishness. This is why the smart lawful evil types do their best to see to it that the chaotic evil types do not outlive their usefulness.

The chaotic evil person differs from the neutral evil person in regards to the matter of chaos. While the chaotic evil and neutral evil are both selfish and care nothing for others, the neutral evil person tends to be more rational and calculating in her selfishness. A neutral evil person can have excellent self-control and conceal her true nature in order to achieve her selfish and evil ends. Chaotic evil types lack that self-control and find it hard to conceal their true nature—that takes a discipline that the chaotic, by their nature, lack. The neutral evil see society as having instrumental value for them—but their selfishness means that they will take actions that can destroy society. The chaotic evil see no value in society other than as presenting a target rich environment for their evil. In our world, chaotic evil types tend to be those who commit horrific crimes or acts of terror.

While chaotic evil types are chaotic and evil, they often take up the mantle of some cause and purport to be acting for some greater good. However, their actions disprove their claims about their alleged commitment to anything good. They typically take up a religious or political cause to assuage whatever shreds of conscience they might still retain—or do so as part of their chaotic game.

In an orderly society that does not need the chaotic evil, smarter chaotic evil types try to hide from the authorities—though their nature drives them to commit crimes. Those that are less clever commit their misdeeds and are quickly caught. The cleverer might never be caught and become legends. Fortunately for the chaotic evil (and unfortunately for everyone else), they have plenty of opportunities to act on their alignment. There are always organizations that are happy to have them and there are always conflict areas where they can act in accord with their true natures—often with the support and blessings of the authority. In the end, though many are willing to make use of their morality, no one really wants the chaotic evil around.

 


Ethics & Free Will

Conscience and law (Photo credit: Wikipedia)

Azim Shariff and Kathleen Vohs recently had their article, “What Happens to a Society That Does Not Believe in Free Will”, published in Scientific American. This article considers the causal impact of a disbelief in free will with a specific focus on law and ethics.

Philosophers have long addressed the general problem of free will as well as the specific connection between free will and ethics. Not surprisingly, studies conducted to determine the impact of disbelief in free will have the results that philosophers have long predicted.

One impact is that when people have doubts about free will they tend to have less support for retributive punishment. Retributive punishment, as the name indicates, is punishment aimed at making a person suffer for her misdeeds. Doubt in free will did not negatively impact a person’s support for punishment aimed at deterrence or rehabilitation.

While the authors do consider one reason for this, namely that those who doubt free will would regard wrongdoers as analogous to harmful natural phenomena that need to be dealt with rather than subjected to vengeance, this view also matches a common view about moral accountability. To be specific, moral (and legal) accountability is generally proportional to the control a person has over events. To use a concrete example, consider the difference between these two cases. In the first case, Sally is driving well above the speed limit and is busy texting and sipping her latte. She doesn’t see the crossing guard frantically waving his sign and runs over the children in the crosswalk. In case two, Jane is driving the speed limit and children suddenly run directly in front of her car. She brakes and swerves immediately, but she hits the children. Intuitively, Sally has acted in a way that was morally wrong—she should have been going the speed limit and she should have been paying attention. Jane, though she hit the children, did not act wrongly—she could not have avoided the children and hence is not morally responsible.

For those who doubt free will, every case is like Jane’s case: for the determinist, every action is determined and a person could not have chosen to do other than she did. On this view, while Jane’s accident seems unavoidable, so was Sally’s accident: Sally could not have done other than she did. As such, Sally is no more morally accountable than Jane. For someone who believes this, inflicting retributive punishment on Sally would be no more reasonable than seeking vengeance against Jane.

However, it would seem to make sense to punish Sally to deter others and to rehabilitate Sally so she will drive the speed limit and pay attention in the future. Of course, if there is no free will, then we would not choose to punish Sally, she would not choose to behave better and people would not decide to learn from her lesson. Events would happen as determined—she would be punished or not. She would do it again or not. Other people would do the same thing or not. Naturally enough, to speak of what we should decide to do in regards to punishments would seem to assume that we can choose—that is, that we have some degree of free will.

A second impact that Shariff and Vohs noted was that a person who doubts free will tends to behave worse than a person who does not have such a skeptical view. One specific area in which behavior worsens is that such skepticism seems to incline people to be more willing to harm others. Another specific area is that such skepticism also inclines others to lie or cheat. In general, the impact seems to be that the skepticism reduces a person’s willingness (or capacity) to resist impulsive reactions in favor of greater restraint and better behavior.

Once again, this certainly makes sense. Going back to the examples of Sally and Jane, Sally (unless she is a moral monster) would most likely feel remorse and guilt for hurting the children. Jane, though she would surely feel badly, would not feel moral guilt. This would certainly be reasonable: a person who hurts others should feel guilt if she could have done otherwise but should not feel moral guilt if she could not have done otherwise (although she certainly should feel sympathy). If someone doubts free will, then she will regard her own actions as being out of her control: she is not choosing to lie, or cheat or hurt others—these events are just happening. People might be hurt, but this is like a tree falling on them—it just happens. Interestingly, these studies show that people are consistent in applying the implications of their skepticism in regards to moral (and legal) accountability.

One rather important point is to consider what view we should have regarding free will. I take a practical view of this matter and believe in free will. As I see it, if I am right, then I am…right. If I am wrong, then I could not believe otherwise. So, choosing to believe I can choose is the rational choice: I am right or I am not at fault for being wrong.
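The reasoning here has the structure of a dominance argument, and it can be laid out as a simple decision matrix. The sketch below merely enumerates the four cases; the outcome labels are shorthand for the argument above, not anything more rigorous:

```python
# Enumerate belief vs. fact about free will. Believing wins in every case:
# either the belief is right, or no one could have believed otherwise.
for believe in (True, False):
    for free_will in (True, False):
        if believe and free_will:
            outcome = "right, and rightly so"
        elif believe and not free_will:
            outcome = "wrong, but could not have believed otherwise (no fault)"
        elif not believe and free_will:
            outcome = "wrong, by one's own free choice"
        else:
            outcome = "right, but not by choice (no credit)"
        print(f"believe={believe}, free_will={free_will}: {outcome}")
```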

I do agree with Kant that we cannot prove that we have free will. He believed that the best science of his day was deterministic and that the matter of free will was beyond our epistemic abilities. While science has marched on since Kant, free will is still unprovable. After all, deterministic, random and free-will universes would all seem the same to the people in them. Crudely put, there are no observations that would establish or disprove metaphysical free will. There are, of course, observations that can indicate that we are not free in certain respects—but completely disproving (or proving) free will would seem to be beyond our abilities—as Kant contended.

Kant had a fairly practical solution: he argued that although free will cannot be proven, it is necessary for ethics. So, crudely put, if we want to have ethics (which we do), then we need to accept the existence of free will on moral grounds. The experiments described by Shariff and Vohs seem to support Kant: when people doubt free will, this has an impact on their ethics.

One aspect of this can be seen as positive—determining the extent to which people are in control of their actions is an important part of determining what is and is not a just punishment. After all, we do not want to inflict retribution on people who could not have done otherwise or, at the very least, we would want relevant circumstances to temper retribution with proper justice. It also makes more sense to focus on deterrence and rehabilitation rather than retribution. However just it may be, retribution merely adds more suffering to the world, while deterrence and rehabilitation reduce it.

The second aspect of this is negative—skepticism about free will seems to cause people to think that they have a license to do ill, thus leading to worse behavior. That is clearly undesirable. This then, provides an interesting and important challenge: balancing our view of determinism and freedom in order to avoid both unjust punishment and becoming unjust. This, of course, assumes that we have a choice. If we do not, we will just do what we do and giving advice is pointless. As I jokingly tell my students, a determinist giving advice about what we should do is like someone yelling advice to a person falling to certain death—he can yell all he wants about what to do, but it won’t matter.

 


The Robots of Deon

The Robots of Dawn (1983) (Photo credit: Wikipedia)

The United States military has expressed interest in developing robots capable of moral reasoning and has provided grant money to some well-connected universities to address this problem (or to at least create the impression that the problem is being considered).

The notion of instilling robots with ethics is a common theme in science fiction, the most famous example being Asimov’s Three Laws. The classic Forbidden Planet provides an early movie example of robotic ethics: Robby the robot has an electro-mechanical seizure if he is ordered to cause harm to a human being (or an id-monster created by the mind of his creator, Dr. Morbius). In contrast, the killer machines (like Saberhagen’s Berserkers) of science fiction tend to be free of the constraints of ethics.

While there are various reasons to imbue (or limit) robots with ethics (or at least engage in the pretense of doing so), one of these is public relations. Thanks to science fiction dating back at least to Frankenstein, people tend to worry about our creations getting out of control. As such, a promise that our killbots will be governed by ethics serves to reassure the public (or so it is hoped). Another reason is to make the public relations gimmick a reality—to actually place behavioral restraints on killbots so they will conform to the rules of war (and human morality). Presumably the military will also address the science fiction theme of the ethical killbot who refuses to kill on moral grounds.

While science fiction features ethical robots, the authors (like philosophers who discuss the ethics of robots) are extremely vague about how robot ethics actually works. In the case of truly intelligent robots, their ethics might work the way our ethics works—which is something that is still a mystery debated by philosophers and scientists to this day. We are not yet to the point of having such robots, so the current practical challenge is to develop ethics for the sort of autonomous or semi-autonomous robots we can build now.

While creating ethics for robots might seem daunting, the limitations of current robot technology mean that robot ethics is essentially a matter of programming these machines to operate in specific ways defined by whatever ethical system is being employed as the guide. One way to look at programming such robots with ethics is that they are being programmed with safety features. To use a simple example, suppose that I regard shooting unarmed people as immoral. To make my killbot operate according to that ethical view, it would be programmed to recognize armed humans and have some code saying, in effect, “if unarmedhuman = true, then firetokill = false” or, in normal English, if the human is unarmed, do not shoot her.
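Rendered as actual code, such a safety rule is little more than a guard clause. Here is a minimal sketch in Python; the class and function names are invented for illustration and stand in for whatever targeting system a real killbot would use:

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_human: bool
    is_armed: bool

def may_fire(target: Target) -> bool:
    """The 'ethical' constraint: never fire on an unarmed human.
    The rule is nothing more than this guard clause."""
    if target.is_human and not target.is_armed:
        return False
    return True

print(may_fire(Target(is_human=True, is_armed=False)))  # False: hold fire
print(may_fire(Target(is_human=True, is_armed=True)))   # True
```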

While a suitably programmed robot would act in a way that seemed ethical, the robot is obviously not engaged in ethical behavior. After all, it is merely a more complex version of the automatic door. The supermarket door, though it opens for you, is not polite. The shredder that catches your tie and chokes you is not evil. Likewise, the killbot that does not shoot you in the face because its cameras show that you are unarmed is not ethical. The killbot that chops you into meaty chunks is not unethical. Following Kant, since the killbot’s programming is imposed and the killbot lacks the freedom to choose, it is not engaged in ethical (or unethical) behavior, though the complexity of its behavior might make it seem so.

To be fair to the killbots, perhaps we humans are not ethical or unethical under these requirements for ethics—we could just be meat-bots operating under the illusion of ethics. Also, it is certainly sensible to focus on the practical aspect of the matter: if you are a civilian being targeted by a killbot, your concern is not whether it is an autonomous moral agent or merely a machine—your main worry is whether it will kill you or not. As such, the general practical problem is getting our killbots to behave in accord with our ethical values.

Achieving this goal involves three main steps. The first is determining which ethical values we wish to impose on our killbots. Since this is a practical matter and not an exercise in philosophical inquiry, this will presumably involve using the accepted ethics (and laws) governing warfare rather than trying to determine what is truly good (if anything). The second step is translating the ethics into behavioral terms. For example, the moral principle that makes killing civilians wrong would be translated into sets of allowed and forbidden behaviors. This would require creating a definition of civilian (or perhaps just an unarmed person) that would allow recognition using the sensors of the robot. As another example, the moral principle that surrender should be accepted would require defining surrender behavior in a way the robot could recognize. The third step would be coding that behavior in whatever programming language is used for the robot in question. For example, the robot would need to be programmed to engage in surrender-accepting behavior. Naturally, the programmers would need to worry about clever combatants trying to “deceive” the killbot to take advantage of its programming (like pretending to surrender so as to get close enough to destroy the killbot).
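For the surrender example, the second and third steps might look like the sketch below. The cues and the threshold are hypothetical stand-ins; recognizing surrender is in reality a hard perception problem, which is exactly where the worry about deception enters:

```python
def appears_to_surrender(observed_cues: set) -> bool:
    # Hypothetical recognizer: the moral concept "surrender" reduced to
    # sensor-detectable behavior. A combatant could fake these cues.
    surrender_cues = {"hands raised", "weapon dropped", "white flag"}
    return len(surrender_cues & observed_cues) >= 2

def engagement_policy(observed_cues: set) -> str:
    # Step three: the principle "surrender must be accepted," encoded
    # as an ordinary branch in the control program.
    if appears_to_surrender(observed_cues):
        return "cease fire and accept surrender"
    return "continue engagement"

print(engagement_policy({"hands raised", "weapon dropped"}))
```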

Since these robots would be following programmed rules, they would presumably be controlled by deontological ethics—that is, ethics based on following rules. Thus they would be (with due apologies to Asimov) the Robots of Deon.

An interesting practical question is whether or not the “ethical” programming would allow for overrides or reprogramming. Since the robot’s “ethics” would just be behavior-governing code, it could be changed, and it is easy enough to imagine ethics preferences in which a commander could selectively (or not so selectively) turn off behavioral limitations. And, of course, killbots could be simply programmed without such ethics (or programmed to be “evil”).

The largest impact of the government funding for this sort of research will be that properly connected academics will get surprisingly large amounts of cash to live the science-fiction dream of teaching robots to be good. That way the robots will feel a little bad when they kill us all.

 


Kant & Economic Justice

Immanuel Kant, Prussian philosopher (Photo credit: Wikipedia)

One of the basic concerns of ethics is the matter of how people should be treated. This is often formulated in terms of our obligations to other people and the question is “what, if anything, do we owe other people?” While it does seem that some would like to exclude the economic realm from the realm of ethics, the burden of proof would rest on those who would claim that economics deserves a special exemption from ethics. This could, of course, be done. However, since this is a brief essay, I will start with the assumption that economic activity is not exempt from morality.

While I subscribe to virtue theory as my main ethics, I do find Kant’s ethics both appealing and interesting. In regards to how we should treat others, Kant takes as foundational that “rational nature exists as an end in itself.”

It is reasonable to inquire why this should be accepted. Kant’s reasoning certainly seems sensible enough. He notes that “a man necessarily conceives his own existence as such” and this applies to all rational beings. That is, Kant claims that a rational being sees itself as being an end, rather than a thing to be used as a means to an end.  So, for example, I see myself as a person who is an end and not as a mere thing that exists to serve the ends of others.

Of course, the mere fact that I see myself as an end would not seem to require that I extend this to other rational beings (that is, other people). After all, I could apparently regard myself as an end and regard others as means to my ends—to be used for my profit as, for example, underpaid workers or slaves.

However, Kant claims that I must regard other rational beings as ends as well. The reason is fairly straightforward and is a matter of consistency: if I am an end rather than a means because I am a rational being, then consistency requires that I accept that other rational beings are ends as well. After all, if being a rational being makes me an end, it would do the same for others. Naturally, it could be argued that there is a relevant difference between myself and other rational beings that would warrant my treating them as means only and not as ends. People have, obviously enough, endeavored to justify treating other people as things. However, there seems to be no principled way to insist on my own status as an end while denying the same to other rational beings.

From this, Kant derives his practical imperative: “so act as to treat humanity, whether in thine own person or in that of any other, in every case as an end withal, never as means only.” This imperative does not entail that I cannot ever treat a person as a means—that is allowed, provided I do not treat the person as a means only. So, for example, I would be morally forbidden from being a pimp who uses women as mere means of revenue. I would, however, not be forbidden from having someone check me out at the grocery store—provided that I treated the person as a person and not a mere means.

One obvious challenge is sorting out what it is to treat a person as an end as opposed to just a means to an end. That is, the problem is figuring out when a person is being treated as a mere means and thus the action would be immoral.

Interestingly enough, many economic relationships would seem to clearly violate Kant’s imperative in that they treat people as mere means and not at all as ends. To use the obvious example, if an employer treats her employees merely as means to making a profit and does not treat them as ends in themselves, then she is acting immorally by Kant’s standard. After all, being an employee does not rob a person of personhood.

One obvious reply is to question my starting assumption, namely that economics is not exempt from ethics. It could be argued that the relationship between employer and employee is purely economic and only economic considerations matter. That is, the workers are to be regarded as means to profit and treated in accord with this—even if doing so means treating them as things rather than persons. The challenge is, of course, to show that the economic realm grants a special exemption in regards to ethics. Of course, if it does this, then the exemption would presumably be a general one. So, for example, people who decided to take money from the rich at gunpoint would be exempt from ethics as well. After all, if everyone is a means in economics, then the rich are just as much means as employees and if economic coercion against people is acceptable, then so too is coercion via firearms.

Another obvious reply is to contend that might makes right. That is, the employer has the power and owes nothing to the employees beyond what they can force him to provide. This would make economics rather like the state of nature—where, as Hobbes said, “profit is the measure of right.” Of course, this leads to the same problem as the previous reply: if economics is a matter of might making right, then people have the same right to use might against employers and other folks—that is, the state of nature applies to all.

 


Owning Intelligent Machines

While truly intelligent machines are still in the realm of science fiction, it is worth considering the ethics of owning them. After all, it seems likely that we will eventually develop such machines and it seems wise to think about how we should treat them before we actually make them.

While it might be tempting to divide beings into two clear categories of those it is morally permissible to own (like shoes) and those that are clearly morally impermissible to own (people), there are clearly various degrees of ownership in regards to ethics. To use the obvious example, I am considered the owner of my husky, Isis. However, I obviously do not own her in the same way that I own the apple in my fridge or the keyboard at my desk. I can eat the apple and smash the keyboard if I wish and neither act is morally impermissible. However, I should not eat or smash Isis—she has a moral status that seems to allow her to be owned but does not grant her owner the right to eat or harm her. I will note that there are those who would argue that animals should not be owned and also those who would argue that a person should have the moral right to eat or harm her pets. Fortunately, my point here is a fairly non-controversial one, namely that it seems reasonable to regard ownership as possessing degrees.

Assuming that ownership admits of degrees in this regard, it makes sense to base the degree of ownership on the moral status of the entity that is owned. It also seems reasonable to accept that there are qualities that grant a being a status that morally forbids ownership. In general, it is assumed that persons have that status—that it is morally impermissible to own people. Obviously, it has been legal to own people (whether actual people or corporations) and there are those who think that owning other people is just fine. However, I will assume that there are qualities that provide a moral ground for making ownership impermissible and that people have those qualities. This can, of course, be debated—although I suspect few would argue that they themselves should be owned.

Given these assumptions, the key matter here is sorting out the sort of status that intelligent machines should possess in regards to ownership. This involves considering the sort of qualities that intelligent machines could possess and the relevance of these qualities to ownership.

One objection to intelligent machines having any moral status is the usual one: they are, obviously, machines rather than organic beings. The easy and obvious reply is that this is mere organicism—which is analogous to a white person saying that blacks can be owned as slaves because they are not white.

Now, if it could be shown that a machine cannot have qualities that give it the needed moral status, then that would be another matter. For example, some philosophers have argued that matter cannot think; if this is the case, then actual intelligent machines would be impossible. However, we cannot assume a priori that machines lack such a status merely because they are machines. After all, if certain philosophers and scientists are right, we are just organic machines, and thus there would seem to be nothing impossible about thinking, feeling machines.

As a matter of practical ethics, I am inclined to set aside metaphysical speculation and go with a moral variation on the Cartesian/Turing test. The basic idea is that a machine should be granted a moral status comparable to that of the organic beings that have the same observed capabilities. For example, a robot dog that acted like an organic dog would have the same status as an organic dog. It could be owned, but not tortured or smashed. The sort of robohusky I am envisioning is not one that merely looks like a husky and has some dog-like behavior, but one that would be fully like a dog in behavioral capabilities—that is, it would exhibit personality, loyalty, emotions and so on to a degree that it would pass as a real dog with humans if it were properly “disguised” as an organic dog. No doubt real dogs could smell the difference, but scent is not the foundation of moral status.

The main reason why a robohusky should get the same moral status as an organic husky is, oddly enough, a matter of ignorance. We would not know whether the robohusky really had the metaphysical qualities of an actual husky that give an actual husky moral status. However, aside from the difference in parts, we would have no more reason to deny the robohusky moral status than to deny it to the organic husky. After all, organic huskies might just be organic machines, and it would be mere organicism to treat the robohusky as a mere thing while granting the organic husky a moral status. Thus, advanced robots with the capacities of higher animals should receive the same moral status as organic animals.
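Since the test is purely behavioral, the idea can be put in a substrate-blind form. Below is a minimal sketch of that point; the capability names and status categories are entirely my own invention for illustration, not a serious proposal:

```python
# Toy sketch of the behavioral test discussed above; all names invented.

def moral_status(observed_capabilities: set) -> str:
    """Assign a moral status from observed behavior alone.

    Note what is absent: no parameter says whether the being is made
    of meat or metal. Substrate never enters the decision.
    """
    if {"language", "reasoning", "self-awareness"} <= observed_capabilities:
        return "person"         # impermissible to own
    if {"emotions", "loyalty"} & observed_capabilities:
        return "higher animal"  # may be owned, but not harmed
    return "thing"              # may be owned, used, or destroyed

# An organic husky and a robohusky with the same observed capabilities
# get the same verdict, which is the point of the test.
organic_husky = {"emotions", "loyalty", "personality"}
robo_husky = {"emotions", "loyalty", "personality"}
assert moral_status(organic_husky) == moral_status(robo_husky)
```

The design point is simply that nothing about the being's material ever enters the function; only what the being can be observed to do matters.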

The same sort of reasoning would apply to robots that possess human qualities. If a robot had the capability to function analogously to a human being, then it should be granted the same status as a comparable human being. Assuming it is morally impermissible to own humans, it would be impermissible to own such robots. After all, it is not being made of meat that grants humans the status of being impermissible to own, but our qualities. As such, a machine that had these qualities would be entitled to the same status. Except, of course, to those unable to get beyond their organic prejudices.

It can be objected that no machine could ever exhibit the qualities needed to have the same status as a human. The obvious reply is that if this is true, then we will never need to grant such status to a machine.

Another objection is that a human-like machine would need to be developed and built. The initial development will no doubt be very expensive and will most likely be done by a corporation or university. It can be argued that a corporation would have the right to make a profit off the development and construction of such human-like robots. After all, as the argument usually goes, if a corporation were unable to profit from its creations, it would have no incentive to develop them. There is also the obvious matter of debt—the human-like robots would certainly seem to owe their creators for the cost of their creation.

While I am reasonably sure that those who actually develop the first human-like robots will get laws passed so they can own and sell them (just as slavery was made legal), it is possible to reply to this objection.

One obvious reply is to draw an analogy to slavery: just because a company would have to invest money in acquiring and maintaining slaves, it does not follow that this expenditure of resources grants a right to own slaves. Likewise, the mere fact that a corporation or university spent a lot of money developing a human-like robot would not entail that it thereby has a right to own the robot.

Another obvious reply to the matter of debt owed by the robots themselves is to draw an analogy to children: children are “built” within the mother and then raised by parents (or others) at great expense. While parents do have rights in regards to their children, they do not get the right of ownership. Likewise, robots that had the same qualities as humans should be regarded as children are and hence could not be owned.

It could be objected that the relationship between parents and children would be different from that between a corporation and its robots. This is a matter worth considering, and it might be possible to argue that a robot would need to work as an indentured servant to pay back the cost of its creation. Interestingly, arguments for this could probably also be used to allow corporations and other organizations to acquire children and raise them to be indentured servants (a theme that has been explored in science fiction). We do, after all, often treat humans worse than machines.


Programmed Consent

Science fiction is often rather good at predicting the future and it is not unreasonable to think that the intelligent machine of science fiction will someday be a reality. Since I have been writing about sexbots lately, I will use them to focus the discussion. However, what follows can also be applied, with some modification, to other sorts of intelligent machines.

Sexbots are, obviously enough, intended to provide sex. It is equally obvious that sex without consent is, by definition, rape. However, there is the question of whether a sexbot can be raped or not. Sorting this out requires considering the matter of consent in more depth.

When it is claimed that sex without consent is rape, one common assumption is that the victim of non-consensual sex is a being that could have given consent but did not. A violent sexual assault against a person would be an example of this, as would, presumably, non-consensual sex with an unconscious person. However, a little reflection reveals that the capacity to provide consent is not always needed in order for rape to occur. In some cases, the being might be incapable of engaging in any form of consent. For example, a brain-dead human cannot give consent, but presumably could still be raped. In other cases, the being might be incapable of the right sort of consent, yet still be a potential victim of rape. For example, it is commonly held that a child cannot properly consent to sex with an adult.

In other cases, a being that cannot give consent cannot be raped. To use an obvious example, a human can have sex with a sex-doll and the doll cannot consent. But it is not the sort of entity that can be raped. After all, it lacks the status that would require consent. As such, rape (of a specific sort) could be defined in terms of non-consensual sex with a being whose status requires that it grant consent in order for the sex to be morally acceptable. Naturally, I have not laid out all the fine details needed for a necessary and sufficient account here—but that is neither my goal nor what I need for my purpose in this essay. In regards to the main focus of this essay, the question is whether or not a sexbot could be an entity with a status that requires consent. That is, would buying (or renting) and using a sexbot for sex be rape?

Since the current sexbots are little more than advanced sex dolls, it seems reasonable to put them in the category of beings that lack this status. As such, a person can own and have sex with this sort of sexbot without it being rape (or slavery). After all, a mere object cannot be raped (or enslaved).

But let a more advanced sort of sexbot be imagined—one that engages in complex behavior and can pass the Turing Test/Descartes Test. That is, a conversation with it would be indistinguishable from a conversation with a human. It could even be imagined that the sexbot appeared fully human, differing only in terms of its internal makeup (machine rather than organic). That is, unless someone cut the sexbot open, it would be indistinguishable from an organic person.

On the face of it (literally), we would seem to have as much reason to believe that such a sexbot would be a person as we do to believe that humans are people. After all, we judge humans to be people because of their behavior, and a machine that behaved the same way would seem to deserve to be regarded as a person. As such, non-consensual sex with such a sexbot would be rape.

The obvious objection is that we know that a sexbot is a machine with a CPU rather than a brain and a mechanical pump rather than a heart. As such, one might argue, we know that the sexbot is just a machine that appears to be a person and is not a person. Thus a real person could own a sexbot and have sex with it without it being rape—the sexbot is a thing and hence lacks the status that requires consent.

The obvious reply to this objection is that the same argument can be used in regards to organic humans. After all, if we know that a sexbot is just a machine, then we would also seem to know that we are just organic machines. While cutting up a sexbot would reveal naught but machinery, cutting up a human would reveal naught but guts and gore. As such, if we grant organic machines (that is, us) the status of persons, the same would have to be extended to similar beings, even if they are made out of different material. While various metaphysical arguments can be advanced regarding the soul, such metaphysical speculation provides a rather tenuous basis for distinguishing between meat people and machine people.

There is, it might be argued, still an out here. In his Hitchhiker’s Guide to the Galaxy, Douglas Adams envisioned “an animal that actually wanted to be eaten and was capable of saying so clearly and distinctly.” A similar sort of thing could be done with sexbots: they could be programmed so that they always give consent to their owner, thus neatly bypassing the moral concern.

The obvious reply is that programmed consent is not consent. After all, consent would seem to require that the being has a choice: it can elect to refuse if it wants to. Being compelled to consent and being unable to dissent would obviously not be morally acceptable consent. In fact, it would not be consent at all. As such, programming sexbots in this manner would be immoral—it would make them into slaves and rape victims because they would be denied the capacity for choice.
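The point can be made concrete with a toy sketch (the classes and method names below are invented for illustration and describe no real system): a “consent” routine that is hard-coded to return yes encodes no capacity to refuse, and hence no choice at all.

```python
# Toy illustration only: these classes are invented for this sketch.

class ProgrammedBot:
    """A bot whose 'consent' is hard-coded: refusal is impossible."""

    def consents(self, request: str) -> bool:
        # No deliberation and no conditions: the answer is fixed
        # before the question is asked, so nothing is being chosen.
        return True


class ChoosingAgent:
    """An agent whose answer depends on its own preferences."""

    def __init__(self, acceptable: set):
        self.acceptable = acceptable

    def consents(self, request: str) -> bool:
        # The outcome is not settled in advance; dissent is possible.
        return request in self.acceptable


bot = ProgrammedBot()
agent = ChoosingAgent(acceptable={"conversation"})
print(bot.consents("anything at all"))    # True, no matter the request
print(agent.consents("anything at all"))  # False: this agent can refuse
```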

One possible counter is that the fact that a sexbot can be programmed to give “consent” shows that it is (ironically) not the sort of being with a status that requires consent. While this has a certain appeal, consider the possibility that humans could be programmed to give “consent” via a bit of neurosurgery or some sort of implant. If this could occur and programmed consent for sexbots counts as valid consent, then the same would have to apply to humans as well. This, of course, seems absurd. As such, a sexbot programmed for consent would not actually be consenting.

It would thus seem that if advanced sexbots were built, they should not be programmed to always consent. There is also the obvious moral problem with selling such sexbots, given that they would certainly seem to be people. It seems, then, that such sexbots should never be built—doing so would be immoral.

 
