Kant & Sexbots

Robotina [005] (Photo credit: PVBroadz)

An episode of the Fox sci-fi buddy-cop show Almost Human dealt with sexbots, which inspired me to revisit the ethics of sexbots. While the advanced, human-like models of the show are still things of fiction, there is already considerable research and development devoted to creating sexbots. As such, it seems well worth considering the ethical issues involving sexbots, real and fictional.

At this time, sexbots are clearly mere objects—while often made to look like humans, they do not have the qualities that would make them even person-like. As such, ethical concerns involving these sexbots would not involve concerns about wrongs done to such objects—presumably they cannot be wronged. One potentially interesting way to approach the matter of sexbots is to make use of Kant’s discussion of ethics and animals.

In his ethical theory Kant makes it quite clear that animals are means rather than ends. They are mere objects. Rational beings, in contrast, are ends. For Kant, this distinction rests on the fact that rational beings can (as he sees it) choose to follow the moral law. Animals, lacking reason, cannot do this. Since animals are means and not ends, Kant claims that we have no direct duties to animals. They are classified with the other “objects of our inclinations” that derive value from the value we give them. Sexbots would, obviously, qualify as paradigm “objects of our inclinations.”

Interestingly enough, Kant argues that we should treat animals well. However, he does so while also trying to avoid ascribing animals themselves any moral status. Here is how he does it (or tries to do it).

While Kant is not willing to accept that we have any direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing X would obligate us to that human, then an animal doing X would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in his old age.

While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to shoot the dog?

Kant’s answer seems to be rather consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will likely be damaged. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.

Interestingly enough, Kant discusses how people develop cruelty—they often begin with animals and then work up to harming human beings. As I point out to my students, Kant seems to have anticipated the psychological devolution of serial killers.

Kant goes beyond merely enjoining us to not be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being rather gentle with a worm he found. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are essentially practice for us: how we treat them is training for how we will treat human beings.

In the case of the current sexbots, they obviously lack any meaningful moral status of their own. They do not feel or think—they are mere machines that might happen to be made to look like a human. As such, they lack all the qualities that might give them a moral status of their own.

Oddly enough, sexbots could be taken as being comparable to animals, at least as Kant sees them. After all, animals are mere objects and have no moral status of their own. Likewise for sexbots. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well. Perhaps this would also apply to sexbots. That is, perhaps it makes no sense to talk about good or bad relative to such objects. Thus, a key matter to settle is whether sexbots are more like animals or more like stones—at least in regards to the matter at hand.

If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how such treatment affects the character of the person engaging in it. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog. This should also extend to sexbots. For example, if engaging in certain activities with a sexbot would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with a sexbot would make a person more inclined to be kind to other rational beings, then the person should engage in that behavior. It is also worth considering that perhaps people should not engage in any behavior with sexbots—that having sex of any kind with a bot would be damaging to the person’s humanity.

Interestingly enough (or boringly enough), this sort of argument is often employed to argue against people watching pornography. The gist of such arguments is that viewing pornography can condition people (typically men) to behave badly in real life or at least have a negative impact on their character. If pornography can have this effect, then it seems reasonable to be concerned about the potential impact of sexbots on people. After all, pornography casts a person in a passive role viewing other people acting as sexual objects, while a sexbot allows a person to have sex with an actual sexual object.

  1. Aside from the concept of damaging one’s humanity, does the consciousness of animals vs. the non-consciousness of sexbots play a part in how we treat them? Animals are, to some degree, conscious and aware, but unable to give consent or understand the concept of consent, and they are protected, just as children, the mentally disabled, and people under the influence of drugs, illness/injury, or alcohol are. Sexbots, and by extension all other sex toys which do not resemble humans but are used in the same way, do not as of yet have any level of consciousness at all. Do dildos deserve rights? I am not trying to be facetious, but pointing out that consciousness itself is a huge part of the question.

  2. If sexbots develop to the point where they can mimic or perform human behaviour to a highly convincing degree, evincing both human levels of pleasure and/or pain, would we then require separate or new levels of consent from them before using them? That, of course, heads directly into knowing when something is alive, aware, and conscious of its own thinking and of itself. Of course, if robots of any sort were developed enough to become aware, they should then have the right to consent and the rights of persons.

  3. I think the breakdown between means and ends is in the concepts of intersubjectivity and the extended mind. Things as objects can only be objects as concepts, and any subjectivity of the object is imparted and not within the thing itself. Think of the pathetic fallacy. I do not know what others are thinking or feeling and must rely on my interpretation of signs and symbols to arrive at judgements. Extend this to ‘objects’ which are not subjects and we have the same situation, because we are still dealing with signs and symbols; we cannot directly experience the thing in itself. In short, whether ends or means, we are still dealing only with our own experience (mind), which we also must interpret through signs and symbols for the experience to be coherent. If I mistreat anything, I am acting counter to my own ethical orientation, whether the thing be a living being or an object, and I am in contradiction with myself. If I yell at my cat because it is irritating me with its meowing, I am in contradiction with my ontological goal of caring for myself by caring for my cat. The same applies to mistreating objects. If I do not perform maintenance on my car, house, etc., I cannot say that I am achieving my ontological goal of being an ethical person.

  4. by the same token, then, do we owe any ethical behavior towards in-game AI in, say, a GTA-type video game? is our humanity tarnished by the mistreatment of virtual bots which seem to exhibit clear “feelings” of pleasure or pain? if we choose to play an evil character in a virtual world, is our real life humanity affected?

    i would argue “no”, as these are mere virtual, synthetic creations which are programmed for entertainment – human catharsis and imaginary outlet – thus removing themselves from the realm of the ethical…

    animals (ie sentient organic lifeforms) do not equal synthetic creations programmed by humans for a singular purpose or entertainment – in the case of this article, sex… and like an AI in a video game, humans are not ethically bound to treat it any differently than any other toy…

    where this all gets dicey, i admit, is when an AI is indistinguishable from organic sentience EMOTIONALLY… which is why if “sexbots” should exist at all, the ethics lie in NOT creating it with more self awareness than is needed for its intended purpose – sexual entertainment…

  5. please excuse typos, typed that fast on my lunch break 😉

  6. I can’t add anything to what these fine people have said, just a bit of rhetoric; has technology ever simplified the moral calculus intrinsic to the human condition? Even the cost of a stronger society was oft borne by its neighbors!
