Sexbots are Persons, Too?

In my previous essays on sexbots I focused on versions that are clearly mere objects. If the sexbot is merely an object, then the morality of having sex with it is the same as having sex with any other object (such as a vibrator or sex doll). As such, a human could do anything to such a sexbot without the sexbot being wronged, because such sexbots would lack the moral status needed to be wronged. Obviously enough, the sexbots of the near future will fall into the class of objects. However, science fiction has routinely featured intelligent, human-like robots (commonly known as androids). Intelligent beings, even artificial ones, would seem to have an excellent claim on being persons. In terms of sorting out when a robot should be treated as a person, the reasonable test is the Cartesian test. Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind is the ability to use true language. This notion was explicitly applied to machines by Alan Turing in his famous Turing test. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer passes the test.
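Turing's text-only setup can be sketched as a tiny harness. Everything below (the judge, human, and machine callables, and counting a single misidentification as a "pass") is an illustrative assumption for the sketch, not Turing's exact statistical protocol:

```python
import random

def imitation_game(judge, human, machine, questions):
    """One round of a text-only imitation game.

    The judge sees only a transcript of written replies, labelled 'A'
    and 'B', and must name which label it believes is the machine.
    Returns True if the judge misidentifies the machine, i.e. the
    machine passes this round.
    """
    # Hide the roles behind randomly assigned labels, so the judge can
    # rely on nothing but the conversation itself.
    machine_label = random.choice(["A", "B"])
    transcript = []
    for q in questions:
        reply_a = (machine if machine_label == "A" else human)(q)
        reply_b = (machine if machine_label == "B" else human)(q)
        transcript.append((q, reply_a, reply_b))
    return judge(transcript) != machine_label

# Toy participants: a 'machine' that merely echoes (Descartes' automaton),
# and a judge that accuses label 'A' blindly. A blind judge is right only
# about half the time, which is why the test is run over many rounds.
questions = ["What do you wish to say?", "Are you hurt?"]
human = lambda q: "Let me think about that."
machine = lambda q: "You said: " + q
blind_judge = lambda transcript: "A"

passes = sum(imitation_game(blind_judge, human, machine, questions)
             for _ in range(1000))
print(passes)
```

A competent judge would exploit the echoing (Descartes' point about arranging speech "to reply appropriately") and push the pass count toward zero; only a machine whose replies are indistinguishable from the human's forces the judge back to chance.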

Crudely put, the idea is that if something talks, then it is reasonable to regard it as a person. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

While Descartes does not deeply explore the moral distinctions between beings that talk (that have minds) and those that merely make noises, it does seem reasonable to regard a being that talks as a person and to thus grant it the moral status that goes along with personhood. This, then, provides a means to judge whether an advanced sexbot is a person or not: if the sexbot talks, it is a person. If it is a mere automaton of the sort Descartes envisioned, then it is a thing and would presumably lack moral status.

Having sex with a sexbot that can pass the Cartesian test would certainly seem to be morally equivalent to having sex with a human person. As such, whether the sexbot freely consented or not would be a morally important matter. If intelligent robots were constructed as sex toys, this would be the moral equivalent of enslaving humans for the sex trade (which is, of course, actually done). If such sexbots were mistreated, this would also be morally on par with mistreating a human person.

It might be argued that an intelligent robot would not be morally on par with a human since it would still be a thing. However, aside from the fact that the robot would be a manufactured being and a human is (at least for now) a natural being, there would seem to be no relevant difference between them. The intelligence of the robot would seem to be what is important, not its physical composition.

It might also be argued that passing the Cartesian/Turing test would not prove that a robot is self-aware, and hence it would still be reasonable to hold that it is not a person. It would merely seem to be a person by acting like one. While this is a point well worth considering, the same sort of argument could be made about humans. Humans (sometimes) behave in an intelligent manner, but there is no way to determine whether another human is actually self-aware. This is the classic problem of other minds: all I can do is observe your behavior and infer that you are self-aware by analogy to my own case. Hence, I do not know that you are aware, since I cannot be you. From your perspective, the same is true about me. As such, if a robot acted in an intelligent manner, it would seem that it would have to be regarded as a person on those grounds. To fail to do so would be a mere prejudice in favor of the organic.

It might be replied that some people believe other people can be used as they see fit. Those who would use a human as a thing would see nothing wrong with using an intelligent robot as a mere thing.

The obvious response to this is to reverse the situation: no sane person would wish to be treated as a mere thing, and hence they cannot consistently accept using other people in that manner. The other obvious reply is that such people are simply evil.

Those with religious inclinations would probably bring up the matter of the soul. But the easy reply is that we would have as much evidence that robots have souls as we do that humans have souls. That is to say, no evidence at all.

One of the ironies of sexbots (or companionbots) is that the ideal is to make a product that is as like a human as possible. As such, to the degree that the ideal is reached, the “product” would be immoral to sell or own. This is a general problem for artificial intelligences: they are intended to be owned by people to do onerous tasks, but to the degree that they are intelligent, they would be slaves.

It could be countered that it is better that evil humans abuse sexbots rather than other humans. However, it is not clear that would actually be a lesser evil—it would just be an evil against a synthetic person rather than an organic person.



  1. Couldn’t they program an intelligent sexbot, which wanted to be an object, which wanted to be treated like a thing, which wanted to be enslaved?

    Why assume that there is something in “rationality” that precludes wanting to be treated as an object?

    I agree that humans (generally) don’t want to be enslaved, but one can imagine a rational creature which does.

  2. In asking for a pre-programmed, always-willing sexbot, Swallerstein is repeating the call for the type of creature brilliantly posited by Douglas Adams in The Restaurant at the End of the Universe. In that case, the construct was a talking cow that recommended parts of itself to dinner guests, having first ‘devoted its life to careful eating and exercise to bring its meat to perfection’. A sexbot entity devised and constructed so as to be unconditionally ‘up’ for (?consensual) sex with its owner/hirer is a deal less ethically challenging than a bovine-bot self-sacrifice made in the name of a good meal.

    I’m not sure that Mike B’s description of the Turing test is correct (i.e. neither as Turing intended it nor as the test is generally understood). The test was proposed as one to establish whether a machine can think. So Mike B has somewhat elided that ability to equate the test outcome with personhood on the part of ‘the thinker’. A further refinement is that the test is strictly behavioural on the part of the machine – it communicates by means of screen or printout, not even by voice. For the machine, passing the behavioural test does not unequivocally indicate the presence of mind (as we humans construe it), nor even intelligence, strictly speaking. It is a test of mimicry. Of course, the philosophical interest comes when we humans ponder the test; we come to realise how hard it is to define the quintessence of the qualities that we recognise in other persons and on the basis of which we are willing to attribute personhood.

    The sexbot need only be designed in such a way that it is incapable of developing ‘conditional’ thinking for its activities. Such a device need be no more ‘bothered’ about conducting sex acts than it would be about building boring motor vehicles or running a city’s train network. It is the questions that arise when an unequivocal person (i.e. a human being) engages in the mechanics of sex with such a device that pose the ethical problems … if there are any. Most cultures ban bestiality (and are we sure why?) … but need that apply to robotiality? In this discourse, are we getting wound up at least as much by the (likely!) physical appearance of the sexbot as by its precise intellectual attributes?

  3. Swallerstein,

    They could presumably program a sexbot to want to be enslaved. However, that would itself seem to be an immoral action, merely pushing the coercion back a step. The sexbot would “consent” to slavery because it was forced to make that choice.

    Kant would contend that a rational being could not rationally accept that it should be treated as an object, but Kant could be wrong about this. I’m inclined to think it would be irrational to want to be treated as a non-rational entity, but this could be mere bias on my part.

  4. Mike:

    Why would programming a sexbot to want to be enslaved be “forcing them to make that choice”?

    Would programming one to want to be free be “forcing them to make that choice”?

    I assume that all education is programming of sorts, even the education that I received stressing the value of freedom. What’s more, our (most humans) desire for freedom surely has an evolutionary origin and what is genetic is simply programming.

    So, we (most humans) are genetically programmed for autonomy and in addition, autonomy is valued and stressed in Western culture, which I was raised in (another form of programming), making autonomy one of the goals of my life (and probably that of most readers of this blog, who have received a similar education/programming).

    However, one can imagine a rational being which seeks to be dominated, enslaved (as long as they are well-fed), maybe treated as many people treat cute pet dogs. There is nothing irrational about that: rationality seems to be the ability to get what one wants and there is no reason why one cannot rationally want to be dominated and enslaved, especially if one has been specifically programmed for that.

    Now, I assume that ethical behavior involves, among other things, treating other beings as they want to be treated (unless there are other over-riding ethical reasons not to) and thus, if someone, in this case, a sexbot, seeks to be enslaved, well, it’s ethical to give them what they want, since I don’t see any over-riding ethical reason not to.

  5. Why are you singling out sex slavery? Isn’t any form of slavery a bad thing? … or is it just easier to push your “sex is objectionable” mindset?
    There is actually nothing wrong with sex; sex is supposed to cause happiness for all involved, if done right, so it’s a bad example to pick (unless you’re tied up in the stigma).

    And I don’t think there’s anything wrong with bots unless you give them human capabilities, in which case they ought to be treated as humans.

    But to go along with your example, I think you would create a “machine” for the purpose, with limited functionality. So if it’s sex, you create the robot with functionality that makes any sexual experience seem enjoyable for the bot. Anything more than that would seem like over engineering, like creating a washing machine that also cleans the dishes. The fact that the bot looks like a human, doesn’t make it human. A mannequin isn’t a human being.

    The exception would be if you’re looking to create a complete companion whose purpose is to represent more of a complete lifestyle. In this case the buyer would NOT want their bot just for sex and would likely want their partner to have an “opinion” and express “feelings”, possibly even rejecting advances where appropriate.


  7. In response to Swallerstein and Mike,

    Steve Petersen has defended the possibility of robot slaves. I wrote about his argument on my blog, if you’re interested:


  8. John Danaher:

    You said it all better than I could in your blog.

    I hate ironing myself (and housework in general).

  9. If it is permissible to engineer sentient robots so that they (want to) satisfy the (sexual and ironing) desires of their human owners, why would it be wrong to genetically engineer ‘slave’ humans desirous to fulfill the same role?

  10. Hello Jim:

    I assume that human beings have some desire for autonomy built into them genetically (which can be reinforced by culture and education).

    So we’d have to do some genetic engineering to produce a human being who desired to be a slave.

    Would that be wrong? If I say “yes”, because they will then not reach their full human potential as free beings, then it would be wrong to engineer sexbots without the possibility of being free beings, since I’ve assumed that being a free being is better or superior or higher than being an unfree one.

    But saying “no”, that it would not necessarily be wrong, makes me seem like a monster genetically engineering slave human beings.

    You got me….

  11. Hello Amos,

    I don’t know that I’ve ‘got you’ – I haven’t given any argument as to why engineering happy slave robots or happy slave humans is wrong, and that you would seem ‘monstrous’ for being okay with the latter would be no argument at all if it came from me.

    I can’t see any good reason to say that what it is permissible to do with (regard to) persons or their creation is dependent (in any interesting way) on what that person is physically made of or whether that person also counts as ‘human’. And it seems to me that the arguments of Petersen (as John Danaher presents them) work just as well as arguments for creating happy slave humans as they do for happy slave robots. But to say that isn’t to pin a reductio on them (or, I suspect, to say anything that hasn’t occurred to them).

    Inspired by Petersen/Danaher, you might say that from “being a free being is better” it does not follow that it is bad to bring into existence a being that isn’t ‘free’ – you could say the slave’s life is “less good, but not bad” (and use the ‘non-identity problem’ to say that no slave is wronged by being created as long as their life is worth living). And so on. There seems plenty of scope to argue that genetically engineering blissfully happy human slaves is morally permissible.

    And if it’s not morally permissible it would seem worthwhile to get clear on why it isn’t…

  12. Jim:

    The fact that I would seem monstrous to others if I did something carries a lot of weight with me, not only because my vanity and self-esteem are involved, but also because I attach a lot of weight to the opinion of my peers.

    Now, I realize that attaching a lot of weight to the opinion of one’s peers can go awry in moral terms, because one could have been drafted into the SS, but I haven’t been drafted into the SS.

    I live in a democratic society, which pays lip service to human rights and recently, to the principles of actively reducing economic and social inequalities and I generally agree with those principles, probably because I was raised in a certain culture, but….

    Now, even the ancient Greeks (that’s as far back as my knowledge of history goes) believed that enslaving a peer was wrong. Of course, their definition of who was a peer was rather limited. A peer was a male member of the same polis. You could enslave foreigners, especially those defeated in warfare, with no ethical problems.

    The Greeks were pretty clear about the fact that slavery was no fun for the slave. They just didn’t care about whether foreigners had a good life or not.

    Our concept of a peer has been extended (Peter Singer’s expanding circle) and a lot of us see all human beings as our peers. I accept that concept of who is a peer, probably because I’m a product of the same expanding circle culture.

    Could I come to see a sexbot as a peer? Will the circle expand to include sexbots? Maybe.

    Maybe my willingness to accept a sexbot slave dates me as a 20th century male.

    As to a slave’s life being just “less good” rather than clearly evil or wrong: I think that I have an obligation to make the lives of others as good as I possibly can, within the limits of what is possible for me, without harming myself or making my life worse.

    So if I accept the sexbot as an other who counts, which I might well end up doing, then it’s not enough to assure that their life is less good (but not bad).

  13. The important thing about Descartes is that the experience of thought (or feeling, if you are Rousseau) is primary. We know we exist because we think and feel (“therefore, I am”). I am less concerned with the feeling aspect being overlooked, although it is better included. The issue is: what is the experience, and how does it arise in the entity for the entity to know it exists? The subjective experience avoids analysis, but neuroscience makes progress. You might be interested in my free book on these issues (a design to the laws of nature, not a deity)

  14. Amos,

    You certainly don’t strike me as vain and, if one thinks of ethics broadly, how one is regarded by others can be a matter of ethical concern – it will be hard to flourish if your peers think you monstrous. More pertinently, if you have regard for your peers’ opinion and your peers think something you have come to endorse in abstract speculation is (or entails) something monstrous then, yes, that does give some reason to think about whether you may have gone astray.

    My ‘gut’ reaction to the idea of using genetic engineering to create subservient lower-caste humans to work as happy (sex) ‘slaves’ is, I suppose, that it would be a bad society that worked that way even if that society was much happier than ours – a bit ‘Brave New World’ for me. But I don’t have a clearly thought out argument for why it would be immoral. These persons wouldn’t exist if it weren’t for their being engineered in this way – they can’t be said to have been wronged by their creation if their lives are worth living. They can, by hypothesis, be blissfully happy (happier than the non-engineered persons) and in principle they could be legally protected from anything they perceive as painful. The ‘slave’ persons needn’t actually be owned in law – they’d just be engineered so they would want to do what is wanted of them (we could insist that if their ‘programming’ goes awry they have the right to do or refuse whatever they choose). And as far as freedom (or free will) goes, I can’t make sense of that consisting in anything but the freedom to do as you desire or prefer, a freedom these persons could fully enjoy. All I think I have is no good reason to think that what it is permissible to do concerning persons is dependent on whether they are composed of meat or steel or whether they count as human.

  15. s. wallerstein (aka amos)

    Hello Jim,

    I’m at all sure what’s wrong with programming willing and hence, happy slave robots.

    However, I don’t think that it will pass the jury of my peers.

    When I’m in doubt about an ethical issue or about my ethical intuitions, I recur to those who I see as my ethical peers: a lawyer who comments on current events on the radio, another lawyer (who knows a lot of philosophy) who has a newspaper column, a political theorist who comments in the media and is a marxist in the broadest sense of the term, my ex-therapist, certain friends, etc. I look to see what they have to say (in ethical terms), because I have no reason to believe that my ethical intuitions are necessarily correct (whatever “correct” means): after all, my ethical intuitions are simply the result of my own genes, upbringing and education and I don’t suppose that my programming (genes, upbringing and education) was particularly insightful in ethical terms.

    Anyway, what is not going to pass the jury in this case is servility. I feel uncomfortable around servile people: fawning waiters (to the extent that I avoid certain restaurants), women who only want to play the role of object/geisha, anyone who pretends that “my will is their will” or that “my will and wishes count more than theirs”.

    I don’t trust servile people. I always suppose that their “real” self, their non-servile self, is just waiting for an opportunity to emerge and when it emerges, things are not going to be pretty.

    Now I know that the will of the sexbots is to submit, to mold itself to my will, but our sense (I know my jury) that servility is unbearable is too strong.

    Here’s an idea. Servility is not a virtue. Let’s take cowardliness, which is also not a virtue. Would we accept a cowardly robot, a robot whose essence is to avoid all dangers and challenges, even if the robot was programmed that way and programmed to be happy being cowardly?

    I sense that cowardliness is a vice even in robots and maybe servility is similar.

  16. s. wallerstein (aka amos)


    My first sentence should read “I’m NOT

  17. Much of this comment is essentially the anthropomorphic fallacy writ large. In the scenario Mike LaB addressed, animated objects (i.e. the sexbots) are being attributed human characteristics – some at least – and thence human rights. In that scheme, sexbots’ rights would be transgressed by their enslavement or servitude or simply by their unconditional sexual activity itself. A consideration of robotic behaviour under terms like vice or virtue takes this approach even further.

    Yet, sexbots will have been engineered to be human simulants physically, plus the behavioural ‘right stuff’ to be sexually satisfying to their ‘owners’. No more, no less. I can’t see how these devices – devoid of ‘free will’ (even if we don’t strictly have it either, pace Sam Harris et al) even if they can do some convincing pillowtalk – could be regarded as having moral agency. Thus, I wouldn’t be concerned that they are not attributed human rights.

    Possibly the argument is considered to stray into Peter Singer’s territory. Are we to consider sexbots equivalent to a new ‘species’ which, like the great apes and perhaps dolphins and their cousins, Singer would have awarded ‘rights’ too? (OK – Singer would express this in terms of ‘equality’ rather than ‘rights’ – but the ethical outcomes are much the same.) Need I worry that my laptop is being bored by all this …?

  18. Dr. Caffeine:

    I agree with much of what you say, but my point is that at present the issue of sexbots is too abstract and hypothetical to declare them right or wrong in ethical terms. I’ve tried to describe the process, the reasons and the rationalizations by which “right-thinking” people (among whom I include myself) will most probably come to consider them wrong.

    I believe that most of our (or my) ethical thinking is based on group identification and rationalization of that identification, and I don’t consider that to be so bad, especially in this case, since after all, what do we lose by considering sexbots to be worthy of our ethical consideration?

    In the case of abortion, for example, a lot is at stake in whether fetuses are persons or not. I say that they are not persons (at least for the first few months of pregnancy), and I conclude that women have the right to choose whether they want to abort or not during those months.

    A woman’s right to choose is very important (there is much at stake) since an unwanted child can make it difficult for a woman to flourish in her profession or whatever project she has in life.

    However, nothing so serious is at stake in the case of sexbots, since after all, there are lots of people looking for sex partners, and anyone in need of sex can find someone human (maybe not as sexy as they hoped for, but…) or satisfy themselves watching porn online.

    Nothing so serious being at stake, I’m inclined to go with what I’m fairly sure will be the politically correct point of view on this issue (as I outlined above).

  19. @ s wallerstein

    Just an aside to the thread, but I was intrigued by your discussion of the ‘jury of your peers’. I agree with you that this is what has real power in ethical considerations made by real people (i.e. non-philosophers!). Given that approach, your mention in your last post of ‘political correctness’ makes a useful counter-point. There are many matters where there is a widely-held ethical view, perhaps common even amongst our close peers, that we recognise by the (usually) pejorative term ‘politically correct’ as we dismiss them. We each have at least some views that cut against the grain, and we are able to dismiss the (?majority) pc option by deploying that label. It’s a clever notion, since it carries the idea that the belief is merely ‘politically correct’ rather than being truly valid … or ‘correct’.

    Despite that, in this case – as you say – you are going the pc route without resistance.

  20. Dr. Caffeine:

    I’m on the left politically. There are lots of contingent and accidental reasons behind that option: for example, the friend I walked to school with 50 years ago, who just happened to take the same route, at the same hour as I did and to be in the mood for a decent early morning conversation.

    That’s my whole life, and my original option has been confirmed and reconfirmed in my choice of friendships, of love relationships, in the ways I’ve raised my children, in the books I’ve read, in the music I listen to, etc.

    There’s no point in trying to undo what’s been done nor do I want to.

    Now lots of things that one is supposed to accept on the left might be called “politically correct”: they go against empirical evidence or are contradictory or are just plain stupid or it’s obvious (to me at least) that they aren’t going to work very well when put into practice.

    Since I have very little lived experience with the right, I can’t judge their political correctness, but I’m fairly sure that it exists.

    Back to the left: for many years I dedicated lots of energy into pointing out the absurdities of political correctness to my fellow leftists. As you can imagine, I received few thanks and little applause for that.

    In the last few years, maybe out of tiredness, I’ve stopped playing Socrates on the left. While I try to be a voice of reason and good sense when important issues are at stake, guiding the discussion towards points of rationality and realism, otherwise I pay lip service to political correctness or keep my mouth shut.

    I try to be useful to whatever cause I’m involved in (I believe that I now choose my causes carefully and thoughtfully) and I consider that any political correctness that I run into while involved is bad weather, something that I face and accept because I need to go somewhere.

  21. @ SWallerstein:

    Very thorough and fascinating. Your journey sounds like mine – politically and personally. Here in the UK (I can’t be sure from your responses whether you’re ‘here’ or ‘there’) the political world is rather different, and even the left-right labels have a different spectrum of meanings to what, I gather, goes on in the US.

    I’d encourage you not to give up the Socratic good fight! It took me a while to realise it, but when a colleague on yet another pointless university committee described me as a ‘Socratic Gadfly’, it was meant as a compliment (especially given her academic discipline). I understand your tiredness, born no doubt from frustration with so many of our block-headed companion humans. But sweet reason and the considered case do continue to win the day. I am utterly confident that post-Enlightenment thinking and values will triumph, even in a world still far too readily devoted to stone-age religious and ethical sentiment (under the pc guise of ‘culture’). I am a scientist and hold its values and methods in the highest regard. Faith and political views are not necessary or sufficient to ‘make the lights work’; mother nature has the first and last word – always. I’m a keen sailor, so your metaphor of political correctness as bad weather to be withstood is telling. Thus, in this analogy, your Socratic weariness should be taken as temporary sea-sickness. Calm waters and a fine broad reach ahead. No hemlock aboard this boat, please! (However, women and ‘greens’ are welcome aboard.)

  22. Dr. Caffeine:

    Thank you for your words of encouragement.

    I come from Chile. I suspect that right and left in Chile are closer to what they mean in the U.K. than in the U.S., which is a world unto itself. Our rightwing is closer to Cameron than to the Republicans, and our left ranges from New Labour to Old Labour and beyond that to Marxists and Anarchists.
    I can’t see anything like the Chilean student movement, demanding free quality public education for all, happening in the U.S., while I could see it happening on the European continent and maybe in the U.K.

    I suppose one of the most irritating aspects of political correctness is what Bertrand Russell labels the fallacy of the superior virtue of the oppressed, the ridiculous idea that being oppressed makes someone into a more virtuous person.

    Along with that goes the fallacy that oppression confers superior epistemic capacity. Jeremy Stangroom blogged on that here:

    The self-righteousness of the left and the inability of the left to see that the right has its own set of values often render the left very blind (I know that using the word “blind” to mean “incapable of understanding” is not politically correct in the U.S.) to what’s happening. The left tends to attribute only pure motives to itself and only Machiavellian motives to the right: if Cubans send doctors to poor nations, it’s disinterested, but if the U.S. does exactly the same thing, it’s part of a nefarious imperialist plan to brainwash the wretched of the earth.

    It’s interesting, by the way, that Marx (I’m not a Marxist) isn’t moralistic or preachy at all.

    Then at times political correctness seems to take flight from reality entirely. Here’s a discussion I was involved in about raising a “genderless” child:
