Owning Intelligent Machines

While truly intelligent machines are still in the realm of science fiction, it is worth considering the ethics of owning them. After all, it seems likely that we will eventually develop such machines and it seems wise to think about how we should treat them before we actually make them.

While it might be tempting to divide beings into two clear categories, those it is morally permissible to own (like shoes) and those it is clearly morally impermissible to own (people), ownership, ethically speaking, admits of various degrees. To use the obvious example, I am considered the owner of my husky, Isis. However, I obviously do not own her in the same way that I own the apple in my fridge or the keyboard at my desk. I can eat the apple and smash the keyboard if I wish, and neither act is morally impermissible. However, I should not eat or smash Isis: she has a moral status that seems to allow her to be owned but does not grant her owner the right to eat or harm her. I will note that there are those who would argue that animals should not be owned and also those who would argue that a person should have the moral right to eat or harm her pets. Fortunately, my point here is a fairly non-controversial one, namely that it seems reasonable to regard ownership as possessing degrees.

Assuming that ownership admits of degrees in this regard, it makes sense to base the degree of ownership on the moral status of the entity that is owned. It also seems reasonable to accept that there are qualities that grant a being a status that morally forbids ownership. In general, it is assumed that persons have that status: it is morally impermissible to own people. Obviously, it has been legal to own persons (whether actual people or corporate persons) and there are those who think that owning other people is just fine. However, I will assume that there are qualities that provide a moral ground for making ownership impermissible and that people have those qualities. This can, of course, be debated, although I suspect few would argue that they themselves should be owned.

Given these assumptions, the key matter here is sorting out the sort of status that intelligent machines should possess in regards to ownership. This involves considering the sort of qualities that intelligent machines could possess and the relevance of these qualities to ownership.

One obvious objection to intelligent machines having any moral status is the usual objection that they are, obviously, machines rather than organic beings. The easy and obvious reply to this objection is that this is mere organicism—which is analogous to a white person saying blacks can be owned as slaves because they are not white.

Now, if it could be shown that a machine cannot have qualities that give it the needed moral status, then that would be another matter. For example, some philosophers have argued that matter cannot think; if they are right, then actual intelligent machines would be impossible. However, we cannot assume a priori that machines lack such a status merely because they are machines. After all, if certain philosophers and scientists are right, we are just organic machines, and thus there would seem to be nothing impossible about thinking, feeling machines.

As a matter of practical ethics, I am inclined to set aside metaphysical speculation and go with a moral variation on the Cartesian/Turing test. The basic idea is that a machine should be granted a moral status comparable to that of the organic beings that have the same observed capabilities. For example, a robot dog that acted like an organic dog would have the same status as an organic dog. It could be owned, but not tortured or smashed. The sort of robohusky I am envisioning is not one that merely looks like a husky and has some dog-like behavior, but one that would be fully like a dog in behavioral capabilities; that is, it would exhibit personality, loyalty, emotions and so on to a degree that it would pass as a real dog with humans if it were properly “disguised” as an organic dog. No doubt real dogs could smell the difference, but scent is not the foundation of moral status.
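
Since the proposal is a behavioral test, a toy sketch may help make the decision rule concrete. This is a minimal sketch in Python; the capability labels, the profiles, and the function below are hypothetical inventions for illustration, not a serious taxonomy of moral status:

    # A toy version of the "moral Turing test" idea sketched above: a machine
    # inherits the moral status of the organic being whose observed behavioral
    # capabilities it matches. All profiles and labels are hypothetical.
    ORGANIC_PROFILES = {
        "thing":  set(),                                  # e.g., an apple
        "animal": {"personality", "loyalty", "emotion"},  # e.g., a husky
        "person": {"personality", "loyalty", "emotion",
                   "language", "abstract_reasoning"},     # e.g., a human
    }

    def moral_status(observed: set) -> str:
        """Return the status of the most capable organic profile the machine matches."""
        best = "thing"
        for status, profile in ORGANIC_PROFILES.items():
            if profile <= observed and len(profile) >= len(ORGANIC_PROFILES[best]):
                best = status
        return best

    # A robohusky that is behaviorally a dog gets a dog's status: ownable,
    # but not to be harmed.
    print(moral_status({"personality", "loyalty", "emotion"}))  # "animal"
    # A robot with human-level capabilities could not be owned at all.
    print(moral_status({"personality", "loyalty", "emotion",
                        "language", "abstract_reasoning"}))     # "person"

The point of the rule, matching the argument above, is that only observed capabilities enter into it; what the being is made of never appears as an input.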

As for the main reason why a robohusky should get the same moral status as an organic husky, the answer is, oddly enough, a matter of ignorance. We would not know whether the robohusky really had the metaphysical qualities that give an actual husky its moral status. However, aside from the difference in parts, we would have no more reason to deny the robohusky moral status than to deny the husky moral status. After all, organic huskies might just be organic machines, and it would be mere organicism to treat the robohusky as a mere thing while granting the organic husky moral status. Thus, advanced robots with the capacities of higher animals should receive the same moral status as organic animals.

The same sort of reasoning would apply to robots that possess human qualities. If a robot had the capability to function analogously to a human being, then it should be granted the same status as a comparable human being. Assuming it is morally impermissible to own humans, it would be impermissible to own such robots. After all, it is not being made of meat that grants humans the status of being impermissible to own but our qualities. As such, a machine that had these qualities would be entitled to the same status. Except, of course, to those unable to get beyond their organic prejudices.

It can be objected that no machine could ever exhibit the qualities needed to have the same status as a human. The obvious reply is that if this is true, then we will never need to grant such status to a machine.

Another objection is that a human-like machine would need to be developed and built. The initial development will no doubt be very expensive and will most likely be done by a corporation or university. It can be argued that a corporation would have the right to make a profit off the development and construction of such human-like robots. After all, as the argument usually goes, if a corporation were unable to profit from such things, it would have no incentive to develop them. There is also the obvious matter of debt: the human-like robots would certainly seem to owe their creators for the cost of their creation.

While I am reasonably sure that those who actually develop the first human-like robots will get laws passed so they can own and sell them (just as slavery was made legal), it is possible to reply to this objection.

One obvious reply is to draw an analogy to slavery: just because a company would have to invest money in acquiring and maintaining slaves it does not follow that their expenditure of resources grants a right to own slaves. Likewise, the mere fact that a corporation or university spent a lot of money developing a human-like robot would not entail that they thus have a right to own it.

Another obvious reply to the matter of debt owed by the robots themselves is to draw an analogy to children: children are “built” within the mother and then raised by parents (or others) at great expense. While parents do have rights in regard to their children, they do not get the right of ownership. Likewise, robots that had the same qualities as humans should be regarded as children are and hence could not be owned.

It could be objected that the relationship between parents and children is different from that between a corporation and its robots. This is a matter worth considering, and it might be possible to argue that a robot would need to work as an indentured servant to pay back the cost of its creation. Interestingly, arguments for this could probably also be used to allow corporations and other organizations to acquire children and raise them as indentured servants (a theme that has been explored in science fiction). We do, after all, often treat humans worse than machines.


12 Comments.

  1. “I can … smash the keyboard if I wish and [this act] is [not] morally impermissible.”

    Some would try to argue that it is at least morally questionable, by virtue of your engagement in the act having some effect on your moral behaviour or status, as perceived by you if nobody else, at some possibly imperceptible level. It ‘demeans’ you, they would have it.

    A similar sort of point is made here: “Portrayed gratuitous sex and/or violence is demeaning to the viewer” (http://blog.talkingphilosophy.com/?p=7717#comment-311211).

    I’m not sure why such conclusions are drawn, or how such judgements are arrived at.

  2. If a human-like robot was created (programmed), then in order for it to be human-like and not just a machine, it would have to have a heart. It might be difficult to program a heart. Science has shown that the human heart does not just pump blood; sixty to sixty-five percent of its cells are neurons, the same as in the brain, and there is an ongoing dialogue between the heart and the brain. Programming a robot with a heart would have to allow for duality, since the heart has likes and dislikes, attachments and aversions. It could probably be programmed to like what it should, dislike what it should, and the same with attachments and aversions.

    However, if a situation changed and its programming backfired, all bets would be off. It would be a very advanced human-like robot that could perceive a situation, evaluate it correctly by using discrimination, and respond appropriately.

    The ethics of owning/using such a human-like robot would depend on how it is programmed and what purposes it is used for. For anyone who believes in karma, it would be safe to assume that the human-like robot would not incur any karma, no matter what it does. The same could not be expected to apply to its creators/owners.

  3. Doris Wrench Eisler

    I doubt that qualities and attributes and definitions were considered in the abolitionist movement: it was based purely on the innate recognition that slaves were human beings, along with empathy towards their horrendous situation. You can't define a human being, and any attempt to do so leads to disaster: human beings and human nature are open categories, and you either believe we all deserve respect and humane treatment in a general sense, or you do not.
    Our present sentiments were not shared by the ancient Romans and others, and aren't even universal at this time. The same is true of animal rights. Robot rights is a purely theoretical idea, because there are, at this time, no machines for whose rights one might be tempted to argue. If machines are developed (or develop themselves) that resemble human beings (and could possibly be hybrids) to the point that mistakes in that regard can be made, or for other reasons, then it might be appropriate to say they have free will, are sentient and must be free to choose. It might also be a question of what they would have to say about it. As in other cases, it's a matter of consensus in law and individual conscience otherwise. At this point the question is moot and circular, as in: if robots are identical to human beings, should they have the rights of human beings?

  4. If those machines are not similar enough to us to have morality, then that arguably would make a difference in terms of some of their status, rights, and so on, even if not in terms of ownership.

    On the other hand, if the machines are similar enough to humans to have morality, then maybe they'll end up asking about owning intelligent animals, such as humans. For example, a similar enough robot (of a class calling themselves class-1 robots, say) might reason:

    The same sort of reasoning would apply to animals that possess smart class-1-like qualities. If an animal (like a human) has the capability to function analogously to a class-1 robotic being, then it should be granted the same status as a comparable class-1 robotic being. Assuming it's morally impermissible to own class-1 robotic beings, it would be morally impermissible to own such animals. After all, it is not being made of silicon and stuff that grants class-1 robotic beings the status of being impermissible to own, but our qualities. As such, an animal that had these qualities would be entitled to the same status. Except, of course, to those unable to get beyond their inorganic prejudices.

    Then again, if those robots are similar enough to humans, would they have inorganic prejudices, or would they too tend to have organic prejudices, if they have any?

  5. That of course would depend on the specific characteristics of the robots, so it's not possible to tell just by knowing that they're intelligent, capable of talking, and so on.

    If they pass the Turing test, they either are very similar to humans, or very good at simulating human behavior (which might be doable by a much more intelligent being too).

  6. I forget the details, but in Douglas Adams' Hitchhiker's Guide to the Galaxy, the crew of one spacecraft enter another, seemingly deserted spacecraft, inside which they find a robot. They ask the robot who its owner is, to which the robot replies somewhat indignantly:

    “I’m mine!”

  7. If I want something, I either purchase it or produce it myself, and I consider that I am its owner. Might not the same apply to our offspring? Surely it is reasonable to say that a neonate is owned by its parents. There are many ways in which we treat our children that match, to some extent, how we treat treasured inanimate objects. Admittedly, as a child progresses towards adulthood the feeling of ownership diminishes and, I think, eventually vanishes. A young child surely feels that he or she is owned by the parents, but with advancing years this feeling begins to evaporate, most certainly on the part of the child and most often on the part of the parents.

    This problem concerning robotics penetrates far into the terra incognita of future science. In this connection I think it is important to remember that future human viewpoints on what is right and wrong, acceptable or unacceptable, will probably be considerably different from today's. We are making judgements in today's environment about something which may occur in the distant future, when the scientific and social environment will be different. My guess is that the science of robotics will advance to the stage where a decision has to be made as to what threats may be presented by robots to the human race. Even now, robot devices are used in military situations. It does not take much imagination to conjure up a system in which robots created by humans, and making their own decisions, become a menace to humans. This state of affairs was anticipated by Isaac Asimov, who devised a system which he called the Laws of Robotics (cf. Wikipedia).
    Assuming that it becomes possible to build a robot which is analogous to a human, presumably having self-regard and the ability to become fearful for itself, then it must, in my opinion, be treated as a human. Best not to build such a robot in the first place; the danger of self-reproduction also looms threateningly. There is enough racial intolerance in the world as it is, and I see a high probability that friction between humans and robots would occur.

  8. Slavery has two aspects, which can be separated.

    1. The slave is owned, is property.

    2. The slave is used, is exploited, is oppressed, is not allowed to flourish.

    Now in theory, a slave could be owned and not be exploited. Someone could treat their slaves according to the golden rule, while in theory they remain property. That is the way many pets are treated: they are owned, but are treated excellently and their flourishing is promoted and encouraged.

    However, in reality few people are going to buy a slave and then treat them well. Why do they buy pets and treat them well? That’s a good question, but my general impression is that many pet-owners prefer their pets to human beings.

    In addition, people can be exploited, utilized, and oppressed without being slaves. The expression “wage slave” refers to the fact that many workers, while technically free, are treated as we imagine slaves are and, in reality, aren't so free, because if they don't work for low wages, they don't eat.

    So one could imagine robots that are property and so well treated, as many pets are, that they might well prefer slave status to venturing out into the cruel world, where they might be technically free but end up as robot wage slaves.

    I guess the ideal would be not to exploit or utilize or oppress any other being, be they human, animal or robot.

    I'm sure this statement is going to produce screams of protest, but my sense is that the tremendous indignation against slavery (humans being property) which I always see among students of philosophy is probably an unconscious pretext for not facing the fact that so many other ostensibly free human beings are exploited, oppressed, utilized and live without the possibility of flourishing.

  9. s.wallerstein,

    Quite right. Historically, some slaves enjoyed great wealth and power, so there is a clear distinction between being owned and being exploited. In fact, the lives of certain slaves were vastly better than those of most free people (at least materially). For some, being free also means being free to starve to death.

  10. Don Bird,

    Some parents might think they own their children. If ownership of children by parents is morally permissible, then the same would certainly apply to intelligent robots that are manufactured. However, as you note, children do eventually become autonomous when they hit a certain stage of development. So, that would seem to apply to robots as well.

    Killer robots are a stock feature of sci-fi and, no doubt, when we start cranking out the killbots we will want to make sure that they kill only the right people. In "Second Variety" Philip K. Dick considers a world in which the killbots get smart and turn against all humans (and against each other). We might end up building our own replacements, thus putting a nifty twist on evolution.

  11. Vina,

    If organic machinery can suffice to give us “heart”, then it seems reasonable to believe that non-organic machinery can do the same. Descartes actually grounded the emotions in the physiology of beings, making them mechanical in nature.

    Robots would presumably be as subject as humans are to karma.

  12. The mechanistic worldview of the 17th century cleared the way for scientific experiments and the industrial revolution by removing all taboos in relation to nature. Even though human physiology has been considered mechanical, humanism has protected it from a similar fate. If neuroscience does not find mind in the brain, ego and reason could also be perceived as mechanical, clearing the way for the humanities to be absorbed into bio-science, as it would no longer be necessary for the humanities to be a separate discipline. Bio-science is attempting to come up with a biology-based ethics, which would be required in the absence of a concept of mind.

    A robot would do well in a programmed situation requiring academic knowledge, and might even surpass humans; however, it would require "heart" or an intuitive sense in real time in order to be human-like. Both philosophy and religion have been conflicted about the feeling function, probably because of its dual nature: it can be the door to all knowledge or, through energy given to thoughts, demonic. At least that is the way ancient philosophy and early Christianity saw it, and other religions mostly concur. We might like to think that it is mechanical, because in so thinking we do not have to own it.

    It would be necessary to go back to Aristotle, and to active and passive nous, to consider whether a human-like robot would have karma. As a human creation it would have no direct connection to active nous (the cosmic ratio) and would be the equivalent of a doppelganger: for want of another word, soulless, and therefore expected to be without karma.
