Robot Love I: Other Minds

Thanks to improvements in medicine, humans are living longer and can be kept alive well past the point at which they would naturally die. On the plus side, longer life is generally (but not always) good. On the downside, this longer lifespan and medical intervention mean that people will often need extensive care in their old age, and this care can be a considerable burden on caregivers. Not surprisingly, there has been an effort to develop a technological solution to this problem, specifically companion robots that serve as caregivers.

While the technology is currently fairly crude, there is clearly great potential here, and there are numerous advantages to effective robot caregivers. The most obvious are that robot caregivers do not get tired, do not get depressed, do not get angry, and do not have any other responsibilities. As such, they can be ideal 24/7/365 caregivers, superior in many ways to human caregivers, who get tired, get depressed, get angry, and have many other responsibilities.

There are, of course, some concerns about the use of robot caregivers. Some relate to practical matters, such as safety and effectiveness, while others are moral in nature. In the case of caregiving robots that are intended to provide companionship, and not just services such as medical care and housekeeping, both sorts of concern arise.

In regard to companion robots, there are at least two practical concerns about the companion aspect. The first is whether a human will accept a robot as a companion. In general, the answer seems to be that most humans will.

The second is whether the software will be advanced enough to properly read a human’s emotions and behavior and generate a proper emotional response. This response might or might not include conversation—after all, many people find non-talking pets to be good companions. While a talking companion would presumably need to eventually pass the Turing Test, it would also need to pass an emotion test—that is, to read and respond correctly to human emotions. Since humans often botch this themselves, there would be a fairly broad tolerable margin of error here. These practical concerns can be addressed technologically—it is simply a matter of software and hardware. Building a truly effective companion robot might require making it very much like a living thing—the comfort of companionship might be improved by such things as smell, warmth, and texture. That is, the companion should appeal to all the senses.

While the practical problems can be solved with the right technology, there are some moral concerns with the use of robot caregiver companions. Some relate to people handing off their moral duties to care for their family members, but these are not specific to robots. After all, a person can hand off these duties to another person, and this would raise a similar issue.

In regard to concerns specific to a companion robot, there are moral questions about the effectiveness of the care—that is, are the robots good enough that entrusting the life of an elderly or sick human to them would be morally responsible? While that question is important, a rather intriguing moral concern is that the robot companions are a deceit.

Roughly put, the idea is that while a companion robot can simulate (fake) human emotions via cleverly written algorithms that respond to what its “emotion recognition software” detects, these responses are not genuine. While a robot companion might say the right things at the right times, it does not feel and does not care. It merely engages in mechanical behavior in accord with its software. As such, a companion robot is a deceit, and such a deceit seems to be morally wrong.
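To make the worry concrete, here is a minimal sketch of the sort of mechanism the objection has in mind. It is purely hypothetical: the function names, keywords, and canned lines below are invented for illustration and are not taken from any real companion robot or emotion-recognition library. The point is structural: classify the input, look up a response, emit it.

```python
# A toy sketch of "simulated" companionship. All names here are
# hypothetical, invented for illustration; no real robot or
# emotion-recognition library is being described.

# Canned responses keyed to whatever emotion the recognizer guesses.
RESPONSES = {
    "sad": "I'm sorry you're feeling down. Would you like to talk about it?",
    "happy": "That's wonderful! Tell me more.",
    "angry": "That sounds frustrating. I'm here to listen.",
    "neutral": "How has your day been so far?",
}


def detect_emotion(utterance: str) -> str:
    """Stand-in for the 'emotion recognition software': a crude
    keyword match that returns one of the keys in RESPONSES."""
    lowered = utterance.lower()
    if any(word in lowered for word in ("sad", "lonely", "miss")):
        return "sad"
    if any(word in lowered for word in ("happy", "great", "glad")):
        return "happy"
    if any(word in lowered for word in ("angry", "hate", "annoyed")):
        return "angry"
    return "neutral"


def respond(utterance: str) -> str:
    """The robot's entire 'emotional life': classify, then look up."""
    return RESPONSES[detect_emotion(utterance)]


print(respond("I miss my daughter."))
# -> I'm sorry you're feeling down. Would you like to talk about it?
```

A real system would swap the keyword match for a trained model and the lookup table for a language generator, but on the objection’s view the moral situation is unchanged: the robot says the right things at the right times without feeling any of them.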

One obvious response is that people would realize that the robot does not really experience emotions, yet still gain value from its “fake” companionship. To use an analogy, people often find stuffed animals emotionally reassuring even though they are well aware that a stuffed animal is just fabric stuffed with fluff. What matters, it could be argued, is the psychological effect—if someone feels better with a robotic companion around, then that is morally fine. Another obvious analogy is the placebo effect: medicine need not be real in order to be effective.

It might be objected that there is still an important moral concern here: a robot, however well it fakes being a companion, does not suffice to provide the companionship that a person is morally entitled to. Roughly put, people deserve people, even when a robot would behave in ways indistinguishable from a human.

One way to reply to this is to consider what it is about people that people deserve. One reasonable approach builds on the idea that people have the capacity to actually feel the emotions they display and to actually understand what is said to them. In philosophical terms, humans have (or are) minds, while robots (of the sort that will be possible in the near future) do not have minds. They merely create the illusion of having a mind.

Interestingly enough, philosophers (and psychologists) have long dealt with the problem of other minds. The problem is an epistemic one: how does one know if another being has a mind (thoughts, feelings, beliefs and such)? Some thinkers (which is surely the wrong term given their view) claimed that there is no mind, just observable behavior. Very roughly put, being in pain is not a mental state, but a matter of expressed behavior (pain behavior). While such behaviorism has been largely abandoned, it does survive in a variety of jokes and crude references to showing people some “love behavior.”

The usual “solution” to the problem is to go with the obvious: I believe that other people have minds on the basis of an argument from analogy. I am aware of my own mental states and my behavior, and I engage in analogical reasoning to infer that those who act as I do have similar mental states. For example, I know how I react when I am in pain, so when I see similar behavior in others I infer that they are also in pain.

I cannot, unlike some politicians, feel the pain of others. I can merely make an inference from their observed behavior. Because of this, there is the problem of deception: a person can engage in many and various forms of deceit. For example, a person can fake being in pain or make a claim about love that is untrue. Piercing these deceptions can sometimes be very difficult since humans are often rather good at deceit. However, it is still (generally) believed that even a deceitful human is still thinking and feeling, albeit not in the way he wants people to believe he is thinking and feeling.

In contrast, a companion robot is not thinking or feeling what it is displaying in its behavior, because it does not think or feel. Or so it is believed. The reason that a person would think this seems reasonable: in the case of a robot, we can go in and look at the code and the hardware to see how it all works and we will not see any emotions or thought in there. The robot, however complicated, is just a material machine, incapable of thought or feeling.

Long before robots, there were thinkers who claimed that a human is a material entity and that a suitable understanding of the mechanical workings would reveal that emotions and thoughts are mechanical states of the nervous system. As science progressed, the explanations of the mechanisms became more complex, but the basic idea remained. Put in modern terms, the idea is that eventually we will be able to see the “code” that composes thoughts and emotions and understand the hardware it “runs” on.

Should this goal be achieved, it would seem that humans and suitably complex robots would be on par—both would engage in complex behavior because of their hardware and software. As such, there would be no grounds for claiming that such a robot is engaged in deceit while humans are genuine. The difference would merely be that humans are organic machines and robots are not.

It can be, and has been, argued that there is more to a human person than the material body—that there is a mind that cannot be instantiated in a mere machine. The challenge is a very old one: proving that there is such a thing as the mind. If this can be established, and if it can be shown that robots cannot have such a mind, then robot companions would always be a deceit.

However, they might still be a useful deceit—going back to the placebo analogy, it might not matter whether the robot really thinks or feels. It might suffice that the person thinks it does, and this will yield all the benefits of having a human companion.


20 Comments.

  1. The potential to say the right things at the right times while not feeling or caring makes robots sound more human all the time…at least to the degree we can categorize politicians and PR reps as human 🙂

    As we progress, I tend to think the personality can be provided by a real person remotely, while the physical aspects of care can be handled perfectly well by suitably advanced robots.

  2. Doris Wrench Eisler

    It seems that people do accept and appreciate the positive attentions of robots. We are programmed, it seems, to accept all kinds of things that aren’t strictly real, as for instance when romantic types feel encouraged or consoled by some popular song that has nothing to do with their reality. We are moved by dramatic representations of emotions and even believe, in some sense, the promises of politicians when past actions indicate they are not reliable: it’s just an act and we fall for it every time.
    A worker in Japan was just recently killed by a robot, but this in itself is not significant. People die from inadequate safety measures or oversight, and regularly in hospitals from preventable mistakes.
    But what about the cost and maintenance of robots in a domestic setting? It does seem prohibitive for most people, or at least not less expensive than a human counterpart. But if the cost factor could be overcome, it might be a great thing for the morale of incapacitated people who are often, let’s face it, at the mercy of very imperfect or even incompetent people not suited to care-giving, but who end up in the field for a number of reasons.

  3. Indeed Doris, I agree 100%!

  4. Buck Field,

    Robopolitics…a whole new field. 🙂

  5. Doris Wrench Eisler,

    Good point. It is better to have a competent robot tending to an injured person than an incompetent or malicious person.

  6. “On the downside, this longer lifespan and medical intervention mean that people will often need extensive care in their old age.”

    The major ethical issue here is whether you are prolonging life or prolonging dying. If someone is severely disabled in old age, is it right to keep them living longer?

    In the next few years we’re about to see the kind of robots that were only dreamed of in science fiction. I think that, in terms of personality, they’ll be like Apple’s Siri: something that initially appears to have a human-like personality, but ultimately betrays itself as lacking any kind of soul. The average goldfish has more humanity than Siri, even though it can’t speak.

  7. I think there is a middle ground that you are not addressing here. As someone who does care for a chronically ill wife and a young daughter, I would appreciate the help of a robot. I’m not looking for something to completely remove me from the equation. But, if I’m taking my daughter to swim lessons, it would be comforting to know that something is there with my wife. Or if I need the occasional day off, just to recharge, it would be nice not to feel that I’m burdening a friend or relative. Robots would be perfect to augment the care giving of real people.

  8. Jeremy Macauley

    How do we even know that the robots would be thinking? I understand all the philosophical jargon posed throughout this article, but we forget the basic human need for control. We want to be able to control everything around us, and if we develop the technology for robots to help us function in life, then we can certainly hinder that technology if it does become aware.

  9. Gene,

    I agree with you completely: robot assistants don’t seem problematic at all. Quite the contrary in fact. The worry, though, is that people will hand off all/most duties to machines.

  10. Jeremy Macauley,

    True, the problem of other minds does always remain. From a practical standpoint, if a robot can do all that we can do (talk, express what seem to be emotions, and so on), then we would have as much reason to think it thinks and feels as we do to think that our fellow “meatbots” think and feel.

  11. Mike LaBossiere,

    “How do we even know that the robots would be thinking?”

    Well, to put it as Turing put it: if you can say that what a submarine does is swim, then it swims. And if you can say that what a computer does is think, then it thinks.

    What is thinking, anyway?

    There was a time when you could define it by exclusion: subjects think, objects do not. The dog has an opinion, the bone does not.

    “I understand all the philosophical jargon posed throughout this article, but we forget the basic human need for control. We want to be able to control everything around us, and if we develop the technology for robots to help us function in life, then we can certainly hinder that technology if it does become aware.”

    Presuming these robots can be controlled, who controls them? At present a drone can be sent to bomb a location without any human control. Can the people on the ground control the drone, stop it from killing them?

    You could also take the example of the self-driving car. What’s to stop it from being used as a self-driving bomb? If there are safeguards in place, they can be hacked.

    Can you even switch off your mobile phone? The answer is that you can’t, unless you rip the phone apart and remove the secondary battery, which is deliberately inaccessible to the owner of the phone. And then there are ultra-low-power chips that were never designed to be switched off and that can still broadcast.

    A drone cannot exist, much less fly, without massive, dedicated, and sustained human control.

    Bob has a significant burden of proof to bear when he throws a rock and then claims the rock is out of control when it smacks Carl in the head.

    On a separate note: in a surgical setting, I couldn’t care less about the bedside manner, thinking, or politics of the surgeon, or whether the software controlling my repairs were silicon or meat-based.

    I care about structural quality at the end of the procedure, and I’m willing to trade a significant amount of pain (physical or economic) to purchase it. 🙂

  13. Buck,

    True, current drone technology does require extensive human control. But DARPA is pouring cash into developing fully autonomous weapons.

  14. Mike LaBossiere,

    “True, current drone technology does require extensive human control. But DARPA is pouring cash into developing fully autonomous weapons.”

    You can already buy them on eBay. The higher-end retail drones (and we’re still not talking that much money) are already semi-autonomous. If they lose communication with the controller, they’re programmed to fly home. And they can simply be fed altitudes and map locations to fly to and land at. The Swiss postal service has just announced that it will start using them to deliver some mail to remote locations.

    Does technology currently exist that would allow a drone to fly to a location autonomously, pick out human targets on the ground, and kill them, without any human interaction beyond initialising its flight? The sad truth is that it does. Human drone pilots exist for what is essentially a philosophical reason: the humans kill “bad guys”, while the robot would just kill “guys”. But at 100 or even 50 feet in the air above a person, I don’t think my judgement would be any better than a coin flip.

    I think the simplified rock example is helpful. Once Bill lets the rock fly on a trajectory toward Carl’s head, imbuing it with autonomy strikes me as odd, much less claiming it is “fully autonomous”, which I don’t think can be supported for anything that exists, since one of the costs of reality is obeying the rules for existence.

    Our brains have limits, so even though I may desperately want the ability to shut off my shadow/light biases, I simply cannot.

    See:
    https://whyevolutionistrue.wordpress.com/2011/01/08/do-we-perceive-reality-the-checker-shadow-illusion/

    This, and countless other limits I’m helpless to violate, prevent me from being “completely” autonomous.

    I think whatever autonomy is, it runs along an analog spectrum, and with it comes a spectrum of moral issues.

  16. Buck Field,

    “I think the simplified rock example is helpful. Once Bill lets the rock fly on a trajectory toward Carl’s head, imbuing it with autonomy strikes me as odd, much less claiming it is ‘fully autonomous’, which I don’t think can be supported for anything that exists, since one of the costs of reality is obeying the rules for existence.”

    I’ve thought about the rock example. The difference with a fully autonomous drone, to use the rock analogy, is that Bill throws the rock, and then the rock decides whether to hit Carl or not. Who is responsible, then, for hitting or not hitting Carl?

    A big problem military people have is taking responsibility for killing. A human spotter picks a target, then radios a map location back to a human-controlled artillery unit. The person operating the artillery feeds in the map location and then fires on it. This separation from the actual killing is not an accidental byproduct of the process; sometimes there are even more layers of separation. An interesting fact arising from the dawn of the drones, our contemporary era, is that nearly all the layers of separation have been removed. Piloting the drone remotely has put the pilot at the greatest physical distance in the history of warfare (it’s kids recruited from amusement arcades in Las Vegas), but it’s also one of the most direct and closest forms of killing since the advent of modern warfare.

  17. Robot Companions - pingback on July 9, 2015 at 3:01 pm
  18. JMRC,

    Yes, the technology does exist to send a drone on an autonomous kill mission, but recognition software does have some limitations. It is claimed, however, that proper care is taken in identifying assassination targets. The usual narrative is that the target is observed for an extended period of time, then confirmed, then killed.

    “It is claimed, however, that proper care is taken in identifying assassination targets. The usual narrative is that the target is observed for an extended period of time, then confirmed, then killed.”

    It is claimed…

    However, enough drone pilots have come forward over recent years, and their stories are very uniform: most of the time they had no idea who they were killing, or why.

  20. One advantage of autonomous killbots is that they do not talk to the press.
