The Robots of Deon

[Image: The Robots of Dawn (1983). Photo credit: Wikipedia]

The United States military has expressed interest in developing robots capable of moral reasoning and has provided grant money to some well-connected universities to address this problem (or to at least create the impression that the problem is being considered).

The notion of instilling robots with ethics is a common theme in science fiction, the most famous example being Asimov’s Three Laws. The classic Forbidden Planet provides an early movie example of robotic ethics: Robby the robot has an electro-mechanical seizure if he is ordered to cause harm to a human being (or to the id-monster created by the mind of his creator, Dr. Morbius). In contrast, the killer machines of science fiction (like Saberhagen’s Berserkers) tend to be free of the constraints of ethics.

While there are various reasons to imbue (or limit) robots with ethics (or at least engage in the pretense of doing so), one of these is public relations. Thanks to science fiction dating back at least to Frankenstein, people tend to worry about our creations getting out of control. As such, a promise that our killbots will be governed by ethics serves to reassure the public (or so it is hoped). Another reason is to make the public relations gimmick a reality—to actually place behavioral restraints on killbots so they will conform to the rules of war (and human morality). Presumably the military will also address the science fiction theme of the ethical killbot who refuses to kill on moral grounds.

While science fiction features ethical robots, the authors (like philosophers who discuss the ethics of robots) are extremely vague about how robot ethics actually works. In the case of truly intelligent robots, their ethics might work the way our ethics works—which is something that is still a mystery debated by philosophers and scientists to this day. We are not yet to the point of having such robots, so the current practical challenge is to develop ethics for the sort of autonomous or semi-autonomous robots we can build now.

While creating ethics for robots might seem daunting, the limitations of current robot technology mean that robot ethics is essentially a matter of programming these machines to operate in specific ways defined by whatever ethical system is being employed as the guide. One way to look at programming such robots with ethics is that they are being given safety features. To use a simple example, suppose that I regard shooting unarmed people as immoral. To make my killbot operate according to that ethical view, it would be programmed to recognize armed humans and have some code saying, in effect, “if unarmedhuman = true, then firetokill = false” or, in normal English, if the human is unarmed, do not shoot her.
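To make that concrete, here is a minimal sketch of such a “safety feature” rule in Python. The Target class, its fields, and the may_fire function are all invented for illustration; a real system’s sensor and fire-control interfaces would obviously be far more complicated.

```python
from dataclasses import dataclass

# Hypothetical sensor report; the fields are illustrative assumptions,
# not any real system's API.
@dataclass
class Target:
    is_human: bool
    is_armed: bool

def may_fire(target: Target) -> bool:
    """Rule-based check: firing on an unarmed human is forbidden."""
    if target.is_human and not target.is_armed:
        # In effect: "if unarmedhuman = true, then firetokill = false"
        return False
    return True

# An unarmed human is never a permitted target; an armed one may be.
print(may_fire(Target(is_human=True, is_armed=False)))  # False
print(may_fire(Target(is_human=True, is_armed=True)))   # True
```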

While a suitably programmed robot would act in a way that seemed ethical, the robot is obviously not engaged in ethical behavior. After all, it is merely a more complex version of the automatic door. The supermarket door, though it opens for you, is not polite. The shredder that catches your tie and chokes you is not evil. Likewise, the killbot that does not shoot you in the face because its cameras show that you are unarmed is not ethical, and the killbot that chops you into meaty chunks is not unethical. Following Kant, since the killbot’s programming is imposed and the killbot lacks the freedom to choose, it is not engaged in ethical (or unethical) behavior, though the complexity of its behavior might make it seem so.

To be fair to the killbots, perhaps we humans are not ethical or unethical under these requirements for ethics—we could just be meat-bots operating under the illusion of ethics. Also, it is certainly sensible to focus on the practical aspect of the matter: if you are a civilian being targeted by a killbot, your concern is not whether it is an autonomous moral agent or merely a machine—your main worry is whether it will kill you or not. As such, the general practical problem is getting our killbots to behave in accord with our ethical values.

Achieving this goal involves three main steps. The first is determining which ethical values we wish to impose on our killbots. Since this is a practical matter and not an exercise in philosophical inquiry, this will presumably involve using the accepted ethics (and laws) governing warfare rather than trying to determine what is truly good (if anything). The second step is translating the ethics into behavioral terms. For example, the moral principle that makes killing civilians wrong would be translated into sets of allowed and forbidden behaviors. This would require creating a definition of a civilian (or perhaps just of an unarmed person) that would allow recognition using the sensors of the robot. As another example, the moral principle that surrender should be accepted would require defining surrender behavior in a way the robot could recognize. The third step would be coding that behavior in whatever programming language is used for the robot in question. For example, the robot would need to be programmed to engage in surrender-accepting behavior. Naturally, the programmers would also need to worry about clever combatants trying to “deceive” the killbot in order to exploit its programming (such as pretending to surrender so as to get close enough to destroy the killbot).
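As a rough illustration of the second and third steps, here is a hedged Python sketch in which a couple of such principles (“do not kill civilians,” “accept surrender”) are translated into sensor-level definitions and then into code. Every field name, and the “hands raised” proxy for surrender behavior, is an assumption made up for the example.

```python
from dataclasses import dataclass

# Hypothetical sensor report; the flags are invented stand-ins for whatever
# the robot's perception system could actually detect.
@dataclass
class SensorReport:
    is_human: bool
    weapon_detected: bool
    hands_raised: bool              # crude behavioral definition of "surrender"
    in_authorized_combat_zone: bool

def engagement_permitted(report: SensorReport) -> bool:
    """Step three: the behavioral rules from step two, written as code."""
    # "Killing civilians is wrong" -> never engage an unarmed human.
    if report.is_human and not report.weapon_detected:
        return False
    # "Surrender should be accepted" -> never engage a target showing surrender behavior.
    if report.hands_raised:
        return False
    # A further legal constraint: only engage inside the authorized zone.
    return report.in_authorized_combat_zone
```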

Since these robots would be following programmed rules, they would presumably be controlled by deontological ethics—that is, ethics based on following rules. Thus, they would be (with due apologies to Asimov) the Robots of Deon.

An interesting practical question is whether or not the “ethical” programming would allow for overrides or reprogramming. Since the robot’s “ethics” would just be behavior-governing code, it could be changed, and it is easy enough to imagine a set of ethics preferences in which a commander could selectively (or not so selectively) turn off behavioral limitations. And, of course, killbots could simply be programmed without such ethics (or programmed to be “evil”).
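A minimal sketch of what such an override might look like, assuming a hypothetical “ethics preferences” table that a commander (or anyone with write access to the code) could edit:

```python
# Hypothetical "ethics preferences": because the constraints are only code, a
# configuration change (or a fresh build without them) removes them entirely.
ETHICS_PREFS = {
    "forbid_unarmed_targets": True,
    "accept_surrender": True,
}

def may_fire(target_is_armed: bool, target_surrendering: bool) -> bool:
    if ETHICS_PREFS["forbid_unarmed_targets"] and not target_is_armed:
        return False
    if ETHICS_PREFS["accept_surrender"] and target_surrendering:
        return False
    return True

# A commander could selectively switch off a behavioral limitation:
ETHICS_PREFS["forbid_unarmed_targets"] = False
print(may_fire(target_is_armed=False, target_surrendering=False))  # now True
```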

The largest impact of the government funding for this sort of research will be that properly connected academics will get surprisingly large amounts of cash to live the science-fiction dream of teaching robots to be good. That way the robots will feel a little bad when they kill us all.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

  1. Every difference makes a difference. I am sure that no state of affairs is ever exactly replicated. Subtle differences between similar states of affairs may be vital to a decision which has to be made. The human being does seem, to some extent, able to cope with this problem, but the fact remains that the action taken by, say, A may be disapproved of by B, and it may be very difficult to choose between the viewpoints of A and B. Very often something at the ‘back of one’s mind’, as we describe it, leads one to make a decision; it is in the nature of a feeling that one cannot describe other than as a state of mind, probably the result of a lifetime’s experience of dealing with life and with other people in general. Moral activity is carried out against this kind of background.
    Robots do not have minds as we currently understand the expression ‘mind’; they have vastly complex systems of computer programming. It is surely possible to program a robot to kill an enemy whom it recognises by the type of clothing worn, and, similarly, not to kill those who are dressed in another way. That is not what we understand as a moral decision.
    Such decisions are made not on a moral basis but on a thoughtless, mechanical basis.
    Computer programs like Deep Blue are capable of beating any chess player in the world at least once. The rules of chess are simple to understand and easily remembered; the game of chess itself, however, is a vastly different proposition. I suppose one could say that chess has a moral aspect to it, in that there are certain forbidden moves which, if made, disqualify the player. However, the game of life has far greater variety to it than chess: it is easy to discover a wrong move in chess, there is no question about it, but a so-called wrong in the game of life is often highly disputable. I’m not sure that moral decisions against the background of life are computable, other than when the decision is of a very simple nature. Morality embraces a massive area of human experience which, at this juncture, I feel is not amenable to computerisation. Most firearms have safety catches, but surely it would be nonsense to attribute a moral aspect to the actual firearm for this. The so-called moral computer robot is similarly a misnomer.

  2. Dennis Sceviour

    If there is a sense of right and wrong, then it derives from intrinsic values. The “back of one’s mind” might be called an intrinsic value. An autonomous drone robot could be intrinsically taught to consider itself good first, and everything else in every direction as bad (within a specific geographical territory). Any other external influence on the decision-making, such as radio commands, would render the ethics invalid and it would become a user tool once more. It is not clear how this differs much from deontological hand grenade morality. The initial decision to manufacture and release a drone would have to be a human one. Therefore, in the case of the military, the primary moral decision is still human and not computer generated.

    The analogy of a chess program is a poor example to simulate morality. Chess has had the same rules for 500 years. In real life, rules are in a state of flux and interpretation. The best use for artificial thinking studies is to offer new theoretical definitions in epistemology.

    What is the military looking for when it wants a machine to make moral decisions? A machine cannot be used as an excuse to avoid moral responsibility. The public will not buy it.
