Should Killer Robots be Banned?

The Terminator. (Photo credit: Wikipedia)

You can’t say that civilization don’t advance, however, for in every war they kill you in a new way.

-Will Rogers

 

Humans have been using machines to kill each other for centuries and these machines have become ever more advanced and lethal. In more recent decades there has been considerable focus on developing autonomous weapons. That is, weapons that can locate and engage the enemy on their own without being directly controlled by human beings. The crude seeking torpedoes of World War II are an example of an early version of such a killer machine. Once fired, the torpedo would be guided by acoustic sensors to its target and then explode—it was a crude, suicidal mechanical shark. Of course, this weapon had very limited autonomy since humans decided when to fire it and at what target.

Thanks to advances in technology, far greater autonomy is now possible. One peaceful example of this is the famous self-driving car. While some see such cars as privacy-killing robots, they are not designed to harm people—quite the opposite, in fact. However, it is easy to see how the technology used to guide a car safely around people, animals and other vehicles could be used to guide an armed machine to its targets.

Not surprisingly, some people are rather concerned about the possibility of killer robots, or with less hyperbole, autonomous weapon systems. Recently there has been a push to ban such weapons by international treaty. While fears of killer machines roaming about are no doubt fueled by science fiction stories and movies, there are legitimate moral, legal and practical grounds for such a ban.

One concern is that while autonomous weapons might be capable of seeking out and engaging targets, they would lack the capability to make the legal and moral decisions needed to operate within the rules of war. As a specific example, there is the concern that a killer robot will not be able to distinguish between combatants and non-combatants as reliably as a human being. As such, autonomous weapon systems could be far more likely than human combatants to kill non-combatants due to improper classification.

One obvious reply is that while there are missions in which the ability to make such distinctions would be important, there are others in which it would not be required of the autonomous weapon. If a robot infantry unit were engaged in combat within a populated city, then it would certainly need to be able to make such a distinction. However, just as a human bomber crew sent on a mission to destroy a factory would not be required to make such distinctions, an autonomous bomber would not need this ability. As such, this concern only has merit in cases in which such distinctions must be made and could reasonably be made by a human in the same situation. Thus, a sweeping ban on autonomous weapons would not be warranted by this concern.

A second obvious reply is that this is a technical problem that could be solved to a degree that would make an autonomous weapon at least as reliable as an average human soldier in making the distinction between combatants and non-combatants. It seems likely that this could be done, given that the objective is merely a human level of reliability. After all, humans in combat do make mistakes in this matter, so the bar is not terribly high. As such, banning such weapons would seem premature—it would need to be shown that such weapons could not make this distinction as well as an average human in the same situation.

A second concern is based on the view that the decision to kill should be made by a human being and not by a machine. Such a view could be based on an abstract view about the moral right to make killing decisions or perhaps on the view that humans would be more merciful than machines.

One obvious reply is that autonomous weapons are still just weapons. Human leaders will, presumably, decide when they are deployed and give them their missions. This is analogous to a human firing a seeking missile—the weapon tracks and destroys the intended target, but the decision that someone should die was made by a human. Presumably humans would be designing the decision-making software for the machines, and they could program in a form of digital mercy—if desired.

There is, of course, the science fiction concern that the killer machines will become completely autonomous and fight their own wars (as in Terminator and “Second Variety”). The concern about rogue systems is worth considering, but is certainly a tenuous basis for a ban on autonomous weapons.

Another obvious reply is that while machines would probably lack mercy, they would also lack anger and hate. As such, they might actually be less awful about killing than humans.

A third concern is based on the fact that autonomous machines are just machines without will or choice (which might also be true of humans). As such, wicked or irresponsible leaders could acquire autonomous weapons that will simply do what they are ordered to do, even if that involves slaughtering children.

The obvious, but depressing, reply to this is that such leaders seem never to want for people to do their bidding, however awful that bidding might be. Even a cursory look at the history of war and terrorism shows that this is a terrible truth. As such, autonomous weapons do not seem to pose a special danger in this regard: anyone who could get an army of killer robots would almost certainly be able to get an army of killer humans.

There is, of course, a legitimate concern that autonomous weapons could be hacked and used by terrorists or other bad people. However, this would be the same as such people getting access to non-autonomous weapons and using them to hurt and kill people.

In general, the moral motivation of the people who oppose autonomous weapons is laudable. They presumably wish to cut down on death and suffering. However, this goal seems to be better served by the development of autonomous weapons. Some reasons for this are as follows.

First, since autonomous weapons are not crewed, their damage or destruction will not result in harm or death to people. If a manned fighter plane is destroyed, that is likely to result in harm or death to a person. However, if a robot fighter plane is shot down, no one dies. If both sides are using autonomous weapons, then the casualty count would presumably be lower than in a conflict in which the weapons are all manned. To use an analogy, automating war could be analogous to automating dangerous factory work.

Second, autonomous weapons can advance the existing trend in precision weapons. Just as “dumb” bombs dropped in massive raids gave way to laser-guided bombs, autonomous weapons could provide an even greater level of precision. This would be, in part, due to the fact that there is no human crew at risk, and hence the safety of the crew would no longer be a concern. For example, rather than having a manned aircraft launch a missile at a target while jetting by at high altitude, an autonomous craft could approach the target closely at a lower speed in order to ensure that the missile hits the right target.

Thus, while the proposal to ban such weapons is no doubt motivated by the best of intentions, the ban itself would not be morally justified.

 



12 Comments.

  1. I am not sure which universe you occupy, but in the one I live in there are Killer Robots; we call them drones. They make war safer for the technologically advanced aggressors (like the USA) and indeed, as you wrote, “an autonomous craft could approach the target closely at a lower speed in order to ensure that the missile hits the right target.” In this case the right target is anyone the USA deems an enemy. For a more balanced view see this article:
    http://www.theguardian.com/world/2013/oct/22/amnesty-us-officials-war-crimes-drones
    Even with the high degree to which, as you wrote: “autonomous weapons can advance the existing trend in precision weapons” some accidents still happen as this link shows:
    http://www.theatlantic.com/politics/archive/2013/10/8-year-old-girl-on-drones-when-they-fly-overhead-i-wonder-will-i-be-next/280753/

  2. Doris Wrench Eisler

    There is a moral argument to be made that a war in which one side is entirely immune from risk, or almost so, is both immoral and dangerous. The Vietnam War was ended by protests over the 57,000 or so US soldiers who died, while there were 4,000,000 or so mostly civilian Vietnamese, Laotian and Cambodian deaths. If a country does not risk its own there are no barriers to genocide, or the humanitarian objections to it are greatly weakened. Highly mechanized war, even aside from robot use, is instinctively repulsive, and that would include the modern use of subs, aircraft carriers, bombers, missiles, drones, bunker busters, chemical weapons etc.: they are ignoble in that they are basically inhuman and unfair: human flesh is no match for them. That is a good reason to eliminate war.

  3. Mike,

    I would like to comment on part of your essay

    “The obvious, but depressing, reply to this is that such leaders seem never to want for people to do their bidding, however awful that bidding might be. Even a cursory look at the history of war and terrorism shows that this is a terrible truth. As such, autonomous weapons do not seem to pose a special danger in this regard: anyone who could get an army of killer robots would almost certainly be able to get an army of killer humans.”

    As I read this, it brought to mind the recent uprisings in Egypt and Libya. If I recall correctly, the Egyptian military was constituted predominantly by Egyptian men, while the Libyan military was heavily fortified with mercenaries. The Egyptian soldiers refused to fire on their own people even when ordered by the government, while the Libyans had no such reluctance. I would argue that an autonomous military equates to the mercenaries in Libya, only more so. There would be no hesitation from a machine. . .

  4. Well… an obvious problem with this question is that we lack empirical hard data. Out there in Blogland, do we have any volunteers to play victim? Where is Socrates when you really need him?

    Another interesting aspect of this issue involves the private use of killer-robots — kinda like the ring of Gyges (Republic, 2.359a–2.360d). A killer-robot, you see, could be programmed to self-destruct, thereby leaving no physical trail of the modus or motive. Hum… could be a real good ol’ boy booming business. Could even use Mexico as a test market, eh?

  5. Kevin,

    Your point is worth considering: human troops will not always do the bidding of those who claim to command them. However, as your Libya example shows, people can hire human mercenaries if their own forces are not willing to do the task. As such, while a “willingness” to do wrong is a point of concern, it is not unique to killer robots. My main point here would be that the worry about killer robots is somewhat overblown in that people can already easily find “killer robots” in human form that do not balk at doing their bidding.

  6. Doris Wrench Eisler,

    True: one stock argument against automating forces is the ironic argument that a war fought by robots would be without the sort of risk and cost that could cause citizens to protest the war. We are seeing this, to a degree, with Obama’s drone campaign: the drones kill people, but no Americans are at risk, and hence only folks who worry about the morality of such drone kills are bothered.

    On the one hand, this is a legitimate worry. On the other hand, if the war is otherwise just, then having machines do the fighting even for one side could lower the casualties and make the war less bad. Thus, the main concern seems to be whether the war is just or not rather than whether the troops are machines or flesh.

  7. The use of remote controlled weapons by civilians does raise concerns. However, they would be at least as traceable as things like bombs and guns. Law enforcement is often good at finding such things, even if the killer drone was flown into a lake after the kill or blown up (there would still be parts).

    One episode of Batman TAS featured killer remote control cars, and the Cyberpunk role-playing game had a nice assortment of automated weapons, including some I designed (such as the combat cyberforms, a good choice when you want someone dead but have other things to do).

  8. Dennis,

    Drones are ROVs and not fully autonomous weapons. They have to be guided by an operator, so they differ from a conventional aircraft only in the location of the pilot. While remote operated vehicles are often called robots, that is something of a misuse of the term that creates a confusion between autonomous systems (that conduct missions without direct human control) and remote controlled weapons (that are controlled by a human operator from a distance).

    I tend to prefer to use “ROV” for the remote operated vehicles and “robot” for the autonomous vehicles. The line does get blurred by vehicles that have both automatic modes and remote operated modes, so I also use the term “autonomous weapon” for weapons that can engage in combat missions without direct human control.

    Accidents will always happen. However, we cannot use the standard of perfection when making judgments: that would be to fall into the perfectionist fallacy. If autonomous weapons had fewer accidents than manned weapons, then that would be a good reason to use them.

  9. I read your article and it raises some interesting points, but I do have to say you have skipped some fairly significant points in order to exclusively focus on your killing robots. One massive problem you need to incorporate is the morality of war itself. In a civilization with the capacity to create these killing machines, is it not expected that we would be able to reach a far more intelligent end to these means? Assuming we have cleared this issue and that rabbit hole leads inevitably to war, there are some more concerns I have. I think you started to touch on, but eventually skirted over, the broad and wide scope of morality as a whole. We are still trying to figure out what makes us moral beings. For the sake of argument, let us conclude that tomorrow a great philosopher nails down metaethics and morality. The problem of moral subjectivism persists, meaning who decides what the robot interprets as moral and ethical. Considering this, a robot would also need to process perceived injustices, and once they can do that, how sure are we that they will agree with the war we have commissioned them for? Don’t stop following me here, it goes further. Once we have established this brand new race of moral and ethical robots (as far as their morality can be used to further our goals, that is) we must now take into account their autonomy. At what point is it unethical for us to create these semiautonomous beings that have morality and are commissioned to fight wars on our behalf? So, I see what you’re getting at with your blog post here and the subject you are trying to delve into. Unfortunately, I think you have passed go and collected two hundred dollars without even taking out the Monopoly board.

  10. If you are the last man on earth with a tin of baked beans and are faced with a ravening horde of people armed with knives, and you happen to have a machine gun, I doubt you would stop to think about the morality of using it.

    That is a luxury only the non-threatened can afford.

  11. Whether I would think about it or not is a psychological question not an ethical one. That is, “what would I do?” is rather different from “what should I do?”

    If I’m the last man on earth, the ravening horde of people would presumably be women (or children, if by “man” you mean “adult biological male”). I’d certainly not be inclined to murder them. Now, if they were going to cut me to pieces and I could not escape from them or disable them without killing them, then the right of self-preservation would make shooting them permissible. I’d still consider the ethics of the matter.

    While I have not faced ravening hordes in real life, I have been attacked while running and did not abandon ethics just because I was in danger.

  12. Clay,

    True, I did not address the more general issue of the ethics of war, mainly because of the practical concern of time.

    I’ve written elsewhere on the ethics of enslaving intelligent machines to do our bidding. Ironically, we want to make smart robots to do our crap jobs, but compelling a smart being to do our bidding would be slavery, and hence unethical.
