Category Archives: Ethics

The Confederacy, License Plates & Free Speech

Louisiana Sons of Confederate Veterans special... (Photo credit: Wikipedia)

Early in 2015 some folks in my adopted state of Florida wanted three Confederate veterans to become members of the Veterans’ Hall of Fame. Despite the efforts of the Florida Sons of Confederate Veterans, the initial attempt failed on the grounds that the Confederate veterans were not United States veterans. Not to be outdone, the Texas Sons of Confederate Veterans want to have an official Texas license plate featuring the Confederate battle flag. While custom license plates are allowed in the United States, the states generally review proposed plates. The Texas Department of Motor Vehicles rejected the proposed plate on the grounds that “a significant portion of the public associate[s] the Confederate flag with organizations” expressing hatred for minorities. Those proposing the plate claim that this rejection violates their rights. This has generated a legal battle that has made it to the US Supreme Court.

The legal issue, which has been cast as a battle over free speech, is certainly interesting. However, my main concern is with the ethics of the matter. This is, obviously enough, also a battle over rights.

Looked at in terms of the right of free expression, there are two main lines of contention. The first is against allowing the plate. One way to look at an approved license plate is that it is a means of conveying a message that the state agrees with. Those opposed to the plate have argued that if the state is forced to allow the plate to be issued, the state will be compelled to be associated with a message that the government does not wish to be associated with. In free speech terms, this could be seen as forcing the state to express or facilitate a view that it does not accept.

This does have a certain appeal since the state can be seen as representing the people (or, perhaps, the majority of the people). If a view is significantly offensive to a significant number of citizens (which is, I admit, vague), then the state could reasonably decline to accept a license plate expressing or associated with that view. So, to give some examples, the state could justly decline Nazi plates, pornographic plates, and plates featuring racist or sexist images. Given that the Confederate flag represents to many slavery and racism, it seems reasonable that the state not issue such a plate. Citizens can, of course, cover their cars in Confederate flags and thus express their views.

The second line of contention is in favor of the plate. One obvious line of reasoning is based on the right of free expression: citizens should have the right to express their views via license plates. These plates, one might contend, do not express the views of the state—they express the view of the person who purchased the plate.

In terms of the concerns about a plate being offensive, Granvel Block argued that not allowing a plate with the Confederate flag would be “as unreasonable” as the state forbidding the use of the University of Texas logo on a plate “because Texas A&M graduates didn’t care for it.” On the one hand, Block has made a reasonable point: if people disliking an image is a legitimate basis for forbidding its use on a plate, then any image could end up being forbidden. It would, as Block noted, be absurd to forbid schools from having custom plates because rival schools do not like them.

On the other hand, there seems to be an important difference between the logo of a public university and the battle flag of the Confederacy. While some Texas A&M graduates might not like the University of Texas, the University of Texas’ logo does not represent states that went to war against the United States in order to defend slavery. So, while the state should not forbid plates merely because some people do not like them, it does seem reasonable to forbid a plate that includes the flag representing, as state Senator Royce West said, “…a legalized system of involuntary servitude, dehumanization, rape, mass murder…”

The lawyer representing the Sons of Confederate Veterans, R. James George Jr., has presented an interesting line of reasoning. He notes, correctly, that Texas has a state holiday that honors veterans of the Confederacy, that there are monuments honoring Confederate veterans and that the gift shop in the capitol sells Confederate memorabilia. From this he infers that the Department of Motor Vehicles should follow the state legislature and approve the plate.

This argument, which is an appeal to consistency, does have some weight. After all, the state certainly seems to express its support for Confederate veterans (and even the Confederacy) and this license plate is consistent with this support. To refuse the license plate on the grounds that the state does not wish to express support for what the Confederate flag stands for is certainly inconsistent with having a state holiday for Confederate veterans—the state seems quite comfortable with this association.

There is, of course, the broader moral issue of whether or not the state should have a state holiday for Confederate veterans, etc. That said, any arguments given in support of what the state already does in regards to the Confederacy would seem to also support the acceptance of the plate—they seem to be linked. So, if the plate is to be rejected, these other practices must also be rejected on the same grounds. But, if these other practices are to be maintained, then the plate would seem to fit right in and thus, on this condition, also be accepted.

I am somewhat divided on this matter. One view I find appealing favors freedom of expression: any license plate design that does not interfere with identifying the license number and state should be allowed—consistent with copyright law, of course. This would be consistent and would not require the state to make any political or value judgments. It would, of course, need to be made clear that the plates do not necessarily express the official positions of the government.

The obvious problem with such total freedom is that people would create horrible plates featuring pornography, racism, sexism, and so on. This could be addressed by appealing to existing laws—the state would not approve or reject a plate as such, but a plate could be rejected for violating, for example, laws against making threats or inciting violence. The obvious worry is that laws would then be passed to restrict plates that some people did not like, such as plates endorsing atheism or claiming that climate change is real. But, this is not a problem unique to license plates. After all, it has been alleged that officials in my adopted state of Florida have banned the use of the term ‘climate change.’

Another view I find appealing is to avoid all controversy by getting rid of custom plates. Each state might have a neutral, approved image (such as a loon, orange or road runner) or the plates might simply have the number/letters and the state name. This would be consistent—no one gets a custom plate. To me, this would be no big deal. But, of course, I always just get the cheapest license plate option—which is the default state plate. However, some people regard the license plate as important and their view is worth considering.

 


Robopunishment

Crime and Punishment (Photo credit: Wikipedia)

While the notion of punishing machines for misdeeds has received some attention in science fiction, it seems worthwhile to take a brief philosophical look at this matter. This is because the future, or so some rather smart people claim, will see the rise of intelligent machines—machines that might take actions that would be considered misdeeds or crimes if committed by a human (such as the oft-predicted genocide).

In general, punishment is aimed at one or more of the following goals: retribution, rehabilitation, or deterrence. Each of these goals will be considered in turn in the context of machines.

Roughly put, punishment for the purpose of retribution is aimed at paying an agent back for wrongdoing. This can be seen as a form of balancing the books: the punishment inflicted on the agent is supposed to pay the debt it has incurred by its misdeed. Reparation can, to be a bit sloppy, be included under retribution—at least in the sense of the repayment of a debt incurred by the commission of a misdeed.

While a machine can be damaged or destroyed, there is clearly the question about whether it can be the target of retribution. After all, while a human might kick her car for breaking down on her or smash his can opener for cutting his finger, it would be odd to consider this retributive punishment. This is because retribution would seem to require that a wrong has been done by an agent, which is different from the mere infliction of harm. Intuitively, a piece of glass can cut my foot, but it cannot wrong me.

If a machine can be an agent, which was discussed in an earlier essay, then it would seem to be able to do wrongful deeds and thus be a potential candidate for retribution. However, even if a machine had agency, there is still the question of whether or not retribution would really apply. After all, retribution requires more than just agency on the part of the target. It also seems to require that the target can suffer from the payback. On the face of it, a machine that could not suffer would not be subject to retribution—since retribution seems to be based on doing a “righteous wrong” to the target. To illustrate, suppose that an android injured a human, costing him his left eye. In retribution, the android’s left eye is removed. But, the android does not suffer—it does not feel any pain and is not bothered by the removal of its eye. As such, the retribution would be pointless—the books would not be balanced.

This could be countered by arguing that the target of the retribution need not suffer—what is required is merely the right sort of balancing of the books, so to speak. So, in the android case, removal of the android’s eye would suffice, even if the android did not suffer. This does have some appeal since retribution against humans does not always require that the human suffer. For example, a human might break another human’s iPad and have her iPad broken in turn, but not care at all. The requirements of retribution would seem to have been met, despite the lack of suffering.

Punishment for rehabilitation is intended to transform wrongdoers so that they will no longer be inclined to engage in the wrongful behavior that incurred the punishment. This differs from punishment aimed at deterrence, which aims at providing the target with a reason not to engage in the misdeed in the future. Rehabilitation is also aimed at the agent who did the misdeed, whereas punishment for the sake of deterrence often aims at affecting others as well.

Obviously enough, a machine that lacks agency cannot be subject to rehabilitative punishment—it cannot “earn” such punishment by its misdeeds and, presumably, cannot have its behavioral inclinations corrected by such punishment.

To use an obvious example, if a computer crashes and destroys a file that a person had been working on for hours, punishing the computer in an attempt to rehabilitate it would be pointless. Not being an agent, it did not “earn” the punishment and punishment will not incline it to crash less in the future.

A machine that possesses agency could “earn” punishment by its misdeeds. It also seems possible to imagine a machine that could be rehabilitated by punishment. For example, one could imagine a robot dog that could be trained in the same way as a real dog—after leaking oil in the house or biting the robo-cat and being scolded, it would learn not to do those misdeeds again.
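
To make the robo-dog example concrete, here is a minimal sketch (in Python) of punishment treated as a negative reward in a simple action-value learner. The action names, reward values, and learning rate are purely illustrative assumptions, not a claim about how actual robots are or should be trained.

```python
import random

# A minimal sketch: a robo-dog that learns from "scolding" treated as a
# negative reward. Actions, rewards and the learning rate are illustrative.
class RoboDog:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # no initial preferences
        self.learning_rate = 0.5

    def act(self):
        # Prefer the actions with the highest learned value; break ties randomly.
        best = max(self.values.values())
        return random.choice([a for a, v in self.values.items() if v == best])

    def feedback(self, action, reward):
        # Scolding (a negative reward) lowers the action's value, making
        # the misdeed less likely to be chosen again (rehabilitation by punishment).
        self.values[action] += self.learning_rate * (reward - self.values[action])

dog = RoboDog(["fetch", "leak_oil", "bite_robocat"])
for _ in range(20):
    action = dog.act()
    dog.feedback(action, -1.0 if action != "fetch" else 1.0)
print(dog.values)  # the misdeeds end up with negative values, "fetch" positive
```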

It could be argued that it would be better, both morally and practically, to build machines that would learn without punishment or to teach them without punishing them. After all, though organic beings seem to be wired in a way that requires that we be trained with pleasure and pain (as Aristotle would argue), there might be no reason that our machine creations would need to be the same way. But, perhaps, it is not just a matter of the organic—perhaps intelligence and agency require the capacity for pleasure and pain. Or perhaps not. Or it might simply be the only way that we know how to teach—we will be, by our nature, cruel teachers of our machine children.

Then again, we might be inclined to regard a machine that does misdeeds as being defective and in need of repair rather than punishment. If so, such machines would be “refurbished” or reprogrammed rather than rehabilitated by punishment. There are those who think the same of human beings—and this would raise the same sort of issues about how agents should be treated.

The purpose of deterrence is to motivate the agent who did the misdeed and/or other agents not to commit that deed. In the case of humans, people argue in favor of capital punishment because of its alleged deterrence value: if the state kills people for certain crimes, people are less likely to commit those crimes.

As with other forms of punishment, deterrence requires agency: the punished target must merit the punishment and the other targets must be capable of changing their actions in response to that punishment.

Deterrence, obviously enough, does not work in regards to non-agents. For example, if a computer crashes and wipes out a file a person has been laboring on for hours, punishing it will not deter it. Smashing it in front of other computers will not deter them.

A machine that had agency could “earn” such punishment by its misdeeds and could, in theory, be deterred. The punishment could also deter other machines. For example, imagine a combat robot that performed poorly in its mission (or showed robo-cowardice). Punishing it could deter it from doing that again, and it could serve as a warning, and thus a deterrent, to other combat robots.

Punishment for the sake of deterrence raises the same sort of issues as punishment aimed at rehabilitation, such as the notion that it might be preferable to repair machines that engage in misdeeds rather than punishing them. The main differences are, of course, that deterrence is not aimed at making the target inclined to behave well, just to disincline it from behaving badly, and that deterrence is also aimed at those who have not committed the misdeed.

 


Florida’s Bathroom Law


Being from Maine, I got accustomed to being asked about the cold, lobsters, moose and Stephen King. Living in Florida, I have become accustomed to being asked about why my adopted state is so insane. Most recently, I was asked about the bathroom bill making its way through the House.

The bathroom bill, officially known as HB 583, proposes that it should be a second-degree misdemeanor to “knowingly and willfully” enter a public facility restricted to members “of the other biological sex.” The bill proposes a maximum penalty of 60 days in jail and a $500 fine.

Some opponents of the bill contend that it is aimed at discriminating against transgender people. Some parts of Florida have laws permitting people to use public facilities based on the gender they identify with rather than their biological sex.

Obviously enough, proponents of the bill are not claiming that they are motivated by a dislike of transgender people. Rather, the main argument used to support the bill centers on the claim that it is necessary to protect women and girls. The idea seems to be that women and girls will be assaulted or raped by males who will gain access to locker rooms and bathrooms by claiming they have a right to enter such places because they are transgender.

Opponents of the bill have pointed out the obvious reply to this argument: there are already laws against assault and rape. There are also laws against lewd and lascivious behavior. As such, there does not seem to be a need for this proposed law if its purpose is to protect women and girls from such misdeeds. To use an analogy, there is no need to pass a law making it a crime for a man to commit murder while dressed as a woman—murder is already illegal.

It could be countered that the bill is still useful because it would add yet another offense that a perpetrator could be charged with. While this does have a certain appeal, the idea of creating laws just to stack offenses seems morally problematic—it seems that a better policy would be to craft laws that adequately handle the “base” offenses.

It could also be claimed that the bill is needed in order to provide an initial line of defense. After all, one might argue, it would be better that a male never got into the bathroom or locker room to commit his misdeeds and this bill will prevent this from occurring.

The obvious reply is that the bill would only work in this manner if the facilities were guarded by people capable of turning such masquerading males away at the door. These guards would presumably need to have the authority to check the “plumbing” of anyone desiring entry to the facility. After all, it is not always easy to discern between a male and a female by mere outward appearance. Of course, if such guards are going to be posted, then they might as well be posted inside the facilities themselves, thus providing much better protection. As such, if the goal is to make such facilities safe, then a better bill would mandate guards for such facilities.

Opponents of the bill do consider the dangers of assault. However, they contend that it is transgender people who are most likely to be harmed if they are compelled to use facilities for their biological sex. It would certainly be ironic if a bill (allegedly) aimed at protecting people turned out to lead to more harm.

A second line of argumentation focuses on the privacy rights of biological women. “Women have an expectation of privacy,” said Anthony Verdugo of Christian Family Coalition Florida. “My wife does not want to be in a public facility with a man, and that is her right. … No statute in Florida right now specifically prohibits a person of one sex from entering a facility intended for use by a person of another sex.”

This does have a certain appeal. When I was in high school, I and some other runners were changing after a late practice and someone had “neglected” to tell us that basketball cheerleaders from another school would be coming through the corridor directly off the locker room. Being a typical immature nerd, I was rather embarrassed by this exposure. I do recall that one of my more “outgoing” fellow runners offered up a “free show” before being subdued with a rattail to the groin. As such, I do get that women and girls would not want males in their bathrooms or locker rooms “inspecting their goods.” That said, there are some rather obvious replies to this concern.

The first reply is that it seems likely that transgender biological males that identify as female would not be any more interested in checking out the “goods” of biological females than would biological females. But, obviously, there is the concern that such biological males might be bi-sexual or interested only in females. This leads to the second reply.

The second reply is that the law obviously does not protect females from biological females that are bi-sexual or homosexual. After all, a lesbian can openly go into the women’s locker room or bathroom. As such, the privacy of women (if privacy is taken to include the right to not be seen while naked by people who might be sexually attracted to one) is always potentially threatened.

Though some might now be considering bills aimed at lesbians and bi-sexuals in order to protect the privacy of straight women, there is really no need of these bills—or HB 583. After all, there are already laws against harassment and other such bad behavior.

It might be countered that merely being seen by a biological male in such places is sufficient to count as a violation of privacy, even if the male is well-behaved and not sexually interested. There are, after all, laws (allegedly) designed to protect women from the prying eyes of men, such as some parts of Sharia law. However, it would seem odd to say that a woman should be protected by law merely from the eyes of a male when the male identifies as a woman and is not engaged in what would be reasonably regarded as bad behavior (like staring through the gaps in a stall to check out a woman).

Switching gears a bit, in an interesting coincidence I was thinking about this essay when I found that the men’s bathroom at the FSU track was locked, but the women’s bathroom was open. The people in ROTC were doing their track workout at the same time and the male cadets were using the women’s bathroom—since the alternative was public urination. If this bill passed, the cadets would have been subject to arrest, jail and a fine for their crime.

For athletes, this sort of bathroom switching is not at all unusual. While training or at competitions, people often find the facilities closed or overburdened, so it is common for people to use whatever facilities are available—almost always with no problems or issues. For example, the Women’s Distance Festival is a classic race in Tallahassee that is open to men and women, but has a very large female turnout. On that day, the men get a porta-pottie and the men’s room is used by the women—which would be illegal if this bill passed. I have also lost count of the times that female runners have used the men’s room because the line to the women’s facilities was way too long. No one cared, no one was assaulted and no one was arrested. But if this bill became law, that sort of thing would be a crime.

My considered view of this bill is that there is no need for it. The sort of bad behavior it is aimed at countering is already illegal, and it would criminalize behavior that is not actually harmful (like the male ROTC cadets using the only open bathroom at the track).

 


Androids, Autonomy & Agency

Blade Runner (Photo credit: Wikipedia)

Philosophers have long speculated about the subjects of autonomy and agency, but the rise of autonomous systems has made these speculations ever more important. Keeping things fairly simple, an autonomous system is one that is capable of operating independent of direct control. Autonomy comes in degrees in terms of the extent of the independence and the complexity of the operations. It is, obviously, the capacity for independent operation that distinguishes autonomous systems from those controlled externally.

Simple toys provide basic examples of the distinction. A wind-up mouse toy has a degree of autonomy: once wound and released, it can operate on its own until it runs down. A puppet, in contrast, has no autonomy—a puppeteer must control it. Robots provide examples of rather more complex autonomous systems. Google’s driverless car is an example of a relatively advanced autonomous machine—once programmed and deployed, it will be able to drive itself to its destination. A normal car is an example of a non-autonomous system—the driver controls it directly. Some machines allow for both autonomous and non-autonomous operation. For example, there are drones that follow a program guiding them to a target, after which an operator can take direct control.

Autonomy, at least in this context, is quite distinct from agency. Autonomy is the capacity to operate (in some degree) independently of direct control. Agency, at least in this context, is the capacity to be morally responsible for one’s actions. There is clearly a connection between autonomy and moral agency: moral agency requires autonomy. After all, an entity whose actions are completely controlled externally would not be responsible for what it was made to do. A puppet is, obviously, not accountable for what the puppeteer makes it do.

While autonomy seems necessary for agency, it is clearly not sufficient—while all agents have some autonomy, not all autonomous entities are moral agents. A wind-up toy has a degree of autonomy, but has no agency. A robot drone following a pre-programmed flight plan has a degree of autonomy, but would lack agency—if it collided with a plane it would not be morally responsible. The usual reason why such a machine would not be an agent is that it lacks the capacity to decide. Or, put another way, it lacks freedom. Since it cannot do otherwise, it is no more morally accountable than an earthquake or a supernova.
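
The necessary-but-not-sufficient relation can be made concrete with a small sketch (the classes, attributes, and numbers are merely assumptions for illustration, not a serious model of agency):

```python
from dataclasses import dataclass

# Illustrative sketch of the autonomy/agency distinction. The attributes
# and values are assumptions for illustration, not a serious theory.
@dataclass
class System:
    autonomy: float           # 0.0 = puppet, 1.0 = fully independent operation
    can_decide: bool = False  # a stand-in for the freedom agency seems to need

    @property
    def moral_agent(self) -> bool:
        # Autonomy is necessary for agency, but not sufficient: a wind-up
        # toy has some autonomy, yet it is no agent.
        return self.autonomy > 0 and self.can_decide

print(System(autonomy=0.0).moral_agent)                   # puppet: False
print(System(autonomy=0.2).moral_agent)                   # wind-up toy: False
print(System(autonomy=0.9).moral_agent)                   # drone on a flight plan: False
print(System(autonomy=0.9, can_decide=True).moral_agent)  # candidate agent: True
```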

One obvious problem with basing agency on freedom (especially metaphysical freedom of the will) is that there is considerable debate about whether or not such freedom exists. There is also the epistemic problem of how one would know if an entity has such freedom.

As a practical matter, it is usually assumed that people have the freedom needed to make them into agents. Kant, rather famously, took this approach. What he regarded as the best science of his day indicated a deterministic universe devoid of metaphysical freedom. However, he contended that such freedom was needed for morality—so it should be accepted for this reason.

While humans are willing (generally) to attribute freedom and agency to other humans, there seem to be good reasons to not attribute freedom and agency to autonomous machines—even those that might be as complex as (or even more complex than) a human. The usual line of reasoning is that since such machines would be built and programmed by humans they would do what they do because they are what they are. This would be in clear contrast to the agency of humans: humans, it is alleged, do what they do because they choose to do what they do.

This distinction between humans and suitably complex machines would seem to be a mere prejudice favoring organic machines over mechanical machines. If a human was in a convincing robot costume and credibly presented as a robot while acting like a normal human, people would be inclined to deny that “it” had freedom and agency. If a robot was made to look and act just like a human, people would be inclined to grant it agency—at least until they learned it was “just” a machine. Then there would probably be an inclination to regard it as a very clever but unfree machine. But, of course, it would not really be known whether the human or the machine had the freedom allegedly needed for agency. Fortunately, it is possible to have agency even without free will (but with a form of freedom).

The German philosopher Leibniz held the view that what each person will do is pre-established by her inner nature. On the face of it, this would seem to entail that there is no freedom: each person does what she does because of what she is—and she cannot do otherwise. Interestingly, Leibniz takes the view that people are free. However, he does not accept the common view that freedom requires actions that are unpredictable and spontaneous. Leibniz rejects this view in favor of the position that freedom is unimpeded self-development.

For Leibniz, being metaphysically without freedom would involve being controlled from the outside—like a puppet controlled by a puppeteer or a vehicle being operated by remote control. In contrast, freedom is acting from one’s values and character (what Leibniz and Taoists call “inner nature”). If a person is acting from this inner nature and not from external coercion—that is, if the actions are the result of character—then that is all that can be meant by freedom. This view, which attempts to blend determinism and freedom, is known as compatibilism. On this sort of view, humans do have agency because they have the needed degree of freedom and autonomy.

If this model works for humans, it could also be applied to autonomous machines. To the degree that a machine is operating in accord with its “inner nature” and is not operating under the control of outside factors, it would have agency.

An obvious objection is that an autonomous machine, however complex, would have been built and programmed (in the broad sense of the term) by humans. As such, it would be controlled and not free. The easy and obvious reply is that humans are “built” by other humans (by mating) and are “programmed” by humans via education and socialization. As such, if humans can be moral agents, then it would seem that a machine could also be a moral agent.

From a moral standpoint, I would suggest a Moral Descartes’ Test (or, for those who prefer, a Moral Turing Test). Descartes argued that the sure proof of a being having a mind is its capacity to use true language. Turing later proposed a similar sort of test involving the ability of a computer to pass as human via text communication. In the moral test, the test would be a judgment of moral agency—can the machine be as convincing as a human in regards to its possession of agency? Naturally, a suitable means of concealing the fact that the being is a machine would be needed in order to prevent mere prejudice from infecting the judgment. The movie Blade Runner featured something similar, the Voight-Kampff test aimed at determining if the subject was a replicant or human. This test was based on the differences between humans and replicants in regards to emotions. In the case of moral agency, the test would have to be crafted to determine agency rather than to distinguish a human from a machine, since the issue is not whether a machine is human but whether it has agency. A moral agent might have rather different emotions, etc. than a human. The challenge is, obviously enough, developing a proper test for moral agency. It would, of course, be rather interesting if humans could not pass it.
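
As a very rough sketch, the blinded protocol might look something like this (the subjects’ respond() method and the judge callable are hypothetical placeholders, not a real API; the hard philosophical work is hidden inside the judge):

```python
import random

# A rough sketch of a blinded "Moral Turing Test" protocol. The subjects'
# respond() method and the judge callable are hypothetical placeholders.
def moral_turing_test(human, machine, questions, judge):
    # Conceal which subject is which, so that mere prejudice against
    # machines cannot infect the judgment: the judge sees only labels.
    pair = [human, machine]
    random.shuffle(pair)
    subjects = {"A": pair[0], "B": pair[1]}

    # The judge scores each transcript for moral agency, not humanness;
    # rather different (even inhuman) emotions should not count against.
    return {
        label: judge([subject.respond(q) for q in questions])
        for label, subject in subjects.items()
    }
```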

 


Augmented Soldier Ethics IV: Cybernetics

Human flesh is weak and metal is strong. So, it is no surprise that military science fiction has often featured soldiers enhanced by cybernetics ranging from the minor to the extreme. An example of a minor cybernetic is an implanted radio. The most extreme example would be a full body conversion: the brain is removed from the original body and placed within a mechanical body. This body might look like a human (known as a Gemini full conversion in Cyberpunk) or be a vehicle such as a tank, as in Keith Laumer’s A Plague of Demons.

One obvious point of moral concern with cybernetics is the involuntary “upgrading” of soldiers, such as the sort practiced by the Cybermen of Doctor Who. While important, the issue of involuntary augmentation is not unique to cybernetics and was addressed in the second essay in this series. For the sake of this essay, it will be assumed that the soldiers volunteer for their cybernetics and are not coerced or deceived. This then shifts the moral concern to the ethics of the cybernetics themselves.

While the ethics of cybernetics is complicated, one way to handle matters is to split cybernetics into two broad categories. The first category consists of restorative cybernetics. The second consists of enhancement cybernetics.

Restorative cybernetics are devices used to restore (hopefully) normal functions to a wounded soldier. Examples would include cyberoptics (replacement eyes), cyberlimbs (replacement legs and arms), and cyberorgans (such as an artificial heart). Soldiers are already being fitted with such devices, although by the standards of science fiction they are still primitive. Given that these devices merely restore functionality and the ethics of prosthetics and similar replacements is well established, there seems to be no moral concern about using such technology in what is essentially a medical role. In fact, it could be argued that nations have a moral obligation to use such technology to restore their wounded soldiers.

While enhancement cybernetics might be used to restore functionality to a wounded soldier, they go beyond mere restoration. By definition, they are intended to improve on the original. These enhancements break down into two main classes. The first class consists of replacement cybernetics—these devices require the removal of the original part (be it an eye, limb or organ) and serve as replacements that improve on the original in some manner. For example, cyberoptics could provide a soldier with night vision, telescopic vision and immunity to being blinded by flares and flashes. As another example, cybernetic limbs could provide greater speed, strength and endurance. And, of course, a full conversion could provide a soldier with a vast array of superhuman abilities.

The obvious moral concern with these devices is that they require the removal of the original organic parts—something that certainly seems problematic, even if they do offer enhanced abilities. This could, of course, be offset if the original parts were preserved and restored when the soldier left the service. There is also the concern raised in science fiction about the mental effects of such removals and replacements—the Cyberpunk role playing game developed the notion of cyberpsychosis, a form of insanity caused by having flesh replaced by machines. Obviously, it is not yet known what negative effects (if any) such enhancements will have on people. As in any case of weighing harms and benefits, the likely approach would be utilitarian: are the advantages of the technology worth the cost to the soldier?

A second type of enhancement is an add-on which does not replace existing organic parts. Instead, as the name implies, an add-on involves the addition of a device to the body of the soldier. Add-on cybernetics differ from wearables and standard gear in that they are actually implanted in or attached to the soldier’s body. As such, removal can be rather problematic.

A fairly minor example would be something like an implanted radio. A rather extreme example would be the case of the comic book villain Doctor Octopus—his mechanical limbs are add-ons.  Other examples of add-ons include such things as implanted sensors, implanted armor, implanted weapons (such as in the comic book hero Wolverine), and other such augmentations.

Since these devices do not involve removal of healthy parts, they do avoid that moral concern. However, there are still legitimate concerns about the physical and mental harms that might be caused by such devices. It is easy enough to imagine implanted devices having serious side effects on soldiers. As noted above, these matters would probably be best addressed by utilitarian ethics—weighing the harms against the benefits.
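
The utilitarian weighing mentioned here can be given a toy form (the augmentation and the scores assigned to harms and benefits are purely assumed for illustration; assigning them honestly is the real problem):

```python
# A toy version of the utilitarian test suggested above. The scores are
# assumptions for illustration; the hard part is assigning them honestly.
def permissible(benefits, harms):
    # Permissible, on this rough test, only if the expected benefits
    # to the soldier and mission outweigh the expected harms.
    return sum(benefits.values()) > sum(harms.values())

implanted_radio = permissible(
    benefits={"hands_free_comms": 3, "cannot_be_dropped": 1},
    harms={"surgery_risk": 1, "hard_to_remove": 2},
)
print(implanted_radio)  # True under these assumed weights; change the
                        # weights and the verdict changes, which is the point
```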

Both types of enhancements also raise a moral concern about returning the soldier to the civilian population after her term of service. In the case of restorative grade devices, there is not as much concern—these soldiers would, ideally, function as they did before their injuries. However, the enhancements do present a potential problem since they, by definition, give the soldier capabilities that exceed those of normal humans. In some cases, re-integration would probably not be a problem. For example, a soldier with enhanced cyberoptics would presumably present no special problems. However, certain augmentations would present serious problems, such as implanted weapons or full conversions. Ideally, augmented soldiers could be restored to normal after their service has ended, but there could obviously be cases in which this was not done—either because of the cost or because the augmentation could not be reversed. This has been explored in science fiction—soldiers that can never stop being soldiers because they are machines of war. While this could be justified on utilitarian grounds (after all, war itself is often justified on such grounds), it is certainly a matter of concern—or will be.

 


Robo Responsibility

It is just a matter of time before the first serious accident involving a driverless car or an autonomous commercial drone. As such, it is well worth considering the legal and moral aspects of responsibility. If companies that are likely to be major players in the autonomous future, such as Google and Amazon, have the wisdom of foresight, they are already dropping stacks of cash on lawyers who are busily creating the laws-to-be regarding legal responsibility for accidents and issues involving such machines. The lobbyists employed by these companies will presumably drop fat stacks of cash on the politicians they own and these fine lawmakers will make them into laws.

If these companies lack foresight or have adopted a wait and see attitude, things will play out a bit differently: there will be a serious incident involving an autonomous machine, a lawsuit will take place, fat stacks of cash will be dropped, and a jury or judge will reach a decision that will set a precedent. There is, of course, a rather large body of law dealing with responsibility in regards to property, products and accidents and these will, no doubt, serve as foundations for the legal wrangling.

While the legal aspects will no doubt be fascinating (and expensive), my main concern is with the ethics of the matter. That is, who is morally responsible when something goes wrong with an autonomous machine like a driverless car or an autonomous delivery drone?

While the matter of legal responsibility is distinct from that of ethical responsibility, the legal theory of causation does have some use here. I am, obviously enough, availing myself of the notion of conditio sine qua non (“a condition without which nothing”) as developed by H.L.A. Hart and A.M. Honore.

Roughly put, this is the “but for” view of causation. X can be seen as the cause of Y if Y would not have happened but for X. This seems like a reasonable place to begin for moral responsibility. After all, if someone would not have died but for my actions (that is, if I had not done X, then the person would still be alive) then there seems to be good reason to believe that I have some moral responsibility for the person’s death. It also seems reasonable to assign a degree of responsibility that is proportional to the causal involvement of the agent or factor in question. So, for example, if my action only played a small role in someone’s death, then my moral accountability would be proportional to that role. This allows, obviously enough, for shared responsibility.

While cases involving non-autonomous machines can be rather complicated, they can usually be addressed in a fairly straightforward manner in terms of assigning responsibility. Consider, for example, an incident involving a person losing a foot to a lawnmower. If the person pushing the lawnmower intentionally attacked someone with her mower, the responsibility rests on her. If the person who lost the foot went and stupidly kicked at the mower, then the responsibility rests on her. If the lawnmower blade detached because of defects in the design, material or manufacturing, then the responsibility lies with the specific people involved in whatever defect caused the problem. If the blade detached because the owner neglected to properly maintain her machine, then the responsibility is on her. Naturally, the responsibility can also be shared (although we might not know the relevant facts). For example, imagine that the mower had a defect such that if it were not well maintained it would easily shed its blade when kicked. In this case, the foot would not have been lost but for the defect, the lack of maintenance and the kick. If we did not know all the facts, we would probably blame the kick—but the concern here is not what we would know in specific cases, but what the ethics would be in such cases if we did, in fact, know the facts.
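
The “but for” test and the proportional sharing can be stated as a small sketch (the factors and causal weights below are merely illustrative, mirroring the lawnmower example above):

```python
# A sketch of the "but for" test plus proportional responsibility.
# The factors and causal weights are illustrative, not real data.
def but_for_causes(factors, outcome_occurs):
    # X is a cause of Y if Y would not have happened but for X: remove
    # each factor in turn and check whether the outcome still occurs.
    return {f for f in factors if not outcome_occurs(factors - {f})}

def share_responsibility(causal_weights):
    # Moral accountability proportional to causal involvement, which
    # allows responsibility to be shared among several agents and factors.
    total = sum(causal_weights.values())
    return {agent: weight / total for agent, weight in causal_weights.items()}

# The defective mower: the foot is lost only if the defect, the lack of
# maintenance, and the kick are all present (as in the example above).
foot_lost = lambda fs: {"defect", "no_maintenance", "kick"} <= fs
print(but_for_causes({"defect", "no_maintenance", "kick"}, foot_lost))
# all three factors pass the but-for test

print(share_responsibility({"manufacturer": 1, "owner": 1, "kicker": 2}))
# the kicker bears half, the others a quarter each (assumed weights)
```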

The novel aspect of cases involving autonomous machines is the fact that they are autonomous. This might be relevant to the ethics of responsibility because the machine might qualify as a responsible agent. Or it might not.

It is rather tempting to treat an autonomous machine like a non-autonomous machine in terms of moral accountability. The main reason for this is that the sort of autonomous machines being considered here (driverless cars and autonomous drones) would certainly seem to lack moral autonomy. That is to say that while a human does not directly control them in their operations, they are operating in accord with programs written by humans (or written by programs written by humans) and lack the freedom that is necessary for moral accountability.

To illustrate this, consider an incident with an autonomous lawnmower and the loss of a foot. If the owner caused it to attack the person, she is just as responsible as if she had pushed a conventional lawnmower over the victim’s foot. If the person who lost the foot stupidly kicked the lawnmower and lost a foot, then it is his fault. If the incident arose from defects in the machinery, materials, design or programming, then responsibility would be applied to the relevant people to the degree they were involved in the defects. If, for example, the lawnmower ran over the person because the person assembling it did not attach the sensors correctly, then the moral blame lies with that person (and perhaps an inspector). The company that made it would also be accountable, in the collective and abstract sense of corporate accountability. If, for example, the programming was defective, then the programmer(s) would be accountable: but for his bad code, the person would still have his foot.

As with issues involving non-autonomous machines there is also the practical matter of what people would actually believe about the incident. For example, it might not be known that the incident was caused by bad code—it might be attributed entirely to chance. What people would know in specific cases is important in the practical sense, but does not impact the general moral principles in terms of responsibility.

Some might also find the autonomous nature of the machines to be seductive in regards to accountability. That is, it might be tempting to consider the machine itself as potentially accountable in a way analogous to holding a person accountable.

Holding the machine accountable would, obviously enough, require eliminating other factors as causes. To be specific, to justly blame the machine would require that the machine’s actions were not the result of defects in manufacturing, materials, programming, maintenance, and so on. Instead, the machine would have had to act on its own, in a way analogous to a person acting. Using the lawnmower example, the autonomous lawnmower would need to decide to go after the person from its own volition. That is, the lawnmower would need to possess a degree of free will.

Obviously enough, if a machine did possess a degree of free will, then it would be morally accountable within its freedom. As such, a rather important question would be whether or not an autonomous machine can have free will. If a machine can, then it would make moral sense to try machines for crimes and punish them. If they cannot, then the trials would be reserved, as they are now, for people. Machines would, as they are now, be repaired or destroyed. There would also be the epistemic question of how to tell whether the machine had this capacity. Since we do not even know if we have this capacity, this is a rather problematic matter.

Given the state of technology, it seems unlikely that the autonomous machines of the near future will be morally autonomous. But as the technology improves, it seems likely that there will come a day when it will be reasonable to consider whether an autonomous machine can be justly held accountable for its actions. This has, of course, been addressed in science fiction—such as the “I, Robot” episodes (the 1964 original and the 1995 remake) of the Outer Limits, which were based on Eando Binder’s short story of the same name.

 


Debating the Keystone XL Pipeline

The Keystone XL Pipeline has become a powerful symbol in American politics. Those that oppose it can take it as a symbol of all that is wrong: environmental dangers, global warming, big corporations, and other such evils. Those who support it can take it as a symbol of all that is good: jobs, profits, big corporations and other such goods. While I am no expert when it comes to pipelines, I thought it would be worthwhile to present a concise discussion of the matter.

The main substantial objections against the pipeline are environmental. One concern is that pipelines do suffer from leaks and these leaks can inflict considerable damage to the environment (including the water sources that are used by people). The material that will be transported by the Keystone XL pipeline is supposed to be rather damaging to the environment and rather problematic in terms of its cleanup.

Those who support the pipeline counter these objections by claiming that the pipelines are relatively safe—but this generally does not reassure people who have seen the impact of previous leaks. Another approach used by supporters is to point out that if the material is not transported by pipeline, companies will transport it by truck and by train. These methods, some claim, are more dangerous than the pipelines. Recent explosions of trains carrying such material do tend to serve as evidence for this claim. There is also the claim that using trucks and trains as a means of transport will create more CO2 output and hence the pipeline is a better choice in regards to the environment.

Some of those who oppose the pipeline contend that the higher cost of using trucks and trains will deter companies from using them (especially with oil prices so low). So, if the pipeline is not constructed, there would not be the predicted increase in CO2 levels from the use of these means of transportation. The obvious counter to this is that companies are already using trucks and trains to transport this material, so they already seem to be willing to pay the higher cost. It can also be pointed out that there are already a lot of pipelines so that one more would not make that much difference.

In addition to the leaks, there is also the concern about the environmental impact of acquiring the material to be transported by the pipeline and the impact of using the fossil fuels created from this material. Those opposed to the pipeline point out how it will contribute to global warming and pollution.

Those who support the pipeline tend to deny climate change, or accept climate change but deny that humans cause it, or accept that humans cause it but contend that there is nothing we can do that would be effective (mainly because China and other countries will just keep polluting). Another approach is to argue that the economic benefits outweigh any alleged harms.

Proponents of the pipeline claim that it will create a massive number of jobs. Opponents point out that while there will be some job creation when it is built (construction workers will be needed), the number of long term jobs will be very low. The opponents seem to be right—leaving out cleanup jobs, it does not take a lot of people to maintain a modern pipeline. Also, it is not like businesses will open up along the pipeline once it is constructed—it is not like the oil needs hotels or food. It is, of course, true that the pipeline can be a moneymaker for the companies—but it does seem unlikely that this pipeline will have a significant impact on the economy. After all, it would just be one more pipeline among many.

As might be guessed, some of the debate is over the matters of fact discussed above, such as the environmental impact of building or not building the pipeline. Because many of the parties presenting the (alleged) facts have a stake in the matter, this makes getting objective information a bit of a problem. After all, those who have a financial or ideological interest in the pipeline will tend to present numbers that support the pipeline—that it creates many jobs and will not have much negative impact. Those who oppose it will tend to do the opposite—their numbers will tend to tell against the pipeline. This is not to claim that people are lying, but to simply point out the obvious influences of biases.

Even if the factual disputes could be settled, the matter is rather more than a factual disagreement—it is also a dispute over values. Environmental issues are generally political in the United States, with the right usually taking stances for business and against the environment and the left taking pro-environment and anti-business stances. The Keystone XL pipeline is no exception and has, in fact, become a symbol of general issues in regards to the environment and business.

As noted above, those who support the pipeline (with some interesting exceptions) generally reject or downplay the environmental concerns in favor of their ideological leaning. Those that oppose it generally reject or downplay the economic concerns in favor of their ideological leaning.

While I am pro-environment, I do not have a strong rational opposition to the pipeline. The main reasons are that there are already many pipelines, that the absence of the pipeline would not lower fossil fuel consumption, and that companies would most likely expand the use of trains and trucks (which would create more pollution and potentially create greater risks). However, if I were convinced that not having the pipeline would be better than having it, I would certainly change my position.

There is, of course, also the matter of symbolism—that one should fight or support something based on its symbolic value. It could be contended that the pipeline is just such an important symbol and that being pro-environment obligates a person to fight it, regardless of the facts. Likewise, someone who is pro-business would be obligated to support it, regardless of the facts.

While I do appreciate the value of symbols, the idea of supporting or opposing something regardless of the facts strikes me as both irrational and immoral.

 


Ransoms & Hostages

1979 Associated Press photograph showing hosta...

While some countries will pay ransoms to free hostages, the United States has a public policy of not doing this. Thanks to ISIS, the issue of whether or not ransoms should be paid to terrorist groups has returned to the spotlight.

One reason to not pay a ransom for hostages is a matter of principle. This principle could be that bad behavior should not be rewarded or that hostage taking should be punished (or both).

One of the best arguments against paying ransoms for hostages is both a practical and a utilitarian moral argument. The gist of the argument is that paying ransoms gives hostage takers an incentive to take hostages. This incentive will mean that more people will be taken hostage. The cost of not paying is, of course, the possibility that the hostage takers will harm or kill their initial hostages. However, the argument goes, if hostage takers realize that they will not be paid a ransom, they will not have an incentive to take more hostages. This will, presumably, reduce the chances that the hostage takers will take hostages. The calculation is, of course, that the harm done to the existing hostages will be outweighed by the benefits of not having people taken hostage in the future.
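
The utilitarian calculation can be made explicit with a toy expected-value model (every number below is an assumption for illustration; the real dispute is over what the true values are):

```python
# A toy expected-value version of the incentive argument. All numbers
# are assumptions for illustration, not empirical estimates.
def expected_future_hostages(p_ransom_paid, base_rate=1.0, incentive=5.0):
    # If hostage takers expect to be paid, the financial incentive adds
    # hostages; if they expect nothing, only the base rate remains.
    return base_rate + incentive * p_ransom_paid

current_hostages_at_risk = 2  # assumed cost of refusing to pay now

for policy, p_paid in [("always pay", 1.0), ("never pay", 0.0)]:
    harmed_now = 0 if p_paid else current_hostages_at_risk
    total = harmed_now + expected_future_hostages(p_paid)
    print(policy, "-> expected victims:", total)
# "never pay" wins only if the deterred future hostage-takings outweigh
# the current hostages put at risk, which is exactly the calculation above.
```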

This argument assumes, obviously enough, that the hostage takers are primarily motivated by the ransom payment. If they are taking hostages primarily for other reasons, such as for status, to make a statement or to get media attention, then not paying them a ransom will not significantly reduce their incentive to take hostages. This leads to a second reason to not pay ransoms.

In addition to the incentive argument, there is also the funding argument. While a terrorist group might have reasons other than money to take hostages, they certainly benefit from getting such ransoms. The money they receive can be used to fund additional operations, such as taking more hostages. Obviously enough, if ransoms are not paid, then such groups do lose this avenue of funding, which can impact their operations. Since paying a ransom would be funding terrorism, this provides both a moral and a practical reason not to pay ransoms.

While these arguments have a rational appeal, they are typically countered by a more emotional appeal. A stock approach to arguing that ransoms should be paid is the “in their shoes” appeal. The method is very straightforward and simply involves asking a person whether or not she would want a ransom to be paid for her (or a loved one). Not surprisingly, most people would want the ransom to be paid, assuming doing so would save her (or her loved one). Sometimes the appeal is made explicitly in terms of emotions: “how would you feel if your loved one died because the government refuses to pay ransoms?” Obviously, any person would feel awful.

This method does have considerable appeal. The “in their shoes” appeal can be seen as similar to the golden rule approach (do unto others as you would have them do unto you). To be specific, the appeal is not to do unto others, but to base a policy on how one would want to be treated in that situation. If I would not want the policy applied to me (that is, I would want to be ransomed or have my loved one ransomed), then I should be morally opposed to the policy as a matter of consistency. This certainly makes sense: if I would not want a policy applied in my case, then I should (in general) not support that policy.

One obvious counter is that there seems to be a distinction between what a policy should be and whether or not a person would want that policy applied to herself. For example, some universities have a policy that if a student misses more than three classes, the student fails the course. Naturally, no student wants that policy to be applied to her (and most professors would not have wanted it applied to them when they were students), but this hardly suffices to show that the policy is wrong. As another example, a company might have a policy of not providing health insurance to part time employees. While the CEO would certainly not like the policy if she were part time, it does not follow that the policy must be a bad one. As such, policies need to be assessed not just in terms of how a person feels about them, but in terms of their merit or lack thereof.

Another obvious counter is to use the same approach, only with a modification. In response to the question “how would you feel if you were the hostage or she were a loved one?” one could ask “how would you feel if you or a loved one were taken hostage in an operation funded by ransom money?” Or “how would you feel if you or a loved one were taken hostage because the hostage takers learned that people would pay ransoms for hostages?” The answer would be, of course, that one would feel bad about that. However, while how one would feel about this can be useful in discussing the matter, it is not decisive. Settling the matter rationally does require considering more than just how people would feel—it requires looking at the matter with a degree of objectivity. That is, not just asking how people would feel, but what would be right and what would yield the best results in the practical sense.

 


Are Anti-Vaccination People Stupid?

Poster from before the 1979 eradication of smallpox, promoting vaccination. (Photo credit: Wikipedia)

The United States recently saw an outbreak of the measles (644 cases in 27 states) with the overwhelming majority of victims being people who had not been vaccinated. Critics of the anti-vaccination movement have pointed to this as clear proof that the movement is not only misinformed but also actually dangerous. Not surprisingly, those who take the anti-vaccination position are often derided as stupid. After all, there is no evidence that vaccines cause the harms that the anti-vaccination people refer to when justifying their position. For example, one common claim is that vaccines cause autism, but this seems to be clearly untrue. There is also the fact that vaccinations have been rather conclusively shown to prevent diseases (though not perfectly, of course).

It is, of course, tempting for those who disagree with the anti-vaccination people to dismiss them uniformly as stupid people who lack the brains to understand science. This, however, is a mistake. One reason it is a mistake is purely pragmatic: those who are pro-vaccination want the anti-vaccination people to change their minds, and calling them stupid, mocking them, and insulting them will merely cause them to entrench. Another reason it is a mistake is that the anti-vaccination people are not, in general, stupid. There are, in fact, grounds for people to be skeptical or concerned about matters of health and science. To show this, I will briefly present some points of concern.

One point of rational concern is the fact that scientific research has been plagued with a disturbing amount of corruption, fraud and errors. For example, the percentage of scientific articles retracted for fraud is ten times what it was in 1975. Once-lauded studies and theories, such as those driving the promotion of antioxidants and omega-3, have been shown to be riddled with inaccuracies. As such, it is hardly stupid to be concerned that scientific research might not be accurate. Somewhat ironically, the study that started the belief that vaccines cause autism is a paradigm example of bad science. However, it is not stupid to consider that the studies showing vaccines are safe might have flaws as well.

Another matter of concern is the influence of corporate lobbyists on matters relating to health. For example, the dietary guidelines and recommendations set forth by the United States government should be based on the best science. However, the reality is that these matters are influenced quite strongly by industry lobbyists, such as those of the dairy industry. Given the influence of corporate lobbyists, it is not foolish to think that the recommendations and guidelines given by the state might not be quite right.

A third point of concern is the fact that dietary and health guidelines and recommendations undergo what seems to be relentless and unwarranted change. For example, the government warned us of the dangers of cholesterol for decades, but this recommendation is now being changed. It would, of course, be one thing if the changes were the result of steady improvements in knowledge. However, the recommendations often seem to lack a proper foundation. John P.A. Ioannidis, a professor of medicine and statistics at Stanford, has noted, “Almost every single nutrient imaginable has peer reviewed publications associating it with almost any outcome. In this literature of epidemic proportions, how many results are correct?” Given such criticism from experts in the field, it hardly seems stupid of people to have doubts and concerns.

There is also the fact that people do suffer adverse drug reactions that can lead to serious medical issues and even death. While the reported numbers vary (one FDA page puts the number of deaths at 100,000 per year), this is certainly a matter of concern. In an interesting coincidence, I was thinking about this essay while watching the Daily Show on Hulu this morning, and one of my “ad experiences” was for Januvia, a diabetes drug. As required by law, the ad mentioned the side effects of the drug, and these included some rather serious things, up to and including death. Given that the FDA has approved drugs with dangerous side effects, it is hardly stupid to be concerned about the potential side effects of any medicine or vaccine.

Given the above points, it would certainly not be stupid to be concerned about vaccines. At this point, the reader might suspect that I am about to defend an anti-vaccine position. I will not—in fact, I am a pro-vaccination person. This might seem somewhat surprising given the points I just made. However, I can rationally reconcile these points with my position on vaccines.

The above points do show that there are rational grounds for taking a generally critical and skeptical approach to matters of health, medicine and science. However, this general skepticism needs to be properly rational. That is, it should not be a rejection of science but rather the adoption of a critical approach to these matters in which one considers the best available evidence, assesses experts by the proper standards (those of a good argument from authority), and so on. Also, it is rather important to note that this general skepticism does not automatically justify accepting or rejecting specific claims. For example, the fact that there have been flawed studies does not prove that the specific studies about vaccines are flawed. As another example, the fact that lobbyists influence the dietary recommendations does not prove that vaccines are harmful drugs being pushed on Americans by greedy corporations. As a final example, the fact that some medicines have serious and dangerous side effects does not prove that the measles vaccine is dangerous or causes autism. Just as one should be rationally skeptical about pro-vaccination claims, one should also be rationally skeptical about anti-vaccination claims.

To use an obvious analogy, it is rational to have a general skepticism about the honesty and goodness of people. After all, people do lie and there are bad people. However, this general skepticism does not automatically prove that a specific person is dishonest or evil—that is a matter that must be addressed on the individual level.

To use another analogy, it is rational to have a general concern about engineering. After all, there have been plenty of engineering disasters. However, this general concern does not warrant believing that a specific engineering project is defective or that engineering itself is defective. The specific project would need to be examined and engineering is, in general, the most rational approach to building stuff.

So, the people who are anti-vaccine are not, in general, stupid. However, they do seem to be making the mistake of not rationally considering the specific vaccines and the evidence for their safety and efficacy. It is quite rational to be concerned about medicine in general, just as it is rational to be concerned about the honesty of people in general. However, just as one should not infer that a friend is a liar because there are people who lie, one should not infer that a vaccine must be bad because there is bad science and bad medicine.

Convincing anti-vaccination people to accept vaccination is certainly challenging. One reason is that the issue has become politicized into a battle of values and identity. This is partially due to the fact that the anti-vaccine people have been mocked and attacked, thus leading them to entrench and double down. Another reason is that, as argued above, they do have well-founded concerns about the trustworthiness of the state, the accuracy of scientific studies, and the goodness of corporations. A third reason is that people tend to give more weight to the negative and also tend to weigh potential loss more than potential gain. As such, people would tend to give more weight to negative reasons against vaccines and fear the alleged dangers of vaccines more than they would value their benefits.
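
The loss-aversion point can be made a bit more concrete. What follows is a purely illustrative sketch, not anything from vaccine research: it uses a prospect-theory-style value function in the spirit of Kahneman and Tversky, with commonly cited parameter estimates, and the stakes are numbers I have simply invented to show why an equal-sized gain and loss end up feeling unequal.

```python
# A toy illustration of loss aversion, using a prospect-theory-style
# value function (after Kahneman & Tversky). The parameters are commonly
# cited estimates; the stakes below are invented for illustration only.

def subjective_value(outcome, loss_aversion=2.25, curvature=0.88):
    """Perceived value of a gain (+) or a loss (-); losses loom larger."""
    if outcome >= 0:
        return outcome ** curvature
    return -loss_aversion * ((-outcome) ** curvature)

# Hypothetical, equal-sized stakes: the benefit of immunity versus an
# alleged harm of the same objective size (alleged only; notional here).
print(subjective_value(10))   # the gain feels like roughly  7.6
print(subjective_value(-10))  # the loss feels like roughly -17.1
# The loss looms more than twice as large as the equal-sized gain.
```

Nothing in this sketch shows that any particular fear is justified; it merely illustrates why feared harms can outweigh real benefits of the same size in people’s deliberations.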

Given the importance of vaccinations, it is rather critical that the anti-vaccination movement be addressed. Calling people stupid, mocking them and attacking them are certainly not effective ways of convincing people that vaccines are generally safe and effective. A more rational and hopefully more effective approach is to address their legitimate concerns and consider their fears. After all, the goal should be the health of people and not scoring points.

 


Augmented Soldier Ethics III: Pharmaceuticals

Steve Rogers’ physical transformation, from a reprint of Captain America Comics #1 (May 1941). Art by Joe Simon and Jack Kirby. (Photo credit: Wikipedia)

Humans have many limitations that make them less than ideal as weapons of war. For example, we get tired and need sleep. As such, it is no surprise that militaries have sought various ways to augment humans to counter these weaknesses. For example, militaries routinely make use of caffeine and amphetamines to keep their soldiers awake and alert, and there have also been experiments with other performance-enhancing drugs.

In science fiction, militaries go far beyond these sorts of drugs and develop far more potent pharmaceuticals. These chemicals tend to split into two broad categories. The first consists of short-term enhancements (what gamers refer to as “buffs”) that address a human weakness or provide augmented abilities. In the real world, the above-mentioned caffeine and amphetamines are short-term drugs. In fiction, the classic sci-fi role-playing game Traveller featured the aptly (though generically) named combat drug. This drug would boost the user’s strength and endurance for about ten minutes. Other fictional drugs have far more dramatic effects, such as the Venom drug used by the super villain Bane. Given that militaries already use short-term enhancers, it is certainly reasonable to think they are and will be interested in more advanced enhancers of the sort considered in science fiction.

The second category is that of the long-term enhancers. These are chemicals that enable or provide long-lasting effects. An obvious real-world example is steroids: these allow the user to develop greater muscle mass and increased strength. In fiction, the most famous example is probably the super-soldier serum that was used to transform Steve Rogers into Captain America.

Since the advantages of improved soldiers are obvious, it seems reasonable to think that militaries would be rather interested in the development of effective (and safe) long-term enhancers. It does, of course, seem unlikely that there will be a super-soldier serum in the near future, but chemicals aimed at improving attention span, alertness, memory, intelligence, endurance, pain tolerance and such would be of great interest to militaries.

As might be suspected, these chemical enhancers do raise moral concerns that are certainly worth considering. While some might see discussing enhancers that do not yet (as far as we know) exist as a waste of time, there does seem to be a real advantage in considering ethical issues in advance—this is analogous to planning for a problem before it happens rather than waiting for it to occur and then dealing with it.

One obvious point of concern, especially given the record of unethical experimentation, is that enhancers will be used on soldiers without their informed consent. Since this is a general issue, I addressed it in its own essay and reached the obvious conclusion: in general, informed consent is morally required. As such, the following discussion assumes that the soldiers using the enhancers have been honestly informed of the nature of the enhancers and have given their consent.

When discussing the ethics of enhancers, it might be useful to consider real world cases in which enhancers are used. One obvious example is that of professional sports. While Major League Baseball has seen many cases of athletes using such enhancers, they are used worldwide and in many sports, from running to gymnastics. In the case of sports, one of the main reasons certain enhancers, such as steroids, are considered unethical is that they provide the athlete with an unfair advantage.

While this is a legitimate concern in sports, it does not apply to war. After all, there is no moral requirement for a fair competition in battle. Rather, one important goal is to gain every advantage over the enemy in order to win. As such, the fact that enhancers would provide an “unfair” advantage in war does not make them immoral. One can, of course, discuss the relative morality of the sides involved in the war, but this is another matter.

A second reason why the use of enhancers is regarded as wrong in sports is that they typically have rather harmful side effects. Steroids, for example, do rather awful things to the human body and brain. Given that even aspirin has potentially harmful side effects, it seems rather likely that military-grade enhancers will have various harmful side effects. These might include addiction, psychological issues, organ damage, death, and perhaps even new side effects yet to be observed in medicine. Given the potential for harm, a rather obvious way to approach the ethics of this matter is utilitarianism. That is, the benefits of the enhancers would need to be weighed against the harm caused by their use.

This assessment could be done with a narrow limit: the harms of the enhancer could be weighed against the benefits provided to the soldier. For example, an enhancer that boosted a combat pilot’s alertness and significantly increased her reaction speed while having the potential to cause short-term insomnia and diarrhea would seem to be morally (and pragmatically) fine given the relatively low harms for significant gains. As another example, a drug that greatly boosted a soldier’s long-term endurance while creating a significant risk of a stroke or heart attack would seem to be morally and pragmatically problematic.

The assessment could also be done more broadly by taking into account ever-wider considerations. For example, the harms of an enhancer could be weighed against the importance of a specific mission and the contribution the enhancer would make to the success of the mission. So, if a powerful drug with terrible side effects were critical to an important mission, its use could be morally justified in the same way that taking any risk for such an objective can be justified. As another example, the harms of an enhancer could be weighed against the contribution its general use would make to the war. So, a drug that increased the effectiveness of soldiers, yet cut their life expectancy, could be justified by its ability to shorten a war. As a final example, there is also the broader moral concern about the ethics of the conflict itself. So, the use of a dangerous enhancer by soldiers fighting for a morally good cause could be justified by that cause (using the notion that the consequences justify the means).
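
To make the sort of weighing described above a bit more concrete, here is a minimal sketch of the narrow utilitarian assessment. Every probability and utility value in it is invented for illustration; assigning real numbers to harms and benefits is itself a hard and contested step, so this is a toy model of the reasoning, not a serious decision procedure.

```python
# A minimal sketch of weighing an enhancer's benefits against its harms.
# All probabilities and utilities here are invented for illustration.

def expected_utility(outcomes):
    """Sum probability-weighted utilities over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Narrow assessment: a hypothetical alertness enhancer for a combat pilot.
benefits = [(0.9, 50)]               # likely, significant performance gain
harms = [(0.3, -5), (0.01, -100)]    # common minor side effect; rare severe one

net = expected_utility(benefits) + expected_utility(harms)
print(net)  # 42.5: on these made-up numbers, the benefits outweigh the harms
```

A broader assessment would simply add further terms to the same sum (the value of the mission, the contribution to the war, and so on), which is exactly where the hard moral questions arise: what counts as a benefit or harm, and how heavily each should weigh.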

There are, of course, those who reject using utilitarian calculations as the basis for moral assessment. For example, there are those who believe (often on religious grounds) that the use of pharmaceuticals is always wrong (be they used for enhancement, recreation or treatment). Obviously enough, if the use of pharmaceuticals is wrong in general, then their specific application in the military context would also be wrong. The challenge is, of course, to show that the use of pharmaceuticals is simply wrong, regardless of the consequences.

In general, it would seem that the military use of enhancers should be assessed morally on utilitarian grounds, weighing the benefits of the enhancers against the harm done to the soldiers.

 
