Tag Archives: Death

Philosophy & My Old Husky V: Goodbye Good Girl

Isis, my husky, joined the pack in 2004. She was a year old and her soul was filled with a wildness and a love of destruction. I channeled that wildness into running and that (mostly) took care of her love of destruction as well. We ran together for years, until she could no longer run. Then we walked on our adventures—a stately saunter rather than a mad dash. One day in March 2016, she collapsed and I thought that was the end. Steroids granted her a reprieve, however, and our adventures continued. But time ends all things.

As the months went by, she hit a plateau of recovery and then began a decline. She could not walk as far, she had to be supported while doing her business and she was sometimes confused about where she was. This worsened as November progressed—she required ever more support, walked ever less distance, and had trouble distinguishing between the outside and inside of the house. Since she was my dog and I was her human, I accepted all this. I stocked up on carpet cleaner and ran the steam cleaner regularly. Since she could not handle the smooth floors, I put down yoga mats for her—I had tried carpet runners, but they drink up the urine. Yoga mats can be hosed off, dried and put back in place.

Though she suffered a physical and mental decline, her will remained unimpaired. When she decided that she wanted to walk someplace, she would struggle with her weakened legs and force her way through vegetation and up hills. If she could not make it up a hill on her own, she would turn her head to look at me and would not move again until I supported her and allowed her to power up that hill. She had the spirit of a true runner: never giving up in the face of a challenge. In the face of time, however, will and love are not enough.

She suffered a sudden decline and completely lost her ability to walk. I would carry her to do her business, but even with my support she had great difficulty. On November 22, things got even worse and neither of us slept that night. I wanted her to make it through Thanksgiving (she loved turkey), but on the morning of the 23rd I saw the pain in her eyes and knew what had to be done. Courtney, a friend of mine from Maine, had sent us some Christmas dog bones and a dog toy. I unwrapped those and hand fed her, placing the toy between her paws. After we had our early Christmas, I carried her to the truck and drove to Oakwood Animal Hospital. While no one really knows what is in the heart of another, I could tell that she had absolute trust in me as I carried her into the office. She knew that I would, as I have always done, do the right thing for her.

Her regular vet was on duty and, after we talked, Isis was put on an IV. As the vet, vet tech and I comforted her and cried, she passed away gently and peacefully. This was the hardest decision of my life, choosing the death of my friend.

Since I teach ethics, I have thought a great deal about this sort of decision. But the theoretical context of the classroom is rather different from the harsh reality of deciding whether your friend should keep living. While some doubt the usefulness of philosophy, thinking about this matter proved to be very helpful and even comforting in making the decision.

While people are said to own dogs, I never saw our relationship as a matter of owning property. Rather, we had reached a mutual understanding and formed a team. Huskies are supervillains when it comes to escape, so they can (and do) end their relationships with humans when they wish. By accepting her, I took on many moral responsibilities. Some of these are analogous to those I have to my human friends; others are more analogous to those of a parent to a child. These included the usual obligations of keeping her healthy and safe, but they also included the obligation to ensure her wellbeing and happiness.

When she collapsed in March, I had to make the decision whether to try treatment or let her go then. While she was suffering, the medical evidence indicated that she had a chance to recover. Knowing her stubborn will, I believed that she would want to take that chance and power through the pain. I could not be certain of what she wanted; but I went with what I thought she would want. It turned out it was the right call; she recovered and returned to enjoying life.

As I got to know her, I learned that she had a look that meant “I need you to do something for me.” In the past, this usually meant playing with her, getting her a snack or letting her into the backyard to menace the lesser creatures (to a husky, almost all other creatures are lesser).  These things made her happy, and I was pleased to oblige—after all, I had a moral responsibility to her wellbeing because she was my dog and I was her human.

When she had declined to her worst, she stared at me intently with that look. Since she could not talk, she could not say what she wanted. She, I believed, wanted an end to her pain. I might just think that to feel better about my decision—perhaps she was doing nothing of the sort. But I knew that to keep her alive and suffering would not be to act for her wellbeing or happiness. Medicine is quite good these days; I probably could have kept her going a few months more with painkillers and other medications. But that would be a dull and drugged life, not a life suitable for a soul so full of wildness and a love of destruction. I wanted her to end as my beloved wolf and not dissipate to nothing in a sea of pharmaceuticals. So, I said goodbye to my good girl.


Policebots


Peaceful protest is an integral part of the American political system. Sadly, murder is also an integral part of our society. The two collided in Dallas, Texas: after a peaceful protest, five police officers were murdered. While some might see it as ironic that the police rushed to protect the people protesting police violence, this actually serves as a reminder of how the police are supposed to function in a democratic society. This stands in stark contrast with the unnecessary deaths inflicted on citizens by bad officers—deaths that have given rise to many protests.

While violence and protests are both subjects worthy of in-depth discussion, my focus will be on the ethical questions raised by the use of a robot to deliver the explosive device that was used to kill one of the attackers. While this matter has been addressed by philosophers more famous than I, I thought it worthwhile to say a bit about it.

While the police robot is called a robot, it is more accurate to describe it as a remotely operated vehicle. After all, the term “robot” is often taken as implying autonomy on the part of the machine. The police robot is remote controlled, like a sophisticated version of a remote-controlled toy. In fact, a similar result could have been obtained by putting an explosive charge on a robust enough RC toy and rolling it within range of the target.

Since there is a human operator directly controlling the machine, it would seem that the ethics of the matter are the same as if more conventional machines of death (such as rifles or handguns) had been used to kill the shooter. On the face of it, the only difference is in how the situation is seen: a killer robot delivering a bomb sounds more ominous and controversial than an officer using a firearm. The use of remote controlled vehicles to kill targets is obviously nothing new—the basic technology has been around since at least WWII and the United States has killed many people with our drones.

If this had been the first case of an autonomous police robot sent to kill (like an ED-209), then the issue would be rather different. However, it is reasonable enough to regard this as the same old ethics of killing, only with a slight twist in regards to the delivery system. That said, it can be argued that the use of a remote controlled machine does add a new moral twist.

Keith Abney has raised a very reasonable point: if a robot could be sent to kill a target, it could also be sent to use non-lethal force to subdue the target. In the case of human officers, the usual moral justification of lethal force is that it is the best option for protecting themselves and others from a threat. If the threat presented by a suspect can be effectively addressed in a non-lethal manner, then that is the option that should be used. The moral foundation for this is set by the role of police in society: they are to protect the public and are expected to make every legitimate effort to deliver suspects for trial in the criminal justice system. They are not supposed to function as soldiers engaging an enemy that is to be defeated—they are supposed to function as agents of the criminal justice system. There are, of course, cases in which suspects cannot be safely captured—these are situations in which the use of deadly force is justified, usually by an imminent threat to the officer or citizens. A robot (or, more accurately, a remote-controlled machine) can radically change the equation.

While a police robot is an expensive piece of hardware, it is not a human being (or even an artificial being). As such, it only has the moral status of property. In contrast, even the worst human criminal is a human being and thus has a moral status above that of a mere object. As such, if a robot is sent to engage a human suspect, then in many circumstances there would be no moral justification for using lethal force. After all, the officer operating the machine is in no danger as she steers the robot towards the target. This should change the ethics of the use of force to match other cases in which a suspect needs to be subdued, but presents no danger to the officer attempting arrest. In such cases, the machine should be outfitted with less-than-lethal options.

While television and movies make safely disabling a human seem easy enough, it is actually rather challenging. For example, a rifle butt to the head is often portrayed as safely knocking a person out, when in reality it would cause serious injury or even death. Tasers, gas weapons and rubber bullets can also cause injury or death. However, the less-than-lethal options are less likely to kill a suspect and thus allow her to be captured for trial—which is the point of law enforcement. Robots could, as they often are in science fiction, be designed to withstand gunfire and physically grab a suspect. While this is likely to result in injury (such as broken bones) and could kill, it would be far less likely to kill than a bomb. An excellent example of a situation in which a robot would be ideal is capturing an armed suspect barricaded in his house or apartment.

It must be noted that there will be cases in which the use of lethal force via a robot is justified. These would include cases in which the suspect presents a clear and present danger to officers or civilians and the best chance of ending the threat is the use of such force. An example of this might be a hostage situation in which the hostage taker is likely to kill hostages while the robot is trying to subdue him with less-than-lethal force.

While police robots have long been the stuff of science fiction, they do present a potential technological solution to the moral and practical problem of keeping officers and suspects alive. While an officer might be legitimately reluctant to stake her life on less-than-lethal options when directly engaged with a suspect, an officer operating a robot faces no such risk. As such, if the deployment of less-than-lethal options via a robot would not put the public at unnecessary risk, then it would be morally right to use such means.


Pain, Pills & Will


There are many ways to die, but the public concern tends to focus on whatever is illuminated in the media spotlight. 2012 saw considerable focus on guns and some modest attention to a somewhat unexpected and perhaps ironic killer, namely pain medication. In the United States, about 20,000 people die each year (roughly one every half hour) due to pain medication. This typically occurs from what is called “stacking”: a person will take multiple pain medications and sometimes add alcohol to the mix, resulting in death. While some people might elect to use this as a method of suicide, most of the deaths appear to be accidental—that is, the person had no intention of ending his life.

The number of deaths is so high in part because of the volume of painkillers being consumed in the United States. Americans consume 80% of the world’s painkillers, and consumption jumped 600% from 1997 to 2007. Of course, one rather important matter is why there is such excessive consumption of pain pills.

One reason is that doctors have been complicit in the increased use of pain medications. While there have been some efforts to cut back on prescribing pain medication, medical professionals were generally willing to write prescriptions for pain medication even in cases when such medicine was not medically necessary. This is similar to the over-prescribing of antibiotics that has come back to haunt us with drug resistant strains of bacteria. In some cases doctors no doubt simply prescribed the drugs to appease patients. In other cases profit was perhaps a motive. Fortunately, there have been serious efforts to address this matter in the medical community.

A second reason is that pharmaceutical companies did a good job selling their pain medications and encouraged doctors to prescribe them and patients to use them. While the industry had no intention of killing its customers, the pushing of pain medication has had that effect.

Of course, the doctors and pharmaceutical companies do not bear the main blame. While the companies supplied the product and the doctors provided the prescriptions, the patients had to want the drugs and use the drugs in order for this problem to reach the level of an epidemic.

The main causal factor would seem to be that the American attitude towards pain changed and resulted in the above-mentioned 600% increase in the consumption of painkillers. In the past, Americans seemed more willing to tolerate pain and less willing to use heavy-duty pain medications to treat relatively minor pains. These attitudes changed and now Americans are generally less willing to tolerate pain and more willing to turn to prescription painkillers. I regard this as a moral failing on the part of Americans.

As an athlete, I am no stranger to pain. I have suffered the usual assortment of injuries that go along with being a competitive runner and a martial artist. I also received some advanced education in pain when a fall tore my quadriceps tendon. As might be imagined, I have received numerous prescriptions for pain medication. However, I have used pain medications incredibly sparingly and, if I do get a prescription filled, I usually end up properly disposing of the vast majority of the medication. I do admit that I made use of pain medication when recovering from my tendon tear—the surgery involved a seven-inch incision in my leg that cut down until the tendon was exposed. The doctor had to retrieve the tendon, drill holes through my kneecap to re-attach the tendon and then close the incision. As might be imagined, this was a source of considerable pain. However, I only used the pain medicine when I needed to sleep at night—I found that the pain tended to keep me awake at first. Some people did ask me if I had any problem resisting the lure of the pain medication (and a few people, jokingly I hope, asked for my extras). I had no trouble at all. Naturally, given that so many people are abusing pain medication, I did wonder about the differences between myself and my fellows who are hooked on pain medication—sometimes to the point of death.

A key part of the explanation is my system of values. When I was a kid, I was rather weak in regards to pain. I infer this is true of most people. However, my father and others endeavored to teach me that a boy should be tough in the face of pain. When I started running, I learned a lot about pain (I first started running in basketball shoes and got huge, bleeding blisters). My main lesson was that an athlete did not let pain defeat him and certainly did not let down the team just because something hurt. When I started martial arts, I learned a lot more about pain and how to endure it. This training instilled in me the belief that one should endure pain and that to give in to it would be dishonorable and wrong. This also includes the idea that the use of painkillers is undesirable. This was balanced by an accompanying belief, namely that a person should not needlessly injure his body. As might be suspected, I learned to distinguish between mere pain and actual damage occurring to my body.

Of course, the above just explains why I believe what I do—it does not serve to provide a moral argument for enduring pain and resisting the abuse of pain medication. What is wanted are reasons to think that my view is morally commendable and that the alternative is to be condemned. Not surprisingly, I will turn to Aristotle here.

Following Aristotle, one becomes better able to endure pain by habituation. In my case, running and martial arts built my tolerance for pain, allowing me to handle the pain ever more effectively, both mentally and physically. Because of this, when I fell from my roof and tore my quadriceps tendon, I was able to drive myself to the doctor—I had one working leg, which is all I needed. This ability to endure pain also serves me well in lesser situations, such as racing, enduring committee meetings and grading papers.

This, of course, provides a practical reason to learn to endure pain—a person is much more capable of facing problems involving pain when she is properly trained in the matter. Someone who lacks this training and ability will be at a disadvantage when facing situations involving pain, and this could prove harmful or even fatal. Naturally, a person who relies on pain medication to deal with pain will not be training herself to endure. Rather, she will be training herself to give in to pain and become dependent on medication that will become increasingly ineffective. In fact, some people end up becoming even more sensitive to pain because of their pain medication.

From a moral standpoint, a person who does not learn to endure pain properly and instead turns unnecessarily to pain medication is doing harm to himself and this can even lead to an untimely death. Naturally, as Aristotle would argue, there is also an excess when it comes to dealing with pain: a person who forces herself to endure pain beyond her limits or when doing so causes actual damage is not acting wisely or virtuously, but self-destructively. This can be used in a utilitarian argument to establish the wrongness of relying on pain medication unnecessarily as well as the wrongness of enduring pain stupidly. Obviously, it can also be used in the context of virtue theory: a person who turns to medication too quickly is defective in terms of deficiency; one who harms herself by suffering beyond the point of reason is defective in terms of excess.

Currently, Americans are, in general, suffering from a moral deficiency in regards to the matter of pain tolerance and it is killing us at an alarming rate. As might be suspected, there have been attempts to address the matter through laws and regulations regarding pain medication prescriptions. This supplies people with a will surrogate—if a person cannot get pain medication, then she will have to endure the pain. Of course, people are rather adept at getting drugs illegally and hence such laws and regulations are of limited effectiveness.

What is also needed is a change in values. As noted above, Americans are generally less willing to tolerate even minor pains and generally more willing to turn to powerful pain medication. Since this was not always the case, it seems clear that this could be changed via proper training and values. What people need is, as discussed in an earlier essay, training of the will to endure pain that should be endured and to resist the easy fix of medication.

In closing, I am obligated to add that there are cases in which the use of pain medication is legitimate. After all, the body and will are not limitless in their capacities and there are times when pain should be killed rather than endured. Obvious cases include severe injuries and illnesses. The challenge, then, is sorting out what pain should be endured and what should not. Since I am a crazy runner, I tend to err on the side of enduring pain—sometimes foolishly so. As such, I would probably not be the best person to address this matter.


Proving Heaven


I have always included a section on the afterlife in my Introduction to Philosophy class. As a bit of grim humor, I tell my students that this is one philosophical problem that has a definite answer—unfortunately, getting that answer requires dying.

Not surprisingly, students often point to examples of experiences in which people are technically dead, but are restored to life. People who survive these encounters with death often speak of strange experiences that they sometimes take as evidence for the afterlife.

One of the best publicized examples of this is the case of Dr. Eben Alexander, a Harvard neurosurgeon. After being put into a coma by bacterial meningitis, he had a death and revival experience which he has extensively publicized. He has also written up his experience as a book, the aptly named Proof of Heaven.

While Dr. Alexander’s case was given extensive media coverage because he is a Harvard neurosurgeon, his case is otherwise not significantly different from other such cases and can be assessed as they have been assessed. Naturally, it is worth noting that his medical training does give him credibility as an expert on neurosurgery. However, as an observer of the afterlife he would seem to be no more (or less) of an expert than anyone else. That is, his expertise in neurosurgery would not seem to apply to metaphysical experiences of the sort alleged to have occurred.

One stock criticism of the near-death experience is that a person who is revived is not properly dead. After all, they are revived shortly after death rather than resurrected or raised from the dead. As such, there is the rather legitimate question of whether or not they are even dead in a manner that would allow them to experience an afterlife, should it exist. They might just be “mostly dead” rather than “properly dead” and hence any experiences they have would not be experiences of the afterlife.

A second stock criticism is that the person who reports on near-death experiences is not experiencing an afterlife, but is in a state of dreaming or hallucination that is mistaken for the afterlife on the basis that they were “mostly dead.” Critics routinely point to the similarities between near-death experiences and drug experiences, and the case of Dr. Alexander is no exception. It certainly makes sense that a dying brain would produce dream-like or drug-like experiences that have no connection to the afterlife.

The cutting edge of these criticisms is to be found in Occam’s razor: the experiences can be explained adequately without postulating a metaphysical afterlife. As such, the explanation that the experiences are occurring within a dying (but still living) brain is the better explanation.

Aside from Dr. Alexander’s fame, there seems to be no real difference between his experiences and those reported by many other people before him. Given that these cases do not provide proof of heaven, neither does his.

Naturally, I would like to believe in the sort of wonderful afterlife claimed by Dr. Alexander. However, wishful thinking is not proof.


Close Encounters of the Cancer Kind: Is Philosophy a Preparation for Death?

There is nothing like a diagnosis of stage four inoperable lung cancer with bone metastases to give one a shock. I have known since I took logic as a young man that “Human beings are mortal. Socrates is a human being. Therefore, Socrates is mortal.” However, I was not Socrates, and as far as I was concerned that syllogism was just an example of a valid argument. But when you put your own name in place of “Socrates,” things look very different. Now I am an oldish philosopher (67), and suddenly my own death in the fairly near future has become a real possibility. Mortality approaches.

I know that philosophers concern themselves mostly with abstract and very general questions in epistemology, metaphysics, logic, ethics, etc. By and large they do not approach philosophical questions from a personal perspective. Even death can be approached as an intellectual or conceptual problem. However, when Santa gave me my cancer diagnosis for Christmas 2011, abstract philosophy and my personal experience unavoidably came together. I now wonder if I can write in a very personal way about the universal truth that we are all going to die, what this means, and whether there is anything of general import that I can express about what is happening in my own case. This breaks some common views of what philosophy is, but I do not have time to care about that now. So I am addressing you from a personal perspective, from my frame of life, and I ask your indulgence.

Let me state my tentative conclusion at the start. I do feel that having studied philosophy seriously for 46 years allowed me to keep calm when the doctor gave me my diagnosis after a routine CT scan. For a second, I sat there feeling nothing at all. However, the next thought that came to me was gratitude for the life I have lived. Maybe other people do not feel this. Kübler-Ross famously discusses five stages of grief and loss: denial, anger, bargaining, depression, and acceptance. I seemed to skip the first four. This is not to say that I instantly reached acceptance, but I did come first to gratitude. Now, after six months of living with lung cancer, I am trying to understand what acceptance of death may amount to.

Each of us can only judge and describe the world from our own time frame. If I had been much younger, my response to the diagnosis might have conformed more to Dr. Ross’s formula. The world looks very different at different stages of life. Nevertheless, how one has looked, thought, and felt about life and death throughout one’s life has to make a difference at the end. In my case, the lens through which I have considered life has always been philosophical. Snatches of philosophical thoughts have lodged in my mind since I was young. These are like seeds that took root deep in my mind and have matured and grown over the years. Now I feel that they are bearing fruit, helping me to live a new and deeper life. One nugget stands out to complete this first meditation on life and death.

Plato famously stated that “Philosophy is a preparation for death.” The Greek word that Plato uses for ‘preparation’ is ‘Melete’ and the root meaning is ‘care’ or ‘attention’. It can also mean ‘meditation,’ ‘practice’ or ‘exercise’. So are philosophers supposed to ‘practice’ dying, or simply to recollect the fact of mortality as they live their lives? What difference will that make?

I confess a great love of Plato and his amazing Socrates. However, I cannot go along with his tentative conclusions. We know what Socrates argues in the Phaedo. The reason that practicing philosophy is a preparation for death is that Socrates believes that the soul and the body are separable, that the soul is immortal, and that a very different afterlife awaits those who have lived a good or evil life. Therefore, it behooves us to separate our own soul from our body as much as possible while we live and to detach ourselves from the preoccupations of mundane life.

The reason that I admire Socrates in the Phaedo is that after giving his ‘proofs’ of the immortality of the soul, he has the greatness to admit that his arguments are only the reasons he personally accepts to advance his position. He does not claim that they absolutely prove the soul is immortal. It is a postulate of Socrates’ practical metaphysics. In fact, he says that if he is wrong, and death is total extinction, then he will never know he is wrong, and his folly will be buried with him.

So in what sense can the study of philosophy be a preparation for death if one does not accept metaphysical dualism? I do not accept any such thing, but I still feel that my study of philosophy has helped me prepare for my present state. Does this mean that the study of any topic in philosophy will have this effect? I do not think so. I am not at all sure that one would prepare for death very well by spending 40 years working in the salt-mines of post-Gettier epistemology, nor in picking over all the convoluted arguments in mereology and inductive logic.

To see how the study of philosophy might be of value in preparing to die, we have to go back to the root meaning of ‘philosophy’ as the ‘love of wisdom’. Wisdom is not a topic that comes up very much in contemporary philosophy. It was more to the fore in the ancient world, where wisdom, ethics, and the question of living a good human life were brought together in a philosophical approach to living. For me, loving wisdom has to do with taking up the largest possible perspective in which to live one’s life, going all the way back to the Big Bang, including all of space and time, the natural history of the universe, the geology of the earth, and the total history of animals and human beings on this planet spinning through a gigantic universe. It covers all the natural cycles of life and death and sees everything as part of this comprehensive whole. Somehow, living in this context has helped me see life and death as part of a seamless process. Death shadows life as naturally as the shadow one casts on the ground on a sunny day. There is no point in denying it, and no point in worrying about it. Perhaps acceptance lies in this direction.

Responsibility & Suicide

Tyler Clementi, a student at Rutgers, committed suicide after his roommate Dharun Ravi and another student, Molly Wei, allegedly posted a video of Clementi’s sexual encounter with another male.

Ravi and Wei have been charged with invasion of privacy and not with Clementi’s death. From a legal standpoint, this is to be expected. After all, establishing a legal causal link between the release of the video and his death would be rather difficult.

My main interest in the matter is not the legal aspect of the case, but rather the moral aspect. That is, the degree to which Ravi and Wei might be morally responsible for Clementi’s death. I am qualifying this because Ravi and Wei have not been convicted and hence are merely accused of the crime at this point. This is an example of the broader matter of the responsibility a person has for actions that others take based on his own actions. In the specific case at hand, the problem is determining to what degree those involved in the distribution of the video are responsible for Clementi’s death.

While the matter of legal responsibility is distinct from that of ethical responsibility, the legal theory of causation does have some use here. I am, obviously enough, availing myself of the notion of conditio sine qua non (“a condition without which nothing”) as developed by H.L.A. Hart and A.M. Honore.

Roughly put, this is the “but for” view of causation: X can be seen as the cause of Y if Y would not have happened but for X. This seems like a reasonable place to begin for moral responsibility. After all, if someone would not have died but for my actions (that is, if I had not done X, then the person would still be alive), then there seems to be an intuitively plausible reason to think that I am responsible for the person’s death.

If Wei and Ravi did, in fact, post the video in question and Clementi did, in fact, kill himself because of the video being posted, then it seems likely that Clementi would be alive today but for the posting. As such, Wei and Ravi would seem to be (potentially) responsible for his death and thus morally culpable.

However, there are clearly degrees of culpability. While the video being posted might have been a causal factor in the suicide, the causal link is far weaker than it would have been if, say, the accused had pushed Clementi off the bridge. Also, merely playing a causal role is not enough to ground moral (or legal) accountability. To use an obvious example, the video could not have been made without cameras. However, to hold the maker of the cameras responsible would be absurd. What is needed is, obviously enough, a degree and kind of causal role that grounds moral responsibility.

One obvious reason as to why the accused have only a degree of culpability is that suicide is a matter of choice. While this choice was probably influenced by the release of the video, Clementi would not be dead if he had not decided to kill himself (assuming he did so). This would certainly seem to reduce the moral responsibility of the two people who allegedly posted the video. In contrast, if the two people had pushed him from the bridge against his will, then he would have no morally significant causal role in his own death and the moral responsibility would be fully upon them.

It might be argued that the two people who allegedly posted the video should have known what was going to happen and hence this makes them more responsible for the death. However, this seems implausible. It is reasonable to expect that a person would be outraged by such a posting or perhaps even horribly embarrassed. As such, they can be accused of invading his privacy and even of acting with an intent to create emotional harm. However, since suicide is not a likely reaction to such an action, those who posted it cannot reasonably be expected to have believed that Clementi would kill himself.

To use an analogy, while people should not throw snowballs at other people, a person who throws one generally cannot be taken as throwing the snowball with an intent to kill. After all, snowballs generally do not do that. Naturally, the snowball analogy is not a perfect fit: if someone is killed by a thrown snowball, then the causal connection in the death is much stronger than in the case of a video that allegedly contributed to a suicide.

Of course, if the accused did know that Clementi was likely to respond by committing suicide, then the matter changes. To use the snowball analogy, if someone throws a snowball at a person who is likely to die from being hit by one, then they can reasonably be regarded as intending to cause that person’s death, or at least as not being overly concerned with that possibility. Of course, this analogy breaks down at a certain point: after all, suicide is a chosen behavior and dying when hit with a snowball is not. As such, even if the accused did know that Clementi was likely to kill himself, his death would still ultimately be a matter of his own choice. This factor of choice seems to be rather morally significant in the matter at hand.

Overall, it seems clear that creating and posting such a video was wrong. However, it also seems clear that the moral culpability of the accused is very limited in regards to the suicide. At most, the actions of the accused could be seen as a contributory cause in regards to the motivation to commit suicide.


Eating the Happy Dead


In my previous post I mentioned that reading an article in Newsweek entitled “Vegetarians Who Eat Meat” got me thinking about two issues. The first is whether a person can be a vegetarian and also eat meat. The second is whether the way the meat animal is raised impacts the morality of eating it. I addressed the first issue in that post and I now turn to the second.

Some folks who were (or still claim to be) vegetarians have returned to eating meat and justify their consumption by making a moral argument. The gist of the argument is that the morality of eating meat rests not on the eating of meat but on how the animal was treated prior to becoming meat. To be more specific, the idea is that if the animal is lovingly raised in an environmentally sustainable way, then the consumption of its dead flesh is morally acceptable. In contrast, eating meat raised in the usual way (such as factory farming) is not acceptable.

There does seem to be some merit to this argument. If it is assumed that the unhappiness and happiness of animals matters, then a stock utilitarian argument can be trotted out. Treating food animals well generates more pleasure for the animals and, in contrast, treating them badly generates more pain. If pain and pleasure are the currency of morality, then treating food animals well would be morally better than treating them badly.

From this it would presumably follow that folks who only eat the animals who were well treated would have the moral high ground over those who eat animals who suffered before becoming meat. This is because the folks who eat the happy dead are not parties to the mistreatment of animals. Except, of course, for the killing and eating part. After all, both the happy cow and the sad cow meat…I mean “meet” the same end: death and consumption.

The fact that the animals, happy or sad, end up as meat might be seen as what is important to the ethics of the situation. This seems reasonable. After all, if someone intends to kill me, my main concern is with my possible death and not with whether the killer will be nice or not.

But it also seems reasonable to be concerned about what comes before. To use an analogy, imagine two legal systems. While both hand out the same punishments, one system treats suspects horribly: they are locked in fetid cells, poorly fed and treated with cruelty. The other legal system treats suspects reasonably well: they can get out on bail, cells are clean, the food is adequate and cruelty is rare. There seems to be a meaningful distinction between the two, and this would also seem to hold in the case of meat.

As such, I do think that the folks who eat the happy dead can claim a slight moral superiority over those who dine on cruel food. But, there is still the obvious concern about whether the consumption of meat itself is acceptable or not.


Violating Your Own Right to Privacy?

As I was getting ready to teach my Critical Inquiry class, I heard a woman outside the classroom carrying on a wicked fight over her mobile phone. I won’t go into the details, but she was “discussing” the various misdeeds of her (presumably now ex) boyfriend. On another occasion, I was walking to my truck and I had to cross by a screaming couple. Again, I’ll leave out the details but suffice it to say that he seemed rather concerned about the other men sleeping with her. Most recently, I heard about Penelope Trunk tweeting about her miscarriage. These incidents all caused me to think “hey, you have a right to privacy…think about using it.” This got me thinking about whether a person can violate her own right to privacy (assuming, perhaps incorrectly, that there is such a right).

On the face of it, it would seem that a person cannot violate her own right to privacy. A privacy violation would seem to require that someone acquire information that they do not legitimately have a right to know and that they do so without the consent of the person. For example, someone stealing another person’s diary and reading about their secret hopes and fears would be a privacy violation. When a person knowingly reveals information about herself (such as by being very loud in public, posting it on a public blog or twittering it), then that person has obviously given consent to herself.

However, I think that a case can be made for the claim that a person can violate her own right to privacy. The first step in doing this is arguing that a person can (in general) violate her own rights. To do this, I will draw an analogy to suicide. One reason to think suicide is wrong is that it violates a person’s right to life. Obviously, suicide harms (kills) a person and it seems reasonable to regard this as generally being wrong. To be fair, people do argue that consent to death somehow makes the action morally acceptable, but this is clearly a point that can be argued. Returning to the main point, if suicide is a violation of a person’s right to life, then a person can violate her own rights.

Now, if the suicide analogy is found to be lacking, consider a second analogy involving the right to liberty. It seems quite reasonable to believe that a person who consents to slavery would be violating his own right to liberty. After all, if Locke is right, no one can rightly consent to being owned by another (although he does allow for slavery as an alternative to death). If this line of reasoning is plausible, the same would seem to hold for the right to privacy.

The second step in making my case is establishing that there are some things that should remain private, even if a person wants to make them public. This is rather challenging; after all, most people probably believe that people should be generally free to reveal their secrets even if doing so would be rather harmful to them. In fact, many reality TV shows and tell-all books rely on this view. However, it seems reasonable to believe that there is a category of things that should remain private and should not be revealed to others, even by the person whose privacy is at stake. To argue for this, I’ll appeal to the arguments for privacy and claim that these same arguments should also apply to the person in question. After all, if certain things should not be revealed by others, it seems reasonable to think that there are at least some things that should not be revealed by anyone. That is, that there are things that should be regarded as inherently private and not shared with the public.

If both of these steps work, then a person could violate her own right to privacy by making public what should be kept private.

I freely admit that my case for this is rather weak and that there are strong intuitions that folks have the right to reveal whatever they wish about themselves (naturally things change when the privacy of other people is involved as well). However, it is not unreasonable to think that people can thus violate their own right to privacy by revealing what should not be public. In any case, I look forward to comments that expose the errors in my line of reasoning.

If my previous line of reasoning is faulty, I have a second approach that I believe can produce a similar result. The idea is that while people have a right to privacy, people also have a right to a sort of reverse privacy. To be clearer, I mean that people have a right not to hear about certain things from other people. So, while someone might not violate her own right to privacy by twittering or yelling about private matters in public, such actions could be seen as violating the right of others not to be exposed to such things. My reason for this is that I think it is wrong for people to inflict their private matters onto the public without the consent of the public. When I am getting ready for class or walking to my truck, I think I have the right not to hear about the sexual activities of strangers. That is, I think I have a right not to have other folks’ private matters forcibly entering my life.

Twitter, blogs and such are quite a different matter. In these cases, people knowingly expose themselves to mediums that are often used to reveal private matters to the public and, of course, people can easily avoid those known to deal in such content.

In the specific case of Twitter, people need to intentionally expose themselves to tweets. Since Twitter is well known for folks spilling private matters, people have no expectation that they will not be exposed to such things (this can be seen as a twist on the idea that there are situations in which people have no expectation of privacy). This is why I am not participating in Twitter. As I see it, Twitter is an unholy blend of narcissism and voyeurism that I would rather not invite into my life. But I do appreciate the fact that it does sometimes provide me with things to blog about.


The Case for Death Panels


In the United States, Obama’s call for national health care reform has ignited a firestorm of controversy. One rather interesting result of the furor has been the accusation that Obama plans to create death panels. While the accounts vary, the general idea is that these alleged panels are intended to review cases and decide whether care (and the patient) should be terminated or not. Not surprisingly, this accusation is not true; there is nothing in the actual proposals about such death panels.

As I do every semester, I am teaching an ethics class in which the students have to write an essay on a moral issue. When the students ask what position they should take, I generally suggest that they argue for what they believe (rather than vainly trying to guess my view in the hopes of getting a better grade). But I also suggest that they consider writing an argument against what they actually believe. Since I am against death panels, I thought I’d try my hand at my own suggestion and make a case for them. When reading, please keep in mind that what follows is not my actual view. Hence, there is no cause to accuse me of Nazi (or even socialist) leanings.

From an intuitive moral standpoint, private citizens are rather restricted in regards to when they can ethically end the life of another person. In general, such killing is restricted to clear cases of self defense. For example, if someone pulls a gun on me while I am out for a run and demands my fancy GPS watch, it would be morally acceptable for me to kill him on the spot. After all, he presents a clear and present threat to my survival (as Locke would say, I have no reason to think that someone who would rob me of my property would not take the next step and try to rob me of my life).

In the case of the death panel matter, it does not seem that this sort of individual right can be used as a justification. After all, a patient who is in need of critical and expensive care is not likely to be a clear and present threat to my survival.

Of course, it could be argued that such a person would be a threat because he is using resources that could save my life. However, killing an innocent person because they happen to have resources that could save my life does not seem to be morally defensible. For example, if I am in a shipwreck and at risk of drowning, I have no right to kill another passenger and strip her of her life vest. As such, there seems to be little support for death panels here.

Perhaps, however, the matter changes when the focus is expanded to include society as a whole. After all, actions that would be blatantly immoral for an individual can often be transformed, by the “magic” of the collective, into acceptable actions. For example, what would be murder on the individual level becomes transformed to acceptable killing in the context of war (although, obviously, not everyone buys this).

In many cases, the moral transformation is brought about by an appeal to the general good (essentially an appeal to utilitarian considerations). For example, killing folks in war can be morally justified by appealing to the advantages of the war to “national security” or “national interest.” Not surprisingly, more cynical folks might point out that “national interest” is often the interest of a select few and it might be contended that such actions are no better than those of any organized gang of criminals.

Now, if such things as war can be morally justified, then justifying death panels should be easy enough on the same sort of grounds.

In the case of war, killing folks is most often justified on utilitarian grounds. For example, some folks must be killed (including the inevitable innocent bystanders) in order for the collective good (national security, for example) to be served. Now, let us turn to applying this sort of approach to the death panels.

While the United States and other Western countries have significant medical resources (enough so that certain folks, such as Michael Jackson, can have their own personal doctors) these resources are not unlimited. In fact, it can be contended that the resources are not sufficient to provide adequate health care to everyone.

Now, it is obvious that people who are in need of critical care use far more resources than other folks. It is also obvious that the elderly have more health issues than younger folks. Now, looking at the matter by the numbers, it seems likely that the resources used to maintain a critically ill person or an elderly person could be used to provide health care to a significant number of folks with less serious conditions. Typically, these would be younger folks as well: folks who still have years to contribute to the good of the state.

Looked at in terms of the general utility, it would seem to make practical and moral sense to allocate medical resources so that they do the most good for the general populace. As such, it would seem to be acceptable to terminate the care of the critically ill in favor of the less ill. It could also, on similar grounds, be argued that the focus of health care should be on the younger folks rather than the harder-to-maintain elderly folks. To use a car analogy, it makes more sense to spend a little maintaining a new car than to pour large sums of money into keeping an old clunker going.

Since the United States is supposed to have a free market economy, the critically ill and the elderly who have the funds to purchase the medical care they need should be allowed to do so. After all, they are paying for the resources they are consuming and hence are not creating an undue burden on the health care system. Naturally, folks who are lacking in such funds would be imposing burdens on the system by consuming beyond what they can afford to pay for. As such, they would be robbing society of valuable resources.

Naturally, it might be pointed out that some critically ill people or elderly folks might have made valuable contributions that justify their being treated at the public expense. There might also be such folks who are making ongoing contributions or who can be expected to make such contributions in the future. For example, a medical student who is badly hurt in an accident may be expensive to treat, but it is likely that she will be able to contribute more than her treatment would cost.

This is, of course, where the death panels come in. These panels would serve to assess the relative worth of each patient and decide who will receive the medical resources and who will not. For those who balk at such an approach, the obvious reply is that this sort of thing is already done in the case of triage. Here it is triage of a different sort, but it would still seem justifiable on similar grounds: the person’s place in the medical queue is based not on her likelihood of survival but on the value of her survival to the national good.

Of course, some folks might contend that the idea of having folks decide who lives and who dies is a horrific idea. It might also be wondered where people could be found with the adequate experience to make such calls. Fortunately, the United States has plenty of people who have experience in such things. For example, Governors in states that have the death penalty already serve on death panels. As another example, the folks who make decisions about going to war already are on a death panel as well. After all, they have an active role in deciding who will live and who will die. As a final example, folks in insurance companies sometimes make decisions that deny care to people. Since such decisions about life and death are fairly routine, there should be little problem finding people to serve on such panels.

So, death panels seem like a great idea and the United States should hope that Obama makes the rumors a reality. Obviously, philosophers and runners should get an automatic exemption from being reviewed by death panels. This is so obvious that there is no need to even argue.

Dead Man Selling


For the past month, I have seen a dead man pitching products on TV. No, I am not having a Sixth Sense moment. Everyone can see the dead man, not just me.

The dead man is, of course, the famous American pitchman Billy Mays. He is the guy who has sold Americans all sorts of products, such as Oxiclean and Orange Glo. He died recently of heart problems, but his advertisements are still being aired.

Shortly after hearing about his death, I saw one of these ads. Oddly enough, rather than inspiring me to go into a consumer frenzy, the ad gave me a creepy feeling. After all, I knew the man trying to sell me some cell phone attachment was quite dead.

Interestingly, seeing movies that have dead actors in them has never given me that feeling. For example, if I watch an old Bogart film I do not get that creepy feeling. I don’t even get it when the actor died in the course of filming, such as what happened to Brandon Lee during the filming of The Crow.

Obviously, my particular psychological responses are hardly the stuff of philosophical interest. However, I think that the difference in how I feel does point to something that is worthy of philosophical consideration.

In the case of the commercials, while Mays might be playing his pitchman role, it is he himself selling the product. That is, he is there as himself, an enthusiastic and cheerful fellow who would really like you to buy all the stuff he is pitching.

In the case of the movies, the dead actor was playing a role of a meaningfully different order and this seems to create sort of a psychological buffer. To be a bit more specific, the character the dead actor played has a virtual life of its own (and perhaps even virtual death) and continues to exist as a fictional being.

In contrast, it is just Billy Mays, the dead man, whose recorded image is still pitching products. There is no buffer, no fictional being. Just someone I know is dead. Hence, the creepy feeling.

From a moral standpoint, there seems to be nothing really wrong with the ads remaining on television. After all, he no doubt contracted for a certain run, and the fact that he is now dead would not seem to change that contract. Of course, there might seem to be something vaguely wrong about keeping a man working after his death. Certainly, it is just his recorded image, a digital ghost, that is doing the pitching. But perhaps even digital ghosts deserve to be laid to rest.
