Are Animals People?

While the ethical status of animals has been debated since at least the time of Pythagoras, the serious debate over whether or not animals are people has just recently begun to heat up. While it is easy to dismiss the claim that animals are people, it is actually a matter worth considering.

There are at least three types of personhood: legal personhood, metaphysical personhood and moral personhood. Legal personhood is the easiest of the three. While it would seem reasonable to expect some sort of rational foundation for claims of legal personhood, it is really just a matter of how the relevant laws define “personhood.” For example, in the United States corporations are people while animals and fetuses are not. There have been numerous attempts by opponents of abortion to give fetuses the status of legal persons. There have even been some attempts to make animals into legal persons.

Since corporations are legal persons, it hardly seems absurd to make animals into legal people. After all, higher animals are certainly closer to human persons than are corporate persons. These animals can think, feel and suffer—things that actual people do but corporate people cannot. So, if it is not absurd for Hobby Lobby to be a legal person, it is not absurd for my husky to be a legal person. Or perhaps I should just incorporate my husky and thus create a person.

It could be countered that although animals do have qualities that make them worthy of legal protection, there is no need to make them into legal persons. After all, this would create numerous problems. For example, if animals were legal people, they could no longer be owned, bought or sold, because, with the inconsistent exception of corporate people, people cannot be legally bought, sold or owned.

Since I am a philosopher rather than a lawyer, my own view is that legal personhood should rest on moral or metaphysical personhood. I will leave the legal bickering to the lawyers, since that is what they are paid to do.

Metaphysical personhood is real personhood in the sense that it is what it is, objectively, to be a person. I use the term “metaphysical” here in the academic sense: the branch of philosophy concerned with the nature of reality. I do not mean “metaphysical” in the pop sense of the term, which usually is taken to be supernatural or beyond the physical realm.

When it comes to metaphysical personhood, the basic question is “what is it to be a person?” Ideally, the answer is a set of necessary and sufficient conditions such that if a being has them, it is a person and if it does not, it is not. This matter is also tied closely to the question of personal identity. This involves two main concerns (other than what it is to be a person): what makes a person the person she is and what makes the person distinct from all other things (including other people).

Over the centuries, philosophers have endeavored to answer this question and have come up with a vast array of answers. While this oversimplifies things greatly, most definitions of person focus on the mental aspects of being a person. Put even more crudely, it often seems to come down to this: things that think and talk are people. Things that do not think and talk are not people.

John Locke presents a paradigm example of this sort of definition of “person.” According to Locke, a person “is a thinking intelligent being, that has reason and reflection, and can consider itself as itself, the same thinking thing, in different times and places; which it does only by that consciousness which is inseparable from thinking, and, as it seems to me, essential to it: it being impossible for any one to perceive without perceiving that he does perceive.”

Given Locke’s definition, animals that are close to humans in capabilities, such as the great apes and possibly whales, might qualify as persons. Locke does not, unlike Descartes, require that people be capable of using true language. Interestingly, given his definition, fetuses and brain-dead bodies would not seem to be people. Unless, of course, the mental activities are going on without any evidence of their occurrence.

Other people take a rather different approach and do not focus on mental qualities that could, in principle, be subject to empirical testing. Instead, they rest personhood on possessing a specific sort of metaphysical substance or property. Most commonly, this is the soul: things with souls are people, things without souls are not people. Those who accept this view often (but not always) claim that fetuses are people because they have souls and animals are not because they lack souls. The obvious problem is trying to establish the existence of the soul.

There are, obviously enough, hundreds or even thousands of metaphysical definitions of “person.” While I do not have my own developed definition, I do tend to follow Locke’s approach and take metaphysical personhood to be a matter of having certain qualities that can, at least in principle, be tested for (at least to some degree). As a practical matter, I go with the talking test—things that talk (by this I mean true use of language, not just making noises that sound like words) are most likely people. However, this does not seem to be a necessary condition for personhood and it might not be sufficient. As such, I am certainly willing to consider that creatures such as apes and whales might be metaphysical people like me—and erring in favor of personhood seems to be a rational approach for those who want to avoid harming people.

Obviously enough, if a being is a metaphysical person, then it would seem to automatically have moral personhood. That is, it would have the moral status of a person. While people do horrible things to other people, having the moral status of a person is generally a good thing because non-evil people are generally reluctant to harm other people. So, for example, a non-evil person might hunt squirrels for food, but would certainly not (normally) hunt humans for food. If that non-evil person knew that squirrels were people, then he would certainly not hunt them for food.

Interestingly enough, beings that are not metaphysical persons (that is, are not really people) might have the status of moral personhood. This is because the moral status of personhood might correctly or reasonably apply to non-persons.

One example is that a brain-dead human might no longer be a person, yet because of the former status as a person still be justly treated as a person in terms of its moral status. As another example, a fetus might not be an actual person, but its potential to be a person might reasonably grant it the moral status of a person.

Of course, it could be countered that such non-people should not have the moral status of full people, though they should (perhaps) have some moral status. To use the obvious example, even those who regard the fetus as not being a person would tend to regard it as having some moral status. If, to use a horrific example, a pregnant woman were attacked and beaten so that she lost her fetus, that would not just be a wrong committed against the woman but also a wrong against the fetus itself. That said, there are those who do not grant a fetus any moral status at all.

In the case of animals, it might be argued that although they do not meet the requirements to be people for real, some of them are close enough to warrant being treated as having the moral status of people (perhaps with some limitations, such as those imposed on children in regards to rights and liberties). The obvious counter to this is that animals can be given moral statuses appropriate to them rather than treating them as people.

Immanuel Kant took an interesting approach to the status of animals. In his ethical theory Kant makes it quite clear that animals are means rather than ends. People (rational beings), in contrast, are ends. For Kant, this distinction rests on the fact that rational beings can (as he sees it) choose to follow the moral law. Animals, lacking reason, cannot do this. Since animals are means and not ends, Kant claims that we have no direct duties to animals. They are classified with the other “objects of our inclinations” that derive value from the value we give them.

Interestingly enough, Kant argues that we should treat animals well. However, he does so while also trying to avoid ascribing animals themselves any moral status. Here is how he does it (or tries to do so).

While Kant is not willing to accept that we have any direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards people. To make his case for this, he employs an argument from analogy: if a person doing X would obligate us to that human, then an animal doing X would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in his old age.

Given this approach, Kant could be seen as regarding animals as virtual or ersatz people, or at least regarding as such those animals close enough to people to engage in activities that would create obligations if done by people.

In light of this discussion, there are three answers to the question raised by the title of this essay. Are animals legally people? The answer is a matter of law—what does the law say? Are animals really people? The answer depends on which metaphysical theory is correct. Do animals have the moral status of people? The answer depends on which, if any, moral theory is correct.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Should the State Restrict What Food Stamps Can Buy?

Some states have passed or are considering laws that would restrict what government aid can be used to purchase. One apparently proactive approach, taken by my adopted state of Florida, has been to weed out drug users by requiring recipients of aid to pass a drug test. In Missouri, there has been an effort to prevent food stamp recipients from using their aid to buy steak or seafood. In Kansas, a proposed law forbids people receiving government assistance from using those funds to visit swimming pools, buy movie tickets, gamble or get tattoos.

While these proposals and policies are fueled primarily by unwarranted stereotypes of the poor, it is possible to argue in their favor and two such arguments will be considered. Both arguments share a common principle, namely that the state needs to protect certain citizens from harm (which is a reasonable principle). The first argument centers on the need for the state to protect the poor from their poor decision making. The second focuses on the need to protect the taxpayers from being exploited by the poor.

The first argument is essentially an appeal to paternalism: the poor are incapable of making their own good decisions and thus the wisdom of the lawmakers must guide them. If left unguided, the poor will waste their limited government support on things like drugs, gambling, tattoos, steak and lobsters. This approach certainly has a philosophical pedigree. Aristotle, in his Nicomachean Ethics, argued that the compulsive power of the state should be used to compel the citizens to be virtuous. Other thinkers, usually those who favor totalitarianism, also find the idea of such paternalism very appealing.

Despite the pedigree of this approach, it is always reasonable to inquire as to whether a law is actually needed or not. In the case of a law that forbids, the obvious line of inquiry is to investigate the extent to which people engage in the behavior that is supposed to be forbidden by the law.

Despite the anecdotal evidence of Fox News’ infamous welfare surfer, there seems to be little evidence that people who receive state aid are blowing it on strip clubs, drugs, steak or lobster. Rather, the poor (like almost everyone else) spend most of their money on things like housing and non-luxury food. In regards to drugs, people on support are no more likely than anyone else to be using them. As such, unless it can be clearly shown that a significant percentage of aid recipients are engaged in such “poor choices”, these laws would seem to be solutions in search of a problem.

It is also reasonable to consider whether or not a law is morally consistent in regards to how all citizens are treated. If the principle at work is that recipients of state money must be guided by the state because they cannot be trusted to make their own decisions, then this must be extended to all recipients of such money. This would include farmers getting subsidies, companies getting government contracts, government employees, recipients of tax breaks (such as the mortgage tax breaks), and so on. This is all government aid.

This is a matter of moral consistency—if some citizens must be subject to strict restrictions on how the state money can be spent and perhaps pass a drug test before getting it, then the same must apply to all citizens. Unless, of course, a relevant difference can be shown.

It could be argued that the poor, despite the lack of evidence, are simply more wasteful and worse at spending decisions than the rest of the population. While this does match the stereotypical narrative that some like to push, it does not seem to match reality. After all, billions of dollars simply vanished in Iraq. One does not need to spend much time on Google to find multitudes of examples of how non-poor recipients of state money wasted it or blew it on luxuries.

It could then be argued that extending this principle to everyone would be a good idea. After all, people who are not poor make bad decisions with state money and this shows that they are in need of the guiding wisdom of the state and strict control. Of course, this would result in a paternalistic (or “nanny” as some prefer) state that so many self-proclaimed small government freedom lovers profess to dislike.

Obviously, it is also important to consider whether or not a law will be more harmful or more beneficial. While it could be argued that the poor would be better off if compelled by the state to spend their aid money on what the state deems they should spend it on, there is still the fact that these policies and proposals are solutions in search of a problem. That is, these laws would not benefit people because they are typically not engaged in wasteful spending to begin with.

There is also the moral concern about the harm done to the autonomy and dignity of the recipients of the aid. It is, after all, an assault on a person’s dignity to assume that she is wasteful and bad at making decisions. It is an attack on a person’s autonomy to try to control him, even for his own good.

It might be countered that if the poor accept the state’s money, then they must accept the restrictions imposed by the state. While this does have some appeal, consistency would (as noted above) require this to be applied to everyone getting state money. Which includes the rich. And the people passing such laws. Presumably they would not like to be treated this way and consistency would seem to require that they treat others as they would wish to be treated.

The second main argument for such restrictions is based on the claim that they are needed to protect the taxpayers from being exploited by the poor. While some do contend that any amount of state aid is too much and is theft from the taxpayers (the takers stealing from the makers), such restrictions at least accept that the poor should receive some aid. But, this aid must be for essentials and not wasted—otherwise the taxpayers’ money is being (obviously enough) wasted.

As was discussed above, an obvious point of concern is whether or not such waste is occurring at a level that justifies the compulsive power of the state being employed. As noted above, these proposals and policies seem to be solutions in search of a problem. As a general rule, laws and restrictions should not be imposed without adequate justification and this seems lacking in this case.

This is not to say that people should not be concerned that taxpayer money is being wasted or spent unwisely. It, in fact, is. However, this is not a case of the clever poor milking the middle class and the rich. Rather, it is a case of the haves milking the have-less. One prime example of this is wealthfare, much of which involves taxpayer money going to subsidize and aid those who are already quite well off, such as corporations. So, I do agree that the taxpayer needs to be protected from exploitation. But, the exploiters are not the poor. This should be rather obvious—if they were draining significant resources from the rest of the citizens, they would no longer be poor.

But, some might still insist, the poor really are spending their rather small aid money on steak, lobsters, strip clubs and gambling. One not unreasonable reply is that “man does not live by bread alone” and it does not seem wrong that the poor would also have a chance to enjoy the tiny luxuries or fun that their small amount of aid can buy, assuming, of course, that they are not spending everything on food and shelter. I would certainly not begrudge a person an occasional steak or beer. Or a swim in a pool. I do, of course, think that people should spend wisely, but that is another matter.

 


A Philosopher’s Blog: 2014 Free on Amazon

A Philosopher’s Blog: 2014 Philosophical Essays on Many Subjects will be available as a free Kindle book on Amazon from 12/31/2014-1/4/2015. This book contains all the essays from the 2014 postings of A Philosopher’s Blog. The topics covered range from the moral implications of sexbots to the metaphysics of determinism. It is available on all the various national Amazons, such as in the US, UK, and India.

A Philosopher’s Blog: 2014 on Amazon US

A Philosopher’s Blog: 2014 on Amazon UK


 

The Corruption of Academic Research

Synthetic insulin crystals synthesized using recombinant DNA technology (Photo credit: Wikipedia)

STEM (Science, Technology, Engineering and Mathematics) fields are supposed to be the new darlings of the academy, so I was slightly surprised when I heard an NPR piece on how researchers are struggling for funding. After all, even the politicians devoted to cutting education funding have spoken glowingly of STEM. My own university recently split the venerable College of Arts & Sciences, presumably to allow more money to flow to STEM without risking that professors in the soft sciences and the humanities might inadvertently get some of the cash. As such I was somewhat curious about this problem, but mostly attributed it to a side-effect of the general trend of defunding public education. Then I read “Bad Science” by Llewellyn Hinkes-Jones. This article was originally published in issue 14, 2014 of Jacobin Magazine. I will focus on the ethical aspects of the matters Hinkes-Jones discussed in this article, which is centered on the Bayh-Dole Act.

The Bayh-Dole Act was passed in 1980 and was presented as having very laudable goals. Before the act was passed, universities were limited in regards to what they could do with the fruits of their scientific research. After the act was passed, schools could sell their patents or engage in exclusive licensing deals with private companies (that is, monopolies on the patents). Supporters asserted this act would be beneficial in three main ways. The first is that it would secure more private funding for universities because corporations would provide money in return for the patents or exclusive licenses. The second is that it would bring the power of the profit motive to public research: since researchers and schools could profit, they would be more motivated to engage in research. The third is that the private sector would be motivated to implement the research in the form of profitable products.

On the face of it, the act was a great success. Researchers at Columbia University patented the process of DNA cotransformation and added millions to the coffers of the school. A patent on recombinant DNA earned Stanford over $200 million. Companies, in turn, profited greatly. For example, researchers at the University of Utah created Myriad Genetics and took ownership of their patent on the BRCA1 and BRCA2 tests for breast cancer. The current cost of the test is $4,000 (in comparison, a full sequencing of human DNA costs $1,000) and the company has a monopoly on the test.

Given these apparent benefits, it is easy enough to advance a utilitarian argument in favor of the act and its consequences. After all, if the act allows universities to fund their research and corporations to make profits, then its benefits would seem to be considerable, thus making it morally good. However, a proper calculation requires considering the harmful consequences of the act.

The first harm is that the current situation imposes a triple cost on the public. One cost is that the taxpayers fund the schools that conduct the research. The next is that thanks to the monopolies on patents the taxpayers have to pay whatever prices the companies wish to charge, such as the $4,000 for a test that should cost far less. In an actual free market there would be competition and lower prices—but what we have is a state controlled and regulated market. Ironically, those who are often crying the loudest against government regulation and for the value of competition are quite silent on this point.  The final cost of the three is that the corporations can typically write off their contributions on their taxes, thus leaving other taxpayers to pick up their slack. These costs seem to be clear harms and do much to offset the benefits—at least when looked at from the perspective of the whole society and not just focusing on those reaping the benefits.

The second harm is that, ironically, this system makes research more expensive. Since processes, strains of bacteria and many other things needed for research are protected by monopolistic patents the researchers who do not hold these patents have to pay to use them. The costs are usually quite high, so while the patent holders benefit, research in general suffers. In order to pay for these things, researchers need more funding, thus either imposing more cost on taxpayers or forcing them to turn to private funding (which will typically result in more monopolistic patents).

The third harm is the corruption of researchers. Researchers are literally paid to put their names on positive journal articles that advance the interests of corporations. They are also paid to promote drugs and other products while presenting themselves as researchers rather than paid promoters. If the researchers are not simply bought, the money is clearly a biasing factor. Since we are depending on these researchers to inform the public and policy makers about these products, this is clearly a problem and presents a clear danger to the public good.

A fourth harm is that even the honest researchers who have not been bought are under great pressure to produce “sexy science” that will attract grants and funding. While it has always been “publish or perish” in modern academia, the competition is even fiercer in the sciences now. As such, researchers are under great pressure to crank out publications. The effect has been rather negative, as evidenced by the fact that the percentage of scientific articles retracted for fraud is ten times what it was in 1975. Once-lauded studies and theories, such as those driving the pushing of antioxidants and omega-3, have been shown to be riddled with inaccuracies. Far from driving advances in science, the act has served as an engine of corruption, fraud and bad science. This would be bad enough, but there is also the impact on a misled and misinformed public. I must admit that I fell for the antioxidant and omega-3 “research”—I modified my diet to include more antioxidants and omega-3. While this bad science does get debunked, the debunking takes a long time and most people never hear about it. For example, how many people know that the antioxidant and omega-3 “research” is flawed and how many still pop omega-3 “fish oil pills” and drink “antioxidant teas”?

A fifth harm is that universities have rushed to cash in on the research, driven by the success of the research schools that have managed to score with profitable patents. However, setting up research labs aimed at creating million dollar patents is incredibly expensive. In most cases the investment will not yield the hoped for returns, thus leaving many schools with considerable expenses and little revenue.

To help lower costs, schools have turned to employing adjuncts to do the teaching and research, thus creating a situation in which highly educated but very low-paid professionals are toiling away to secure millions for the star researchers, the administrators and their corporate benefactors. It is, in effect, sweat-shop science.

This also shows another dark side to the push for STEM: as the number of STEM graduates increases, the value of the degrees will decrease and wages for the workers will continue to fall. This is great for the elite, but terrible for those hoping that a STEM degree will mean a good job and a bright future.

These harms would seem to outweigh the alleged benefits of the act, thus indicating it is morally wrong. Naturally, it can be countered that the costs are worth it. After all, one might argue, the incredible advances in science since 1980 have been driven by the profit motive and this has been beneficial overall. Without the profit motive, the research might have been conducted, but most of the discoveries would have been left on the shelves. The easy and obvious response is to point to all the advances that occurred due to public university research prior to 1980 as well as the research that began before then and came to fruition.

While solving this problem is a complex matter, there seem to be some easy and obvious steps. The first would be to restore public funding of state schools. In the past, the publicly funded universities drove America’s worldwide dominance in research and helped fuel massive economic growth while also contributing to the public good. The second would be replacing the Bayh-Dole Act with an act that would allow universities to benefit from the research, but prevent the licensing monopolies that have proven so damaging. Naturally, this would not eliminate patents but would restore competition to what is supposed to be a competitive free market by eliminating the creation of monopolies from public university research. The folks who complain about the state regulating business and who praise the competitive free market will surely get behind this proposal.

It might also be objected that the inability to profit massively from research will be a disincentive. The easy and obvious reply is that people conduct research and teach with great passion for very little financial compensation. The folks that run universities and corporations know this—after all, they pay such people very little yet still often get exceptional work. True, there are some people who are solely motivated by profit—but those are typically the folks who are making the massive profit rather than doing the actual research and work that makes it all possible.

 


Ebola, Ethics & Safety

Color-enhanced electron micrograph of Ebola virus particles. (Photo credit: Wikipedia)

Kaci Hickox, a nurse from my home state of Maine, returned to the United States after serving as a health care worker in the Ebola outbreak. Rather than being greeted as a hero, she was confined to an unheated tent with a box for a toilet and no shower. She did not have any symptoms and tested negative for Ebola. After threatening a lawsuit, she was released and allowed to return to Maine. After arriving home, she refused to be quarantined again. She did, however, state that she would be following the CDC protocols. Her situation puts a face on a general moral concern, namely the ethics of balancing rights with safety.

While past outbreaks of Ebola in Africa were met largely with indifference from the West (aside from those who went to render aid, of course), the current outbreak has infected the United States with a severe case of fear. Some folks in the media have fanned the flames of this fear knowing that it will attract viewers. Politicians have also contributed to the fear. Some have worked hard to make Ebola into a political game piece that will allow them to bash their opponents and score points by appeasing fears they have helped create. Because of this fear, most Americans have claimed to support a travel ban on Ebola-infected countries and some states have started imposing mandatory quarantines. While it is to be expected that politicians will often pander to the fears of the public, the ethics of the matter should be considered rationally.

While Ebola is scary, the basic “formula” for sorting out the matter is rather simple. It is an approach that I use for all situations in which rights (or liberties) are in conflict with safety. The basic idea is this. The first step is sorting out the level of risk. This includes determining the probability that the harm will occur as well as the severity of the harm (both in quantity and quality). In the case of Ebola, the probability that someone will get it in the United States is extremely low. As the actual experts have pointed out, infection requires direct contact with bodily fluids while a person is infectious. Even then, the infection rate seems relatively low, at least in the United States. In terms of the harm, Ebola can be fatal. However, timely treatment in a well-equipped facility has been shown to be very effective. In terms of the things that are likely to harm or kill an American in the United States, Ebola is near the bottom of the list. As such, a rational assessment of the threat is that it is a small one in the United States.

The second step is determining key facts about the proposals to create safety. One obvious concern is the effectiveness of the proposed method. As an example, the 21-day mandatory quarantine would be effective at containing Ebola. If someone shows no symptoms during that time, then she is almost certainly Ebola free and can be released. If a person shows symptoms, then she can be treated immediately. An alternative, namely tracking and monitoring people rather than locking them up, would also be fairly effective—it has worked so far. However, there are worries that this method could fail—bureaucratic failures might happen or people might refuse to cooperate. A second concern is the cost of the method in terms of both practical costs and other consequences. In the case of the 21-day quarantine, there are the obvious economic and psychological costs to the person being quarantined. After all, most people will not be able to work from quarantine and the person will be isolated from others. There is also the cost of the quarantine itself. In terms of other consequences, it has been argued that imposing this quarantine will discourage volunteers from going to help out and this will be worse for the United States. This is because it is best for the rest of the world if Ebola is stopped in Africa and this will require volunteers from around the world. In the case of the tracking and monitoring approach, there would be a cost—but far less than a mandatory quarantine.

From a practical standpoint, assessing a proposed method of safety is a utilitarian calculation: does the risk warrant the cost of the method? To use some non-Ebola examples, every aircraft could be made as safe as Air Force One, every car could be made as safe as a NASCAR vehicle, and all guns could be taken away to prevent gun accidents and homicides. However, we have decided that the cost of such safety would be too high and hence we are willing to allow some number of people to die. In the case of Ebola, the calculation is a question of considering the risk presented against the effectiveness and cost of the proposed method. Since I am not a medical expert, I am reluctant to make a definite claim. However, the medical experts do seem to hold that the quarantine approach is not warranted in the case of people who lack symptoms and test negative.
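The structure of this calculation can be made concrete with a small sketch. All of the numbers below are hypothetical placeholders (not real epidemiological data) and the function names are my own invention; the point is only the shape of the reasoning: expected harm is probability times severity, and a safety measure is warranted only when the harm it can be expected to prevent outweighs its cost.

```python
# Illustrative sketch of the utilitarian calculation described above.
# All numbers are hypothetical placeholders, not real epidemiological data.

def expected_harm(probability: float, severity: float) -> float:
    """Step one: risk level = chance of the harm times how bad it is."""
    return probability * severity

def measure_is_warranted(risk: float, effectiveness: float, cost: float) -> bool:
    """Steps two and three: a safety measure is warranted only if the harm
    it can be expected to prevent outweighs what the measure costs."""
    harm_prevented = risk * effectiveness
    return harm_prevented > cost

# Hypothetical comparison: mandatory quarantine vs. tracking and monitoring.
risk = expected_harm(probability=0.000001, severity=1_000_000)  # tiny chance, severe harm
print(measure_is_warranted(risk, effectiveness=0.99, cost=500))  # quarantine: high cost
print(measure_is_warranted(risk, effectiveness=0.95, cost=0.5))  # monitoring: low cost
```

On these made-up numbers the quarantine fails the test while monitoring passes, which mirrors the experts' assessment; different inputs could reverse the result, and that is exactly where the disagreement lives.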

The third concern is the moral concern. Sorting out the moral aspect involves weighing the practical concerns (risk, effectiveness and cost) against the right (or liberty) in question. Some also include the legal aspects of the matter here as well, although law and morality are distinct (except, obviously, for those who are legalists and regard the law as determining morality). Since I am not a lawyer, I will leave the legal aspects to experts in that area and focus on the ethics of the matter.

When working through the moral aspect of the matter, the challenge is determining whether or not the practical concerns morally justify restricting or even eliminating rights (or liberties) in the name of safety. This should, obviously enough, be based on consistent principles in regards to balancing safety and rights. Unfortunately, people tend to be wildly inconsistent in this matter. In the case of Ebola, some people have expressed the “better safe than sorry” view and have elected to impose or support mandatory quarantines at the expense of the rights and liberties of those being quarantined. In the case of gun rights, these are often taken as trumping concerns about safety. The same holds true of the “right” or liberty to operate automobiles: tens of thousands of people die each year on the roads, yet any proposal to deny people this right would be rejected. In general, people assess these matters based on feelings, prejudices, biases, ideology and other non-rational factors—this explains the lack of consistency. So, people are willing to impose on basic rights for little or no gain to safety, while also being content to refuse even modest infringements in matters that result in great harm. However, there are also legitimate grounds for differences: people can, after due consideration, assess the weight of rights against safety very differently.

Turning back to Ebola, the main moral question is whether or not the safety gained by imposing the quarantine (or travel ban) would justify denying people their rights. In the case of someone who is infectious, the answer would seem to be “yes.” After all, the harm done to the person (being quarantined) is greatly exceeded by the harm that would be inflicted on others by his putting them at risk of infection. In the case of people who are showing no symptoms, who test negative and who are relatively low risk (no known specific exposure to infection), then a mandatory quarantine would not be justified. Naturally, some would argue that “it is better to be safe than sorry” and hence the mandatory quarantine should be imposed. However, if it was justified in the case of Ebola, it would also be justified in other cases in which imposing on rights has even a slight chance of preventing harm. This would seem to justify taking away private vehicles and guns: these kill more people than Ebola. It might also justify imposing mandatory diets and exercise on people to protect them from harm. After all, poor health habits are major causes of health issues and premature deaths. To be consistent, if imposing a mandatory quarantine is warranted on the grounds that rights can be set aside even when the risk is incredibly slight, then this same principle must be applied across the board. This seems rather unreasonable and hence the mandatory quarantine of people who are not infectious is also unreasonable and not morally acceptable.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Neutral Good

My previous essays on alignments have focused on the evil ones (lawful evil, neutral evil and chaotic evil). Patrick Lin requested this essay. He professes to be a devotee of Neutral Evil to such a degree that he regards being lumped in with Ayn Rand as an insult, presumably because he thinks she was too soft on the good.

In the Pathfinder version of the game, neutral good is characterized as follows:

A neutral good character is good, but not shackled by order. He sees good where he can, but knows evil can exist even in the most ordered place.

A neutral good character does anything he can, and works with anyone he can, for the greater good. Such a character is devoted to being good, and works in any way he can to achieve it. He may forgive an evil person if he thinks that person has reformed, and he believes that in everyone there is a little bit of good.

In a fantasy campaign realm, the player characters typically encounter neutral good types as allies who render aid and assistance. Even evil player characters are quite willing to accept the assistance of the neutral good, knowing that the neutral good types are more likely to try to persuade them to the side of good than smite them with righteous fury. Neutral good creatures are not very common in most fantasy worlds—good types tend to polarize towards law and chaos.

Not surprisingly, neutral good types are also not very common in the real world. A neutral good person has no special commitment to order or lack of order—what matters is the extent to which a specific order or lack of order contributes to the greater good. For those devoted to the preservation of order, or its destruction, this can be rather frustrating.

While the neutral evil person embraces the moral theory of ethical egoism (that each person should act solely in her self-interest), the neutral good person embraces altruism—the moral view that each person should act in the interest of others. In more informal terms, the neutral good person is not selfish. It is not uncommon for the neutral good position to be portrayed as stupidly altruistic. This stupid altruism is usually cast in terms of the altruist sacrificing everything for the sake of others or being willing to help anyone, regardless of who the person is or what she might be doing. While a neutral good person is willing to sacrifice for others and willing to help people, being neutral good does not require a person to be unwise or stupid. So, a person can be neutral good and still take into account her own needs. After all, the neutral good person considers the interests of everyone and she is part of that everyone. A person can also be selective in her assistance and still be neutral good. For example, helping an evil person do evil things would not be a good thing and hence a neutral good person would not be obligated to help—and would probably oppose the evil person.

Since a neutral good person works for the greater good, the moral theory of utilitarianism tends to fit this alignment. For the utilitarian, actions are good to the degree that they promote utility (what is of value) and bad to the degree that they do the opposite. Classic utilitarianism (that put forth by J.S. Mill) takes happiness to be good and actions are assessed in terms of the extent to which they create happiness for humans and, as far as the nature of things permit, sentient beings. Put in bumper sticker terms, both the utilitarian and the neutral good advocate the greatest good for the greatest number.

This commitment to the greater good can present some potential problems. For the utilitarian, one classic problem is that what seems rather bad can have great utility. For example, Ursula K. Le Guin’s classic short story “The Ones Who Walk Away from Omelas” puts into literary form the question raised by William James:

Or if the hypothesis were offered us of a world in which Messrs. Fourier’s and Bellamy’s and Morris’s utopias should all be outdone, and millions kept permanently happy on the one simple condition that a certain lost soul on the far-off edge of things should lead a life of lonely torture, what except a specifical and independent sort of emotion can it be which would make us immediately feel, even though an impulse arose within us to clutch at the happiness so offered, how hideous a thing would be its enjoyment when deliberately accepted as the fruit of such a bargain?

In Le Guin’s tale, the splendor, health and happiness of the land of Omelas depend on the suffering of a person locked away in a dungeon, cut off from all kindness. The inhabitants of Omelas know full well the price they pay and some, upon learning of the person, walk away. Hence the title.

For the utilitarian, this scenario would seem to be morally correct: a small disutility on the part of the person leads to a vast amount of utility. Or, in terms of goodness, the greater good seems to be well served.

Because the suffering of one person creates such an overabundance of goodness for others, a neutral good character might tolerate the situation. After all, benefiting some almost always comes at the cost of denying or even harming others. It is, however, also reasonable to consider that a neutral good person would find the situation morally unacceptable. Such a person might not free the sufferer because doing so would harm so many other people, but she might elect to walk away.

A chaotic good type, who is committed to liberty and freedom, would certainly oppose the imprisonment of the innocent person—even for the greater good. A lawful good type might face the same challenge as the neutral good type: the order and well being of Omelas rests on the suffering of one person and this could be seen as a heroic sacrifice on the part of the sufferer. Lawful evil types would probably be fine with the scenario, although they would have some issues with the otherwise benevolent nature of Omelas. Truly subtle lawful evil types might delight in the situation and regard it as a magnificent case of self-delusion in which people think they are selecting the greater good but are merely choosing evil.

Neutral evil types would also be fine with it—provided that it was someone else in the dungeon. Chaotic evil types would not care about the sufferer, but would certainly seek to destroy Omelas. They might, ironically, try to do so by rescuing the sufferer and seeing to it that he is treated with kindness and compassion (thus breaking the conditions of Omelas’ exalted state).


DBS, Enhancement & Ethics

Placement of an electrode into the brain. The head is stabilised in a frame for stereotactic surgery. (Photo credit: Wikipedia)

Deep Brain Stimulation (DBS) involves the surgical implantation of electrodes into a patient’s brain that, as the name indicates, stimulate the brain. Currently the procedure is used to treat movement disorders (such as Parkinson’s disease, dystonia and essential tremor) and Tourette’s syndrome. Research is underway on using the procedure to treat neuropsychiatric disorders (such as PTSD) and there are some indications that it can help with the memory loss inflicted by Alzheimer’s disease.

From a moral standpoint, the use of DBS in treating such conditions seems no more problematic than using surgery to repair a broken bone. If these were the only applications for DBS, then there would be no real moral concerns about the process. However, as is sometimes the case in medicine, there are potential applications that do raise moral concerns.

One matter for concern has actually been a philosophical problem for some time. To be specific, DBS can be used to stimulate the nucleus accumbens (a part of the brain associated with pleasure). While this can be used to treat depression, it can also (obviously) be used to create pleasure directly—the infamous pleasure machine scenario of so many Ethics 101 classes (the older version of which is the classic pig objection most famously considered by J.S. Mill in his work on Utilitarianism). Thanks to these stock discussions, the ethical ground of pleasure implants is well covered (although, as always, there are no truly decisive arguments).

While the sci-fi/philosophy scenario of people in pleasure comas is interesting, what is rather more interesting is the ethics of DBS as a life-enhancer. That is, getting the implant not to use to excess or in place of “real” pleasure, but just to make life a bit better. To use the obvious analogy, the excessive scenario is like drinking oneself into a stupor, while the life-upgrade would be like having a drink with dinner. On the face of it, it would be hard to object if the effect was simply to make a person feel a bit better about life—and it could even be argued that this would be preventative medicine. Just as a person might be on medication to keep from developing high blood pressure or exercise to ward off diabetes, a person might get a brain boost to ward off potential depression. That said, there is the obvious concern of abusing the technology (and the iron law of technology states that any technology that can be abused, will be abused).

Another area of concern is the use of DBS for other enhancements. To use a specific example, if DBS can improve memory in Alzheimer’s patients, then it could do the same for healthy people. It is not difficult to imagine some people seeking to boost their memory or other abilities through this technology. This, of course, is part of the general topic of brain enhancements (which is part of the even more general topic of enhancements). As David Noonan has noted, DBS could become analogous to cosmetic/plastic surgery: what was once intended to treat serious injuries has become an elective enhancement surgery. Just as people seek to enhance their appearance by surgery, it seems reasonable to believe that they will do so to enhance their mental abilities. As long as there is money to be made here, many doctors will happily perform the surgeries—so it is probably a question of when rather than if DBS will be used for enhancement rather than for treatment.

From a moral standpoint, there is the same concern that has long held regarding cosmetic surgery, namely the risk of harm for the sake of enhancement. However, if enhancing one’s looks via surgery is morally acceptable, then enhancing one’s mood, memory and so on should certainly be acceptable as well. In fact, it could be argued that such substantial improvements are more laudable than merely improving appearance.

There is also the stock moral concern with fairness: those who can afford such enhancements will have yet another advantage over those less well off, thus widening the gap even more. This is, of course, a legitimate concern. But, aside from the nature of the specific advantage, it raises nothing new morally. If it is acceptable for the wealthy to buy advantages in other ways, there seems to be no reason to treat this as a special exception.

There are, of course, two practical matters to consider. The first is whether or not DBS will prove effective in enhancement. The answer seems likely to be “yes.” The second is whether or not DBS will be tarnished by a disaster (or disasters). If something goes horribly wrong in a DBS procedure and this grabs media attention, this could slow the acceptance of DBS. That said, horrific tales involving cosmetic surgery did little to slow down its spread. So, someday soon people will go in to get a facelift, a memory lift and a mood lift. Better living through surgery.


Ethics & Free Will

Conscience and law (Photo credit: Wikipedia)

Azim Shariff and Kathleen Vohs recently had their article, “What Happens to a Society That Does Not Believe in Free Will”, published in Scientific American. This article considers the causal impact of a disbelief in free will with a specific focus on law and ethics.

Philosophers have long addressed the general problem of free will as well as the specific connection between free will and ethics. Not surprisingly, studies conducted to determine the impact of disbelief in free will have yielded the results that philosophers have long predicted.

One impact is that when people have doubts about free will they tend to have less support for retributive punishment. Retributive punishment, as the name indicates, is punishment aimed at making a person suffer for her misdeeds. Doubt in free will did not negatively impact a person’s support for punishment aimed at deterrence or rehabilitation.

While the authors do consider one reason for this, namely that those who doubt free will would regard wrongdoers as analogous to harmful natural phenomena that need to be dealt with rather than subjected to vengeance, this view also matches a common view about moral accountability. To be specific, moral (and legal) accountability is generally proportional to the control a person has over events. To use a concrete example, consider the difference between these two cases. In the first case, Sally is driving well above the speed limit and is busy texting and sipping her latte. She doesn’t see the crossing guard frantically waving his sign and runs over the children in the crosswalk. In case two, Jane is driving the speed limit and children suddenly run directly in front of her car. She brakes and swerves immediately, but she hits the children. Intuitively, Sally has acted in a way that was morally wrong—she should have been going the speed limit and she should have been paying attention. Jane, though she hit the children, did not act wrongly—she could not have avoided the children and hence is not morally responsible.

For those who doubt free will, every case is like Jane’s case: for the determinist, every action is determined and a person could not have chosen to do other than she did. On this view, while Jane’s accident seems unavoidable, so was Sally’s accident: Sally could not have done other than she did. As such, Sally is no more morally accountable than Jane. For someone who believes this, inflicting retributive punishment on Sally would be no more reasonable than seeking vengeance against Jane.

However, it would seem to make sense to punish Sally to deter others and to rehabilitate Sally so she will drive the speed limit and pay attention in the future. Of course, if there is no free will, then we would not choose to punish Sally, she would not choose to behave better and people would not decide to learn from her lesson. Events would happen as determined—she would be punished or not. She would do it again or not. Other people would do the same thing or not. Naturally enough, to speak of what we should decide to do in regards to punishments would seem to assume that we can choose—that is, that we have some degree of free will.

A second impact that Shariff and Vohs noted was that a person who doubts free will tends to behave worse than a person who does not have such a skeptical view. One specific area in which behavior worsens is that such skepticism seems to incline people to be more willing to harm others. Another specific area is that such skepticism also inclines others to lie or cheat. In general, the impact seems to be that the skepticism reduces a person’s willingness (or capacity) to resist impulsive reactions in favor of greater restraint and better behavior.

Once again, this certainly makes sense. Going back to the examples of Sally and Jane, Sally (unless she is a moral monster) would most likely feel remorse and guilt for hurting the children. Jane, though she would surely feel bad, would not feel moral guilt. This would certainly be reasonable: a person who hurts others should feel guilt if she could have done otherwise but should not feel moral guilt if she could not have done otherwise (although she certainly should feel sympathy). If someone doubts free will, then she will regard her own actions as being out of her control: she is not choosing to lie, or cheat or hurt others—these events are just happening. People might be hurt, but this is like a tree falling on them—it just happens. Interestingly, these studies show that people are consistent in applying the implications of their skepticism in regards to moral (and legal) accountability.

One rather important point is to consider what view we should have regarding free will. I take a practical view of this matter and believe in free will. As I see it, if I am right, then I am…right. If I am wrong, then I could not believe otherwise. So, choosing to believe I can choose is the rational choice: I am right or I am not at fault for being wrong.

I do agree with Kant that we cannot prove that we have free will. He believed that the best science of his day was deterministic and that the matter of free will was beyond our epistemic abilities. While science has marched on since Kant, free will is still unprovable. After all, deterministic, random and free-will universes would all seem the same to the people in them. Crudely put, there are no observations that would establish or disprove metaphysical free will. There are, of course, observations that can indicate that we are not free in certain respects—but completely disproving (or proving) free will would seem to beyond our abilities—as Kant contended.

Kant had a fairly practical solution: he argued that although free will cannot be proven, it is necessary for ethics. So, crudely put, if we want to have ethics (which we do), then we need to accept the existence of free will on moral grounds. The experiments described by Shariff and Vohs seem to support Kant: when people doubt free will, this has an impact on their ethics.

One aspect of this can be seen as positive—determining the extent to which people are in control of their actions is an important part of determining what is and is not a just punishment. After all, we do not want to inflict retribution on people who could not have done otherwise or, at the very least, we would want relevant circumstances to temper retribution with proper justice. It also makes more sense to focus on deterrence and rehabilitation more than retribution. However just, retribution merely adds more suffering to the world while deterrence and rehabilitation reduce it.

The second aspect of this is negative—skepticism about free will seems to cause people to think that they have a license to do ill, thus leading to worse behavior. That is clearly undesirable. This then, provides an interesting and important challenge: balancing our view of determinism and freedom in order to avoid both unjust punishment and becoming unjust. This, of course, assumes that we have a choice. If we do not, we will just do what we do and giving advice is pointless. As I jokingly tell my students, a determinist giving advice about what we should do is like someone yelling advice to a person falling to certain death—he can yell all he wants about what to do, but it won’t matter.


Twitter Mining

Image via CrunchBase

In February 2014, Twitter made all its tweets available to researchers. As might be suspected, this massive data set is a potential treasure trove to researchers. While one might picture researchers going through the tweets for the obvious content (such as what people eat and drink), this data can be mined in some potentially surprising ways. For example, the spread of infectious diseases can be tracked via an analysis of tweets. This sort of data mining is not new—some years ago I wrote an essay on the ethics of mining data and used Target’s analysis of data to determine when customers were pregnant (so as to send targeted ads). What is new about this is that all the tweets are now available to researchers, thus providing a vast heap of data (and probably a lot of crap).

As might be imagined, there are some ethical concerns about the use of this data. While some might suspect that this creates a brave new world for ethics, this is not the case. While the availability of all the tweets is new and the scale is certainly large, this scenario is old hat for ethics. First, tweets are public communications that are on par morally with yelling statements in public places, posting statements on physical bulletin boards, putting an announcement in the paper and so on. While the tweets are electronic, this is not a morally relevant distinction. As such, researchers delving into the tweets is morally the same as a researcher looking at a bulletin board for data or spending time in public places to see the number of people who go to a specific store.

Second, tweets can (often) be linked to a specific person and this raises the stock concern about identifying specific people in the research. For example, identifying Jane Doe as being likely to have an STD based on an analysis of her tweets. While twitter provides another context in which this can occur, identifying specific people in research without their consent seems to be well established as being wrong. For example, while a researcher has every right to count the number of people going to a strip club via public spaces, to publish a list of the specific individuals visiting the club in her research would be morally dubious—at best. As another example, a researcher has every right to count the number of runners observed in public spaces. However, to publish their names without their consent in her research would also be morally dubious at best. Engaging in speculation about why they run and linking that to specific people would be even worse (“based on the algorithm used to analyze the running patterns, Jane Doe is using her running to cover up her affair with John Roe”).

One counter is, of course, that anyone with access to the data and the right sorts of algorithms could find out this information for herself. This would simply be an extension of the oldest method of research: making inferences from sensory data. In this case the data would be massive and the inferences would be handled by computers—but the basic method is the same. Presumably people do not have a privacy right against inferences based on publicly available data (a subject I have written about before). Speculation would presumably not violate privacy rights, but could enter into the realm of slander—which is distinct from a privacy matter.

However, such inferences would seem to fall under privacy rights in regards to the professional ethics governing researchers—that is, researchers should not identify specific people without their consent whether they are making inferences or not. To use an analogy, if I infer that Jane Doe and John Roe’s public running patterns indicate they are having an affair, I have not violated their right to privacy (assuming this also covers affairs). However, if I were engaged in running research and published this in a journal article without their permission, then I would presumably be acting in violation of research ethics.

The obvious counter is that as long as a researcher is not engaged in slander (that is, intentionally saying untrue things that harm a person), then there would be little grounds for moral condemnation. After all, as long as the data was publicly gathered and the link between the data and the specific person is also in the public realm, then nothing wrong has been done. To use an analogy, if someone is in a public park wearing a nametag and engages in specific behavior, then it seems morally acceptable to report that. To use the obvious analogy, this would be similar to the ethics governing journalism: public behavior by identified individuals is fair game. Inferences are also fair game—provided that they do not constitute slander.

In closing, while Twitter has given researchers a new pile of data, the company has not created any new moral territory.


Anyone Home?

Man coming out of coma. (Photo credit: Wikipedia)

As I tell my students, the metaphysical question of personal identity has important moral implications. One scenario I present is that of a human in what seems to be a persistent vegetative state. I say “human” rather than “person”, because the human body in question might no longer be a person. To use a common view, if a person is her soul and the soul has abandoned the shell, then the person is gone.

If the human is still a person, then it seems reasonable to believe that she has a different moral status than a mass of flesh that was once a person (or once served as the body of a person). This is not to say that a non-person human would have no moral status at all—I do not want to be interpreted as holding that view. Rather, my view is that personhood is a relevant factor in the morality of how an entity is treated.

To use a concrete example, consider a human in what seems to be a vegetative state. While the body is kept alive, people do not talk to the body and no attempt is made to entertain the body, such as playing music or audiobooks. If there is no person present or if there is a person present but she has no sensory access at all, then this treatment would seem to be acceptable—after all it would make no difference whether people talked to the body or not.

There is also the moral question of whether such a body should be kept alive—after all, if the person is gone, there would not seem to be a compelling reason to keep an empty shell alive. To use an extreme example, it would seem wrong to keep a headless body alive just because it can be kept alive. If the body is no longer a person (or no longer hosts a person), then this would be analogous to keeping the headless body alive.

But, if despite appearances, there is still a person present who is aware of what is going on around her, then the matter is significantly different. In this case, the person has been effectively isolated—which is certainly not good for a person.

In regards to keeping the body alive, if there is a person present, then the situation would be morally different. After all, the moral status of a person is different from that of a mass of merely living flesh. The moral challenge, then, is deciding what to do.

One option is, obviously enough, to treat all seemingly vegetative (as opposed to brain dead) bodies as if the person was still present. That is, the body would be accorded the moral status of a person and treated as such.

This is a morally safe option—it would presumably be better that some non-persons get treated as persons rather than risk persons being treated as non-persons. That said, it would still seem both useful and important to know whether a person is in fact present.

One reason to know is purely practical: if people know that a person is present, then they would presumably be more inclined to take the effort to treat the person as a person. So, for example, if the family and medical staff know that Bill is still Bill and not just an empty shell, they would tend to be more diligent in treating Bill as a person.

Another reason to know is both practical and moral: should scenarios arise in which hard choices have to be made, knowing whether a person is present or not would be rather critical. That said, given that one might not know for sure that the body is not a person anymore, it could be correct to keep treating the alleged shell as a person even when it seems likely that he is not. This brings up the obvious practical problem: how to tell when a person is present.

Most of the time we judge there is a person present based on appearance, using the assumption that a human is a person. Of course, there might be non-human people and there might be biological humans that are not people (headless bodies, for example). A somewhat more sophisticated approach is to use Descartes’s test: things that use true language are people. Descartes, being a smart person, did not limit language to speaking or writing—he included making signs of the sort used to communicate with the deaf. In a practical sense, getting an intelligent response to an inquiry can be seen as a sign that a person is present.

In the case of a body in an apparent vegetative state, applying this test is quite a challenge. After all, this state is marked by an inability to show awareness. In some cases, the apparent vegetative state is exactly what it appears to be. In other cases, a person might be in what is called “locked-in syndrome.” The person is conscious, but can be mistaken for being minimally conscious or in a vegetative state. Since the person typically cannot respond by giving an external sign, some other means is necessary.

One breakthrough in this area is due to Adrian M. Owen. Oversimplifying things considerably, he found that if a person is asked to visualize certain activities (playing tennis, for example), doing so will trigger different areas of the brain. This activity can be detected using the appropriate machines. So, a person can ask a question such as “did you go to college at Michigan State?” and request that the person visualize playing tennis for “yes” or visualize walking around her house for “no.” This method provides a way of determining that the person is still present with a reasonable degree of confidence. Naturally, a failure to respond would not prove that a person is not present—the person could still remain, yet be unable (or unwilling) to hear or respond.

One moral issue this method can help address is that of terminating life support. “Pulling the plug” on what might be a person without consent is, to say the least, morally problematic. If a person is still present and can be reached by Owen’s method, then this would allow the person to agree to or request that she be taken off life support. Naturally, there would be practical questions about the accuracy of the method, but this is distinct from the more abstract ethical issue.

It must be noted that the consent of the person would not automatically make termination morally acceptable—after all, there are moral objections to letting a person die in this manner even when the person is fully and clearly conscious. Once it is established that the method adequately shows consent (or lack of consent), the broader moral issue of the right to die would need to be addressed.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page