
Bionic Ethics


Although bionics have been part of science fiction for quite some time (a well-known example is the Six Million Dollar Man), the reality of prosthetics has long been rather disappointing. But, thanks to America’s endless wars and recent advances in technology, bionic prosthetics are now a reality. There are now replacement legs that replicate the functionality of the original organics amazingly well. There have also been advances in prosthetic arms and hands as well as progress in artificial sight.  As with all technology, these bionic devices raise some important ethical issues.

The easiest moral issue to address is that involving what could be called restorative bionics. These are devices that restore a degree of the original functionality possessed by the lost limb or organ. For example, a soldier who lost the lower part of her leg to an IED in Iraq might receive a bionic device that restores much of the functionality of the lost leg. As another example, a person who lost an arm in an industrial accident might be fitted with a replacement arm that does some of what he could do with the original.

On the face of it, the burden of proof would seem to rest on those who would claim that the use of restorative bionics is immoral—after all, they merely restore functionality. However, there is still the moral concern about the obligation to provide such restorative bionics. One version of this is the matter of whether or not the state is morally obligated to provide such devices to soldiers maimed in the course of their duties. Another is whether or not insurance should cover such devices for the general population.

In general, the main argument against both obligations is financial—such devices are still rather expensive. Turned into a utilitarian moral argument, the argument would be that the cost outweighs the benefits; therefore the state and insurance companies should not pay for such devices. One reply, at least in the case of the state, is that the state owes the soldiers restoration. After all, if a soldier lost the use of a body part (or parts) in the course of her duty, then the state is obligated to replace that part if it is possible. Roughly put, if Sally gave her leg for her country and her country can provide her with a replacement bionic leg, then it should do so.

In the case of insurance, the matter is somewhat more complicated. In the United States, insurance is mostly a private, for-profit business. As such, a case can be made that the obligations of the insurance company are limited to the contract with the customer. So, if Sam has coverage that pays for his leg replacement, then the insurance company is obligated to honor that. If Bill does not have such coverage, then the company is not obligated to provide the replacement.

Switching to a utilitarian counter, it can be argued that the bionic replacements actually save money in the long term. Inferior prosthetics can cause the user pain, muscle and bone issues and other problems that result in more ongoing costs. In contrast, a superior prosthetic can avoid many of those problems and also allow the person to better return to the workforce or active duty. As such, there seem to be excellent reasons in support of the state and insurance companies providing such restorative bionics. I now turn to the ethics of bionics in sports.

Thanks to the (now infamous) “Blade Runner” Oscar Pistorius, many people are familiar with unpowered, relatively simple prosthetic legs that allow people to engage in sports. Since these devices seem to be inferior to the original organics, there is little moral worry here in regards to fairness. After all, a device that merely allows a person to compete as he would with his original parts does not seem to be morally problematic. This is because it confers no unfair advantage and merely allows the person to compete more or less normally. There is, however, the concern about devices that are inferior to the original—these would put an athlete at a disadvantage and could warrant special categories in sports to allow for fair competition. Some of these categories already exist and more should be expected in the future.

Of greater concern are bionic devices that are superior to the original organics in relevant ways. That is, devices that could make a person faster, better or stronger. For example, powered bionic legs could allow a person to run at higher speeds than normal and also avoid the fatigue that limits organic legs. As another example, a bionic arm coupled with a bionic eye could allow a person incredible accuracy and speed in pitching. While such augmentations could make for interesting sporting events, they would seem to be clearly unethical when used in competition against unaugmented athletes. To use the obvious analogy, just as it would be unfair for a person to use a motorcycle in a 5K foot race, it would be unfair for a person to use bionic legs that are better than organic legs. There could, of course, be augmented sports competitions—these might even be very popular in the future.

Even if the devices did not allow for superior performance, it is worth considering that they might be banned from competition for other reasons. For example, even if someone’s powered legs only allowed them a slow jog in a 5K, this would be analogous to using a mobility scooter in such a race—though it would be slow, the competitor is not moving under her own power. Naturally, there should be obvious exceptions for events that are merely a matter of participation (like charity walks).

Another area of moral concern is the weaponization of bionic devices. When I was in graduate school, I made some of my Ramen noodle money writing for R. Talsorian Games’ Cyberpunk. This science fiction game featured a wide selection of implanted weapons as well as weapon-grade cybernetic replacement parts. Fortunately, these weapons do not add a new moral problem since they fall under the existing ethics regarding weaponry, concealed or otherwise. After all, a gun in the hand is still a gun, whether it is held in an organic hand or literally inside a mechanical hand.

One final area of concern is that people will elect to replace healthy organic parts with bionic components either to augment their abilities or out of a psychological desire or need to do so. Science fiction, such as the above-mentioned Cyberpunk, has explored these problems and even come up with a name for the mental illness caused by a person becoming more machine than human: cyberpsychosis.

In general, augmenting for improvement does seem morally acceptable, provided that there are no serious side effects (like cyberpsychosis) or other harms. However, it is easy enough to imagine various potential dangers: augmented criminals, the poor being unable to compete with the augmented rich, people being compelled to upgrade to remain competitive, and so on—all fodder for science fiction stories.

As far as people replacing their healthy organic parts because of some sort of desire or need to do so, that would also seem acceptable as a form of lifestyle choice. This, of course, assumes that the procedures and devices are safe and do not pose health risks. Just as people should be allowed to have tattoos, piercings and such, they should be allowed to biodecorate.

 


Why You Should (Probably) Not Be A Professor


While I like being a professor, I am obligated to give a warning to those considering this career path. To be specific, I would warn you to reconsider. This is not because I fear the competition (I am a tenured full professor, so I won’t be competing with anyone for a job). It is not because I have turned against my profession to embrace anti-intellectualism or some delusional ideology about the awfulness of professors. It is not even due to disillusionment. I still believe in education and the value of educators. My real reason is altruism and honesty: I want potential professors to know the truth because it will benefit them. I now turn to the reasons.

First, there is the cost. In order to be a professor, you will need a terminal degree in the field—typically a Ph.D. This means that you will need to first get a B.A. or B.S., and college is rather expensive these days. Student debt, as the media has been pointing out, is at a record high. While a bachelor’s degree is, in general, a great investment, you will need to go beyond that and complete graduate school.

While graduate school is expensive, many students work as teaching or research assistants. These positions typically pay the cost of tuition and provide a very modest paycheck. Since the pay is low and the workload is high, you will be more or less in a holding pattern for the duration of grad school in terms of pay and probably life. After 3-7+ years, you will (if you are persistent and lucky) have the terminal degree.

If you are paying for graduate school, it will be rather expensive and will no doubt add to your debt. You might be able to work a decent job at the same time, but that will tend to slow down the process, thus dragging out graduate school.

Regardless of whether you had to pay or not, you will be attempting to start a career after about a decade (or more) in school—so be sure to consider that fact.

Second, the chances of getting a job are usually not great. While conditions do vary, the general trend has been that education budgets have been getting smaller and universities are spending more on facilities and administrators. As such, if you are looking for a job in academics, your better bet is to try to become an administrator rather than a professor. The salary for administrators is generally better than that of professors, although the elite coaches of the prestige sports have the very best salaries.

When I went on the job market in 1993, it was terrible. When I applied, I would get a form letter saying how many hundreds of people applied and how sorry the search committee was about my not getting an interview. I got my job by pure chance—I admit this freely. While the job market does vary, the odds are not great. So, consider this when deciding on the professor path.

Third, even if you do get a job, it is more likely to be a low-paying, benefit-free adjunct position. Currently, 51.2% of faculty in non-profit colleges and universities are adjunct faculty. The typical pay for an adjunct is $20,000-25,000 per year and most positions have neither benefits nor security. The average salary for professors is $84,000. This is good, but not as good as what a person with an advanced degree makes outside of academics. Also, it is worth noting that the average salary for someone with just a B.A. is $45,000. By the numbers, if you go for a professorship, the odds are that you will be worse off financially than if you just stuck with a B.A. and went to work.
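
To make the comparison concrete, here is a rough back-of-the-envelope sketch in Python using the figures cited above. Treating the 51.2% adjunct share as a stand-in for the chance that an academic job turns out to be an adjunct job, and the grad-school length and career horizon, are my own illustrative assumptions, not measured data.

```python
# Back-of-the-envelope lifetime comparison using the figures cited in the text.
# The adjunct share is used as a stand-in for the chance that an academic job
# is an adjunct job; grad-school length and career horizon are assumptions.

adjunct_share = 0.512      # share of faculty who are adjuncts (from the text)
adjunct_pay = 22_500       # midpoint of the cited $20,000-$25,000 range
professor_pay = 84_000     # cited average professor salary
ba_pay = 45_000            # cited average salary with just a B.A.

grad_years = 6             # assumed years of grad school (text: "3-7+ years")
career_years = 40          # assumed working years after the B.A.

# Expected yearly pay for someone who lands an academic job at all.
expected_academic_pay = adjunct_share * adjunct_pay + (1 - adjunct_share) * professor_pay

# Academic path: grad-school years earn (roughly) nothing, then academic pay.
academic_lifetime = (career_years - grad_years) * expected_academic_pay
ba_lifetime = career_years * ba_pay

print(f"Expected academic salary per year: ${expected_academic_pay:,.0f}")
print(f"Academic path, lifetime earnings:  ${academic_lifetime:,.0f}")
print(f"B.A.-only path, lifetime earnings: ${ba_lifetime:,.0f}")
```

Under these assumptions the two paths come out roughly even before student debt and lost benefits are counted, which is the point being made above.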

Fourth, the workload of professors is rather higher than people think. While administrative, teaching and research loads vary, professors work about 61 hours per week, including weekends (typically grading, class prep and research). Thanks to budget cuts and increased enrollment, class sizes have tended to increase or remain high. For example, I typically have 150+ students per semester, and three of my classes are considered “writing intensive” (= lots of papers to grade).

While people do like to point out that professors get summers off, it is important to note that a summer off is a summer without pay. Also, even when a professor is not under contract for the summer, she is typically still doing research and class preparation. So, if you are dreaming about working two or three days a week and having an easy life, then being a professor is not the career for you.

Fifth, the trend in academics has been that professors do more and more uncompensated administrative work on top of their academic duties (research, teaching, advising, etc.). As one extreme example, one semester I was teaching four classes, advising, writing a book, directing the year-long seven-year program review, completing all the assessment tasks, and serving on nine committees. So, be sure to consider the joys of paperwork and meetings when considering being a professor.

Sixth, while there was a time when professors were well-respected, that respect has faded. Some of this is due to the politicization of education. Those seeking to cut budgets to lower taxes, to transform education into a for-profit industry, and to break education unions have done an able job demonizing the profession and academics. Some is, to be fair, due to professors. As a whole, we have not done as good a job as we should in making the case for our profession in the public arena.

Seventh, while every generation claims that the newer generations are worse, the majority of students today see education as a means to the end of getting a well-paying job (or just a job). Given the economy that our political and financial elites have crafted, this is certainly a sensible and pragmatic approach. However, it has also translated into less student interest. So, if you are expecting students who value education, you must prepare for disappointment. The new model of education, as crafted by state legislators, administrators and the business folks is to train the job fillers for the job creators. The students have largely accepted this model as well, with some exceptions.

Finally, the general trend in politics has been one of increased hostility to education and in favor of seeing education as yet another place to make money. So, things will continue to worsen—perhaps to the point that professors will all be low-paid workers in the for-profit education factories that are manufacturing job fillers for the job creators.

In light of all this, you should probably not be a professor.

 


Catcalling


For those not familiar with the term, to catcall is to whistle, shout or make a comment of a sexual nature to a person passing by. In general, the term is used when the person being harassed is a woman, but men can also be subject to such harassment.

Thanks to a video documenting a woman’s 10 hours of being catcalled as she walked through New York City, catcalling has garnered considerable attention. While it is well known that men catcall, it is less obvious why men engage in this behavior.

Some men seem to hold to the view that they have a right to catcall. As one man put it, “if you have a beautiful body, why can’t I say something?” This view seems to have two main parts. The first (“you have a beautiful body”) seems to indicate that the woman is responsible for the response of men because she has a beautiful body. It is, I think, reasonable to accept the idea that beauty, be it in a person or painting, can evoke a response from a viewer. The problem is, however, that a catcall is not a proper response to beauty and certainly not a proper response to a person. Also, while a woman’s appearance might cause a reaction, the verbal response chosen by the man (or boy) is his responsibility. To use an analogy, seeing a cake at a wedding might make me respond with hunger, but if I chose to paw at the cake and drool on it, then the response (which is very inappropriate) is my choice. To forestall any criticism, I am not saying that women are objects—I just needed an analogy and I am hungry as I write this. Hence the cake analogy.

The second part (“why can’t I say something?”) seems to indicate that the man has a presumptive right to catcall. Put another way, this seems to assume that the burden of proving that men should not catcall rests on women and that it should be assumed that a man has such a right. While the moral right to free speech does entail that men have a right to express their views, there is also the matter of whether it is right to engage in such catcalling. I would say not, on the grounds that the harm done to women by men catcalling them outweighs the harm that would be done to men if they did not engage in such behavior. While I am wary of any laws that infringe on free expression, I do hold that men should not (in the moral sense) behave this way.

This question also seems to show a sense of entitlement—that the man seeing the woman as beautiful entitles him to harass her. This seems similar to believing that seeing someone as unattractive warrants saying derogatory things about the person. Again, while people do have a freedom of expression, there are things that are unethical to express.

Some men also claim that the way a woman dresses warrants their behavior. As one young man said, “If a girl comes out in tight leggings, and you can see something back there… I’m saying something.” This is, obviously enough, just an expression of the horrible view that a woman invites or deserves the actions of men by her choice of clothing. This “justification” is best known as a “defense” for rape—the idea that the woman was “asking for it” because she was dressed in provocative clothing. However, a woman’s mode of dress does not warrant her being catcalled or attacked. After all, if a man was wearing an expensive Rolex watch and he was robbed, it would not be said that he was provocative or was “asking for it” by displaying such an expensive timepiece. Naturally, it might be a bad idea to dress a certain way or wear an expensive watch when going certain places, but this does not justify the catcalling or robbery.

There has been some speculation that catcalling, like everything else, is the result of natural selection. Looked at one way, if the theory of evolution is correct and one also accepts the notion that human behavior is determined (rather than free), then this would be true. This is because all human behavior would be the result of such selection and determining factors. In this case, one cannot really say that the behavior would be wrong, at least if something being immoral requires that the person engaging in the behavior could do otherwise. If a person cannot do otherwise, placing blame or praise on the person would be pointless—like praising or blaming water for boiling at a certain temperature and pressure. Looked at another way, it might be useful to consider the evolutionary forces that might lead to the behavior.

One possible “just so” story is that males would call out to passing females as a form of mating display (like how birds display for each other). Some of the females would respond positively and thus the catcalling genes would be passed on to future generations of men who would in turn catcall women to attract a mate.

One reason to accept this view is that some forms of what could be regarded as catcalling do seem to work. Having been on college campuses for decades, I have seen a vast amount of catcalling in various forms (including the “hollaback” thing). Some women respond by ignoring it, some respond with hostility, and some respond positively. While the positive response rate seems low, it is a low effort “fishing trip” and hence the cost to the male is rather small. After all, he just has to sit there and say things as “bait” in the hopes he will get a bite. Like fishing, a person might cast hundreds of times to catch a single fish.

One reason to reject this view is that many of the guys who use it will obviously never get a positive response. However, they might think they will—they are casting away like mad, not realizing that their “bait” will never work. After all, they might have seen it work for other guys and think they have a chance.

Moving away from evolution, one stock explanation for catcalling is that men do it as an expression of power—they are doing it to show (to themselves, other men and women) that they have power over women. A man might be an unfit, ugly, overweight, graceless, unemployed slob but he can make a fit, beautiful and successful woman feel afraid and awful by screeching about her buttocks or breasts. Of course, catcalling is not limited to such men, though the power motive would still seem to hold. This is clearly morally reprehensible because of the harm it does to women. Even if the woman is not afraid of the man, having to hear such things can diminish her enjoyment. While I am a man, I do understand what it is like to have stupid and hateful remarks yelled at me. When I was young and running was not as accepted as it is now, it was rare for me to go for a run without someone saying something stupid or hateful. Or throwing things. Being a reasonably large male, I did not feel afraid (most of those yelling did so from the safety of passing automobiles). However, such remarks did bother me—much in the way that being bitten by mosquitoes bothers me. That is, it just made the run less pleasant. As such, I have some idea of what it is like for women to be catcalled, but it is presumably much worse for them.

I have even been catcalled by women—but I am sure that it is not the same sort of experience that women face when catcalled by men. After all, the women who have catcalled me are probably just kidding (perhaps even being ironic) and, even if they are not, they almost certainly harbor no hostile intentions and present no real threat. To have a young college woman yell “nice ass” from her car as I run through the FSU campus is a weird sort of compliment rather than a threat. Though it is still weird.  In contrast, when men engage in such behavior it seems overtly predatory and threatening. So, stop catcalling, guys.

 


The Teenage Mind & Decision Making


One of the stereotypes regarding teenagers is that they are poor decision makers and engage in risky behavior. This stereotype is usually explained in terms of the teenage brain (or mind) being immature and lacking the reasoning abilities of adults. Of course, adults often engage in poor decision-making and risky behavior.

Interestingly enough, there is research that shows teenagers use basically the same sort of reasoning as adults and that they even overestimate risks (that is, regard something as more risky than it is). So, if kids use the same processes as adults and also overestimate risk, then what needs to be determined is how teenagers differ, in general, from adults.

Currently, one plausible hypothesis is that teenagers differ from adults in terms of how they evaluate the value of a reward. The main difference, or so the theory goes, is that teenagers place higher value on rewards (at least certain rewards) than adults. If this is correct, it certainly makes sense that teenagers are more willing than adults to engage in risk taking. After all, the rationality of taking a risk is typically a matter of weighing the (perceived) risk against the (perceived) value of the reward. So, a teenager who places higher value on a reward than an adult would be acting rationally (to a degree) if she was willing to take more risk to achieve that reward.

Obviously enough, adults also vary in their willingness to take risks and some of this difference is, presumably, a matter of the value the adults place on the rewards relative to the risks. So, for example, if Sam values the enjoyment of sex more than Sally, then Sam will (somewhat) rationally accept more risks in regards to sex than Sally. Assuming that teenagers generally value rewards more than adults do, then the greater risk taking behavior of teens relative to adults makes considerable sense.
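
To illustrate the weighing being described, here is a minimal expected-value sketch in Python. The numbers are entirely made up for illustration; the point is only that the same risk calculation can rationally come out differently when the reward is valued more highly.

```python
# Minimal expected-value version of the weighing described above.
# All numbers are illustrative assumptions, not empirical estimates.

def expected_value(p_harm: float, harm: float, reward: float) -> float:
    """Expected payoff of a risky act: the reward minus the expected harm."""
    return reward - p_harm * harm

p_harm = 0.10        # perceived probability that the risky act goes badly
harm = 50.0          # perceived severity of the harm (arbitrary "value units")
adult_reward = 4.0   # how much an adult values the reward
teen_reward = 8.0    # assumed higher valuation of the same reward by a teen

print("Adult:", expected_value(p_harm, harm, adult_reward))  # 4 - 5 = -1.0 -> not worth it
print("Teen: ", expected_value(p_harm, harm, teen_reward))   # 8 - 5 =  3.0 -> worth it
```

The risk and the reasoning are identical in both cases; only the subjective value placed on the reward differs, and that alone flips the verdict.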

It might be wondered why teenagers place more value on rewards relative to adults. One current theory is based in the workings of the brain. On this view, the sensitivity of the human brain to dopamine and oxytocin peaks during the teenage years. Dopamine is a neurotransmitter that is supposed to trigger the “reward” mechanisms of the brain. Oxytocin is another neurotransmitter, one that is also linked with the “reward” mechanisms as well as social activity. Assuming that the teenage brain is more sensitive to the reward triggering chemicals, then it makes sense that teenagers would place more value on rewards. This is because they do, in fact, get a greater reward than adults. Or, more accurately, they feel more rewarded. This, of course, might be one and the same thing—perhaps the value of a reward is a matter of how rewarded a person feels. This does raise an interesting subject, namely whether the value of a reward is a subjective or objective matter.

Adults are often critical of what they regard as irrationally risky behavior by teens. While my teen years are well behind me, I have looked back on some of my decisions that seemed like good ideas at the time. They really did seem like good ideas, yet my adult assessment is that they were not good decisions. However, I am weighing these decisions in terms of my adult perspective and in terms of the later consequences of these actions. I also must consider that the rewards that I felt in the past are now naught but faded memories. To use the obvious analogy, it is rather like eating an entire cake. At the time, that sugar rush and taste are quite rewarding and it seems like a good idea while one is eating that cake. But once the sugar rush gives way to the sugar crash and the cake, as my mother would say, “went right to the hips”, then the assessment might be rather different. The food analogy is especially apt: as you might well recall from your own youth, candy and other junk food tasted so good then. Now it is mostly just…junk. This also raises an interesting subject worthy of additional exploration, namely the assessment of value over time.

Going back to the cake, eating the whole thing was enjoyable and seemed like a great idea at the time. Yes, I have eaten an entire cake. With ice cream. But, in my defense, I used to run 95-100 miles per week. Looking back from the perspective of my older self, that seems to have been a bad idea and I certainly would not do that (or really enjoy doing so) today. But, does this change of perspective show that it was a poor choice at the time? I am tempted to think that, at the time, it was a good choice for the kid I was. But, my adult self now judges my kid self rather harshly and perhaps unfairly. After all, there does seem to be considerable relativity to value and it seems to be mere prejudice to say that my current evaluation should be automatically taken as being better than the evaluations of the past.

 


Eating What Bugs Us

Like most people, I have eaten bugs. Also, like most Americans, this consumption has been unintentional and often in ignorance. In some cases, I’ve sucked in a whole bug while running. In most cases, the bugs have been bug parts in food—the FDA allows a certain percentage of “debris” in our food and some of that is composed of bugs.

While Americans typically do not willingly and knowingly eat insects, about 2 billion people do and there are about 2,000 species that are known to be edible. As might be guessed, many of the people who eat insects live in developing countries. As the countries develop, people tend to switch away from eating insects. This is hardly surprising—eating meat is generally seen as a sign of status while eating insects typically is not. However, there are excellent reasons to utilize insects on a large scale as a food source for humans and animals. Some of these reasons are practical while others are ethical.

One practical reason to utilize insects as a food source is the efficiency of insects. 10 pounds of feed will yield 4.8 pounds of cricket protein, 4.5 pounds of salmon, 2.2 pounds of chicken, 1.1 pounds of pork, and .4 pounds of beef. With an ever-growing human population, increased efficiency will be critical to providing people with enough food.

A second practical reason to utilize insects as a food source is that they require less land to produce protein. For example, it takes 269 square feet to produce a pound of pork protein while it requires only 88 square feet to generate one pound of mealworm protein. Given an ever-expanding population and ever-less available land, this is a strong selling point for insect farming as a food source. It is also morally relevant, at least for those who are concerned about the environmental impact of food production.

A third reason, which might be rejected by those who deny climate change, is that producing insect protein generates less greenhouse gas. The above-mentioned pound of pork generates 38 pounds of CO2 while a pound of mealworms produces only 14. For those who believe that CO2 production is a problem, this is clearly both a moral and practical reason in favor of using insects for food. For those who think that CO2 has no impact or does not matter, this would be no advantage.
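
For readers who want the figures from the last three paragraphs side by side, here is a small Python sketch that simply restates them and derives the implied feed-to-protein ratios; no numbers beyond those quoted above are introduced.

```python
# The figures quoted in the three paragraphs above, gathered into one place.
# Only the numbers given in the text are used; the feed-to-protein ratios are
# derived directly from them.

protein_per_10lb_feed = {   # pounds produced from 10 pounds of feed
    "cricket": 4.8,
    "salmon": 4.5,
    "chicken": 2.2,
    "pork": 1.1,
    "beef": 0.4,
}

per_pound_of_protein = {    # (square feet of land, pounds of CO2) per pound of protein
    "pork": (269, 38),
    "mealworm": (88, 14),
}

for animal, pounds in sorted(protein_per_10lb_feed.items(), key=lambda kv: -kv[1]):
    print(f"{animal:<8} {pounds:>4.1f} lb from 10 lb of feed "
          f"(feed-to-protein ratio {10 / pounds:.1f}:1)")

for source, (land, co2) in per_pound_of_protein.items():
    print(f"{source:<8} {land} sq ft of land and {co2} lb of CO2 per lb of protein")
```

Put this way, crickets convert feed at roughly 2:1 while beef requires about 25:1, which is the efficiency gap the practical arguments turn on.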

A fourth practical reason is that while many food animals are fed using food that humans could also eat (like grain and corn based feed), many insects readily consume organic waste that is unfit for human consumption. As such, insects can transform low-value feed material (such as garbage) into higher value feed or food. This would also provide a moral reason, at least for those who favor reducing the waste that ends up in landfills. This could provide some interesting business opportunities and combinations—imagine a waste processing business that “processes” organic waste with insects and then converts the insects to feed, food or for use in other products (such as medicine, lipstick and alcoholic beverages).

Perhaps the main moral argument in favor of choosing insect protein over protein from animals such as chickens, pigs and cows is based on the assumption that insects have a lower moral status than such animals or at least would suffer less.

In terms of the lower status version, the argument would be a variation on one commonly used to support vegetarianism over eating meat: plants have a lower moral status than animals; therefore it is preferable to eat plants rather than animals. Assuming that insects have a lower moral status than chickens, pigs, cows, etc., then using insects for food would be morally preferable. This, of course, also rests on the assumption that it is preferable to do wrong (in this case kill and eat) to beings with a lesser moral status than to those with a higher status.

In terms of the suffering argument, this would be a stock utilitarian-style argument. The usual calculation involves weighing the harms (in this case, the suffering) against the benefits. Insects are, on the face of it, less able to suffer (and less able to understand their own suffering) than animals like pigs and cows. Also, insects would seem to suffer less under the conditions in which they would be raised. While chickens might be factory farmed with their beaks clipped and confined to tiny cages, mealworms would be pretty much doing what they would do in the “wild” when being raised as food. While the insect would still be killed, it would seem that the overall suffering generated by using insects as food would be far less than that created by using animals like pigs and cows as food. This would seem to be a morally compelling argument.

The most obvious problem with using insects as food is what people call the “yuck factor.” Bugs are generally seen as dirty and gross—things that you do not want to find in food, let alone being the food. Some of the “yuck” is visual—seeing the insect as one eats it. One obvious solution is to process insects into forms that look like “normal” foods, such as powders, pastes, and the classic “mystery meat patty.” People can also learn to overcome the distaste, much as some people have to overcome their initial rejection of foods like lobster and crab.

Another concern is that insects might bear the stigma of being a food suitable for “primitive” cultures and not suitable for “civilized” people. Insect-based food products might also be regarded as lacking in status, especially in contrast with traditional meats. These are, of course, all matters of social perception. Just as they are created, they can be altered. As such, these problems could be overcome.

Since I grew up eating lobsters and crabs (I’m from Maine), I am already fine with eating “bug-like” creatures. So, I would not have any problem with eating actual bugs, provided that they are safe to eat. I will admit that I probably will not be serving up plates of fried beetles to my friends, but I would have no problem serving up food containing properly processed insects. And not just because it would be, at least initially, funny.

 


Determinism, Order & Chaos


As science and philosophy explained ever more of the natural world in the Modern Era, there arose the philosophical idea of strict determinism. Strict determinism, as often presented, includes both metaphysical and epistemic aspects. In regards to the metaphysics, it is the view that each event follows from previous events by necessity. In negative terms, it is a denial of both chance and free will. A religious variant on this is predestination, which is the notion that all events are planned and set by a supernatural agency (typically God). The epistemic aspect is grounded in the metaphysics: if each event follows from other events by necessity, then someone who knew all the relevant facts about the state of a system at a time and had enough intellectual capability could correctly predict the future of that system. Philosophers and scientists who are metaphysical determinists typically claim that the world seems undetermined to us because of our epistemic failings. In short, we believe in choice or chance because we are unable to always predict what will occur. But, for the determinist, this is a matter of ignorance and not metaphysics. For those who believe in choice or chance, our inability to predict is taken as being the result of a universe in which choice or chance is real. That is, we cannot always predict because the metaphysical nature of the universe is such that it is unpredictable. Because of choice or chance, what follows from one event is not a matter of necessity.

One rather obvious problem for choosing between determinism and its alternatives is that given our limited epistemic abilities, a deterministic universe seems the same to us as a non-deterministic universe. If the universe is deterministic, our limited epistemic abilities mean that we often make predictions that turn out to be wrong. If the universe is not deterministic, our limited epistemic abilities and the non-deterministic nature of the universe mean that we often make predictions that are in error. As such, the fact that we make prediction errors is consistent with deterministic and non-deterministic universes.

It can be argued that as we get better and better at predicting we will be able to get a better picture of the nature of the universe. However, until we reach a state of omniscience we will not know whether our errors are purely epistemic (events are unpredictable because we are not perfect predictors) or are the result of metaphysics (that is, the events are unpredictable because of choice or chance).

Interestingly, one feature of reality that often leads thinkers to reject strict determinism is what could be called chaos. To use a concrete example, consider the motion of the planets in our solar system. In the past, the motion of the planets was presented as a sign of the order of the universe—a clockwork solar system in God’s clockwork universe. While the planets might seem to move like clockwork, Newton realized that the gravity of the planets affected each other but also realized that calculating the interactions was beyond his ability. In the face of problems in his physics, Newton famously used God to fill in the gaps. With the development of powerful computers, scientists have been able to model the movements of the planets and the generally accepted view is that they are not parts of a deterministic divine clock. To be less poetical, the view is that chaos seems to be a factor. For example, some scientists believe that the gas giant Jupiter’s gravity might change Mercury’s orbit enough that it collides with Venus or Earth. This certainly suggests that the solar system is not a clockwork machine of perfect order. Because of this sort of thing (which occurs at all levels in the world) some thinkers take the universe to include chaos and infer from the lack of perfect order that strict determinism is false. While this is certainly tempting, the inference is not as solid as some might think.

It is, of course, reasonable to infer that the universe lacks a strict and eternal order from such things as the chaotic behavior of the planets. However, strict determinism is not the same thing as strict order. Strict order is a metaphysical notion that a system will work in the same way, without any variation or change, for as long as it exists. The idea of an eternally ordered clockwork universe is an excellent example of this sort of system: it works like a perfect clock, each part relentlessly following its path without deviation. While a deterministic system would certainly be consistent with such an orderly system, determinism is not the same thing as strict order. After all, to accept determinism is to accept that each event follows by necessity from previous events. This is consistent with a system that changes over time and changes in ways that seem chaotic.

Returning to the example of the solar system, suppose that Jupiter’s gravity will cause Mercury’s orbit to change enough so that it hits the earth. This is entirely consistent with that event being necessarily determined by past events such that things could not have been different. To use an analogy, it is like a clockwork machine built with a defect that will inevitably break the machine. Things cannot be otherwise, yet to those ignorant of the defect, the machine will seem to fall into chaos. However, if one knew the defect and had the capacity to process the data, then this breakdown would be completely predictable. To use another analogy, it is like a scripted performance of madness by an actor: it might seem chaotic, but the script determines it. That is, it merely seems chaotic because of our ignorance. As such, the appearance of chaos does not disprove strict determinism because determinism is not the same thing as strict, unchanging order.
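
The compatibility of strict determinism with apparent chaos can be made concrete with a standard toy example (my illustration, not one the author uses): the logistic map. Each value follows from the previous one by a fixed rule, so the system is fully deterministic, yet in its chaotic regime a tiny error in one’s knowledge of the starting state quickly wrecks prediction—epistemic failure without any metaphysical chance.

```python
# A deterministic rule that looks chaotic: the logistic map x -> r * x * (1 - x).
# Each value follows from the previous one by necessity, yet a tiny error in
# one's knowledge of the starting state quickly ruins prediction.

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 30) -> list:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.20000000)
b = logistic_trajectory(0.20000001)  # "imperfect knowledge" of the initial state

for step in (0, 10, 20, 30):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f} "
          f"(difference {abs(a[step] - b[step]):.6f})")
```

Run as-is, the two trajectories track each other closely for the first stretch of steps and then diverge completely, which is exactly the sense in which a fully determined system can look chaotic to an imperfect predictor.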

 


The Corruption of Academic Research

Synthetic insulin crystals synthesized using recombinant DNA technology (Photo credit: Wikipedia)

STEM (Science, Technology, Engineering and Mathematics) fields are supposed to be the new darlings of the academy, so I was slightly surprised when I heard an NPR piece on how researchers are struggling for funding. After all, even the politicians devoted to cutting education funding have spoken glowingly of STEM. My own university recently split the venerable College of Arts & Sciences, presumably to allow more money to flow to STEM without risking that professors in the soft sciences and the humanities might inadvertently get some of the cash. As such I was somewhat curious about this problem, but mostly attributed it to a side-effect of the general trend of defunding public education. Then I read “Bad Science” by Llewellyn Hinkes-Jones. This article was originally published in issue 14, 2014 of Jacobin Magazine. I will focus on the ethical aspects of the matters Hinkes-Jones discussed in this article, which is centered on the Bayh-Dole Act.

The Bayh-Dole Act was passed in 1980 and was presented as having very laudable goals. Before the act was passed, universities were limited in regards to what they could do with the fruits of their scientific research. After the act was passed, schools could sell their patents or engage in exclusive licensing deals with private companies (that is, monopolies on the patents). Supporters asserted this act would be beneficial in three main ways. The first is that it would secure more private funding for universities because corporations would provide money in return for the patents or exclusive licenses. The second is that it would bring the power of the profit motive to public research: since researchers and schools could profit, they would be more motivated to engage in research. The third is that the private sector would be motivated to implement the research in the form of profitable products.

On the face of it, the act was a great success. Researchers at Columbia University patented the process of DNA cotransformation and added millions to the coffers of the school. A patent on recombinant DNA earned Stanford over $200 million. Companies, in turn, profited greatly. For example, researchers at the University of Utah created Myriad Genetics and took ownership of their patent on the BRCA1 and BRCA2 tests for breast cancer. The current cost of the test is $4,000 (in comparison, a full sequencing of human DNA costs $1,000) and the company has a monopoly on the test.

Given these apparent benefits, it is easy enough to advance a utilitarian argument in favor of the act and its consequences. After all, if it allows universities to fund their research and corporations to make profits, then its benefits would seem to be considerable, thus making it morally good. However, a proper calculation requires considering the harmful consequences of the act.

The first harm is that the current situation imposes a triple cost on the public. One cost is that the taxpayers fund the schools that conduct the research. The next is that thanks to the monopolies on patents the taxpayers have to pay whatever prices the companies wish to charge, such as the $4,000 for a test that should cost far less. In an actual free market there would be competition and lower prices—but what we have is a state controlled and regulated market. Ironically, those who are often crying the loudest against government regulation and for the value of competition are quite silent on this point.  The final cost of the three is that the corporations can typically write off their contributions on their taxes, thus leaving other taxpayers to pick up their slack. These costs seem to be clear harms and do much to offset the benefits—at least when looked at from the perspective of the whole society and not just focusing on those reaping the benefits.

The second harm is that, ironically, this system makes research more expensive. Since processes, strains of bacteria and many other things needed for research are protected by monopolistic patents the researchers who do not hold these patents have to pay to use them. The costs are usually quite high, so while the patent holders benefit, research in general suffers. In order to pay for these things, researchers need more funding, thus either imposing more cost on taxpayers or forcing them to turn to private funding (which will typically result in more monopolistic patents).

The third harm is the corruption of researchers. Researchers are literally paid to put their names on positive journal articles that advance the interests of corporations. They are also paid to promote drugs and other products while presenting themselves as researchers rather than paid promoters. If the researchers are not simply bought, the money is clearly a biasing factor. Since we are depending on these researchers to inform the public and policy makers about these products, this is clearly a problem and presents a clear danger to the public good.

A fourth harm is that even the honest researchers who have not been bought are under great pressure to produce “sexy science” that will attract grants and funding. While it has always been “publish or perish” in modern academics, the competition is even fiercer in the sciences now. As such, researchers are under great pressure to crank out publications. The effect has been rather negative as evidenced by the fact that the percentage of scientific articles retracted for fraud is ten times what it was in 1975. Once lauded studies and theories, such as those driving the pushing of antioxidants and omega-3, have been shown to be riddled with inaccuracies.  Far from driving advances in science, the act has served as an engine of corruption, fraud and bad science. This would be bad enough, but there is also the impact on a misled and misinformed public. I must admit that I fell for the antioxidant and omega-3 “research”—I modified my diet to include more antioxidants and omega-3. While this bad science does get debunked, the debunking takes a long time and most people never hear about it. For example, how many people know that the antioxidant and omega-3 “research” is flawed and how many still pop omega-3 “fish oil pills” and drink “antioxidant teas”?

A fifth harm is that universities have rushed to cash in on the research, driven by the success of the research schools that have managed to score with profitable patents. However, setting up research labs aimed at creating million dollar patents is incredibly expensive. In most cases the investment will not yield the hoped for returns, thus leaving many schools with considerable expenses and little revenue.

To help lower costs, schools have turned to employing adjuncts to do the teaching and research, thus creating a situation in which highly educated but very low-paid professionals are toiling away to secure millions for the star researchers, the administrators and their corporate benefactors. It is, in effect, sweat-shop science.

This also shows another dark side to the push for STEM: as the number of STEM graduates increases, the value of the degrees will decrease and wages for the workers will continue to fall. This is great for the elite, but terrible for those hoping that a STEM degree will mean a good job and a bright future.

These harms would seem to outweigh the alleged benefits of the act, thus indicating it is morally wrong. Naturally, it can be countered that the costs are worth it. After all, one might argue, the incredible advances in science since 1980 have been driven by the profit motive and this has been beneficial overall. Without the profit motive, the research might have been conducted, but most of the discoveries would have been left on the shelves. The easy and obvious response is to point to all the advances that occurred due to public university research prior to 1980 as well as the research that began before then and came to fruition.

While solving this problem is a complex matter, there seem to be some easy and obvious steps. The first would be to restore public funding of state schools. In the past, the publicly funded universities drove America’s worldwide dominance in research and helped fuel massive economic growth while also contributing to the public good. The second would be replacing the Bayh-Dole Act with an act that would allow universities to benefit from the research, but prevent the licensing monopolies that have proven so damaging. Naturally, this would not eliminate patents but would restore competition to what is supposed to be a competitive free market by eliminating the creation of monopolies from public university research. The folks who complain about the state regulating business and who praise the competitive free market will surely get behind this proposal.

It might also be objected that the inability to profit massively from research will be a disincentive. The easy and obvious reply is that people conduct research and teach with great passion for very little financial compensation. The folks that run universities and corporations know this—after all, they pay such people very little yet still often get exceptional work. True, there are some people who are solely motivated by profit—but those are typically the folks who are making the massive profit rather than doing the actual research and work that makes it all possible.

 


Ebola, Ethics & Safety

Color-enhanced electron micrograph of Ebola virus particles (Photo credit: Wikipedia)

Kaci Hickox, a nurse from my home state of Maine, returned to the United States after serving as a health care worker in the Ebola outbreak. Rather than being greeted as a hero, she was confined to an unheated tent with a box for a toilet and no shower. She did not have any symptoms and tested negative for Ebola. After threatening a lawsuit, she was released and allowed to return to Maine. After arriving home, she refused to be quarantined again. She did, however, state that she would be following the CDC protocols. Her situation puts a face on a general moral concern, namely the ethics of balancing rights with safety.

While past outbreaks of Ebola in Africa were met largely with indifference from the West (aside from those who went to render aid, of course), the current outbreak has infected the United States with a severe case of fear. Some folks in the media have fanned the flames of this fear knowing that it will attract viewers. Politicians have also contributed to the fear. Some have worked hard to make Ebola into a political game piece that will allow them to bash their opponents and score points by appeasing fears they have helped create. Because of this fear, most Americans have claimed they support a travel ban in regards to Ebola infected countries and some states have started imposing mandatory quarantines. While it is to be expected that politicians will often pander to the fears of the public, the ethics of the matter should be considered rationally.

While Ebola is scary, the basic “formula” for sorting out the matter is rather simple. It is an approach that I use for all situations in which rights (or liberties) are in conflict with safety. The basic idea is this. The first step is sorting out the level of risk. This includes determining the probability that the harm will occur as well as the severity of the harm (both in quantity and quality). In the case of Ebola, the probability that someone will get it in the United States is extremely low. As the actual experts have pointed out, infection requires direct contact with bodily fluids while a person is infectious. Even then, the infection rate seems relatively low, at least in the United States. In terms of the harm, Ebola can be fatal. However, timely treatment in a well-equipped facility has been shown to be very effective. In terms of the things that are likely to harm or kill an American in the United States, Ebola is near the bottom of the list. As such, a rational assessment of the threat is that it is a small one in the United States.

The second step is determining key facts about the proposals to create safety. One obvious concern is the effectiveness of the proposed method. As an example, the 21-day mandatory quarantine would be effective at containing Ebola. If someone shows no symptoms during that time, then she is almost certainly Ebola free and can be released. If a person shows symptoms, then she can be treated immediately. An alternative, namely tracking and monitoring people rather than locking them up would also be fairly effective—it has worked so far. However, there are the worries that this method could fail—bureaucratic failures might happen or people might refuse to cooperate. A second concern is the cost of the method in terms of both practical costs and other consequences. In the case of the 21-day quarantine, there are the obvious economic and psychological costs to the person being quarantined. After all, most people will not be able to work from quarantine and the person will be isolated from others. There is also the cost of the quarantine itself. In terms of other consequences, it has been argued that imposing this quarantine will discourage volunteers from going to help out and this will be worse for the United States. This is because it is best for the rest of the world if Ebola is stopped in Africa and this will require volunteers from around the world. In the case of the tracking and monitoring approach, there would be a cost—but far less than a mandatory quarantine.

From a practical standpoint, assessing a proposed method of safety is a utilitarian calculation: does the risk warrant the cost of the method? To use some non-Ebola examples, every aircraft could be made as safe as Air Force One, every car could be made as safe as a NASCAR vehicle, and all guns could be taken away to prevent gun accidents and homicides. However, we have decided that the cost of such safety would be too high and hence we are willing to allow some number of people to die. In the case of Ebola, the calculation is a question of considering the risk presented against the effectiveness and cost of the proposed method. Since I am not a medical expert, I am reluctant to make a definite claim. However, the medical experts do seem to hold that the quarantine approach is not warranted in the case of people who lack symptoms and test negative.
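
The “formula” described in these steps can be put into a minimal sketch. Every number below is a placeholder chosen purely for illustration—none is a real estimate for Ebola or anything else—but the structure (probability times severity, discounted by effectiveness, weighed against cost) is the calculation being described.

```python
# A bare-bones version of the weighing described above: expected harm avoided
# versus the cost of the safety measure. Every number is a made-up placeholder
# for illustration; none is a real estimate for Ebola or anything else.

def worth_imposing(p_harm: float, severity: float,
                   effectiveness: float, cost: float) -> bool:
    """True if the expected harm the measure prevents outweighs its cost."""
    expected_harm_avoided = p_harm * severity * effectiveness
    return expected_harm_avoided > cost

# A likely, severe threat can justify a costly measure...
print(worth_imposing(p_harm=0.30, severity=1000, effectiveness=0.9, cost=100))    # True

# ...while the same measure is much harder to justify against a remote threat.
print(worth_imposing(p_harm=0.0001, severity=1000, effectiveness=0.9, cost=100))  # False
```

The moral question taken up next is whether the cost side should also include infringements on rights, and how heavily those should weigh.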

The third concern is the moral concern. Sorting out the moral aspect involves weighing the practical concerns (risk, effectiveness and cost) against the right (or liberty) in question. Some also include the legal aspects of the matter here as well, although law and morality are distinct (except, obviously, for those who are legalists and regard the law as determining morality). Since I am not a lawyer, I will leave the legal aspects to experts in that area and focus on the ethics of the matter.

When working through the moral aspect of the matter, the challenge is determining whether or not the practical concerns morally justify restricting or even eliminating rights (or liberties) in the name of safety. This should, obviously enough, be based on consistent principles in regards to balancing safety and rights. Unfortunately, people tend to be wildly inconsistent in this matter. In the case of Ebola, some people have expressed the “better safe than sorry” view and have elected to impose or support mandatory quarantines at the expense of the rights and liberties of those being quarantined. In the case of gun rights, these are often taken as trumping concerns about safety. The same holds true of the “right” or liberty to operate automobiles: tens of thousands of people die each year on the roads, yet any proposal to deny people this right would be rejected. In general, people assess these matters based on feelings, prejudices, biases, ideology and other non-rational factors—this explains the lack of consistency. So, people are willing to impose on basic rights for little or no gain to safety, while also being content to refuse even modest infringements in matters that result in great harm. However, there are also legitimate grounds for differences: people can, after due consideration, assess the weight of rights against safety very differently.

Turning back to Ebola, the main moral question is whether or not the safety gained by imposing the quarantine (or travel ban) would justify denying people their rights. In the case of someone who is infectious, the answer would seem to be “yes.” After all, the harm done to the person (being quarantined) is greatly exceeded by the harm that would be inflicted on others by his putting them at risk of infection. In the case of people who are showing no symptoms, who test negative and who are relatively low risk (no known specific exposure to infection), then a mandatory quarantine would not be justified. Naturally, some would argue that “it is better to be safe than sorry” and hence the mandatory quarantine should be imposed. However, if it was justified in the case of Ebola, it would also be justified in other cases in which imposing on rights has even a slight chance of preventing harm. This would seem to justify taking away private vehicles and guns: these kill more people than Ebola. It might also justify imposing mandatory diets and exercise on people to protect them from harm. After all, poor health habits are major causes of health issues and premature deaths. To be consistent, if imposing a mandatory quarantine is warranted on the grounds that rights can be set aside even when the risk is incredibly slight, then this same principle must be applied across the board. This seems rather unreasonable and hence the mandatory quarantine of people who are not infectious is also unreasonable and not morally acceptable.

 


Factions & Fallacies

http://www.gettyimages.com/detail/151367815

In general, human beings readily commit to factions and then engage in very predictable behavior: they regard their own factions as right, good and truthful while casting opposing factions as wrong, evil and deceitful. While the best known factions tend to be political or religious, people can form factions around almost anything, ranging from sports teams to video game consoles.

While there can be rational reasons to form and support a faction, factionalism tends to be fed and watered by cognitive biases and fallacies. The core cognitive bias of factionalism is what is commonly known as in-group bias. This is the psychological tendency to readily form negative views of those outside the faction. For example, Democrats often regard Republicans in negative terms, casting them as uncaring, sexist, racist and fixated on money. In turn, Republicans typically look at Democrats in negative terms and regard them as fixated on abortion, obsessed with race, eager to take from the rich, and desiring to punish success. This obviously occurs outside of politics as well, with competing religious groups regarding each other as heretics or infidels. It even extends to games and sports, with the battle of #gamergate serving as a nice illustration.

The flip side of this bias is that members of a faction regard their fellows and themselves in a positive light and are thus inclined to attribute positive qualities to themselves. For example, Democrats see themselves as caring about the environment and being concerned about social good. As another example, Tea Party folks cast themselves as true Americans who get what the founding fathers really meant.

This bias is often expressed in terms of, and fueled by, stereotypes. For example, critics of the sexist aspects of gaming will make use of the worst stereotypes of male gamers (dateless, pale misogynists who spew their rage around a mouthful of Cheetos). As another example, Democrats will sometimes cast the rich as uncaring and out-of-touch plutocrats. These stereotypes are sometimes taken to the extreme of demonizing: presenting the members of the other faction as not merely wrong or bad but as evil in the extreme.

Such stereotypes are easy to accept and many are based on another bias, the fundamental attribution error. This is the psychological tendency to fail to realize that the behavior of other people is as much limited by circumstances as our behavior would be if we were in their shoes. For example, a person who was born into a well-off family and enjoyed many advantages in life might fail to realize the challenges faced by people who were not so lucky in their birth. Because of this, she might demonize those who are unsuccessful and attribute their failure to pure laziness.

Factionalism is also strengthened by various common fallacies. The most obvious of these is the appeal to group identity. This fallacy occurs when a person accepts her pride in being in a group as evidence that a claim is true. Roughly put, a person believes it because her faction accepts it as true. The claim might actually be true; the mistake is that the basis of the belief is not rational. For example, a devoted environmentalist might believe in climate change because of her membership in that faction rather than on the basis of evidence (which actually does show that climate change is occurring). This method of belief “protects” group members from evidence and arguments because such beliefs are based on group identity rather than evidence and arguments. While a person can overcome this fallacy, faction-based beliefs tend to change only when the faction changes or when the person leaves the faction.

The above-mentioned biases also tend to lean people towards fallacious reasoning. The negative biases tend to motivate people to accept straw man reasoning, which occurs when a person simply ignores another person’s actual position and substitutes a distorted, exaggerated or misrepresented version of that position. Politicians routinely make straw men out of the views they oppose and their faction members typically embrace these. The negative biases also make ad hominem fallacies common. An ad hominem is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of or the person presenting the claim or argument. Typically, this fallacy involves two steps. First, an attack against the character of the person making the claim, her circumstances, or her actions is made (or the character, circumstances, or actions of the person reporting the claim). Second, this attack is taken to be evidence against the claim or argument the person in question is making (or presenting). For example, opponents of a feminist critic of gaming might reject her claims by asserting that she is only engaged in the criticism so as to become famous and make money. While it might be true that she is doing just that, this does not disprove her claims. The guilt by association fallacy, in which a person rejects a claim simply because people she dislikes accept it, both arises from and contributes to factionalism.

The negative views and stereotypes are also often fed by fallacies that involve poor generalizations. One is misleading vividness, a fallacy in which a very small number of particularly dramatic events are taken to outweigh a significant amount of statistical evidence. For example, a person in a faction holding that gamers are violent misogynists might point to the recent death threats against a famous critic of sexism in games as evidence that most gamers are violent misogynists. Misleading vividness is, of course, closely related to hasty generalization, a fallacy in which a person draws a conclusion about a population based on a sample that is not large enough to justify that conclusion. For example, a Democrat might believe that all corporations are bad based on the behavior of BP and Wal-Mart. There is also biased generalization, a fallacy committed when a person draws a conclusion about a population based on a sample that is biased or prejudiced in some manner. This tends to be fed by the confirmation bias: the tendency people have to seek and accept evidence for their view while avoiding or ignoring evidence against it. For example, a person might come to hold the view that the poor want free stuff for nothing based on visits to web sites that feature YouTube videos selected to show poor people expressing that view.

The positive biases also contribute to fallacious reasoning, often taking the form of a positive ad hominem. A positive ad hominem occurs when a claim is accepted on the basis of some irrelevant fact about the author or person presenting the claim or argument. Typically, this fallacy involves two steps. First, something positive (but irrelevant) about the character of the person making the claim, her circumstances, or her actions is presented. Second, this is taken to be evidence for the claim in question. For example, a Democrat might accept what Bill Clinton says as being true, just because he really likes Bill.

Not surprisingly, factionalism is also supported by faction variations on the appeal to belief (it is true/right because my faction believes it is so), the appeal to common practice (it is right because my faction does it), and the appeal to tradition (it is right because my faction has “always done this”).

Factionalism is both fed by and contributes to such biases and poor reasoning. This is not to say that group membership is a bad thing, just that it is wise to be on guard against the corrupting influence of factionalism.


42 Fallacies for Free in Portuguese

Thanks to Laércio Lameira, my 42 Fallacies is available in Portuguese as a free PDF.

42 Falacias
