Trump & Authenticity

Donald Trump has managed to relentlessly prove the political pundits wrong. While the idea of Trump in the White House was once an absurd joke, each passing day makes it ever more likely that America will fall under the Trumpocracy.

Given that Trump lacks the experience and skills that are usually expected in a presidential candidate, it might be wondered how he is doing so well. When his supporters are asked about their reasons, they typically assert that Trump “tells it like it is”, that he is not politically correct and that he is “authentic.”

Trump’s remarks do clearly establish that he is not politically correct—at least from the standpoint of the left. Trump does, however, go beyond merely not being politically correct; his rhetoric enters into the realms of xenophobia and misogyny. While I am fine with a person not being politically correct, regarding his crude and vulgar xenophobia and misogyny as appealing seems to be a mark of character flaws. But it cannot be denied that this is what some people really like. While it would be unfair to claim that supporting Trump is equivalent to endorsing xenophobia and misogyny, to support Trump is to support his professed values.

The claim that Trump “tells it like it is” is both false and absurd. Trump tells it like it is not, as the Politifact evaluation of his claims attests. Those who support Trump might honestly believe his untruths (as Trump himself might) and they can sincerely claim they back him because he “tells it as they think it is.” However, voters should at least make some minimal effort to check on the truth of Trump’s claims. That said, truth seems to matter very little in political support—perhaps because the system generally provides voters with a choice between untruths.

In order to determine whether or not Trump is authentic, I need to work out a rough account of authenticity in politics. Part of being authentic is a matter of not having certain qualities: not being scripted, not presenting an act, and not saying what one thinks the audience wants to hear. In terms of the positive qualities, authenticity is a matter of presenting one’s genuine self and saying what one really believes.

It might be thought that Trump’s unrelenting untruths would disqualify him from being authentic. However, authenticity is distinct from saying true things. Authenticity just requires that a person says what she believes, not that she say what is true. This is analogous to honesty: being honest does not entail that a person tells the truth. It entails that the person tells what they believe to be the truth. A dishonest person is not someone who says untrue things—it is someone who says things they believe to be untrue.

Interestingly, there could be a paradox of authenticity. Imagine, if you will, a person whose genuine self is a scripted self and whose views are those that the audience wants to hear at that moment. This would be a person whose authentic self is unauthentic. It could, of course, be argued that there is no paradox: the person would just be unauthentic because she would lack a genuine self and genuine views. It can also be argued that no such person exists, so there is no real paradox. In any case, it is time to return to discussing Trump.

With the rough account of authenticity in hand, the next step is considering the sort of empirical data that would confirm or disprove a person’s authenticity. Since authenticity is mainly a matter of the presented self matching the genuine self, this runs right into the classic philosophical problem of other minds: “how do I know what is going on in another person’s mind?” In the case of authenticity, the questions are “how do I know the presented persona is the real person?” and “how do I know that the person believes what they say?”

In the case of Trump, people point to the fact that he rambles and riffs when giving speeches as evidence that he is unscripted. They also point to the fact that his assertions are politically incorrect and regarded by many as outrageous as evidence that he is saying what he really believes. The idea seems to be that if he were a scripted and inauthentic politician, he would be better organized and would be presenting the usual safe and pandering speeches of politicians.

While this does have a certain appeal, the riffing and rambling could be taken as evidence that he is just not well organized. His outrageous claims can also be taken as evidence of ignorance. It would be a mistake to accept disorganized ignorance as evidence of laudable authenticity. Then again, that might be his genuine self, thus making it authentic. As such, more is needed in the way of evidence.

One common way of looking for authenticity is to take consistency as evidence. The idea is that if a person sticks to a set of beliefs and acts in generally the same way in various circumstances, then this consistency reveals that those beliefs and actions are sincere. While this is certainly appealing, a smart inauthentic person (like a smart liar) could create a consistent false persona for the public.

In contrast, a person who shifts beliefs with alarming regularity and acts in very different ways depending on the audience is often regarded as being inauthentic because of this inconsistency. The inference is that the person is shifting because they are acting and pandering. While this is also appealing, a person could be sincerely inconsistent and an authentic panderer.

Trump has shifted his professed positions in his transformation to the Republican nominee and his former opponents and current critics have spent considerable time and energy making this point. As such, it is tempting to question Trump’s authenticity in regards to his professed positions. That said, a person can change and adopt new sincere beliefs.

Former presidential hopeful Ben Carson made the interesting claim that there are two Trumps: the one on stage and the one “who’s very cerebral, sits there and considers things carefully.” If Carson is right about this, the “authentic” Trump that appeals to the voters is, ironically, just an act. The Trump on stage is a persona and not his real self—which would hardly be surprising given that he is a master showman.

One reasonable reply to this is that professionals put on a persona when engaging in their professional activities and everyone changes how they behave depending on the audience. For example, I behave differently when I am teaching a class than when I am running with friends. As such, if such change means a person is inauthentic, most people are not authentic, thus making the charge of inauthenticity less stinging.

However, there seems to be more to inauthenticity than merely changing behavior to match the social context. Rather, an inauthentic person is engaged in an intentional deception to get others to accept something the person is, in fact, not. This is something that actors do—and it is harmless and even laudable when it is done to amuse. However, when it is done with a different intent (such as deceiving voters so as to get elected), then it is neither harmless nor laudable. I suspect Trump is not authentic, but since I do not know the true Trump, I cannot say with certainty.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

The Incredible Shifting Hillary

When supporters of Donald Trump are asked why they back him, the most common answers are that Trump “tells it like it is” and that he is “authentic.” When people who dislike Hillary are asked why, they often refer to her ever shifting positions and that she just says what she thinks people want to hear.

Given that Trump has, at best, a distant relation with the truth, it is somewhat odd that he is seen as telling it like it is. He may be authentic, but he is most assuredly telling it like it is not. While Hillary has shifted positions, she has a far closer relationship to the truth (although still not a committed one). Those who oppose Hillary tend to focus on these shifts in making the case against her. Her defenders endeavor to minimize the impact of these claims or boldly try to make a virtue of said shifting. Given the importance of the shifting, this is a matter well worth considering.

While the extent of Hillary’s shifting can be debated, the fact that she has shifted on major issues is a matter of fact. Good examples of shifts include the second Iraq War, free trade, same-sex marriage and law enforcement. While many are tempted to claim that the fact that she has shifted her views on such issues proves she is wrong now, doing this would be to fall victim to the classic ad hominem tu quoque fallacy. This is an error in reasoning in which it is inferred that a person’s current view or claim is mistaken because they have held to a different view or claim in the past. While two inconsistent claims cannot be true at the same time, pointing out that a person’s current claim is inconsistent with a past claim does not prove which claim is not true (and both could actually be false). After all, the person could have been wrong then while being right now. Or vice versa. Or wrong in both cases. Because of this, it cannot be inferred that Hillary’s views are wrong now simply because she held opposite views in the past.

While truth is important, the main criticism of Hillary’s shifting is not that she has moved from a correct view to an erroneous view. Rather, the criticism is that she is shifting her expressed views to match whatever she thinks the voters want to hear. That is, she is engaged in pandering.

Since pandering is a common practice in politics, it seems reasonable to hold that it is unfair to single Hillary out for special criticism. This does not, of course, defend the practice. To accept that being common justifies a practice would be to fall victim to the common practice fallacy. This is an error in reasoning in which a practice is defended by asserting it is a common one. Obviously enough, the mere fact that something is commonly done does not entail that it is good or justified. That said, if a practice is common yet wrong, it is still unfair to single out a specific person for special criticism for engaging in that practice. Rather, all those that engage in the practice should be criticized.

It could be argued that while pandering is a common practice, Hillary does warrant special criticism because her shifting differs in relevant and significant ways from the shifting of others. This could be a matter of volume (she shifts more than others), content (she shifts on more important issues), extent (she shifts to a greater degree) or some other factors. While judging the nature and extent of shifts does involve some subjective assessment, these factors can be evaluated with a reasonable degree of objectivity—although partisan influences can interfere with this. Since Hillary is generally viewed through the lenses of intense partisanship, I will not endeavor to address this matter—it is unlikely that anything I could write would sway partisan opinions. I will, however, address the ethics of shifting.

While there is a tendency to regard position shifting with suspicion, there are cases in which it is not only acceptable, but laudable. These are cases in which the shift is justified by evidence or reasoning that warrants such a shift. For example, I was a theoretical anarchist for a while in college: I believed that the best government was the least government and preferably none at all. However, reading Locke, Hobbes and others as well as gaining a better understanding of how humans actually behave resulted in a shift in my position. I am no longer an anarchist on the grounds that the position is not well supported. To use another example, I went through a phase in which I was certain in my atheism. However, arguments made by Hume and Kant changed my view regarding the possibility of such certainty. As a final example, I used to believe in magical beings like the Easter Bunny and Santa Claus. However, the evidence of their nonexistence convinced me to shift my view. In all these cases the shifts are laudable: I changed my view because of considered evidence and argumentation. While there can be considerable debate about what counts as good evidence or reasoning for a shift, the basic principle seems sound. A person should believe what is best supported by evidence and reasoning, and this often changes over time.

Turning back to Hillary, if she has shifted her views on the basis of evidence and reasoning that justly support her new views, then she should not be condemned for the shift. For example, if she believed in the approach to crime taken by her husband when he was President, but has changed her view in the face of evidence that this view is flawed, then her change would be quite reasonable. As might be expected, her supporters tend to claim this is why she changes her views. The challenge is to show that this is the case. Her critics typically claim that the reason for her shifts is to match what she thinks will get her the most votes, which leads to the question of whether this is a bad thing or not.

A very reasonable concern about a politician who just says what she thinks the voters want to hear is that the person lacks principles, so that the voters do not really know who they are voting for. As such, they cannot make a good decision regarding what the politician would actually do in office.

A possible reply to this is that a politician who shifts her views to match those of the voters is exactly what people should want in a representative democracy: the elected officials should act in accord with the will of the people. This does raise the broad subject of the proper function of an elected official: to do the will of the people, to do what they said they would do, to act in accord with their character and principles or something else. This goes beyond the limited scope of the essay, but the answer is rather critical to determining whether Hillary’s shifting is a good or bad thing. If politicians should act on their own principles and views rather than doing what the people want them to do, then there would seem to be good grounds for criticizing any politician whose own views are not those of the people.

A final interesting point is to argue that Hillary should not be criticized for shifting her views to match those that are now held by the majority of people (or majority of Democrats). If other people can shift their views on these matters over time in ways that are acceptable, then the same should apply to Hillary. For example, when Hillary was against same-sex marriage that was the common view in the country. Now, most Americans are fine with it—and so is Hillary. Her defenders assert that she, like most Americans, has changed her views over time in the face of changing social conditions. Her detractors claim she is merely pandering and has no commitment beyond achieving power. This is a factual matter, albeit one that is hard to settle without evidence as to what is really going on in her mind. After all, a mere change in her view to match the general view is consistent with both unprincipled pandering and a reasoned change in a position that has evolved with the times.



Antibiotics & the Cost of Agriculture

Modern agriculture does deserve considerable praise for the good that it does. Food is plentiful, relatively cheap and easy to acquire. Instead of having to struggle with raising crops and livestock or hunting and gathering, I can simply drive to the supermarket and stock up with the food I need to not die. However, as with all things, there is a price.

The modern agricultural complex is now highly centralized and industrialized, which does have its advantages and disadvantages. There are also the harms of specific, chosen practices aimed at maximizing profits. While there are many ways to maximize profits, two common ones are to pay the lowest wages possible (which the agricultural industry does—and not just to the migrant laborers, but to the ranchers and farmers) and to shift the costs to others. I will look, briefly, at one area of cost shifting: the widespread use of antibiotics in meat production.

While most people think of antibiotics as a means of treating diseases, food animals are now routinely given antibiotics when they are healthy. One reason for this is to prevent infections: factory farming techniques, as might be imagined, vastly increase the chances of a disease spreading like wildfire among an animal population. Antibiotics, it is claimed, can help reduce the risk of bacterial infections (antibiotics are useless against viruses, of course). A second reason is that antibiotics increase the growth rate of healthy animals, allowing them to pack on more meat in less time—and time is money. These uses allow the industry to continue factory farming and maintain high productivity—which initially seems laudable. The problem is, however, that this use of antibiotics comes with a high price that is paid for by everyone else.

Eric Schlosser wrote “A Safer Food Future, Now”, which appeared in the May 2016 issue of Consumer Reports. In this article, he notes that this practice has contributed significantly to the rise of antibiotic resistant bacteria. Each year, about two million Americans are infected with resistant strains and about 20,000 die. The healthcare cost is about $20 billion. To be fair, the agricultural industry is not the only contributor to this problem: improper use of antibiotics in humans has also added to this problem. That said, the agricultural use of antibiotics accounts for about 75% of all antibiotic usage in the United States, thus converting the factory farms into breeding grounds for resistant bacteria.

The harmful consequences of this antibiotic use have been known for years and there have, not surprisingly, been attempts to address this through legislation. It should, however, come as little surprise that our elected leaders have failed to take action. One likely explanation is that the lobbying on the part of the relevant corporations has been successful in preventing action. After all, there is a strong incentive on the part of industry to keep antibiotics in use: this increases profits by enabling factory farming and the faster growth of animals. That said, it could be contended that the lawmakers are ignorant of the harms, doubt there are harms from antibiotics or honestly believe that the harms arising from their use are outweighed by the benefits to society. That is, the lawmakers have credible reasons other than straight up political bribery (or “lobbying” as it is known in polite company). This is a factual matter, albeit one that is difficult to settle: no professional politician who has been swayed by lobbying will attribute her decision to any but the purest of motivations.

This matter is certainly one of ethical concern and, like most large scale ethical matters that involve competing interests, is one that seems best approached by utilitarian considerations. On the side of using the antibiotics, there is the increased productivity (and profits) of the factory farming system of producing food. This allows more and cheaper food to be provided to the population, which can be regarded as pluses. The main reasons to not use the antibiotics, as noted above, are that they contribute to the creation of antibiotic resistant strains that sicken and kill many people (vastly more Americans than are killed by terrorism). This inflicts considerable costs on the sickened and those who are killed as well as those who care about them. There are also the monetary costs in the health care system (although the increased revenue can be tagged as a plus for health care providers). In addition to these costs, there are also other social and economic costs, such as lost hours of work. As this indicates, the cost (illness, death, etc.) of the use of the antibiotics is shifted: the industry does not pay these costs, they are paid by everyone else.

Using a utilitarian calculation requires weighing the cost to the general population against the profits of the industry and the claimed benefits to the general population. Put roughly, the moral question is whether the improved profits and greater food production outweigh the illness, deaths and costs suffered by the public. The people in the government seem to believe that the answer is “yes.”
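The weighing described above can be put in rough arithmetic form. The sketch below is purely illustrative: every figure is a made-up placeholder, not an estimate drawn from the article or from any real data, and the category names are my own:

```python
# A sketch of the utilitarian weighing described above. Every number
# here is a hypothetical placeholder, not a real estimate.

def net_utility(benefits, harms):
    """Total benefits minus total harms; a positive result means the
    practice passes this rough utilitarian test under these weights."""
    return sum(benefits.values()) - sum(harms.values())

# Hypothetical annual figures, in arbitrary units of utility:
benefits = {"industry_profits": 5.0, "cheaper_food": 10.0}
harms = {"healthcare_costs": 20.0, "illness_and_death": 30.0, "lost_work": 2.0}

print(net_utility(benefits, harms))
```

Under these made-up weights the result comes out negative, but with different weights it could just as easily come out positive, which is why the weighting itself is the contested part of the moral question.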

If the United States were in a food crisis in which the absence of the increased productivity afforded by antibiotics would cause more suffering and death than their presence, then their use would be morally acceptable. However, this does not seem to be the case—while banning this sort of antibiotic use would decrease productivity (and impact profits), the harm of doing this would seem to be vastly exceeded by the reduction in illness, deaths and health care costs. However, if an objective assessment of the matter showed that the ban on antibiotics would not create more benefits than harms, then it would be reasonable and morally acceptable to continue to use them. This is partially a matter of value (in terms of how the harms and benefits are weighted) and partially an objective matter (in terms of monetary and health costs). I am inclined to agree that the general harm of using the antibiotics exceeds the general benefits, but I could be convinced otherwise by objective data.



Understanding & “An Open Letter to My White Colleagues”

The May 2016 issue of the NEA Higher Education Advocate features “An Open Letter to my White Colleagues” by Professor Dana Stachowiak. Since I have a genetic background that is a blend of Mohawk, French and English, I am not entirely sure if I am, in fact, white. However, I look white and I am routinely identified by others as white. As such, my social identity would seem to be white. Thus, the intended audience for the letter probably includes me. The letter provides a five-point guide to “sustainable anti-racist work.” While the entire letter is certainly worthy of assessment, I will focus this essay on the third point.

Professor Stachowiak asserts that whites should “Stop trying to understand how it [racism] feels or relate to it with a personal anecdote. You are white; you will never ever know what it feels like to experience racism.”

This assertion about what whites can never ever know is a matter of what philosophers call epistemology, which is the study of knowledge. More specifically, it falls under the subject of the limits of knowledge. In this case, the assertion is that a person’s epistemic capabilities are limited and defined (at least in part) by their race. Interestingly, this sort of view is routinely accepted by racists—a stock racist view is that other races have limits on what they are capable of knowing and this is typically connected to alleged defects in their cognitive capabilities. I am not claiming that Stachowiak is a racist, just that she has presented a race-based epistemic principle that whites cannot, in virtue of their whiteness, know the experience of racism.

There are epistemic views that do rest on the idea of incommensurable experiences. One extreme version is that no one can know what it is like to be another being. Stachowiak is presenting a less extreme version, one that limits knowledge about a specific sort of experience to a certain set of people. This can be seen as an assertion about the social reality of the United States: American racism is, by its nature, aimed at non-whites. As such, whites can never experience the racism of being targeted for being non-white. To use an analogy, it could be asserted that a man could never know the experience of misogyny because he cannot be hated as a woman (presumably even if he disguised himself as a woman).

This view obviously also requires that there cannot be racism directed against whites (at least in the United States), otherwise whites could experience racism. At this point, most readers are probably thinking that whites can be subject to racism—they can be called racist names, treated poorly simply because they are white, subject to hatred simply because of their skin color and so on for all the apparent manifestations of racism. The usual reply to this sort of claim is that whites can be subject to bias or prejudice, but racism is such that it only applies to non-whites. This requires a definition of “racism” in which the behavior is part of a social system and is based on a power disparity. To illustrate, a black might call a white “cracker” and punch him in the face for being white. This would be prejudice. A white might call a black the n-word and punch him in the face for being black. This would be racism. The difference is that the United States social system provides whites, in general, with systematic power advantages over non-whites.

Questions might be raised about specific institutions that are predominantly non-white. In such cases, a white person could be the one at the power disadvantage. The likely reply is that in the broader society the whites still have the power advantage. So, if a philosophy department at a mostly white university does not hire a person because she is black, that is racism. If a philosophy department at a predominantly black university does not hire a person because she is white, that is prejudice but not racism. Thus, with a certain definition of “racism” a white can never experience racism.

It might be asserted that since anyone can experience prejudice and bias in ways that match up with racism (like being attacked, insulted or not hired because of race) it follows that a white person could have an understanding of what it feels like to experience racism. For example, a white person who finds out she was not hired because she is white would seem to be able to understand what it feels like for a black person to not get hired because she is black. There are also white people who belong to groups that are systematically mistreated and subject to oppression—such as women. One might contend that a white woman who experiences sexism her whole life would be able to know what racism feels like, at least by analogy. However, it could be countered that she cannot—there is an insurmountable gulf between the sexism a white woman experiences and the racism a black person experiences that renders her incapable of understanding that experience.

While it is certainly true that a person cannot perfectly know the experience of others, normal human beings are actually quite good at empathy and understanding how others feel. Many moral theorists, such as David Hume, note the importance of sympathy in ethics. It is by trying to understand what others suffer that one develops sympathy and compassion. It is certainly reasonable to accept that perfect understanding is not possible. But, to use an example, a white person who knows what it is like to be beaten up and brutalized because he would rather read books than play football could use that experience to try to grasp what it feels like to be beaten up and brutalized just because one is black. Such a person, it would be expected, would be less likely to act in racist ways if they were able to feel sympathy based on their own experiences.

Another point worth considering is the moral method of reversing the situation, more commonly known as the Golden Rule. Using this method requires being able to have some understanding of what it is like to be in a situation (say being a victim of racism) so as to be able to reason that certain things are wrong. So, for example, a person who can consider what it would be like to be refused a job because of his color would presumably be less likely to engage in that wrongful action. Given the importance of sympathy and the Golden Rule, it seems that whites should not stop trying to understand—rather, they should try to understand more. This, of course, assumes that this would lead to more moral behavior. If not, then I would concede the matter to Professor Stachowiak.

In regards to the anecdotes, I am more inclined to agree with Stachowiak. Having taught at Florida A&M University for almost twenty-five years, I have lost count of the awkward anecdotes I have heard from well-meaning fellow whites trying to show that they understand racism. On the one hand, I do get what they intend when they are sincere—they are making an effort to understand racism within the context of their own experience. This is a natural thing for humans to do and can show that the person is really trying and does have laudable intentions. As such, to condemn such attempts seems unfair.

On the other hand, when a white person busts out an anecdote trying to compare a personal experience to racism I immediately think “oh no, do not do this.” This is usually because the anecdotes so often involve comparing some minor incident (like being called a name as a child) to racism. This is analogous to a person speaking to combat veterans and talking about how he was punched once on the playground. There is also the fact that such anecdotes are often used to say “I understand” and are then followed by clear evidence the person does not understand. From a purely practical standpoint, I would certainly agree that whites should avoid the awkward anecdote.




The Chart that Explains Everyone

Back in March 2016, I did an interview about the Dungeons & Dragons alignment system and the real world. Part of this interview appears here:

The audio is here:

Arguments for Bathroom Bills

American news is awash with tales of the battle of the bathroom bills. In response to a growing general acceptance of LGBT rights, some states have passed laws requiring a person to use the bathroom (and similar facilities, such as locker rooms) for the sex on their birth certificate. These laws have been met with a negative response from much of the business community, making for a rare conflict between Republicans and business interests. The federal government has also taken a stance on this matter, asserting that states that have such laws are in violation of federal law. The Obama administration has warned these states that their violation could cost them federal funds.

Being a veteran runner, I am generally fine with people using whatever bathroom they wish to use, provided that they do not otherwise engage in immoral or criminal activity. Almost anyone who has been at a major race probably has a similar view out of pure practicality. Also, like any mature adult, I go to the bathroom to do my business and as long as everyone else is minding their business, I could not care less who is in the next stall. Or urinal. Obviously, I do hold that assault, rape, harassment, stalking, and so on should not be allowed: but all these misdeeds are covered by existing law.

Being a philosopher does require that I give fair consideration to opposing arguments and that they be given the merit they earn through the quality of the reasoning and the plausibility of the premises. As such, I will consider a few arguments in favor of bathroom bills.

One of the most compelling arguments is the one from harm. The gist of the argument is that allowing people to use facilities based on their gender identity will allow rapists, molesters, pedophiles and peepers easy access to women and girls, thus putting them in danger. The bathroom bills, it is claimed, will protect women and girls from this danger.

Since I also accept the principle of harm, I accept the basic reasoning conditionally: if the law did protect women and girls from harm (and did not inflict a greater harm), then it would be a sensible law. The main problem with the argument lies in the claim that the bills will protect women and girls from harm. Many states and localities have prohibited discrimination in public facilities and there has not been an increase in sexual assault or rape. As such, the claim that the bills are needed to protect the public seems to be untrue. The imposition of law should, as a matter of principle, be aimed at addressing a significant harm.

This is not to deny that a person could pretend to be transgender so as to engage in an attack. However, such a determined attacker would presumably attack elsewhere (it is not as if attacks can only occur in public facilities) or could simply disguise himself as a woman (the law does not magically prevent that). The fear that bathrooms are ideal places for attacks seems unwarranted. That said, if it turns out that allowing people to use facilities based on their gender identity does lead to a significant increase in sexual assaults or other harms, then the bathroom bills would need to be reconsidered.

A second argument that has been advanced is the privacy argument. The gist of it is that allowing people in facilities based on their gender identification would violate the privacy of other people. One common example of this is the concern expressed on the behalf of school girls in locker rooms: the fear that a transgender classmate might be in the locker room with them.

While our culture does endeavor to condition people to be ashamed of their nakedness and to be terrified that someone of the opposite sex might see them naked, the matter of privacy needs to be discussed a bit here.

On the face of it, gender restricted locker rooms are not actually private. While I am not familiar with the locker room for girls and women, the men’s locker room in my high school had a group shower and an open area for lockers. So, every guy in the locker room could see every other guy while they were naked. I recall many of my fellows (who professed to be straight) checking out the penis sizes of everyone else. Some boys found this lack of privacy too much to take and would simply put their normal clothes on over their gym clothes without showering. Or they would try to cover up as much as possible. As such, the concern about privacy is not about privacy in the general sense. In space, no one can hear you scream. In the locker room, everyone can see your junk.

As such, the concern about privacy in locker rooms in regards to the bathroom bills must be about something other than privacy in the usual sense. The most reasonable interpretation is privacy from members of the opposite sex: that is, girls not being seen by boys and vice versa. This could, I suppose, be called “gender privacy.”

Those favoring transgender rights would point out that allowing people to use facilities based on gender identity would not result in boys seeing girls or vice versa. It would just be the usual girls seeing girls and boys seeing boys. Since the main worry is transgender girls in girls’ locker rooms, I will focus on that. However, the same discussion could be made for transgender boys.

The obvious reply to this would be to assert that gender identification is not a real thing: a person’s gender is set by biological sex. So, a transgender girl would, in fact, be a boy and hence should not be allowed in the girls’ locker room. This is, presumably, based on the assumption that a transgender girl is still sexually attracted to girls because he is really still a boy. There seem to be three possibilities here.

The first is that transgender girls really are boys and are sexually attracted to girls (that is, they are just faking) and this grounds the claim that a transgender girl would violate the privacy of biological girls. This would seem to entail that lesbian girls would also violate the privacy of biological girls; and since about 10% of the population is gay, any locker room with ten or more girls probably has some privacy violation occurring. As such, those concerned with privacy would presumably need to address this as well. The worry that a “hidden homosexual” might be violating privacy could be addressed by having private changing rooms and closed shower stalls—however, this would be quite costly and most public schools and facilities would not have the budget for it. As such, a more economical solution might be needed: no nakedness in locker rooms at all, to ensure that privacy is not being violated. People could wear bathing suits while showering and then wear them under their clothes the rest of the day. Sure, it would be uncomfortable—but that is a small price to pay for privacy.

The second is that transgender girls are not sexually attracted to girls and hence do not violate their privacy: they are just girls like the other girls. It could be objected that what matters is the biology: a biological boy seeing a biological girl in the locker room violates her privacy. Arguing for this requires showing how the biology matters in terms of privacy—that being seen non-sexually by biological girls is no privacy violation, but being seen non-sexually by a biological boy who is just going about his business is a privacy violation. That is, if the person looking does not care about what is being seen, then how is it a privacy violation? The answer would need to differentiate based on biology, which could perhaps be done.

The third is that transgender girls are just girls. In which case, there is no privacy violation since it is just girls seeing girls.

While the harm and privacy arguments do have some appeal, they do not seem to stand up well under scrutiny. However, there might be other arguments for the bathroom bills worth considering.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Should Establishment Republicans Vote for Hillary Clinton?

At the start of May, Donald Trump is the presumptive Republican nominee—all the other Republicans have suspended their campaigns. There is still talk of a contested convention; but that seems to be just talk. Barring some very unusual event, it appears that Trump will be the Republican candidate.

For the Democrats, Bernie Sanders has said he is in it to the end. But, most of the folks in the media have taken the stance that it is over—Hillary will be the nominee. While Sanders has not been mathematically eliminated, the smart and big money is on Hillary.

While many Republicans have lined up behind Trump already, there is still a significant number of establishment Republicans who have embraced the “never Trump” view. These folks seem to have a few options. One is to simply not vote for president. While this is not a vote for Hillary, it does help her in that the vote could have been one for Trump. Those taking this option can claim that it is the morally better choice: while this does help Hillary win, it relieves the voter of the moral responsibility that would go along with voting for Trump or Hillary. This can be seen as analogous to the moral distinction between killing and letting die: while the difference might be seen as fine, it is nonetheless a difference.

The second option is to vote for someone other than Hillary or Trump. This could be a write in (vote for me) or perhaps even a third party candidate. As with not voting for either Trump or Hillary, this avoids the moral responsibility of providing a positive contribution to a win. It could also have the virtue of making a moral or political statement.

The third option, which might seem to be political blasphemy, is to vote for Hillary. While the Republicans seem to have cultivated a demonic hate for the devilish Hillary, she is actually far closer to a Republican establishment candidate than Trump. While Hillary does profess liberal social values, these are now mainstream and middle of the road. That is, her professed social values seem to match those of the majority of Americans. More importantly, she ticks many of the boxes of the establishment Republicans: she is pro-trade, pro-Wall Street, well connected to major corporations, a hawk on defense, someone who favors a foreign policy that advances America’s economic interests, and she has a tough-on-crime stance (or perhaps did). She is also an establishment politician, just like them. She knows how the game is played and plays the same way they want it played.

While Trump does not actually have any developed policy, he has expressed his dislike of free trade, has expressed hostility towards Wall Street, has used isolationist language, and has expressed views that seem rather pro-worker: making corporations bring jobs back to the United States and similar things that almost make him sound like a union boss of old. Trump seems to be playing his own game, much to the dismay of the establishment.

Because of these facts, Hillary seems to be a viable choice for the Republican establishment: she is the closest thing to a traditional establishment Republican and will ensure that it will be business as usual if she is elected.

Interestingly, while there is a never Trump movement for Republicans, there is also a Bernie or Bust movement among Democrats and independents. As with the Republican establishment voters, they seem to have three options: do not vote, vote for a third party, or vote for Trump. While it might seem impossible for Bernie supporters to go Trump, Trump is the other populist candidate and the one who has said he will do the most for working Americans. While I think this is a political sham, it does have its appeal. And, who knows, Trump might actually intend to make good on his vague assertions. So, this election might see some strange voting: Republicans voting for Hillary and former Sanders supporters backing Trump.


The shame of public shaming

Russell Blackford, University of Newcastle

Public shaming is not new. It’s been used as a punishment in all societies – often embraced by the formal law and always available for day-to-day policing of moral norms. However, over the past couple of centuries, Western countries have moved away from more formal kinds of shaming, partly in recognition of its cruelty.

Even in less formal settings, shaming individuals in front of their peers is now widely regarded as unacceptable behaviour. This signifies an improvement in the moral milieu, but its effect is being offset by the rise of social media and, with it, new kinds of shaming.

Indeed, as Welsh journalist and documentary maker Jon Ronson portrays vividly in his latest book, social media shaming has become a social menace. Ronson’s So You’ve Been Publicly Shamed (Picador, 2015) is a timely contribution to the public understanding of an emotionally charged topic.

Shaming is on the rise. We’ve shifted – much of the time – to a mode of scrutinising each other for purity. Very often, we punish decent people for small transgressions or for no real transgressions at all. Online shaming, conducted via the blogosphere and our burgeoning array of social networking services, creates an environment of surveillance, fear and conformity.

The making of a call-out culture

I noticed the trend – and began to talk about it – around five years ago. I’d become increasingly aware of cases where people with access to large social media platforms used them to “call out” and publicly vilify individuals who’d done little or nothing wrong. Few onlookers were prepared to support the victims. Instead, many piled on with glee (perhaps to signal their own moral purity; perhaps, in part, for the sheer thrill of the hunt).

Since then, the trend to an online call-out culture has continued and even intensified, but something changed during 2015. Mainstream journalists and public intellectuals finally began to express their unease.

There’s no sign that the new call-out culture is fading away, but it’s become a recognised phenomenon. It is now being discussed more openly, and it’s increasingly questioned. That’s partly because even its participants – people who assumed it would never happen to them – sometimes find themselves “called out” for revealing some impurity of thought. It’s become clear that no moral or political affiliation holds patents on the weaponry of shaming, and no one is immune to its effects.

As Ronson acknowledges, he has, himself, taken part in public shamings, though the most dramatic episode was a desperate act of self-defence when a small group of edgy academics hijacked his Twitter identity to make some theoretical point. Shame on them! I don’t know what else he could have done to make them back down.

That, however, was an extreme and peculiar case. It involved ongoing abuse of one individual by others who refused to “get” what they were doing to distress him, even when asked to stop. Fascinating though the example is, it is hardly a precedent for handling more common situations.

At one time, if we go along with Ronson, it felt liberating to speak back in solidarity against the voices of politicians, corporate moguls, religious leaders, radio shock jocks, newspaper columnists and others with real power or social influence.

But there can be a slippery slope… from talking back in legitimate ways against, say, a powerful journalist (criticising her views and arguments, and any abusive conduct), to pushing back in less legitimate ways (such as attempting to silence her viewpoint by trying to get her fired), to destroying relatively powerless individuals who have done nothing seriously wrong.

Slippery slope arguments have a deservedly bad reputation. But some slopes really are slippery, and some slippery slope arguments really are cogent. With public online shaming, we’ve found ourselves, lately, on an especially slippery slope. In more ways than one, we need to get a grip.

Shaming the shamers

Ronson joined in a campaign of social media shaming in October 2009: one that led to some major advertisers distancing themselves from the Daily Mail in the UK. This case illustrates some problems when we discuss social media shaming, so I’ll give it more analysis than Ronson does.

One problem is that, as frequently happens, it was a case of “shame the shamer”. The recipient of the shaming was especially unsympathetic because she was herself a public shamer of others.

The drama followed a distasteful – to say the least – column by Jan Moir, a British journalist with a deplorable modus operandi. Moir’s topic was the death of Stephen Gately, one of the singers from the popular Irish band Boyzone.

Gately had been found dead while on holiday in Mallorca with his civil partner, Andrew Cowles. Although the coroner attributed the death to natural causes, Moir wrote that it was “not, by any yardstick, a natural one” and that “it strikes another blow to the happy-ever-after myth of civil partnerships.”

Ronson does not make the point explicit in So You’ve Been Publicly Shamed, but what immediately strikes me is that Moir was engaging in some (not-so-)good old-fashioned mainstream media shaming. She used her large public platform to hold up identified individuals to be shamed over very private behaviour. Gately could not, of course, feel any shame from beyond the grave, but Moir’s column was grossly tasteless since he had not even been buried when it first appeared.

Moir stated, self-righteously: “It is important that the truth comes out about the exact circumstances of [Gately’s] strange and lonely death.” But why was it so important that the public be told such particulars as whether or not Cowles (at least) hooked up that tragic evening for sex with a student whom Moir names, and whether or not some, or all, of the three young men involved used cannabis or other recreational drugs that night?

To confirm Moir’s propensities as a public shamer, no one need go further than the same column. She follows her small-minded paragraphs about Gately with a few others that shame “socialite” Tara Palmer-Tomkinson for no worse sin than wearing a revealing outfit to a high-society party.

You get the picture, I trust. I’m not asking that Moir, or anyone else, walk on eggshells lest her language accidentally offend somebody, or prove open to unexpectedly uncharitable interpretations. Quite the opposite: we should all be able to speak with some spontaneity, without constantly censoring how we formulate our thoughts. I’ll gladly extend that freedom to Moir.

But Moir is not merely unguarded in her language: she can be positively reckless, as with her suggestion that Palmer-Tomkinson’s wispy outfit might more appropriately be worn by “Timmy the Tranny, the hat-check personage down at the My-Oh-My supper club in Brighton.” No amount of charitable interpretation can prevent the impression that she is often deliberately, or at best uncaringly, hurtful. In those circumstances, I have no sympathy for her if she receives widespread and severe criticism for what she writes.

When it comes to something like Moir’s hatchet job on Gately and Cowles, and their relationship, I can understand the urge to retaliate – to shame and punish in return. It’s no wonder, then, that Ronson discusses the feeling of empowerment when numerous people, armed with their social media accounts, turned on badly behaved “giants” such as the Daily Mail and its contributors. As it seemed to Ronson in those days, not so long ago, “the silenced were getting a voice.”

But let’s be careful about this.

Some distinctions

A few aspects need to be teased out. Even when responding to the shamers, we ought to think about what’s appropriate.

For a start, I am – I’m well aware – being highly critical of Moir’s column and her approach to journalism. In that sense, I could be said to be “shaming” her. But we don’t have to be utterly silent when confronted by unpleasant behaviour from public figures.

My criticisms are, I submit, fair comment on material that was (deliberately and effectively) disseminated widely to the public. In writing for a large audience in the way she does – especially when she takes an aggressive and hurtful approach toward named individuals – Moir has to expect some push-back.

We can draw reasonable distinctions. I have no wish to go further than criticism of what Moir actually said and did. I don’t, for example, want to misrepresent her if I can avoid it, to make false accusations, or to punish her in any way that goes beyond criticism. I wouldn’t demand that she be no-platformed from a planned event or that advertisers withdraw their money from the Daily Mail until she is fired.

The word criticism is important. We need to think about when public criticism is fair and fitting, when it becomes disproportionate, and when it spirals down into something mean and brutal.

Furthermore, we can distinguish between 1) Moir’s behaviour toward individuals and 2) her views on issues of general importance, however wrong or ugly those views might be. In her 2009 comments on Gately’s death, the two are entangled, but it doesn’t follow that they merit just the same kind of response.

Moir’s column intrudes on individuals’ privacy and holds them up for shaming, but it also expresses an opinion on legal recognition of same-sex couples in the form of civil unions. Although she is vague, Moir seems to think that individuals involved in legally recognised same-sex relationships are less likely to be monogamous (and perhaps more likely to use drugs) than people in heterosexual marriages. This means, she seems to imply, that there’s something wrong with, or inferior about, same-sex civil unions.

In fairness, Moir later issued an apology in which she explained her view: “I was suggesting that civil partnerships – the introduction of which I am on the record in supporting – have proved just to be as problematic as marriages.” This is, however, difficult to square with the words of her original column, where she appears to deny, point blank, that civil unions “are just the same as heterosexual marriages.”

Even if she is factually correct about statistical differences between heterosexual marriages and civil unions, this at least doesn’t seem to be relevant to public policy. After all, plenty of marriages between straight people are “open” (and may or may not involve the use of recreational drugs), but they are still legally valid marriages.

If someone does think certain statistical facts about civil unions are socially relevant, however, it’s always available to them to argue why. They should be allowed to do so without their speech being legally or socially suppressed. It’s likewise open to them to produce whatever reliable data might be available. Furthermore, we can’t expect critics of civil unions to present their full case on every occasion when they speak up to express a view. That would be an excessive condition for any of us to have to meet when we express ourselves on important topics.

More generally, we can criticise bad ideas and arguments – or even make fun of them if we think they’re that bad – but as a rule we shouldn’t try to stop their expression.

Perhaps some data exists to support Moir’s rather sneering claims about civil unions. But an anecdote about the private lives of a particular gay couple proves nothing one way or the other. Once again, many heterosexual marriages are not monogamous, but a sensational story involving a particular straight couple would prove nothing about how many.

In short, Moir is entitled to express her jaundiced views about civil unions or same-sex relationships more generally, and the worst she should face is strong criticism, or a degree of satire, aimed primarily at the views themselves. But shining a spotlight on Cowles and Gately was unfair, callous, nasty, gratuitous, and (to use one of her own pet words) sleazy. In addition to criticising her apparent views, we can object strongly when she publicly shames individuals.

Surfing down the slippery slope

Ronson discusses a wide range of cases, and an evident problem is that they can vary greatly, making it difficult to draw overall conclusions or to frame exact principles.

Some individuals who’ve been publicly shamed clearly enough “started it”, but even they can suffer from a cruel and disproportionate backlash. Some have been public figures who’ve genuinely done something wrong, as with Jonah Lehrer, a journalist who fabricated quotes to make his stories appear more impressive. It’s only to be expected that Lehrer’s irresponsibility and poor ethics would damage his career. But even in his case, the shaming process was over the top. Some of it was almost sadistic.

Other victims of public shaming are more innocent than Lehrer. Prominent among them is Justine Sacco, whom Ronson views with understandable sympathy. Sacco’s career and personal life were ruined after she made an ill-advised tweet on 20 December 2013. It said: “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” She was then subjected to an extraordinarily viral Twitter attack that led quickly to her losing her job and becoming an international laughing stock.

It appears that her tweet went viral after a Gawker journalist retweeted it (in a hostile way) to his 15,000 followers at the time – after just one person among Sacco’s 170 followers had passed it on to him.

Ronson offers his own interpretation of the Sacco tweet:

It seemed obvious that her tweet, whilst not a great joke, wasn’t racist, but a self-reflexive comment on white privilege – on our tendency to naively imagine ourselves immune to life’s horrors. Wasn’t it?

In truth, it’s not obvious to me just how to interpret the tweet, and of course I can’t read Sacco’s mind. If it comes to that, I doubt that she pondered the wording carefully. Still, this small piece of sick humour was aimed only at her small circle of Twitter followers, and it probably did convey to them something along the lines of what Ronson suggests. In its original context, then, it did not merely ridicule the plight of black AIDS victims in Africa.

Much satire and humour is, as we know, unstable in its meaning – simultaneously saying something outrageous and testing our emotions as we find ourselves laughing at it. It can make us squirm with uncertainty. This applies (sometimes) to high literary satire, but also to much ordinary banter among friends. We laugh but we also squirm.

In any event, charitable interpretations – if not a single straightforward one – were plainly available for Sacco’s tweet. This was a markedly different situation from Jan Moir’s gossip-column attacks on hapless celebrities and socialites. And unlike Moir, Sacco lacked a large media platform, an existing public following, and an understanding employer.

Ronson also describes the case of Lindsey Stone, a young woman whose life was turned to wreckage because of a photograph taken in Arlington National Cemetery in Virginia. In the photo she is mocking a “Silence and Respect” sign by miming a shout and making an obscene gesture. The photo was uploaded on Facebook, evidently with inadequate privacy safeguards, and eventually it went viral, with Stone being attacked by a cybermob coming from a political direction opposite to the mob that went after Sacco.

While the Arlington photograph might seem childish, or many other things, posing for it and posting it on Facebook hardly add up to any serious wrongdoing. It is not behaviour that merited the outcome for Lindsey Stone: destruction of her reputation, loss of her job, and a life of ongoing humiliation and fear.

Referring to such cases, Ronson says:

The people we were destroying were no longer just people like Jonah [Lehrer]: public figures who had committed actual transgressions. They were private individuals who really hadn’t done anything much wrong. Ordinary humans were being forced to learn damage control, like corporations that had committed PR disasters.

Thanks to Ronson’s intervention, Stone sought help from an agency that rehabilitates online reputations. Of Stone’s problems in particular, he observes:

The sad thing was that Lindsey had incurred the Internet’s wrath because she was impudent and playful and foolhardy and outspoken. And now here she was, working with Farukh [an operative for the rehabilitation agency] to reduce herself to safe banalities – to cats and ice cream and Top 40 chart music. We were creating a world where the smartest way to survive is to be bland.

This is not the culture we wanted

Ronson also quotes Michael Fertik, from the agency that helped Stone: “We’re creating a culture where people feel constantly surveilled, where people are afraid to be themselves.”

“We see ourselves as nonconformist,” Ronson concludes sadly, “but I think all of this is creating a more conformist, conservative age.”

This is not the culture we wanted. It’s a public culture that seems broken, but what can we do about it?

For a start, it helps to recognise the problem, but it’s difficult, evidently, for most people to accept the obvious advice: Be forthright in debating topics of general importance, but always subject to some charity and restraint in how you treat particular people. Think through – and not with excuses – what that means in new situations. Be willing to criticise people on your own side if they are being cruel or unfair.

It’s not our job to punish individuals, make examples of them, or suppress their views. Usually we can support our points without any of this; we can do so in ways that are kinder, more honest, more likely to make intellectual progress. The catch is, it requires patience and courage.

Our public culture needs more of this sort of patience, more of this sort of courage. Can we – will we – rise to the challenge?

Russell Blackford, Conjoint Lecturer in Philosophy, University of Newcastle

This article was originally published on The Conversation. Read the original article.


Trump’s Enquiring Rhetoric

As this is being written, Donald Trump is the last surviving Republican presidential candidate. His final opponents, Cruz and Kasich, suspended their campaigns, though perhaps visions of a contested convention still haunt their dreams.

Cruz left the field of battle with a bizarre Trump arrow lodged in his buttocks: Trump had attacked Cruz by alleging that Ted Cruz’ father was associated with Lee Harvey Oswald. The basis for this claim was an article in the National Enquirer, a tabloid that has claimed Justice Scalia was assassinated by a hooker working for the CIA. While this tabloid has no credibility, the fact that Trump used it as a source necessitated an investigation into the claim about Cruz’ father. As should be expected, Politifact ranked it as Pants on Fire. I almost suspect that Trump is trolling the media and laughing about how he has forced them to seriously consider and thoroughly investigate claims that are utterly lacking in evidence (such as his claims about televised celebrations in America after the 9/11 attacks).

When confronted about his claim about an Oswald-Cruz connection, Trump followed his winning strategy: he refused to apologize and engaged in some Trump-Fu as his “defense.” When interviewed on ABC, his defense was as follows:  “What I was doing was referring to a picture reported and in a magazine, and I think they didn’t deny it. I don’t think anybody denied it. No, I don’t know what it was exactly, but it was a major story and a major publication, and it was picked up by many other publications. …I’m just referring to an article that appeared. I mean, it has nothing to do with me.”

This response begins with what appears to be a fallacy: he is asserting that if a claim is not denied, then it is therefore true (I am guessing the “they” is either the Cruz folks or the National Enquirer folks). This can be seen as a variation on the classic appeal to ignorance fallacy. In this fallacy, a person infers that if there is a lack of evidence against a claim, then the claim is true. However, proving a claim requires that there be adequate evidence for the claim, not just a lack of evidence against it. There is no evidence that I do not have a magical undetectable pet dragon that only I can sense. This, however, does not prove that I have such a pet.

While a failure to deny a claim might be regarded as suspicious, not denying a claim is not proof the claim is true. It might not even be known that a claim has been made (so it would not be denied). For example, Kanye West is not denying that he plans to become master of the Pan flute—but this is not proof he intends to do this. It can also be a good idea to not lend a claim psychological credence by denial—some people think that denial of a claim is evidence it is true. Naturally, Cruz did end up denying the claim.

Trump next appears to be asserting the claim is true because it was “major” and repeated. He failed to note the “major” publication is a tabloid that is lacking in credibility. As such, Trump could be seen as engaging in a fallacious appeal to authority. In this case, the National Enquirer lacks the credibility needed to serve as the basis for a non-fallacious argument from authority. Roughly put, a good argument from authority is such that the credibility of the authority provides good grounds for accepting a claim. Trump did not have a good argument from authority.

Trump also uses a fascinating technique of “own and deny.” He does this by launching an attack and then both “owning” and denying it. It is as if he punched Cruz in the face and then said, “It wasn’t me, someone else did the punching. But I will punch Cruz again. Although it wasn’t me.” I am not sure if this is a rhetorical technique or a pathological condition. However, it does allow him the best of both worlds: he can appear tough and authentic by “owning it” yet also appear to not be responsible for the attack. This seems to be quite appealing to his followers, although it is obviously logically problematic: one can own an attack or deny it, but not both.

He also makes use of an established technique:  he gets media attention drawn to a story and then uses this attention to “prove” the story is true (because it is “major” and repeated). While effective, this technique does not prove a claim is true.

Trump was also interviewed on NBC and asked why he attacked Cruz in the face of almost certain victory in Indiana.  In response, he said, “Well, because I didn’t know I had it in the grasp. …I had no idea early in the morning that was — the voting booths just starting — the voting booths were practically not even opened when I made this call. It was a call to a show. And they ran a clip of some terrible remarks made by the father about me. And all I did is refer him to these articles that appeared about his picture. And — you know, not such a bad thing.”

This does provide something of a defense for Trump. As he rightly says, he did not know he would win and he hoped that his attack would help his chances. While the fact that a practice is common does not justify it (this would be the common practice fallacy), Trump seems to be playing within the rules of negative campaigning. That said, the use of the National Enquirer as a source is a new twist, as is linking an opponent to the JFK assassination. This is not to say that Trump is acting in a morally laudable manner, just that he is operating within the rules of the game. To use an analogy, while the brutal hits of football might be regarded as morally problematic, they are within the rules of the game. Likewise, such attacks are within the rules of politics.

However, Trump goes on to commit the “two wrongs make a right” fallacy: since bad things were said about Trump, he concludes that he has the right to strike back. While Trump has every right to respond to attacks, he does not have a right to respond with a completely fabricated accusation.

Trump then moves to downplaying what he did and engages in one of his signature moves: he is not really to blame (he just pointed out the articles). So, his defense is essentially “I am just punching the guy back. But, I really didn’t punch him. I just pointed out that someone else punched him. And that punching was not a bad thing.”


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

A Patient’s Right to Know


All professions have their problem members and the field of medicine is no exception. Fortunately, the percentage of bad doctors is rather low—but this small percentage can do considerable harm. After all, when your professor is incompetent, you might not learn as much as you should. If your doctor is incompetent, she could kill you.

The May 2016 issue of Consumer Reports includes a detailed article by Rachel Rabkin Peachman covering the subject of bad doctors and the difficulty patients face in learning whether a physician is a good doctor or a disaster.

Based on the research in the article, there are three main problems. The first is that there are bad doctors. The article presents numerous examples to add color to the dry statistics, and these include such tales of terror as doctors molesting patients, doctors removing healthy body parts, and patient deaths due to negligence, impairment or incompetence. These are obviously all moral and professional failings on the part of the doctors, and they should clearly not be engaged in such misdeeds.

The second is that, according to Peachman, the disciplinary actions taken by the profession tend to be rather less than ideal. While doctors should enjoy the protection of due process, the hurdles are, perhaps, too high. There is also the problem that the responses to the misdeeds are often very mild. For example, a doctor whose negligence has resulted in the death of patients can be allowed to keep practicing with only minor limitations. As another example, a doctor who has engaged in sexual misconduct might continue practicing after a class or two on ethics and with the requirement that someone else be present when he is seeing patients. In addition to the practical concerns about this, there is also the moral concern that the disciplinary boards are failing to protect patients.

One possible argument against harsher punishments is that there is a shortage of doctors and taking a doctor out of practice would have worse consequences than allowing a bad doctor to keep practicing. This would be the basis for a utilitarian argument for continuing mild punishments. Crudely put, it is better to have a doctor who might kill a patient or two than no doctor at all.

This argument does have some appeal. However, there is the factual question of whether or not the mild punishments do more good than harm. If they do, then one would need to accept that this approach is morally tolerable. If not, then the argument would fail. There is also the response that consequences are not what matters—people should be reprimanded based on their misdeeds and not based on some calculation of utility. This also has some intuitive appeal.

It could also be argued that it should be left to patients to judge if they want to take the risk. If a doctor is known for sexual misdeeds with female patients but is fine with male patients, then a man who has few or no other options might decide that the doctor is his best choice. This leads to the third problem.

The third problem is that it is very difficult for patients to learn about bad doctors. While there is a National Practitioner Data Bank (NPDB), it is off limits to patients and is limited to people in law enforcement, hospital administration, insurance and a few other groups.

The main argument advanced against allowing public access to the NPDB is based on the premise that it contains inaccurate information which could be harmful to innocent doctors. Interestingly enough, this makes it similar to credit report data, which is notorious for containing harmful inaccuracies that can plague people.

While the possibility of incorrect data is a matter of concern, that premise best supports the conclusion that the NPDB should be reviewed regularly to ensure that the information is accurate. While perfect accuracy is not possible, it would seem to be well within the realm of possibility for the information to meet a reasonable standard of accuracy. This could be aided by providing robust tools for doctors to inform those running the NPDB of errors and to inform doctors about the content of their files. As such, the error argument is easily defeated.

Patients do have some access to data about doctors, but there are many barriers in place. In some cases, there is a financial cost to access data. In almost all cases, the patient will need to grind through lengthy documents and penetrate the code of legalese. There is also the fact that this data is often incomplete and inaccurate. While it could be argued that a responsible patient would expend the resources needed to research a doctor, this seems to be an unreasonable request—a patient should not need to do all this just to know that the doctor is competent. A reason for this is that a patient might be in rough shape and expecting her to engage in all this work would seem unfair. There is also the fact that one legitimate role of the state is to protect citizens from harm, and having a clear means of identifying bad doctors would seem to fall within this.

Given the above, it seems reasonable to accept that a patient has the right to know about her doctor’s competence and should have an easy means of acquiring accurate information. This enables a patient to make an informed choice about her physician without facing an undue burden. This will also help the profession—good doctors will attract more patients and bad doctors will have a greater incentive to improve their practice.

