Tag Archives: united states

Is Everyone a Little Bit Racist?

One in a series of posters attacking Radical Republicans on the issue of black suffrage, issued during the Pennsylvania gubernatorial election of 1866. (Photo credit: Wikipedia)

It has been argued that everyone is a little bit racist. Various studies have shown that black Americans are treated rather differently than white Americans. Examples of this include black students being more likely to be suspended than white students, blacks being arrested at a higher rate than whites, and job applications with “black sounding” names being less likely to get callbacks than those with “white sounding” names. Interestingly, studies have shown that the alleged racism is not confined to white Americans: black Americans also seem to share it. One study involves a simulator in which the participant takes on the role of a police officer and must decide whether to shoot or holster her weapon when confronted by a simulated person. The study indicates that participants, regardless of race, shoot more quickly at blacks than at whites and are more likely to shoot an unarmed black person than an unarmed white person. There are, of course, many other studies and examples that support the claim that everyone is a little bit racist.

Given the evidence, it would seem reasonable to accept the claim that everyone is a little bit racist. It is, of course, also an accepted view in certain political circles. However, there seems to be something problematic with claiming that everyone is racist, even if it is the claim that the racism is of the small sort.

One point of logical concern is that inferring that all people are at least a little racist on the basis of such studies would be problematic. Strictly speaking, what should be claimed is that the studies indicate the presence of racism in those sampled and that these findings can be inductively generalized to the entire population. But this could be dismissed as a quibble about induction.

Some people, as might be suspected, would take issue with this claim because to be accused of racism is rather offensive. Some, as also might be suspected, would take issue with this claim because they contend that racism has ended in America, hence people are not racist. Not even a little bit. Others might complain that the accusation is a political weapon that is wielded unjustly. I will not argue about these matters, but will instead focus on another concern, that of the concept of racism in this context.

In informal terms, racism is prejudice, antagonism or discrimination based on race. Since various studies show that people have prejudices linked to race and engage in discrimination along racial lines, it seems reasonable to accept that everyone is at least a bit racist.

To use an analogy, consider the matter of lying. A liar, put informally, is someone who makes a claim that she does not believe with the intention of getting others to accept it as true. Since there is considerable evidence that people engage in this behavior, it can be claimed that everyone is a little bit of a liar. That is, everyone has told a lie.

Another analogy would be to being an abuser. Presumably each person has been at least a bit mean or cruel to another person she has been in a relationship with (be it a family relationship, a friendship or a romantic relationship). This would thus entail that everyone is at least a little bit abusive.

The analogies could continue almost indefinitely, but it will suffice to end them here, with the result that we are all racist, abusive liars.

On the one hand, the claim is true. I have been prejudiced. I have lied. I have been mean to people I love. I have engaged in addictive behavior. The same is likely to be true of even the very best of us. Since we have lied, we are liars. Since we have abused, we are abusers. Since we have prejudice and have discriminated based on race, we are racists.

On the other hand, the claim is problematic. After all, to judge someone to be a racist, an abuser, or a liar is to make a strong moral judgment of the person. For example, imagine the following conversation:

Sam: “I’m interested in your friend Sally. You know her pretty well…what is she like?”

Me: “She is a liar and a racist.”

Sam: “But…she seems so nice.”

Me: “She is. In fact, she’s one of the best people I know.”

Sam: “But you said she is a liar and a racist.”

Me: “Oh, she is. But just a little bit.”

Sam: “What?”

Me: “Well, she told me that when she was in college, she lied to a guy to avoid going on a date. She also said that when she was a kid, she thought white people were all racists and would not be friends with them. So, she is a liar and a racist.”

Sam: “I don’t think you know what those words mean.”

The point is, of course, that terms like “racist”, “abuser” and “liar” have what can be regarded as proper moral usage. To be more specific, because these are such strong terms, they should be applied in cases in which they actually fit. For example, while anyone who lies is technically a liar, the designation of being a liar should only apply to someone who routinely engages in that behavior. That is, a person who has a moral defect in regards to honesty. Likewise, anyone who has a prejudice based on race or discriminates based on race is technically a racist. However, the designation of racist should be reserved for those who have the relevant moral defect—that is, racism is their way of being, as opposed to failing to be perfectly unbiased. As such, using the term “racist” (or “liar”) in claiming that “everyone is a little bit racist” (or “everyone is a little bit of a liar”) either waters down the moral term or imposes too harsh a judgment on the person. Either way would be problematic.

So, if the expression “we are all a little bit racist” should not be used, what should replace it? My suggestion is to speak instead of people being subject to race-linked biases. While saying “we are all subject to race-linked biases” is less attention grabbing than “we are all a little bit racist”, it seems more honest as a description.

 


Group Responsibility



After the murders in France, people were once again discussing the matter of group responsibility. In the case of these murders, some contend that all Muslims are responsible for the actions of the few who committed murder. In most cases people do not claim that all Muslims support the killings, but there is a tendency to still put a special burden of responsibility upon Muslims as a group.

Some people do take the killings and other terrible events as evidence that Islam itself is radical and violent. This sort of “reasoning” is, obviously enough, the same sort used when certain critics of the Tea Party drew the conclusion that the movement was racist because some individuals in the Tea Party engaged in racist behavior. It is also the same “reasoning” used to condemn all Christians or Republicans based on the actions of a very few.

To infer that an entire group has a certain characteristic (such as being violent or prone to terrorism) based on the actions of a few would generally involve committing the fallacy of hasty generalization. It can also be seen as the fallacy of suppressed evidence in that evidence contrary to the claim is simply ignored. For example, to condemn Islam as violent based on the actions of terrorists would be to ignore the fact that the vast majority of Muslims are as peaceful as people of other faiths, such as Christians and Jews.

It might be objected that a group can be held accountable for the misdeeds of its members even when those misdeeds are committed by a few and even when these misdeeds are supposedly not in accord with the real beliefs of the group. For example, if I were to engage in sexual harassment while on the job, Florida A&M University could be held accountable for my actions. Thus, it could be argued, all Muslims are accountable for the killings in France and these killings provide just more evidence that Islam itself is a violent and murderous religion.

In reply, Islam (like Christianity) is not a monolithic faith with a single hierarchy over all Muslims. After all, there are various sects of Islam and a multitude of diverse Muslim hierarchies. For example, the Muslims of Saudi Arabia do not fall under the hierarchy of the Muslims of Iran.

As such, treating all of Islam as an organization with a chain of command and a chain of responsibility that extends throughout the entire faith would be rather problematic. To use an analogy, sports fans sometimes go on violent rampages after events. While the actions of the violent fans should be condemned, the peaceful fans are not accountable for those actions. After all, while the fans are connected by their being fans of a specific team, this is not enough to form a basis for accountability. So, if some fans of a team set fire to cars, this does not make all the fans of that team responsible. Also, if people unassociated with the fans decide to jump into action and destroy things, it would be even more absurd to claim that the peaceful fans are accountable for their actions. As such, to condemn all of Islam based on what happened in France would be both unfair and unreasonable. The people who murdered in France are accountable, but Islam cannot have these incidents laid at its collective doorstep.

This, of course, raises the question of the extent to which even an organized group is accountable for its members. One intuitive guide is that the accountability of the group is proportional to the authority the group has over the individuals. For example, while I am a philosopher and belong to the American Philosophical Association, other philosophers have no authority over me. As such, they have no accountability for my actions. In contrast, my university has considerable authority over my work life as a professional philosopher and hence can be held accountable should I, for example, sexually harass a student or co-worker.

The same principle should be applied to Islam (and any faith). Being a Muslim is analogous to being a philosopher in that there is a recognizable group. As with being a philosopher, merely being a Muslim does not make a person accountable for the actions of all other Muslims.

But, just as I belong to an organization with a hierarchy, a Muslim can belong to an analogous organization, such as a mosque or ISIS. To the degree that the group has authority over the individual, the group is accountable. So, if the killers in France were acting as members of ISIS or Al-Qaeda, then the group would be accountable. However, while groups like ISIS and Al-Qaeda might delude themselves into thinking they have legitimate authority over all Muslims, they obviously do not. After all, they are opposed by most Muslims.

So, with a religion as vast and varied as Islam, it cannot reasonably be claimed that there is a central earthly authority over its members, and this serves to limit the collective responsibility of the faith. Naturally, the same would apply to other groups with a similar lack of overall authority, such as Christians, conservatives, liberals, Buddhists, Jews, philosophers, runners, and satirists.

 


Of Lies & Disagreements

When people disagree on controversial issues, it is not uncommon for one person to accuse another of lying. In some cases this accusation is warranted and in others it is not; discerning between these cases is a matter of legitimate concern. There is also some confusion about what should count as a lie and what should not.

While this might seem like a matter of mere semantics, the distinction between what is a lie and what is not actually matters. The main reason for this is that to accuse a person of lying is, in general, to lay a moral charge against the person. It is not merely to claim that the person is in error but to claim that the person is engaged in something that is morally wrong. While some people do use “lie” interchangeably with “untruth”, there is clearly a difference.

To use an easy and obvious example, imagine a student who is asked which year the United States dropped an atomic bomb on Hiroshima. The student thinks it was in 1944 and writes that down. She has made an untrue claim, but it would clearly not do for the teacher to accuse her of being a liar.

Now, imagine that one student, Sally, is asking another student, Jane, about when the United States bombed Hiroshima. Jane does not like Sally and wants her to do badly on her exam, so she tells her that the year was 1944, though she knows it was 1945. If Sally tells another student that it was 1944 and also puts that down on her test, Sally could not justly be accused of lying. Jane, however, can be fairly accused. While Sally is saying and writing something untrue, she believes the claim and is not acting with any malicious intent. In contrast, Jane believes she is saying something untrue and is acting from malice. This suggests some important distinctions between lying and making untrue claims.

One obvious distinction is that a lie requires that the person believe she is making an untrue claim. Naturally, there is the practical problem of determining whether a person really believes what she is claiming, but this is not relevant to the abstract distinction: if the person believes the claim, then she would not be lying when she makes that claim.

It can, of course, be argued that a person can be lying even when she believes what she claims—that what matters is whether the claim is true or not. The obvious problem with this is that the accusation of lying is not just a claim that the person is wrong; it is also a moral condemnation of wrongdoing. While “lie” could be taken to apply to any untrue claim, there would then be a need for a new word to convey not just error but also condemnation.

It can also be argued that a person can lie by telling the truth, but by doing so in such a way as to mislead a person into believing something untrue. This does have a certain appeal in that it includes the intent to deceive, but differs from the “stock” lie in that the claim is true (or at least believed to be true).

A second obvious distinction is that the person must have a malicious intent. This is a key factor that distinguishes the untruths of the fictions of movies, stories and shows from lies. When the actor playing Darth Vader says to Luke “No. I am your father.”, he is saying something untrue, yet it would be unfair to say that the actor is thus a liar. Likewise, the references to dragons, hobbits and elves in The Hobbit are all untrue—yet one would not brand Tolkien a liar for these words.

The obvious reply to this is that there is a category of lies that lack a malicious intent. These lies are often told with good intentions, such as a compliment about a person’s appearance that is not true or when parents tell their children about Santa Claus. As such, it would seem that there are lies that are not malicious—these are often called “white lies.” If intent matters, then this sort of lie would seem rather less bad than the malicious lie; although they do meet a general definition of “lie” which involves making an untrue claim with the intent to deceive. In this case, the deceit is supposed to be a positive one. Naturally, there are those who would argue that such deceits are still wrong, even if the intent is a good one. The matter is also complicated by the fact that there seem to be untrue claims aimed at deceit that intuitively seem morally acceptable. The classic case is, of course, misleading a person who is out to commit murder.

In some cases one person will accuse another of lying because the person disagrees with a claim made by the other person. For example, a person might claim that Obamacare will help Americans and be accused of lying about this by a person who is opposed to Obamacare.

In this sort of context, the accusation that the person is lying seems to rest on three points. The first is that the accuser thinks that the claim is untrue and that the accused does not actually believe it—that is, the accused is engaged in an intentional deceit. The second is that the accuser believes that the accused intends to deceive—that is, he expects people to believe him. The third is that the accuser thinks that the accused has some malicious intent. This might be limited to the mere intent to deceive, but it typically goes beyond this. For example, the proponent of Obamacare might be suspected of employing his alleged deceit to spread socialism and damage businesses. Or it might be that the person is trolling.

So, in order to be justified in accusing a person of lying, it needs to be shown that the person does not really believe his claim, that he intends to deceive and that there is some malicious intent. Arguing against the claim can show that it is untrue, but this would not be sufficient to show that the person is lying—unless one takes a lie to merely be a claim that is not true (so, if someone made a mistake in a math problem and got the wrong answer, he would be a liar). What would be needed would be adequate evidence that the person is insincere in his claim (that is, he believes he is saying the untrue), that he intends to deceive and that there is some malicious intent.

Naturally, effective criticism of a claim does not require showing that the person making the claim is a liar—this is a matter of arguing about the claim. In fact, the truth or falsity of a claim has no connection to the intent of the person making the claim or what he actually believes about it. An accusation of lying, rather, moves from the issue of whether the claim is true or not to a moral dispute about the character of the person making the claim. That is, whether he is a liar or not. It can, of course, be a useful persuasive device to call someone a liar, but it (by itself) does nothing to prove or disprove the claim under dispute.

 


Cyber Warfare: Proportionality

As predicted by science fiction writers, cyber warfare has become a rather real thing. The United States and Israel, some say, launched a cyber-attack on the Iranian nuclear program. North Korea, some say, launched a cyber-attack on Sony.

On the face of it, cyber-attacks seem to be a special sort of thing. While conventional attacks can be secret and hard to trace, the typical cyber-attack does not cause the sort of damage and casualties that a traditional attack causes. For example, a conventional attack aimed at the Iranian nuclear program would have most likely killed people and caused considerable damage. In contrast, the cyber-attack was narrowly focused and did not kill anyone. People often seem to “feel” that cyber-attacks are just “different” since they do not involve the sorts of things that most people think of as weapons and do not do the sort of damage that people tend to associate with military attacks. Despite this conceptual problem, it seems quite reasonable to accept that cyber-attacks can have qualities that make it reasonable to regard them as military attacks. To use the obvious analogy, criminals and soldiers both use guns, but the difference between a bank robbery and a military attack lies in the agents carrying out the attack, those ordering the attack, and the goals of the attack. In the case of cyber-attacks, cyber-criminals and cyber-soldiers both use similar weapons. The distinction lies in the agents, those behind the action and the goals.

As mentioned above, some people lay the blame for the attack on Sony on North Korea. If this is true, then the attack would seem to have the potential of being a military action. After all, it was carried out by a state and had political goals as motivating factors. That said, it could also be argued that the attack was state-sponsored crime. After all, the target was Sony rather than a state target and the operation was more vandalism and extortion than a military strike. This can, of course, be countered by the claim that economic warfare is still warfare—North Korea was attacking an economic entity in another sovereign state (assuming North Korea was behind the attack).

President Obama took the attack seriously and seems to have accepted that North Korea was responsible. He fell short of calling it a military action, describing it instead in terms of vandalism. He did, however, say that the United States would have a proportional response.

A proportional response is, as a matter of general principle, the right thing to do. After all, the retaliation should be proportional to the provocation. An excessive response would be morally problematic. To use the obvious analogy, if someone shoves me in a dispute and I shoot them in the head with a twelve-gauge shotgun, then I would have acted wrongly. Naturally, there can be considerable debate about the matter of proportionality as well as the value of using a “robust” response as a deterrent (such as pulling a gun when the other person has a stick).

One problem with cyber-attacks is that they are relatively new. Because of this, states have not worked out the norms governing these interactions and there are, as of yet, no clear and specific international treaties and rules laying out the rules of cyber-warfare in a way comparable to the norms and rules of traditional war. We are now in the stages of making up the norms and rules. It should be expected that there will be some problems with this and, no doubt, some defining incidents. The attack on Sony might be one of these.

Obama’s decision to use a proportional response does seem sound and will, perhaps, serve as a starting point for the norms and rules of cyber warfare. This approach is certainly analogous to how conventional attacks are handled. This nicely fits the existing model, namely that incidents in the “physical world” between countries usually stay proportional. For example, when North Korea does something provocative with its military, the United States does not over-react, such as by firing cruise missiles into the country.

One obvious problem with cyber-attacks is working out the proportionality, especially if non-cyber responses are being considered. In such cases, the challenge would be working out what sort of conventional military response would be a proportional response to a cyber-attack. It is not uncommon for people to see cyber-attacks as somehow less “serious” and damaging than “real” world attacks. If North Korea had, for example, sent a strike team to the United States to physically grab computers and erase drives on the spot, then people would feel that something more serious had happened—though the results would have been the same. In such a case, the proportional response would almost certainly be more robust than a proportional response to a cyber-attack. Perhaps this would be justified on the grounds that a physical intrusion is a greater violation of territorial integrity than a virtual intrusion. But, this might simply be a matter of “feeling” and a result of “old-fashioned” thinking—that is, people thinking about attacks in the old way.

I think a reasonable case can be made to treat cyber-attacks as being comparable to traditional attacks and using the results as the measure of proportionality. That is, the United States’ response to the (alleged) North Korean intrusion should be treated the same way that the United States should react to a team of North Koreans physically breaking into Sony at the behest of the state. To treat cyber-attacks as somehow less serious because they are “virtual” seems, as I have been suggesting, a mistake based on outdated concepts of warfare.

 


Food Waste

"CLEAN YOUR PLATE...THERE'S NO FOOD TO WA...

“CLEAN YOUR PLATE…THERE’S NO FOOD TO WASTE” – NARA – 516248 (Photo credit: Wikipedia)

Like many Americans my age, I was cajoled by my parents to finish all the food on my plate because people were starving somewhere. When I got a bit older and thought about the matter, I realized that my eating (or not eating) the food on my plate would have no effect on the people starving in some faraway part of the world. However, I did internalize two lessons. One was that I should not waste food. The other was that there is always someone starving somewhere.

While food insecurity is a problem in the United States, we Americans waste a great deal of food. It is estimated that about 21% of the food that is harvested and available to be consumed is not consumed. This food includes the unconsumed portions tossed into the trash at restaurants, spoiled tomatoes thrown out by families ($900 million worth), moldy leftovers tossed out when the fridge is cleaned and so on. On average, a family of four wastes about 1,160 pounds of food per year—which is a lot of food.

On the national level, it is estimated that one year of food waste (or loss, if one prefers) uses up 2.5% of the energy consumed in the U.S., about 25% of the fresh water used for agriculture, and about 300 million barrels of oil. The loss, in dollars, is estimated to be $115 billion.
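As a rough sanity check, the national dollar figure can be spread across households. A minimal sketch in Python follows; the household count is my assumption (roughly 120 million U.S. households around that time), not a figure from the estimates above.

```python
# Rough per-household check on the national food-waste estimate.
national_waste_dollars = 115e9  # the $115 billion annual loss cited above
us_households = 120e6           # assumption: roughly 120 million U.S. households

per_household = national_waste_dollars / us_households
print(f"Implied waste per household per year: ${per_household:,.0f}")
# Prints roughly $958, which is consistent in scale with the 1,160 pounds
# of food a family of four is estimated to waste each year.
```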

The most obvious moral concern is with the waste. Intuitively, throwing away food and wasting it seems to be wrong—especially (as parents used to say) when people are starving. Of course, as I mentioned above, it is quite reasonable to consider whether or not less waste by Americans would translate into more food for other people.

On the one hand, it might be argued that less wasted food would surely make more food available to those in need. After all, there would be more food.

On the other hand, it seems obvious that less waste would not translate into more food for those who are in need. Going back to my story about cleaning my plate, my eating all the food on my plate would certainly not have helped starving people. After all, the food I eat does not help them. Also, if I did not eat the food, then they would not be harmed—they would not get less food because I threw away my Brussels sprouts.

To use another illustration, suppose that Americans conscientiously only bought the exact number of tomatoes that they would eat and wasted none of them. The most likely response is not that the extra tomatoes would be handed out to the hungry. Rather, farmers would grow fewer tomatoes and markets would stock fewer in response to the reduced demand.

For the most part, people go hungry not because Americans are wasting food and thus making it unavailable, but because they cannot afford the food they need. To use a metaphor, it is not that the peasants are starving because the royalty are tossing the food into the trash. It is that the peasants cannot afford the food that is so plentiful that the royalty can toss it away.

It could be countered that less waste would actually influence the affordability of food. Returning to the tomato example, farmers might keep on producing the same volume of tomatoes, but be forced to lower the prices because of lower demand and also to seek new markets.

It can also be countered that as the population of the earth grows, such waste will really matter—that food thrown away by Americans is, in fact, taking food away from people. If food does become increasingly scarce (as some have argued will occur due to changes in climate and population growth), then waste will really matter. This is certainly worth considering.

There is, as mentioned above, the intuition that waste is, well, just wrong. After all, “throwing away” all those resources (energy, water, oil and money) is certainly wasteful. There is, of course, also the obvious practical concern: when people waste food, they are wasting money.

For example, if Sally buys a mega meal and throws half of it in the trash, she would have been better off buying a moderate meal and eating all of it. As another example, Sam is throwing away money if he buys steaks and vegetables, then lets them rot. So, not wasting food would certainly make good economic sense for individuals. It would also make sense for businesses—at least to the degree that they do not profit from the waste.

Interestingly, some businesses do profit from the waste. To be specific, consider the snacks, meats, cheese, beverages and such that are purchased and never consumed. If people did not buy them, this would result in fewer sales and this would impact the economy all the way from the store to the field. While the exact percentage of food purchased and not consumed is not known, the evidence is that it is significant. So, if people did not overbuy, then the food economy would be reduced by that percentage—resulting in reduced profits and reduced employment. As such, food waste might actually be rather important for the American food economy (much as planned obsolescence is important in the tech fields). And, interestingly enough, the greater the waste, the greater its importance in maintaining the food economy.

If this sort of reasoning is good, then it might be immoral to waste less food—after all, a utilitarian argument could be crafted showing that less waste would create more harm than good (putting supermarket workers and farmers out of work, for example). As such, waste might be good. At least in the context of the existing economic system, which might not be so good.

 

 


Responsibility for Shootings

In December 2014 two NYC police officers, Rafael Ramos and Wenjian Liu, were shot to death by Ismaaiyl Brinsley. Brinsley had earlier shot and wounded his ex-girlfriend. Brinsley claimed to have been acting in response to the police killings of Brown and Garner. There have been some claims of a connection between Brinsley’s actions and the protests against those two killings. This situation raises an issue of moral responsibility in regards to such acts of violence.

Not surprisingly, this is not the first time I have written about gun violence and responsibility. After Jared Lee Loughner shot Congresswoman Giffords and others in 2011, there was some blame placed on Sarah Palin and the Tea Party. Palin, it might be recalled, made use of cross hairs and violent metaphors when discussing matters of politics. The Tea Party was also accused of creating a context of violence.

Back in 2011 I argued that Palin and the Tea Party were not morally responsible for Loughner’s actions. I still agree with my position of that time. First, while Palin used violent metaphors, she clearly was not calling on people to engage in actual violence. Such metaphors are used regularly in sports and politics with the understanding that they are just that, metaphors.

Second, while there are people in the Tea Party who are very much committed to gun rights, the vast majority of them do not support the shooting of their fellow Americans—even if they disagree with their politics. While there are some notable exceptions, those who advocate and use violence are rare. Most Tea Partiers, like most other Americans, prefer their politics without bloodshed. Naturally, specific individuals who called for violence and encouraged others can be held accountable to the degree that they influence others—but these folks are not common.

Third, while Loughner was apparently interested in politics, he seemed to have a drug problem and serious psychological issues. His motivation to go after Giffords seems to have been an incident from when he was a student. He went to one of Giffords’ meetings and submitted a rather unusual question about what government would be if words had no meaning. Giffords apparently did not answer the question in a way that satisfied him. This, it is alleged, is the main cause of his dislike of Giffords.

As such, the most likely factors seem to be a combination of drug use and psychological problems that were focused onto Giffords by that incident. Because of these reasons, I concluded that Sarah Palin and the Tea Party had no connection to the incident and should not have been held morally accountable. This is because neither Palin nor the Tea Party encouraged Loughner and because he seemed to act primarily from his own mental illness.

As far as who is to blame, the obvious answer is this: the person who shot those people. Of course, as the media psychologists point out, it can be claimed that others are to blame as well. The parents. The community college. Society.

On the one hand, this blame sharing seems to miss the point that people are responsible for their actions. The person who pulled that trigger is the one that is responsible. He did not have to go there that day. Going there, he did not have to pull the trigger.

On the other hand, no one grows up and acts in a perfect vacuum. Each of us is shaped by factors around us and, of course, we have responsibilities to each other. There was considerable evidence that Loughner was unstable and likely to engage in violence. As such, it could be argued that those who were aware of these facts and failed to respond bear some of the blame for allowing him to be free to kill and wound.

Back in 2011 I did state that there were some legitimate concerns about Palin’s use of violent rhetoric and the infamous cross-hair map. I ended by saying that Palin should step up to address this matter. Not because she was responsible, but because these were matters worth considering on their own. I now return to the 2014 shooting by Brinsley.

Since consistency is rather important, I will apply the same basic principles of responsibility to the Brinsley case. First, as far as I am aware, no major figure involved in the protests has called upon people to kill police officers. No one with a status comparable to Palin’s (in 2011) has presented violent metaphors aimed at the police—as far as I know. Naturally, if there are major figures who engaged in such behavior, then this would be relevant in assigning blame. So, as with Sarah Palin in 2011, the major figures of the protest movement seem to be morally blameless for Brinsley’s actions. They did not call on anyone to kill, even metaphorically.

Second, the protest movements seem to be concerned with keeping people from being killed rather than advocating violence. Protesters say “hands up, don’t shoot!” rather than “shoot the police!” People involved in the protests seem to have, in general, condemned the shooting of the officers and have certainly not advocated or engaged in such attacks. So, as with the Tea Party in 2011, the protest movement (which is not actually a political party or well-defined movement) is not accountable for Brinsley’s actions. While he seems to have been motivated by the deaths of Brown and Garner, the general protest movement did not call on him to kill.

Third, Brinsley seems to be another terrible case of a mentally ill person engaging in senseless violence against innocent people. Brinsley seems to have had a history of serious problems (he shot and wounded his ex-girlfriend before travelling to NYC). Like Loughner, Brinsley is the person who pulled the trigger. He is responsible. Not the protesters, not the police, and not the slogans.

As with Loughner, there is also the question of our general responsibility as a society for those who are mentally troubled enough to commit murder. I have written many essays on gun violence in the United States and one recurring theme is that of a mentally troubled person with a gun. This is a different matter than the protests and also different from the matter of police use of force. As such, it is important to distinguish these different issues. While Brinsley claims to have been motivated by the deaths of Brown and Garner, the protesters are not accountable for his actions, no more than the NYC officers were accountable for the deaths of Brown and Garner.


The Slogan-Industrial Complex

University of South Florida Seal (Photo credit: Wikipedia)

Higher education in the United States has been pushed steadily towards the business model. One obvious example of this is the brand merchandizing of schools. In 2011, schools licensed their names and logos for a total of $4.6 billion. Inspired by this sort of brand-based profit, schools started trademarking their slogans. Impressively, there are over 10,000 trademarked slogans.

These slogans include “project safety” (University of Texas), “ready to be heard” (Chatham University), “power” (University of North Dakota), “rise above” (University of the Rockies), “students with diabetes” (University of South Florida), “student life” (Washington University in St. Louis) and “resolve” (Lehigh University). Those not familiar with trademark law might be surprised by some of these examples. After all, “student life” seems to be such a common phrase on campuses that it would be insane for a school to be allowed to trademark it. But, one should never let sanity be one’s guide when considering how the law works.

While the rabid trademarking undertaken by schools might be seen as odd but harmless, the main purpose of a trademark is to give the owner an exclusive right to what is trademarked and the ability to sue others for using it. This is, of course, limited to certain contexts. So, for example, if I write about student life at Florida A&M University in a blog, Washington University would (I hope) not be able to sue me. However, in circumstances in which the trademark protection applies, lawsuits are possible (and likely). For example, East Carolina University sued Cisco Systems over Cisco’s use of the phrase “tomorrow starts here.”

One practical and moral concern about universities’ enthusiasm for trademarking is that it further pushes higher education into the realm of business. One foundation for this concern is that universities should be focused on education rather than being focused on business—after all, an institution that does not focus on its core mission tends to do worse at that mission. This would also be morally problematic, assuming that schools should (morally) focus on education.

An easy and obvious reply is that a university can wear many hats: educator, business, “professional in all but name” sport franchise and so on, provided that each function is run properly and not operated at the expense of the core mission. Naturally, it could be added that the core mission of the modern university is not education, but business—branding, marketing and making money.

Another reply is that the trademarks protect the university brand and also allow them to make money by merchandizing their slogans and suing people for trademark violations. This money could then be used to support the core mission of the school.

There is, naturally enough, the worry that universities should not be focusing on branding and suing. While this can make them money, it is not what a university should be doing—which takes the conversation back to the questions of the core mission of universities as well as the question about whether schools can wear many hats without becoming jacks of all trades.

A second legal and moral concern is the impact such trademarks have on free speech. On the one hand, United States law is fairly clear about trademarks and the 1st Amendment. The gist is that noncommercial usage is protected by the 1st Amendment and this allows such things as using trademarked material in protests or criticism. So, for example, the 1st Amendment allows me to include the above slogans in this essay. Not surprisingly, commercial usage is subject to the trademark law. So, for example, I could not use the phrase “the power of independent thinking” as a slogan for my blog since that belongs to Wilkes University. In general, this seems reasonable. After all, if I created and trademarked a branding slogan for my blog, then I would certainly not want other people making use of my trademarked slogan. But, of course, I would be fine with people using the slogan when criticizing my blog—that would be acceptable use under freedom of expression.

On the other hand, trademark holders do endeavor to exploit their trademarks and people’s ignorance of the law to their advantage. For example, threats made involving claims of alleged trademark violations are sometimes used as a means of censorship and silencing critics.

The obvious reply is that this is not a problem with trademarks as such. It is, rather, a problem with people misusing the law. There is, of course, the legitimate concern that the interpretation of the law will change and that trademark protection will be allowed to encroach into the freedom of expression.

What might be a somewhat abstract point of concern is the idea that what seem to be stock phrases such as “the first year experience” (owned by University of South Carolina) can be trademarked and thus owned. This diminishes the public property that is language and privatizes it in favor of those with the resources to take over tracts of linguistic space. While the law currently still allows non-commercial use, this also limits the language other schools and businesses can legally use. It also requires that they research all the trademarks before using common phrases if they wish to avoid a lawsuit from a trademark holder.

The obvious counter, which I mentioned above, is that trademarks have a legitimate function. The obvious response is that there is still a reasonable concern about essentially allowing private ownership over language and thus restricting freedom of expression. There is a need to balance the legitimate need to own branding slogans with the legitimate need to allow the use of stock and common phrases in commercial situations. The challenge is to determine the boundary between the two and where a specific phrase or slogan falls.

 


College Education for Prisoners

At one time, inmates in the United States were eligible for government Pell tuition grants and there was a college prison program. Then Congress decided that prisoners should not get such grants and this effectively doomed the college prison programs. Fortunately, people like Max Kenner have worked hard to bring college education to prisoners once more. Kenner has worked with Bard College to offer college education to prisoners and this program seems to have been a success. As might be imagined, there are some interesting ethical issues here.

The case for college education for prisoners is both ethical and practical. If it is accepted that one function of the prison system is to reform prisoners so that they do not return to crime after they are released, then there seems to be a very good reason to support such programs.

Since 2001 about 300 prisoners have received college degrees from Bard. Of those released from prison, it is claimed that less than 2% have been arrested again. In contrast, 70% of state prison inmates are arrested and incarcerated again within five years of their release. Prisoners who participate in education programs are 43% less likely to return to prison than former prisoners who did not participate in such programs.

Given the very high cost of incarceration ($14,000-$60,000 per year, with an average of $31,000), reducing the number of people returning to prison would save the state and taxpayers money. There is also the cost of crime, both to the victims and to society in general.

Of course, there is the practical concern that the prison-industrial complex in the United States is a key job and profit creator (mostly transferring public money to the private sector) and having fewer people in prison would actually be a practical loss, economically speaking.

In moral terms, as long as the cost of the programs is not high, then a utilitarian argument can be given in favor of such programs. Using the stock utilitarian moral argument, the benefits generated by the education programs would make them morally correct. There is, of course, also the moral value in having people not committing crimes and being, instead, productive members of the community.

One practical objection to the programs is that the cost of such programs might exceed the benefits. However, this is partially a factual matter, namely weighing the economic cost of crime and imprisonment against the cost of providing such programs. The positive economic value of such programs should be considered as well. The cost to the state can, obviously, be offset if the programs are supported by others (such as donors and private universities). Given the cost of incarceration, practical considerations seem to favor the programs. However, this can be debated.

Another practical objection is that the benefits being discussed arise only when a released prisoner does not return to prison because of the education program. If a prisoner is serving a sentence that will keep him in prison for life, then there would seem to be no practical benefit. The counter to this is that most prisoners are not in prison for life, so this would apply in only a very few cases that would be offset by the cases in which people do leave prison.

It could also be claimed that the education programs are not the cause of the former prisoners remaining out of prison. After all, this could be a case of a common cause (that is, what seems to be a cause and an effect are really both effects of an underlying cause): the qualities that would cause a prisoner to participate in such an education program are likely to be the same ones that would make it less likely that the former prisoner would return. If this is the case, then it could be argued that such programs are not needed since they are not actually the causal factor.

While it is always wise to consider the possibility of a common cause, it does make sense that an education program would have a causal role to play in a former prisoner not returning to prison. At the very least, education would increase the chances of the person getting a job and this would have an impact on the likelihood that she would return to crime.

It can also be argued that even if the education did not have this effect (that is, the former prisoners who would have been in the program would not have returned to prison anyway), the value of the education itself would justify the programs. I do believe that education has intrinsic value. However, this is not a view that is shared by all and it can obviously be argued against, usually on economic grounds.

In general, though, the education programs do seem worthwhile, if only on practical grounds. In cases in which the programs are being privately funded, there seems to be no practical reason to oppose such programs, provided that they do have the claimed benefits regarding recidivism.

One moral objection that can be raised against these programs is that resources are being expended on prisoners that could be used to help those who cannot afford an education and are not convicted criminals. One might also add that prisons exist to punish people for their crimes and not to reward them. As such, prisoners should not receive such education. Instead, any resources that might have been spent on educating prisoners should be spent on assisting non-criminals who cannot afford college. Of course, there are those who would not want to assist even non-criminals who cannot afford college.

This moral objection does have some bite. After all, a person in need who has not committed crimes seems more deserving of the assistance of others than someone who has committed crimes. If it did, in fact, come down to a choice between helping a non-criminal or a criminal, then it would seem preferable to assist the non-criminal—just as it would be preferable to spend money on education and infrastructure  rather than on subsidies to corporations. It would also presumably be preferable to spend money on addressing the causes of crime rather than creating a prison-industrial complex.

The reply to this objection is based on the fact that when a person is imprisoned, there will be a significant expenditure simply to keep that person in prison (an average of $31,000 a year in 2010). While it would be preferable to avoid having to imprison people, once they are in prison it would seem desirable to invest a little more to keep them from returning to prison. Calculating this would involve using the cost of the education, the cost of keeping the prisoner in prison, the likely chance of returning to prison and for how long. To use a made-up example, if it cost $31,000 for a prisoner to get her degree and $31,000 a year to keep her locked up, then if there is a good chance that her degree would keep her out of prison for another four-year sentence, the expense would seem to be worthwhile even as a gamble. After all, expending $31,000 is likely to save much more money. If the fact that she is likely to be a contributing member of society is factored in, the deal is even better.

So, the gist of the reply is that spending the money on education does make sense, provided that it has a good chance of saving money and doing some social good. If the money is not spent on education, then it seems likely that even more will be spent on dealing with recidivism. Either way society pays; the question is whether to pay more or less, and whether to pay for something positive (education) or something negative (locking someone up). So, it is not a matter of spending money that could be spent to assist non-criminals; it is a matter of how to spend the money that will most likely be spent either way.
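To make the gamble concrete, here is a minimal sketch of the expected-savings arithmetic, using the figures cited earlier (the $31,000 annual cost, the 70% baseline return rate, and the 43% reduction for program participants); the four-year re-sentence length is the made-up assumption from the example above, not data.

```python
# Back-of-the-envelope expected-savings calculation for prison education,
# combining the essay's figures with one labeled assumption.

annual_incarceration_cost = 31_000  # average cost per inmate per year (2010 figure)
education_cost = 31_000             # made-up degree cost from the example above
baseline_return_rate = 0.70         # share of state inmates back in prison within five years
recidivism_reduction = 0.43         # program participants are 43% less likely to return
assumed_resentence_years = 4        # assumption: length of a hypothetical new sentence

educated_return_rate = baseline_return_rate * (1 - recidivism_reduction)

# Expected incarceration cost avoided by funding the degree:
avoided_cost = ((baseline_return_rate - educated_return_rate)
                * assumed_resentence_years * annual_incarceration_cost)
net_expected_savings = avoided_cost - education_cost

print(f"Expected incarceration cost avoided: ${avoided_cost:,.0f}")
print(f"Net expected savings per educated prisoner: ${net_expected_savings:,.0f}")
```

On these numbers the expected avoided cost (about $37,000) exceeds the $31,000 spent on the degree, though the result is obviously sensitive to the assumed sentence length and return rates.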

I do, of course, understand how someone struggling to pay for her or her child’s college would be outraged that prisoners are getting an education for free. However, I would simply refer back to the previous argument: paying for the education of a prisoner, assuming it reduced recidivism, is cheaper than paying to keep locking the prisoner up.

It might be objected that the problem should be addressed before people go to prison, that there should be education programs designed to assist people who are at risk for prison, but are also likely to be able to complete college and avoid prison.

In reply, I would say that I agree completely. It is better that a person never go to prison in the first place and education certainly seems to be a much better investment than prison. There are, of course, those who would disagree and argue that it is better to let people end up in prison than to spend public money on college education. Others could argue that while such plans might be good intentioned, they would not work—the money would be spent and the result would merely be educated criminals. These objections are worth considering, but I would still contend that spending on education to keep people out of prison is preferable to spending money to keep people in prison.

 


Protests, Peaceful & Otherwise

In response to the nighttime announcement of the Ferguson grand jury’s decision not to indict Officer Wilson, some people attacked the police and damaged property. Some experts have been critical of the decision to make the announcement at night, since the time of day does actually influence how people behave. In general, making such an announcement at night is a bad idea—unless one intends to increase the chances that people will respond badly.

Obviously enough, peacefully protesting is a basic right and in a democratic state the police should not interfere with that right. However, protests do escalate and violence can occur. In the United States it is all too common for peaceful protests to be marred by violence—most commonly damage to businesses and looting.

When considering reports of damage and looting during protests it is reasonable to consider whether or not the damage and looting is being done by actual protestors or by people who are opportunists using the protest as cover or an excuse. An actual protestor is someone whose primary motivation is a moral one—she is there to express her moral condemnation of something she perceives as wrong. Not all people who go to protests are actual protestors—some are there for other reasons, some of which are not morally commendable. Some people, not surprisingly, know that a protest can provide an excellent opportunity to engage in criminal activity—to commit violence, to damage property and to loot. Protests do, sadly, attract such people and often these are people who are not from the area.

Of course, actual protesters can engage in violence and damage property. Perhaps they can even engage in looting (though that almost certainly crosses a moral line). Anger and rage are powerful things, especially righteous anger. A protestor who is motivated by her moral condemnation of a perceived wrong can give in to her anger and do damage to others or their property. When people damage the businesses in their own community, this sort of behavior seems irrational—probably because it is. After all, setting a local gas station on fire is hardly morally justified by the alleged injustice of the grand jury’s verdict in regards to not indicting Officer Wilson for the shooting of Brown. However, anger tends to impede rationality. I, and I assume most people, have seen people angry enough to break their own property.

While I am not a psychologist, I do suspect that people do such damage when they are angry because they cannot actually reach the target of their anger. Alternatively, they might be damaging property to vent their rage in place of harming people. I have seen people do just that. For example, I saw a person hit a metal door frame (and break his hand) rather than hit the person he was mad at. Anger does summon up a need to express itself and this can easily take the form of property damage.

When a protest becomes destructive (or those using it for cover start destroying things), the police do have a legitimate role to play at protests. While protests are intended to draw attention and often aim to do so by creating a disruption of the normal course of events, a state of protest does not grant protestors a carte blanche right to interfere with the legitimate rights of others. As such, the police have a legitimate right to prevent protestors from violating the rights of others and this can correctly involve the use of force.

That said, the role of rage needs to be considered. When property is destroyed during protests, some people immediately condemn the destruction and wonder why people are destroying their own neighborhoods. In some cases, as noted above, the people doing the damage might not be from the neighborhood at all and might be there to destroy rather than to protest. If such people can be identified, they should be dealt with as the criminals they are. What becomes somewhat more morally problematic are people who are driven to such destruction by moral rage—that is, they have been pushed to a point at which they believe they must use violence and destruction to express their moral condemnation.

When looked at from the cool and calm perspective of distance, such behavior seems irrational and unwarranted. And, I think, it usually is. However, it is well worth thinking of something that has caused the fire of righteous anger to ignite your soul. Think of that and consider how you might respond if you believed that you had been systematically denied justice. Over. And over. Again.

 


Bionic Ethics

Although bionics have been part of science fiction for quite some time (a well-known example is the Six Million Dollar Man), the reality of prosthetics has long been rather disappointing. But, thanks to America’s endless wars and recent advances in technology, bionic prosthetics are now a reality. There are replacement legs that replicate the functionality of the original organics amazingly well. There have also been advances in prosthetic arms and hands as well as progress in artificial sight. As with all technology, these bionic devices raise some important ethical issues.

The easiest moral issue to address is that involving what could be called restorative bionics. These are devices that restore a degree of the original functionality possessed by the lost limb or organ. For example, a soldier who lost the lower part of her leg to an IED in Iraq might receive a bionic device that restores much of the functionality of the lost leg. As another example, a person who lost an arm in an industrial accident might be fitted with a replacement arm that does some of what he could do with the original.

On the face of it, the burden of proof would seem to rest on those who would claim that the use of restorative bionics is immoral—after all, they merely restore functionality. However, there is still the moral concern about the obligation to provide such restorative bionics. One version of this is the matter of whether or not the state is morally obligated to provide such devices to soldiers maimed in the course of their duties. Another is whether or not insurance should cover such devices for the general population.

In general, the main argument against both obligations is financial—such devices are still rather expensive. Turned into a utilitarian moral argument, the argument would be that the cost outweighs the benefits; therefore the state and insurance companies should not pay for such devices. One reply, at least in the case of the state, is that the state owes the soldiers restoration. After all, if a soldier lost the use of a body part (or parts) in the course of her duty, then the state is obligated to replace that part if it is possible. Roughly put, if Sally gave her leg for her country and her country can provide her with a replacement bionic leg, then it should do so.

In the case of insurance, the matter is somewhat more complicated. In the United States, insurance is mostly a private, for-profit business. As such, a case can be made that the obligations of the insurance company are limited to the contract with the customer. So, if Sam has coverage that pays for his leg replacement, then the insurance company is obligated to honor that. If Bill does not have such coverage, then the company is not obligated to provide the replacement.

Switching to a utilitarian counter, it can be argued that the bionic replacements actually save money in the long term. Inferior prosthetics can cause the user pain, muscle and bone issues and other problems that result in more ongoing costs. In contrast, a superior prosthetic can avoid many of those problems and also allow the person to better return to the workforce or active duty. As such, there seem to be excellent reasons in support of the state and insurance companies providing such restorative bionics. I now turn to the ethics of bionics in sports.

Thanks to the (now infamous) “Blade Runner” Oscar Pistorius, many people are familiar with unpowered, relatively simple prosthetic legs that allow people to engage in sports. Since these devices seem to be inferior to the original organics, there is little moral worry here in regards to fairness. After all, a device that merely allows a person to compete as he would with his original parts does not seem to be morally problematic. This is because it confers no unfair advantage and merely allows the person to compete more or less normally. There is, however, the concern about devices that are inferior to the original—these would put an athlete at a disadvantage and could warrant special categories in sports to allow for fair competition. Some of these categories already exist and more should be expected in the future.

Of greater concern are bionic devices that are superior to the original organics in relevant ways. That is, devices that could make a person faster, better or stronger. For example, powered bionic legs could allow a person to run at higher speeds than normal and also avoid the fatigue that limits organic legs. As another example, a bionic arm coupled with a bionic eye could allow a person incredible accuracy and speed in pitching. While such augmentations could make for interesting sporting events, they would seem to be clearly unethical when used in competition against unaugmented athletes. To use the obvious analogy, just as it would be unfair for a person to use a motorcycle in a 5K foot race, it would be unfair for a person to use bionic legs that are better than organic legs. There could, of course, be augmented sports competitions—these might even be very popular in the future.

Even if the devices did not allow for superior performance, it is worth considering that they might be banned from competition for other reasons. For example, even if someone’s powered legs only allowed them a slow jog in a 5K, this would be analogous to using a mobility scooter in such a race—though it would be slow, the competitor is not moving under her own power. Naturally, there should be obvious exceptions for events that are merely a matter of participation (like charity walks).

Another area of moral concern is the weaponization of bionic devices. When I was in graduate school, I made some of my Ramen noodle money writing for R. Talsorian Games’ Cyberpunk. This science fiction game featured a wide selection of implanted weapons as well as weapon-grade cybernetic replacement parts. Fortunately, these weapons do not add a new moral problem since they fall under the existing ethics regarding weaponry, concealed or otherwise. After all, a gun in the hand is still a gun, whether it is held in an organic hand or literally inside a mechanical hand.

One final area of concern is that people will elect to replace healthy organic parts with bionic components, either to augment their abilities or out of a psychological desire or need to do so. Science fiction, such as the above-mentioned Cyberpunk, has explored these problems and even come up with a name for the mental illness caused by a person becoming more machine than human: cyberpsychosis.

In general, augmenting for improvement does seem morally acceptable, provided that there are no serious side effects (like cyberpsychosis) or other harms. However, it is easy enough to imagine various potential dangers: augmented criminals, the poor being unable to compete with the augmented rich, people being compelled to upgrade to remain competitive, and so on—all fodder for science fiction stories.

As far as people replacing their healthy organic parts because of some sort of desire or need to do so, that would also seem acceptable as a form of lifestyle choice. This, of course, assumes that the procedures and devices are safe and do not cause health risks. Just as people should be allowed to have tattoos, piercings and such, they should be allowed to biodecorate.

 
