Augmented Soldier Ethics I: Exoskeletons

US-Army exoskeleton (Photo credit: Wikipedia)

One common element of military science fiction is the powered exoskeleton, also known as an exoframe, exosuit or powered armor. The basic exoskeleton is a powered framework that serves to provide the wearer with enhanced strength. In movies such as Edge of Tomorrow and video games such as Call of Duty: Advanced Warfare, the exoskeletons provide improved mobility and carrying capacity (which can include the ability to carry heavier weapons) but do not provide much in the way of armor. In contrast, the powered armor of science fiction provides the benefits of an exoskeleton while also providing a degree of protection. The powered armor of Starship Troopers, The Forever War, Armor and Iron Man all serve as classic examples of this sort of gear.

Because the exoskeletons of fiction provide soldiers with enhanced strength, mobility and carrying capacity, it is no surprise that militaries are very interested in exoskeletons in the real world. While exoskeletons have yet to be deployed, there are some ethical concerns about the augmentation of soldiers.

On the face of it, the use of exoskeletons in warfare seems to be morally unproblematic. The main reason is that an exoskeleton is analogous to any other vehicle, with the exception that it is worn rather than driven. A normal car provides the driver with enhanced mobility and carrying capacity and this is presumably not immoral. In terms of the military context, the exoskeleton would be comparable to a Humvee or a tank, both of which seem morally unproblematic as well.

It might be objected that the use of exoskeletons would give wealthier nations an unfair advantage in war. The easy and obvious response to this is that, unlike in sports and games, gaining an “unfair” advantage in war is not immoral. After all, there is not a moral expectation that combatants will engage in a fair fight rather than making use of advantages in such things as technology and numbers.

It might be objected that the advantage provided by exoskeletons would encourage countries that had them to engage in aggressions that they would not otherwise engage in. The easy reply to this is that despite the hype of video games and movies, any exoskeleton available in the near future would most likely not provide a truly spectacular advantage to infantry. This advantage would, presumably, be on par with existing advantages such as those the United States enjoys over almost everyone else in the world. As such, the use of exoskeletons would not seem morally problematic in this regard.

One point of possible concern is what might be called the “Iron Man Syndrome” (to totally make something up). The idea is that soldiers equipped with exoskeletons might become overconfident (seeing themselves as being like the superhero Iron Man) and thus put themselves and others at risk. After all, unless there are some amazing advances in armor technology that are unmatched by weapon technology, soldiers in powered armor will still be vulnerable to weapons capable of taking on light vehicle armor (which exist in abundance). However, this could be easily addressed by training and experience.

A second point of possible concern is what could be called the “ogre complex” (also totally made up). An exoskeleton that dramatically boosts a soldier’s strength might encourage some people to act as bullies and abuse civilians or prisoners. While this might be a legitimate concern, it can easily be addressed by proper training and discipline.

There are, of course, the usual peripheral issues associated with new weapons technology that could have moral relevance. For example, it is easy to imagine a nation wastefully spending money on exoskeletons, perhaps due to corruption. However, such matters are not specific to exoskeletons and would not be moral problems for the technology as such.

Given the above, it would seem that augmenting soldiers with exoskeletons poses no new moral concerns and is morally comparable to providing soldiers with Humvees, tanks and planes.


Ladies & Swearing

Swearing in a cartoon (Photo credit: Wikipedia)

Once and future presidential candidate Mike Huckabee recently expressed his concern about the profanity flowing from the mouths of New York Fox News ladies: “In Iowa, you would not have people who would just throw the f-bomb and use gratuitous profanity in a professional setting. In New York, not only do the men do it, but the women do it! This would be considered totally inappropriate to say these things in front of a woman. For a woman to say them in a professional setting that’s just trashy!”

In response, Erin Gloria Ryan posted a piece on Jezebel.com. As might be suspected, the piece utilized the sort of language that Mike dislikes and she started off with “listen up, cunts: folksy as balls probable 2016 Presidential candidate Mike Huckabee has some goddamn opinions about what sort of language women should use. And guess the fuck what? You bitches need to stop with this swearing shit.” While the short article did not set a record for OD (Obscenity Density), the author did make a good go at it.

I am not much for swearing. In fact, I used to say “swearing is for people who don’t know how to use words.” That said, I do recognize that there are proper uses of swearing.

While I generally do not favor swearing, there are exceptions in which swearing is not only permissible, but necessary. For example, when I was running cross country, one of the other runners was looking super rough. The coach asked him how he felt and he said “I feel like shit coach.” The coach corrected him by saying “no, you feel like crap.” He replied, “No, coach, I feel like shit.” And he was completely right. Inspired by the memory of this exchange, I will endeavor to discuss proper swearing. I am, of course, not developing a full theory of swearing—just a brief exploration of the matter.

I do agree with some of what Huckabee said, namely the criticism of swearing in a professional context. However, my professional context is academics and I am doing my professional thing in front of students and other faculty—not exactly a place where gratuitous f-bombing would be appropriate or even useful. It would also make me appear sloppy and stupid—as if I could not express ideas or keep the attention of the class or colleagues without the cheap shock theatrics of swearing.

I am certainly open to the idea that such swearing could be appropriate in certain professional contexts. That is, that the vocabulary of swearing would be necessary to describe professional matters accurately and doing so would not make a person seem sloppy, disrespectful or stupid. Perhaps Fox News and Jezebel.com are such places.

While I was raised with certain patriarchal views, I have shed all but their psychological residue. Hearing a woman swear “feels” worse than hearing a man swear, but I know this is just the dregs of the past. If it is appropriate for a man to swear, the same right of swearing applies to a woman equally. I’m gender neutral, at least in principle.

Outside of the professional setting, I still have a general opposition to casual and repetitive swearing. The main reason is that I look at words and phrases as tools. As with any tool, they have their suitable and proper uses. While a screwdriver could be used to pound in nails, that is a poor use. While a shotgun could be used to kill a fly, that is excessive and will cause needless collateral damage. Likewise, swear words have specific functions and using them poorly can show not only a lack of manners and respect, but a lack of artistry.

In general, the function of swear words is to serve as dramatic tools—that is, they are intended to shock and to convey something rather strong, such as great anger. To use them casually and constantly is rather like using a scalpel for every casual cutting task—while it will work, the blade will grow dull from repeated use and will no longer function well when it is needed for its proper task. So, I reserve my swear words not because I am prudish, but because if I wear them out, they will not serve me when I really need them most. For example, if I say “we are fucked” all the time for any minor problem, then when a situation in which we are well and truly fucked arrives, I will not be able to use that phrase effectively. But, if I save it for when the fuck hits the fan, then people who know me will know that it has gotten truly serious—I have broken out the “it is serious” words.

As another example, swear words should be saved for when a powerful insult or judgment is needed. If I were to constantly call normal people “fuckers” or describe not-so-bad things as being “shit”, then I would have little means of describing truly bad people and truly bad things. While I generally avoid swearing, I do need those words from time to time, such as when someone really is a fucker or something truly is shit.

Of course, swear words can also be used for humorous purposes. This is not really my sort of thing, but their shock value can serve well here—to make a strong point or to shock. However, if the words are too worn by constant use, then they can no longer serve this purpose. And, of course, it can be all too easy and inartistic to get a laugh simply by being crude—true artistry involves being able to get laughs using the same language one would use in front of grandpa in church. Of course, there is also an artistry to swearing—but that is more than just doing it all the time.

I would not dream of imposing on others—folks who wish to communicate normally using swear words have every right to do so, just as someone is free to pound nails with a screwdriver or whittle with a scalpel. However, it does bother me a bit that these words are being dulled and weakened by excessive use. If this keeps up, we will need to make new words and phrases to replace them—and then, no doubt, new words to replace those.

 


The Impossible, the Improbable, the Flash & the Hobbit

Captain Cold (Photo credit: Wikipedia)

As a fan of the genres of fantasy, science fiction, and superheroes, I have no difficulty in suspending my disbelief when it comes to such seemingly impossible things as wizards, warp drives and Wonder Woman. But, when watching movies and TV shows, I find myself being rather critical of things that are merely very unlikely. As a philosopher, I find this rather interesting and think that it wants an explanation.

To focus the discussion, I will use examples from movies and TV shows I have recently watched. The movies are the first two in the Hobbit “trilogy” (I have not seen the third movie yet) and CW’s The Flash TV show.

The Hobbit movies include what is now standard fare in fantasy: wizards, magic swords, immortal elves, dragons, enchanted rings, and other such things that are most likely impossible in the actual world.  The Flash features a superhero who, in the opening sequence, explicitly claims to be the impossible. I, as noted above, have no problem accepting these aspects of the fantasy and superhero “realities.”

Given my ready acceptance of the impossible, it might be surprising to learn that I am rather critical of certain aspects of these movies and the TV show. In the case of the first Hobbit movie, my main complaint is about the incidents with the goblins and their king. I have no issue with goblins as such, but with the physics of the falling and such in those scenes. While I am not a physicist, I am rather familiar with falling and gravity, and those scenes were, on my view, so implausible that they prevented me from suspending my disbelief.

In the case of the second Hobbit movie, I have issues with the barrel ride scenes and the battle between the dwarfs and Smaug. In the case of the barrel ride, the events were so wildly implausible that I could not accept them. Ironically, the moves were too awesome and the fight too easy—it was analogous to watching a video game being played in “god mode”: there is no feeling of risk and the outcome is assured.

In the case of the battle with Smaug, the implausibility was largely a matter of the fact that every implausible step had to work perfectly to result in Smaug being in exactly the right place to have the gold “statue” spill onto him. Oddly enough, the incredible difficulty made it seem too easy. What I mean by this is that since every incredibly unlikely step worked so perfectly, it was evident that the events were completely scripted—I had no feeling that any step could have failed. Naturally, it might be said that every part of a movie is, by definition, scripted. This is true—but if the audience realizes this, then the movie is doing a poor job.

In the case of The Flash, I have two main issues. The first is with how Flash fights his super opponents. It is established in the show that Flash can move so fast that anyone without super speed is effectively motionless relative to him. For example, in one episode he simply pulls all the keys from the Royal Flush gang’s motorcycles and they can do nothing. However, when he fights a main villain, he is suddenly unable to use that same tactic. For example, when fighting Captain Cold and Heatwave he runs around, barely able to keep ahead of their attacks. But these two villains are just guys with fancy guns—they have no super speed or ability to slow the Flash. Given the speed shown in other scenes, the Flash would be able to zip in and take their guns. Since no reason was given as to why this would not work, the battles seem to be contrived—as if the writers cannot think of a good reason why Flash cannot do this, but they need a fight to fill up show time, so they just make it happen for no good reason.

The second issue is with the police response to the villains. In the same episode where Flash fights Captain Cold and Heatwave, the police are confronting the two villains, yet are utterly helpless—until one detective manages a lucky shot that puts the heat gun out of operation. The villains, however, easily get away. Yet the fancy weapons are very short range, do not really provide any defensive powers, and the users are just normal guys. As such, the police could have simply shot them down easily—yet, for no apparent reason, they do not do so. The only reason would seem to be that the writers could not come up with a plausible reason why they would not shoot or use snipers, yet they needed to fill up show time with a fight. Now that I have set the stage, it is time to turn to the philosophy.

In the Poetics, Aristotle discusses the possible, the probable and the impossible. As he notes, a plot is supposed to go from the beginning, through the middle and to the end with plausible events. He does consider the role of the impossible and contends that “the impossible must be justified by artistic requirements, higher reality, or received opinion” and that “a probable impossibility is preferable to an improbable possibility.”

In the case of the impossibilities of the Hobbit movies and the Flash TV show, these are justified by the artistic requirements of the fantasy and superhero genres: they, by their very nature, require the impossible, albeit certain types of impossibilities. In the case of the fantasy genre, the impossibilities of magic and the supernatural must be accepted. Of course, it is easy to accept these things since it is not certain that the supernatural is actually impossible.

In the case of the superhero genre, the powers of heroes are typically physically impossible. However, they are what make the genre what it is—so to accept stories of superheroes is to willingly accept the impossible as plausible in that context. The divergence from reality is acceptable because of this.

Some of the events in the show I was critical of are not actually impossible—just incredibly implausible. For example, it is not impossible for the police to simply decide to not deploy snipers against a criminal armed with a flamethrower. However, accepting this requires accepting that while the police in the show are otherwise like police in our world, they differ in one key way: they are incapable of deploying snipers against people armed with exotic weapons. It is also not impossible that a person would refuse to use her full abilities against people intending to kill her. However, accepting these things requires accepting things that do not improve the aesthetic experience, but rather detract by requiring the audience to accept the implausible without artistic justification.

To be fair, there is one plausible avenue of justification for these things. Aristotle writes that “to justify the irrational, we appeal to what is commonly said to be.” In the comics from which the Flash TV show is drawn, the battles between heroes and villains always go that way—that is, the show matches the comic reality. Likewise for the police—in the typical comic the police are ineffective and pretty much never just take out villains with sniper rifles—even when they easily could do so. As such, the show could be defended on the grounds that it is just following the implausible genre of comics aimed at kids. That said, I think the show would be better if the writers were able to come up with reasonable justifications for why the Flash cannot use his full speed against the villain of the week and why the police are so inept against normal people with fancy guns. Of course, I will keep on watching.

In the case of the Hobbit movies, accepting the battle in the goblin caves would require accepting that physics is different in those scenes than it is everywhere else in the world. However, Middle Earth is not depicted elsewhere as having such wonky physics and the difference is not justified. In regards to the barrel ride battle and the battle with Smaug, the problem is the probability—the events are not individually impossible, but accepting them requires accepting the incredibly unlikely without justification or need. Those who have read the book will know that those events are not in the actual book and are not, in fact, needed for the story. Also, there is the problem of consistency: the spectacular dwarfs of the barrels and Smaug fight are also the seemingly mundane dwarfs in so many other situations. Since these things detract from the movie, they should not have been included. But, of course, I did enjoy the movies.

 


What is the Worst Thing You Should (Be Allowed to) Say?

Members of Westboro Baptist Church have been specifically banned from entering Canada for hate speech. Church members enter Canada, aiming to picket bus victim’s funeral (Photo credit: Wikipedia)

The murders at Charlie Hebdo and their aftermath raised the issue of freedom of expression in a dramatic and terrible manner. In response to these deaths, there was an outpouring of support for this basic freedom and, somewhat ironically, a crackdown on some people expressing their views.

This situation raises two rather important issues. The first is the matter of determining the worst thing that a person should express. The second is the matter of determining the worst thing that a person should be allowed to express. While these might seem to be the same issue, they are not. The reason for this is that there is a distinction between what a person should do and what is morally permissible to prevent a person from doing. The main focus will be on using the coercive power of the state in this role.

As an illustration of the distinction, consider the example of a person lying to his girlfriend about running strikes all day in the video game Destiny when he was supposed to be doing yard work. It seems reasonable to think that he should not lie to her (although exceptions are easy to imagine). However, it also seems reasonable to think that the police should not be sent to coerce him into telling her the truth. So, he should not lie to her about playing the game but he should be allowed to do so by the state (that is, it should not use its police powers to stop him).

This view can be disputed and there are those who argue in favor of complete freedom from the state (anarchists) and those who argue that the state should control every aspect of life (totalitarians). However, the idea that there are some matters that are not the business of the state seems to be an intuitively plausible position—at least in democratic states such as the United States. What follows will rest on this assumption and the challenge will be to sort out these two issues.

One rather plausible and appealing approach is to take a utilitarian stance on the matter and accept the principle of harm as the foundation for determining the worst thing that a person should express and also the worst thing that a person should be allowed to express. The basic idea behind this is that the right of free expression is bounded by the stock liberal right of others not to be harmed in their life, liberty and property without due justification.

In the case of the worst thing that a person should express, I am speaking in the context of morality. There are, of course, non-moral meanings of “should.” To use the most obvious example, there is the “pragmatic should”: what a person should or should not do in regards to advancing his practical self-interest. For example, a person should not tell her boss what she really thinks of him if doing so would cost her the job she desperately needs. To use another example, there is also the “should of etiquette”: what a person should do or not do in order to follow the social norms. For example, a person should not go without pants at a formal wedding, even to express his opposition to the tyranny of pants.

Returning to the matter of morality, it seems reasonable to go with the stock approach of weighing the harm the expression generates against the right of free expression (assuming there is such a right). Obviously enough, there is not an exact formula for calculating the worst thing a person should express and this will vary according to the circumstances. For example, the worst thing one should express to a young child would presumably be different from the worst thing one should express to an adult. In terms of the harms, these would include the obvious things such as offending the person, scaring her, insulting her, and so on for the various harms that can be inflicted by mere expression.

While I do not believe that people have a right not to be offended, people do seem to have a right not to be unjustly harmed by other people expressing themselves. To use an obvious example, men should not catcall women who do not want to be subject to this verbal harassment. This sort of behavior certainly offends, upsets and even scares many women and the men’s right to free expression does not give them a moral pass that exempts them from what they should or should not do.

To use another example, people should not intentionally and willfully insult another person’s deeply held beliefs simply for the sake of insulting or provoking the person. While the person does have the right to mock the belief of another, his right of expression is not a moral free pass to be abusive.

As a final example, people should not engage in trolling. While a person does have the right to express his views so as to troll others, this is clearly wrong. Trolling is, by definition, done with malice and contributes nothing of value to the conversation. As such, it should not be done.

It is rather important to note that while I have claimed that people should not unjustly harm others by expressing themselves, I have not made any claims about whether or not people should or should not be allowed to express themselves in these ways. It is to this that I now turn.

If the principle of harm is a reasonable principle (which can be debated), then a plausible approach would be to use it to sketch out some boundaries. The first rough boundary was just discussed: this is the boundary between what people should express and what people should (morally) not. The second rough boundary begins at the point where other people should be allowed to prevent a person from expressing himself and ends just before the point at which the state has the moral right to use its coercive power to prevent expression.

This area is the domain of interactions between people that does not fall under the authority of the state, yet still permits people to be prevented from expressing their views. To use an obvious example, the workplace is such a domain in which people can be justly prevented from expressing their views without the state being involved. To use a specific example, the administrators of my university have the right to prevent me from expressing certain things—even if doing so would not fall under the domain of the state. To use another example, a group of friends would have the right, among themselves, to ban someone from their group for saying racist, mean and spiteful things to one of their number. As a final example, a blog administrator would have the right to ban a troll from her site, even though the troll should not be subject to the coercive power of the state.

The third boundary is the point at which the state can justly use its coercive power to prevent a person from engaging in expression. As with the other boundaries, this would be set (roughly) by the degree of harm that the expression would cause others. There are many easy and obvious examples where the state would act rightly in imposing on a person: threats of murder, damaging slander, incitements to violence against the innocent, and similar such unquestionably harmful expressions.

Matters do, of course, get complicated rather quickly. Consider, for example, a person who does not call for the murder of cartoonists who mock Muhammad but tweets his approval when they are killed. While this would certainly seem to be something a person should not do (though this could be debated), it is not clear that it crosses the boundary that would allow the state to justly prevent the person from expressing this view. If the approval does not create sufficient harm, then it would seem to not warrant coercive action against the person by the state.

As another example, consider the expression of racist views via social media. While people should not say such things (and would be justly subject to the consequences), as long as they do not engage in actual threats, then it would seem that the state does not have the right to silence the person. This is because the expression of racist views (without threats) would not seem to generate enough harm to warrant state coercion. Naturally, it could justify action on the part of the person’s employer, friends and associates: he might be fired and shunned.

As a third example, consider a person who mocks the dominant or even official religion of the state. While the rulers of such states usually think they have the right to silence such an infidel, it is not clear that this would create enough unjust harm to warrant silencing the person. Being an American, I think that it would not—but I believe in both freedom of religion and the freedom to mock religion.  There is, of course, the matter of the concern that such mockery would provoke others to harm the mocker, thus warranting the state to stop the person—for her own protection. However, the fact that people will act wrongly in response to expressions would not seem to warrant coercing the person into silence.

In general, I favor erring on the side of freedom: unless the state can show that silencing expression is needed to prevent a real and unjust harm, the state does not have the moral right to silence expression.

I have merely sketched out a general outline of this matter and have presented three rough boundaries in regards to what people should say and what they should be allowed to say. Much more work would be needed to develop a full and proper account.

 


Should Two Year Colleges Be Free?

Tallahassee County Community College Seal (Photo credit: Wikipedia)

While Germany has embraced free four-year college education for its citizens, President Obama has made a more modest proposal to make community college free for Americans. He is modeling his plan on that of Republican Governor Bill Haslam. Haslam has made community college free for citizens of Tennessee, regardless of need or merit. Not surprisingly, Obama’s proposal has been attacked by both Democrats and Republicans. Having some experience in education, I will endeavor to assess this proposal in a rational way.

First, there is no such thing as a free college education (in this context). Rather, free education for a student means that the cost is shifted from the student to others. After all, the staff, faculty and administrators will not work for free. The facilities of the schools will not be maintained, improved and constructed for free. And so on, for all the costs of education.

One proposed way to make education free for students is to shift the cost onto “the rich”, a group which is easy to target but somewhat harder to define. As might be suspected, I think this is a good idea. One reason is that I believe that education is the best investment a person can make in herself and in society. This is why I am fine with paying property taxes that go to education, although I have no children of my own. In addition to my moral commitment to education, I also look at it pragmatically: money spent on education (which helps people advance) means having to spend less on prisons and social safety nets. Of course, there is still the question of why the cost should be shifted to the rich.

One obvious answer is that they, unlike the poor and what is left of the middle class, have the money. As economists have noted, an ongoing trend in the economy is that wages are staying stagnant while capital is doing well. This is manifested in the fact that while the stock market has rebounded from the crash, workers are, in general, doing worse than before the crash.

There is also the need to address the problem of income inequality. While one might reject arguments grounded in compassion or fairness, there are some purely practical reasons to shift the cost. One is that the rich need the rest of us to keep the wealth, goods and services flowing to them (they actually need us way more than we need them). Another is the matter of social stability. Maintaining a stable state requires that the citizens believe that they are better off with the way things are than they would be if they engaged in a revolution. While deceit and force can keep citizens in line for quite some time, there does come a point at which these fail. To be blunt, it is in the interest of the rich to help restore the faith of the middle class. One of the nastier alternatives is being put against the wall after the revolution.

Second, the reality of education has changed over the years. In the not so distant past, a high school education was sufficient to get a decent job. I am from a small town in Maine and remember well that people could get decent jobs with just that high school degree (or even without one). While there are still some decent jobs like that, they are increasingly rare.

While it might be a slight exaggeration, the two-year college degree is now the equivalent of the old high school degree. That is, it is roughly the minimum education needed to have a shot at a decent job. As such, the reasons that justify free (for students) public K-12 education would now justify free (for students) K-14 public education. And, of course, arguments against free (for the student) K-12 education would also apply.

While some might claim that the reason the two-year degree is the new high school degree is that education has been in decline, there is also the obvious reason that the world has changed. While I grew up during the decline of the manufacturing economy, we are now in the information economy (even manufacturing is high tech now) and more education is needed to operate in this new economy.

It could, of course, be argued that a better solution would be to improve K-12 education so that a high school degree would be sufficient for a decent job in the information economy. This would, obviously enough, remove the need to have free two-year college. This is certainly an option worth considering, though it does seem unlikely that it would prove viable.

Third, the cost of college has grown absurdly since I was a student. Rest assured, though, that this has not been because of increased pay for professors. This has been addressed by a complicated and sometimes bewildering system of financial aid and loans. However, free two-year college would certainly address this problem in a simple way.

That said, a rather obvious concern is that this would not actually reduce the cost of college—as noted above, it would merely shift the cost. A case can certainly be made that this will actually increase the cost of college (for those who are paying). After all, schools would have less incentive to keep their costs down if the state was paying the bill.

It can be argued that it would be better to focus on reducing the cost of public education in a rational way that focuses on the core mission of colleges, namely education. One major reason for the increase in college tuition is the massive administrative overhead that vastly exceeds what is actually needed to effectively run a school. Unfortunately, since the administrators are the ones who make the financial choices it seems unlikely that they will thin their own numbers. While state legislatures have often applied magnifying glasses to the academic aspects of schools, the administrative aspects seem to somehow get less attention—perhaps because of some interesting connections between the state legislatures and school administrations.

Fourth, while conservative politicians have been critical of the general idea of the state giving away free stuff to regular people rather than corporations and politicians, liberals have also been critical of the proposal. While liberals tend to favor the idea of the state giving people free stuff, some have taken issue with free stuff being given to everyone. After all, the proposal is not to make two-year college free for those who cannot afford it, but to make it free for everyone.

It is certainly tempting to be critical of this aspect of the proposal. While it would make sense to assist those in need, it seems unreasonable to expend resources on people who can pay for college on their own. That money, it could be argued, could be used to help people in need pay for four-year colleges. It can also be objected that the well-off would exploit the system.

One easy and obvious reply is that the same could be said of free (for the student) K-12 education. As such, the reasons that exist for free public K-12 education (even for the well-off) would apply to the two-year college plan.

In regards to the well-off, they can already elect to go to lower cost state schools. However, the wealthy tend to pick the more expensive schools and usually opt for four-year colleges. As such, I suspect that there would not be an influx of rich students into two-year programs trying to “game the system.” Rather, they will tend to continue to go to the most prestigious four year schools their money can buy.

Finally, while the proposal is for the rich to bear the cost of “free” college, it should be looked at as an investment. The rich “job creators” will benefit from having educated “job fillers.” Also, the college educated will tend to get better jobs which will grow the economy (most of which will go to the rich) and increase tax-revenues (which can help offset the taxes on the rich). As such, the rich might find that their involuntary investment will provide an excellent return.

Overall, the proposal for “free” two-year college seems to be a good idea, although one that will require proper implementation (which will be very easy to screw up).

 


Is Everyone a Little Bit Racist?

One in a series of posters attacking Radical Republicans on the issue of black suffrage, issued during the Pennsylvania gubernatorial election of 1866. (Photo credit: Wikipedia)

It has been argued that everyone is a little bit racist. Various studies have shown that black Americans are treated rather differently than white Americans. Examples of this include black students being more likely to be suspended than white students, blacks being arrested at a higher rate than whites, and job applications with “black sounding” names being less likely to get callbacks than those with “white sounding” names. Interestingly, studies have shown that the alleged racism is not confined to white Americans: black Americans also seem to share this racism. One study involves a simulator in which the participant takes on the role of a police officer and must decide to shoot or holster her weapon when confronted by a simulated person. The study indicates that participants, regardless of race, shoot more quickly at blacks than whites and are more likely to shoot an unarmed black person than an unarmed white person. There are, of course, many other studies and examples that support the claim that everyone is a little bit racist.

Given the evidence, it would seem reasonable to accept the claim that everyone is a little bit racist. It is, of course, also an accepted view in certain political circles. However, there seems to be something problematic with claiming that everyone is racist, even if it is the claim that the racism is of the small sort.

One point of logical concern is that inferring that all people are at least a little racist on the basis of such studies would be problematic. Rather, what should be claimed is that the studies indicate the presence of racism and that these findings can be generalized to the entire population. But, this could be dismissed as a quibble about induction.

Some people, as might be suspected, would take issue with this claim because to be accused of racism is rather offensive. Some, as also might be suspected, would take issue with this claim because they claim that racism has ended in America, hence people are not racist. Not even a little bit. Others might complain that the accusation is a political weapon that is wielded unjustly. I will not argue about these matters, but will instead focus on another concern, that of the concept of racism in this context.

In informal terms, racism is prejudice, antagonism or discrimination based on race. Since various studies show that people have prejudices linked to race and engage in discrimination along racial lines, it seems reasonable to accept that everyone is at least a bit racist.

To use an analogy, consider the matter of lying. A liar, put informally, is someone who makes a claim that she does not believe with the intention of getting others to accept it as true. Since there is considerable evidence that people engage in this behavior, it can be claimed that everyone is a little bit of a liar. That is, everyone has told a lie.

Another analogy would be to being an abuser. Presumably each person has been at least a bit mean or cruel to another person she has been in a relationship with (be it a family relationship, a friendship or a romantic relationship). This would thus entail that everyone is at least a little bit abusive.

The analogies could continue almost indefinitely, but it will suffice to end them here, with the result that we are all racist, abusive liars.

On the one hand, the claim is true. I have been prejudiced. I have lied. I have been mean to people I love. I have engaged in addictive behavior. The same is likely to be true of even the very best of us. Since we have lied, we are liars. Since we have abused, we are abusers. Since we have prejudice and have discriminated based on race, we are racists.

On the other hand, the claim is problematic. After all, to judge someone to be a racist, an abuser, or a liar is to make a strong moral judgment of the person. For example, imagine the following conversation:

Sam: “I’m interested in your friend Sally. You know her pretty well…what is she like?”

Me: “She is a liar and a racist.”

Sam: “But…she seems so nice.”

Me: “She is. In fact, she’s one of the best people I know.”

Sam: “But you said she is a liar and a racist.”

Me: “Oh, she is. But just a little bit.”

Sam: “What?”

Me: “Well, she told me that when she was in college, she lied to a guy to avoid going on a date. She also said that when she was a kid, she thought white people were all racists and would not be friends with them. So, she is a liar and a racist.”

Sam: “I don’t think you know what those words mean.”

The point is, of course, that terms like “racist”, “abuser” and “liar” have what can be regarded as proper moral usage. To be more specific, because these are such strong terms, they should be applied in cases in which they actually fit. For example, while anyone who lies is technically a liar, the designation of being a liar should only apply to someone who routinely engages in that behavior. That is, a person who has a moral defect in regards to honesty. Likewise, anyone who has a prejudice based on race or discriminates based on race is technically a racist. However, the designation of racist should be reserved for those who have the relevant moral defect—that is, racism is their way of being, as opposed to failing to be perfectly unbiased. As such, using the term “racist” (or “liar”) in claiming that “everyone is a little bit racist” (or “everyone is a little bit of a liar”) either waters down the moral term or imposes too harsh a judgment on the person. Either way would be problematic.

So, if the expression “we are all a little bit racist” should not be used, what should replace it? My suggestion is to speak instead of people being subject to race-linked biases. While saying “we are all subject to race-linked biases” is less attention grabbing than “we are all a little bit racist”, it seems more honest as a description.

 


Group Responsibility



After the murders in France, people were once again discussing the matter of group responsibility. In the case of these murders, some contend that all Muslims are responsible for the actions of the few who committed murder. In most cases people do not claim that all Muslims support the killings, but there is a tendency to still put a special burden of responsibility upon Muslims as a group.

Some people do take the killings and other terrible events as evidence that Islam itself is radical and violent. This sort of “reasoning” is, obviously enough, the same sort used when certain critics of the Tea Party drew the conclusion that the movement was racist because some individuals in the Tea Party engaged in racist behavior. It is also the same “reasoning” used to condemn all Christians or Republicans based on the actions of a very few.

To infer that an entire group has a certain characteristic (such as being violent or prone to terrorism) based on the actions of a few would generally involve committing the fallacy of hasty generalization. It can also be seen as the fallacy of suppressed evidence in that evidence contrary to the claim is simply ignored. For example, to condemn Islam as violent based on the actions of terrorists would be to ignore the fact that the vast majority of Muslims are as peaceful as people of other faiths, such as Christians and Jews.

It might be objected that a group can be held accountable for the misdeeds of its members even when those misdeeds are committed by a few and even when these misdeeds are supposed to not be in accord with the real beliefs of the group. For example, if I were to engage in sexual harassment while on the job, Florida A&M University could be held accountable for my actions. Thus, it could be argued, all Muslims are accountable for the killings in France and these killings provide just more evidence that Islam itself is a violent and murderous religion.

In reply, Islam (like Christianity) is not a monolithic faith with a single hierarchy over all Muslims. After all, there are various sects of Islam and a multitude of diverse Muslim hierarchies. For example, the Muslims of Saudi Arabia do not fall under the hierarchy of the Muslims of Iran.

As such, treating all of Islam as an organization with a chain of command and a chain of responsibility that extends throughout the entire faith would be rather problematic. To use an analogy, sports fans sometimes go on violent rampages after events. While the actions of the violent fans should be condemned, the peaceful fans are not accountable for those actions. After all, while the fans are connected by their being fans of a specific team, this is not enough to form a basis for accountability. So, if some fans of a team set fire to cars, this does not make all the fans of that team responsible. Also, if people unassociated with the fans decide to jump into action and destroy things, it would be even more absurd to claim that the peaceful fans are accountable for their actions. As such, to condemn all of Islam based on what happened in France would be both unfair and unreasonable. Thus, the people who murdered in France are accountable, but Islam cannot have these incidents laid at its collective doorstep.

This, of course, raises the question of the extent to which even an organized group is accountable for its members. One intuitive guide is that the accountability of the group is proportional to the authority the group has over the individuals. For example, while I am a philosopher and belong to the American Philosophical Association, other philosophers have no authority over me. As such, they have no accountability for my actions. In contrast, my university has considerable authority over my work life as a professional philosopher and hence can be held accountable should I, for example, sexually harass a student or co-worker.

The same principle should be applied to Islam (and any faith). Being a Muslim is analogous to being a philosopher in that there is a recognizable group. As with being a philosopher, merely being a Muslim does not make a person accountable for all other Muslims.

But, just as I belong to an organization with a hierarchy, a Muslim can belong to an analogous organization, such as a mosque or ISIS. To the degree that the group has authority over the individual, the group is accountable. So, if the killers in France were acting as members of ISIS or Al-Qaeda, then the group would be accountable. However, while groups like ISIS and Al-Qaeda might delude themselves into thinking they have legitimate authority over all Muslims, they obviously do not. After all, they are opposed by most Muslims.

So, with a religion as vast and varied as Islam, it cannot reasonably be claimed that there is a central earthly authority over its members, and this would serve to limit the collective responsibility of the faith. Naturally, the same would apply to other groups with a similar lack of overall authority, such as Christians, conservatives, liberals, Buddhists, Jews, philosophers, runners, and satirists.

 


Euphemism

With the start of a new semester, I have gotten a bit behind on my blogging. But, since I am working on a book on rhetorical devices, I have an easy solution; here is an example from the book:

When I was a kid, people bought used cars. These days, people buy fine pre-owned vehicles. There is (usually) no difference between the meanings of “used car” and “pre-owned” car—both refer to the same thing, namely a car someone else has owned and used. However, “used” sounds a bit nasty, perhaps suggesting that the car might be a bit sticky in places. By substituting “pre-owned” for “used”, the car sounds somehow better, although it is the same car whether it is described as used or pre-owned.

If you need to make something that is negative sound positive without actually making it better, then a euphemism would be your tool of choice. A euphemism is a pleasant or at least inoffensive word or phrase that is substituted for a word or phrase that means the same thing but is unpleasant, offensive, or otherwise negative in terms of its connotation. To use an analogy, using a euphemism is like coating a bitter pill with sugar, making it easier to swallow.

Euphemisms and some other rhetorical devices make use of the fact that words or phrases have connotations as well as denotations. Put a bit simply, the denotation of a term is the literal meaning of the term. The connotation of the term is its emotional association. Terms can have the same denotation but very different connotations. For example “child” and “rug rat” have rather different emotional associations.

The way to use a euphemism is to replace the key words or phrases that are negative in their connotation with those that are positive (or at least neutral). Naturally, it helps to know what the target audience regards as positive words, but generically positive words can do the trick quite well.

The defense against a euphemism is to replace the positive term with a neutral term that has the same meaning. For example, for “an American citizen was inadvertently neutralized during a drone strike”, the neutral presentation would be “An American citizen was killed during a drone strike.” While “killed” does have a negative connotation, it does describe the situation with more neutrality.
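To make the mechanics concrete, here is a minimal sketch of that defense as a simple substitution table in Python, using pairs drawn from the examples below. This is only an illustration; spotting euphemisms in real text takes context, since a word like “passed” is not always a euphemism.

```python
# A minimal sketch of the "defense" described above: mechanically replacing
# known euphemisms with neutral terms that have the same meaning. Purely
# illustrative; real euphemisms need context to detect (e.g., "passed" is
# not always a euphemism for "died").

NEUTRAL_TERMS = {
    "inadvertently neutralized": "killed",
    "enhanced interrogation": "torture",
    "pre-owned": "used",
    "revenue enhancement": "tax increase",
}

def deflate(text: str) -> str:
    """Replace each known euphemism with its neutral equivalent."""
    for euphemism, plain in NEUTRAL_TERMS.items():
        text = text.replace(euphemism, plain)
    return text

print(deflate("An American citizen was inadvertently neutralized during a drone strike."))
# An American citizen was killed during a drone strike.
```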

In some cases, euphemisms are used for commendable reasons, such as being polite in social situations or to avoid exposing children to “adult” concepts. For example, at a funeral it is considered polite to refer to the dead person as “the departed” rather than “the corpse.”

 

Examples

“Pre-owned” for “used.”

“Neutralization” for “killing.”

“Freedom fighter” for “terrorist.”

“Revenue enhancement” for “tax increase.”

“Down-sized” for “fired.”

“Between jobs” for “unemployed.”

“Passed” for “dead.”

“Office manager” for “secretary.”

“Custodian” for “janitor.”

“Detainee” for “prisoner.”

“Enhanced interrogation” for “torture.”

“Self-injurious behavior incidents” for “suicide attempts.”

“Adult entertainment” or “adult material” for “pornography.”

“Sanitation engineer” for “garbage man.”

“Escort”, “call girl”, or “lady of the evening” for “prostitute.”

“Gentlemen’s club” for “strip club.”

“Exotic dancer” for “stripper.”

“A little thin on top” for “bald.”

“In a family way” for “pregnant.”

“Sleeping with” for “having sex with.”

“Police action” for “undeclared war.”


“Wardrobe malfunction” for “exposure.”

“Commandeer” for “steal.”

“Modify the odds in my favor” for “cheat.”

A Bubble of Digits

A look back at the American (and world) economy shows a “pastscape” of exploded economic bubbles. The most recent was the housing bubble, but the less recent .com bubble serves as a relevant reminder that bubbles can be technological. This is a reminder well worth keeping in mind for we are, perhaps, blowing up a new bubble.

In “The End of Economic Growth?” Oxford’s Carl Frey discusses the new digital economy and presents some rather interesting numbers regarding the value of certain digital companies relative to the number of people they employ. One example is Twitch, which streams videos of people playing games (and people commenting on people playing games). Twitch was purchased by Amazon for $970 million. Twitch has 170 employees. The multi-billion dollar company Facebook had 8,348 employees as of September 2014. Facebook bought WhatsApp for $19 billion. WhatsApp employed 55 people at the time of this acquisition. In an interesting contrast, IBM employed 431,212 people in 2013.
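To put the contrast in plain numbers, here is a quick back-of-the-envelope calculation in Python using the figures just cited. Keep in mind that these mix acquisition prices and headcounts reported at different times, so the ratios are only illustrative.

```python
# Back-of-the-envelope value-per-employee comparison using the figures cited
# above. Acquisition prices and headcounts come from different dates, so the
# ratios are rough illustrations, not precise valuations.

companies = {
    "Twitch":   (970_000_000, 170),    # Amazon purchase price, employees
    "WhatsApp": (19_000_000_000, 55),  # Facebook purchase price, employees
}

for name, (value, employees) in companies.items():
    print(f"{name}: ${value / employees:,.0f} per employee")

# Twitch: $5,705,882 per employee
# WhatsApp: $345,454,545 per employee
```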

While it is tempting to explain the impressive value to employee ratio in terms of grotesque over-valuation (which does have its merits as a criticism), there are other factors involved. One, as Frey notes, is that the (relatively) new sort of digital businesses require relatively little capital. The above-mentioned WhatsApp started out with $250,000 and this was actually rather high for an app—the average cost to develop one is $6,453. As such, a relatively small investment can create a huge return.

Another factor is an old one, namely the efficiency of technology in replacing human labor. The development of the plow reduced the number of people required to grow food, the development of the tractor reduced it even more, and the refinement of mechanized farming has enabled the number of people required in agriculture to be reduced dramatically. While it is true that people have to do work to create such digital companies (writing the code, for example), much of the “labor” is automated and done by computers rather than people.

A third factor, which is rather critical, is the digital aspect. Companies like Facebook, Twitch and WhatsApp do not make physical objects that need to be manufactured, shipped and sold. As such, they do not (directly) create jobs in these areas. These companies do make use of existing infrastructure: Facebook does need companies like Comcast to provide the internet connection and companies like Apple to make the devices. But, rather importantly, they do not employ the people who work for Comcast and Apple (and even these companies employ relatively few people).

One of the most important components of the digital aspect is the multiplier effect. To illustrate this, consider two imaginary businesses in the health field. One is a walk-in clinic which I will call Nurse Tent. The other is a health app called RoboNurse. If a patient goes to Nurse Tent, the nurse can only tend to one patient at a time and he can only work so many hours per day. As such, Nurse Tent will need to employ multiple nurses (as well as the support staff). In contrast, the RoboNurse app can be sold to billions of people and does not require the sort of infrastructure required by Nurse Tent. If RoboNurse takes off as a hot app, the developer could sell it for millions or even billions.
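The difference in how the two businesses scale can be put into a toy model (the capacity and team-size numbers here are invented for illustration): the clinic’s headcount must grow with the number of patients served, while the app’s team stays fixed no matter how many users it reaches.

```python
# Toy model of the multiplier effect: a clinic needs staff in proportion to
# the patients it serves, while an app's team is fixed regardless of user
# count. The capacity and team-size figures are invented for illustration.
import math

PATIENTS_PER_NURSE_PER_DAY = 20   # hypothetical clinic capacity
APP_TEAM_SIZE = 55                # a WhatsApp-sized team, per the figure above

def clinic_staff(patients_per_day: int) -> int:
    """Nurses needed to see this many patients in a day."""
    return math.ceil(patients_per_day / PATIENTS_PER_NURSE_PER_DAY)

for served in (1_000, 100_000, 10_000_000):
    print(f"{served:>10,} served/day: clinic needs {clinic_staff(served):>7,} "
          f"nurses; the app still needs {APP_TEAM_SIZE} people")
```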

Nurse Tent could, of course, become a franchise (the McDonald’s of medicine). But, being very labor intensive and requiring considerable material outlay, it will not be able to have the value to employee ratio of a digital company like WhatsApp or Facebook. It would, however, employ more people. That said, the odds are that most of the employees would not be well paid—while the digital economy is producing millionaires and billionaires, wages for labor are rather lacking. This helps to explain why the overall economy is doing great, while the majority of workers are worse off than before the last bubble.

It might be wondered why this matters. There are, of course, the usual concerns about the terrible inequality of the economy. However, there is also the concern that a new bubble is being inflated, a bubble filled with digits. There are some good reasons to be concerned.

First, as noted above, the digital companies seem to be grotesquely overvalued. While the situation is not exactly like the housing bubble, overvaluation should be a matter of concern. After all, if the value of these companies is effectively just “hot digits” inflating a thin skin, then a bubble burst seems likely.

This can be countered by arguing that the valuation is accurate or even that all valuation is essentially a matter of belief and as long as we believe, all will be fine. Until, of course, it is no longer fine.

Second, the current digital economy increases the income inequality mentioned above, widening the gap between the rich and the poor. Laying aside the fact that such a gap historically leads to social unrest and revolution, there is the more immediate concern that the gap will cause the bubble to burst—the economy cannot, one would presume, endure without a solid middle and base to help sustain the top of the pyramid.

This can be countered by arguing that the new digital economy will eventually spread the wealth. Anyone can make an app, anyone can create a startup, and anyone can be a millionaire. While this does have an appeal to it, there is the obvious fact that while it is true that (almost) anyone can do these things, it is also true that most people will fail. One just needs to consider all the failed startups and the millions of apps that are not successful.

There is also the obvious fact that civilization requires more than WhatsApp, Twitch and Facebook and people need to work outside of the digital economy (which lives atop the non-digital economy). Perhaps this can be handled by an underclass of people beneath the digital (and financial) elite, who toil away at low wages to buy smartphones so they can update their status on Facebook and watch people play games via Twitch. This is, of course, just a digital variant on a standard sci-fi dystopian scenario.


Student Evaluations of Faculty


While college students have been completing student evaluations of faculty since the 1960s, these evaluations have taken on considerable importance. There are various reasons for this. One is a conceptual shift towards the idea that a college is primarily a business and students are customers. On this model, student evaluations of faculty are part of the customer satisfaction survey process. A second is an ideological shift regarding education: education is increasingly seen as a private good, something that must be properly quantified. This is also tied into the notion that the education system is, like a forest or oilfield, a resource to be exploited for profit. Student evaluations provide a cheap method of assessing the value provided by faculty and, best of all, provide numbers (numbers usually based on subjective assessments, but pay that no mind).

Obviously enough, I agree with the need to assess performance. As a gamer and runner, I have a well-developed obsession with measuring my athletic and gaming performances, and I am comfortable with letting that obsession spread freely into my professional life. I want to know if my teaching is effective, what is working, what is not, and what impact I am having on the students. Of course, I want to be confident that the methods of assessment I am using are actually useful. Having been in education for quite some time, I do have some concerns about the usefulness of student evaluations of faculty.

The first and most obvious concern is that students are, almost by definition, not experts at assessing education. While they obviously take classes and observe (when not Facebooking) faculty, they typically lack any formal training in assessment, and one might suspect that having students evaluate faculty is on par with having sports fans assess coaching. While fans and students often have strong opinions, this does not really qualify them to provide meaningful professional assessment.

Using the sports analogy, this can be countered by pointing out that while a fan might not be a professional when it comes to coaching, a fan usually knows good or bad coaching when she sees it. Likewise, a student who is not an expert in education can still recognize good or bad teaching.

A second concern is the self-selection problem. While students have access to the evaluation forms and can easily go to Rate My Professors, students who take the time to show up and fully complete the forms or go to the website will tend to have stronger feelings about the professor. These feelings will tend to bias the results so that they are more positive or more negative than they should be.

The counter to this is that the creation of such strong feelings is relevant to the assessment of the professor. A practical way to counter the bias is to ensure that most (if not all) students in a course complete the evaluations.

Third, people often base their assessments on irrelevant factors about the professor. These include such things as age, gender, appearance, and personality. The concern is that these factors make evaluations a form of popularity contest: professors who are liked will be evaluated better than professors who are not as likeable. There is also the concern that students tend to give younger professors and female professors worse evaluations than older professors and male professors; these sorts of gender and age biases lower the credibility of such evaluations.

A stock reply to this is that these factors do not influence students as strongly as critics might claim. So, for example, a professor might be well-liked, yet still get poor evaluations on certain aspects of the course. There are also those who question the impact of alleged age and gender bias.

Fourth, people often base assessments on irrelevant factors about the course, such as how easy it is, the specific grade received, or whether they like the subject. Not surprisingly, it is commonly held that students give better evaluations to professors whom they regard as easy and downgrade those they see as hard.

Given that people generally base assessments on irrelevant factors (a standard problem in critical thinking), this does seem to be a real concern. Anecdotally, my own experience indicates that student assessments can vary a great deal based on irrelevant factors that students explicitly mention. I have a 4.0 on Rate My Professors, but there is quite a mix in the review content. What is striking, at least to me, is the inconsistency between evaluations. Some students claim that my classes are incredibly easy (“he is so easy”), while others claim they are incredibly hard (“the hardest class I have ever taken”). I am also described as being very boring and very interesting, helpful and unhelpful, and so on. This sort of inconsistency in evaluations is not uncommon and raises the obvious concern about the usefulness of such evaluations.

A counter to this is that the information is still useful. Another counter is that appropriate methods of statistical analysis can be used to address this concern. Those who defend evaluations point out that students tend to be generally consistent in their assessments. Of course, consistency in evaluations does not entail accuracy.

To close, there are two final general concerns about evaluations of faculty. One is the concern about values. That is, what is it that makes a good educator? This is a matter of determining what it is that we are supposed to assess and to use as the standard of assessment. The second is the concern about how well the method of assessment works.

In the case of student evaluations of faculty, we do not seem to be entirely clear about what it is we are trying to assess, nor about what counts as being a good educator. In the case of the efficacy of the evaluations, to know whether or not they measure well we would need some other means of determining whether a professor is good. But if there were such a method, then student evaluations would seem unnecessary: we could just use that method instead. To use an analogy, when it comes to football we do not need fans to fill out evaluation forms to determine who is a good or bad athlete: there are clear, objective standards of performance.
