Guns on Campus

As I write this, the Florida state legislature is considering a law that would allow concealed carry permit holders to bring their guns to college campuses. As is to be expected, some opponents and some proponents are engaging in poor reasoning, hyperbole and other such unhelpful means of addressing the issue. As a professor and a generally pro-gun person, I have more than an academic interest in this matter. My goal, as always, is to consider this issue rationally, although I do recognize the role of emotions in this matter.

From an emotional standpoint, I am divided in my heart. On the pro-gun feeling side, all of my gun experiences have been positive. I learned to shoot as a young man and have many fond memories of shooting and hunting with my father. Though I now live in Florida, we still talk about guns from time to time. As a graduate student, I had little time outside of school, but once I was a professor I was able to get in the occasional trip to the range. I have, perhaps, been very lucky: the people I have been shooting with and hunting with have all been competent and responsible people. No one ever got hurt. I have never been a victim of gun crime.

On the anti-gun side, like any sane human I am deeply saddened when I hear of people being shot down. While I have not seen gun violence in person, Florida State University (which is just across the tracks from my university) recently had a shooter on campus. I have spoken with people who have experienced gun violence and, not being callous, I can understand their pain. Roughly put, I can feel the two main sides in the debate. But, feeling is not a rational way to settle a legal and moral issue.

Those opposed to guns on campus are concerned that the presence of guns carried by permit holders would result in an increase in injuries and deaths. Some of these injuries and deaths would be intentional, such as suicide, fights escalating to the use of guns, and so on. Some of these injuries and deaths, it is claimed, would be the result of an accidental discharge. From a moral standpoint, this is obviously a legitimate concern. However, it is also a matter for empirical investigation: would allowing concealed carry on campus increase the likelihood of death or injury to a degree that would justify banning guns?

Some states already allow licensed concealed carry on campus and there is, of course, considerable data available about concealed carry in general. The statistical data would seem to indicate that allowing concealed carry on campus would not result in an increase in injuries and deaths on campus. This is hardly surprising: getting a permit requires providing proof of competence with a firearm as well as a thorough background check—considerably more thorough than the background check to purchase a firearm. Such permits are also issued at the discretion of the state. As such, people who have such licenses are not likely to engage in random violence on campus.

This is, of course, an empirical matter. If it could be shown that allowing licensed concealed carry on campus would result in an increase in deaths and injuries, then this would certainly impact the ethics of allowing concealed carry.

Those who are opposed to guns on campus are also rightfully concerned that someone other than the license holder will get the gun and use it. After all, theft is not uncommon on college campuses and someone could grab a gun from a licensed holder.

While these concerns are not unreasonable, someone interested in engaging in gun violence can easily acquire a gun without stealing it from a permit holder on campus. She could buy one or steal one from somewhere else. As far as grabbing a gun from a person carrying it legally, attacking an armed person is generally not a good idea—and, of course, someone who is prone to gun grabbing would presumably also try to grab a gun from a police officer. In general, these do not seem to be compelling reasons to ban concealed carry on campus.

Opponents of allowing guns on campus also point to psychological concerns: people will feel unsafe knowing that people around them might be legally carrying guns. This might, it is sometimes claimed, result in a suppression of discussion in classes and cause professors to hand out better grades—all from fear that a student is legally carrying a gun.

I do know people who are actually very afraid of this—they are staunchly anti-gun and are very worried that students and other faculty will be “armed to the teeth” on campus and “ready to shoot at the least provocation.” The obvious reply is that someone who is dangerously unstable enough to shoot students and faculty over such disagreements would certainly not balk at illegally bringing a gun to campus. Allowing legal concealed carry by permit holders would, I suspect, not increase the odds of such incidents. But, of course, this is a matter of emotions and fear is rarely, if ever, held at bay by reason.

Opponents of legal carry on campus also advance a reasonable argument: there is really no reason for people to be carrying guns on campus. After all, campuses are generally safe, typically have their own police forces and are places of learning and not shooting ranges.

This does have considerable appeal. When I lived in Maine, I had a concealed weapon permit but generally did not go around armed. My main reason for having it was convenience—I could wear my gun under my jacket when going someplace to shoot. I must admit, of course, that as a young man there was an appeal in being able to go around armed like James Bond—but that wore off quickly and I never succumbed to gun machismo. I did not wear a gun while running (too cumbersome) or while socializing (too…weird). I have never felt the need to be armed with a gun on campus throughout all the years I have been a student and professor. So, I certainly get this view.

The obvious weak point for this argument is that the lack of a reason to have a gun on campus (granting this for the sake of argument) is not a reason to ban people with permits from legally carrying on campus. After all, the permit grants the person the right to carry the weapon legally and more is needed to deny the exercise of that right than just the lack of need.

Another obvious weak point is that a person might need a gun on campus for legitimate self-defense. While this is not likely, that is true in most places. After all, a person going to work or out for a walk in the woods is not likely to need her gun. I have, for example, never needed one for self-defense. As such, there would seem to be as much need to have a gun on campus as many other places where it is legal to carry. Of course, this argument could be turned around to argue that there is no reason to allow concealed carry at all.

Proponents of legal concealed carry on campus often argue that “criminals and terrorists” go to college campuses in order to commit their crimes, since they know no one will be armed. There are two main problems with this. The first is that college campuses are, relative to most areas, very safe. So, criminals and terrorists do not seem to be going to them that often. As opponents of legal carry on campus note, while campus shootings make the news, they are actually very rare.

The second is that large campuses have their own police forces—in the shooting incident at FSU, the police arrived rapidly and shot the shooter. As such, I do not think that allowing concealed carry will scare away criminals and terrorists, especially since they do not visit campuses that often already.

Proponents of concealed carry also sometimes claim that the people carrying legally on campus will serve as the “good guy with guns” to shoot the “bad guys with guns.” While there is a chance that a good guy will be able to shoot a bad guy, there is the obvious concern that the police will not be able to tell the good guy from the bad guy and the good guy will be shot. In general, the claims that concealed carry permit holders will be righteous and effective vigilantes on campus are more ideology and hyperbole than fact. Not surprisingly, most reasonable pro-gun people do not use that line of argumentation. Rather, they focus on more plausible scenarios of self-defense and not wild-west vigilante style shoot-outs.

My conclusion is that there is not a sufficiently compelling reason to ban permit holders from carrying their guns on campus. But, there does not seem to be a very compelling reason to carry a gun on campus.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Should You Attend a For-Profit College?

The rise of for-profit universities has given students increased choices when it comes to picking schools. Since college is rather expensive and schools vary in regard to the success of their graduates, it is wise to carefully consider the options before writing those checks. Or, more likely these days, going into debt.

While there is a popular view that the for-profit free-market will consistently create better goods and services at ever lower prices, it is wisest to accept facts over ideological theory. As such, when picking between public, non-profit, and for-profit schools one should look at the numbers. Fortunately, ProPublica has been engaged in crunching the numbers.

Today most people go to college in order to have better job prospects. As such, one rather important consideration is the likelihood of getting a job after graduation and the likely salary. While for-profit schools spent about $4.2 billion in 2009 on recruiting and marketing and paid their college presidents an average of $7.3 million per year, the typical graduate does rather poorly. According to the U.S. Department of Education, 74% of the programs at for-profit colleges produced graduates whose average pay is less than that of high-school dropouts. In contrast, graduates of non-profit and public colleges do better financially than high school graduates.

Another important consideration is the cost of education. While the free-market is supposed to result in higher quality services at lower prices and the myth of public education is that it creates low quality services at high prices, the for-profit schools are considerably more expensive than their non-profit and public competition. A two-year degree costs, on average, $35,000 at a for-profit school. The average community college offers that degree at a mere $8,300. In the case of four year degrees, the average is $63,000 at a for-profit and $52,000 for a “flagship” state college. For certificate programs, public colleges will set a student back $4,250 while a for-profit school will cost the student $19,806 on average. By these numbers, the public schools offer a better “product” at a much lower price—thus making public education the rational choice over the for-profit option.

Student debt and loans, which have been getting considerable attention in the media, are also a matter of consideration. The median debt of students at for-profit colleges is $32,700, and 96% of the students at such schools take out loans. At non-profit private colleges, the amount is $24,600 and 57%. For public colleges, the median debt is $20,000 and 48% of students take out loans. Only 13% of community college students take out loans (thanks, no doubt, to the relatively low cost of community college).

For those who are taxpayers, another point of concern is how much taxpayer money gets funneled into for-profit schools. In a typical year, the federal government provides $6 billion in Pell Grants and $16 billion in student loans to students attending for-profit colleges. In 2010 there were 2.4 million students enrolled in these schools. It is instructive to look at the breakdown of how the for-profits expend their money.

As noted above, the average salary of the president of a for-profit college was $7.3 million in 2009. The five highest paid presidents of non-profit colleges averaged $3 million and the five highest paid presidents at public colleges were paid $1 million.

The for-profit colleges also spent heavily on marketing, spending $4.2 billion on recruiting, marketing and admissions staffing in 2009. That year, thirty for-profit colleges hired 35,202 recruiters, which is about 1 recruiter per 49 students. As might be suspected, public schools do not spend that sort of money. My experience with recruiting at public schools is that a common approach is for a considerable amount of recruiting to fall to faculty—who do not, in general, get extra compensation for this extra work.

In terms of what is spent per student, for-profit schools average $2,050 per student per year. Public colleges spend, on average, $7,239 per student per year. Private non-profit schools spend the most, averaging $15,321 per student per year. This spending does seem to yield results: at for-profit schools only 20% of students complete a bachelor’s degree within four years. Public schools do somewhat better with 31% and private non-profits do best at 52%. As such, a public or non-profit school would be the better choice over the for-profit school.

Because so much public money gets funneled into for-profit, public and private schools, there has been a push for “gainful employment” regulation. The gist of this regulation is that schools will be graded based on the annual student loan payments of their graduates relative to their earnings. A school will be graded as failing if its graduates have annual student loan payments that exceed 12% of total earnings or 30% of discretionary earnings. The “danger zone” is 8-12% of total earnings or 20-30% of discretionary earnings. Currently, there are about 1,400 programs with about 840,000 enrolled students in the “danger zone” or worse. 99% of them are, shockingly enough, at for-profit schools.
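The grading thresholds above can be sketched as a small classifier. This is purely illustrative and follows the article's phrasing of the rule (the actual regulation combines the two ratios somewhat differently); the function name and inputs are my own invention.

```python
def grade_program(annual_loan_payment, total_earnings, discretionary_earnings):
    """Grade a program per the thresholds described above:
    fail above 12% of total earnings or 30% of discretionary earnings;
    'danger zone' at 8-12% of total or 20-30% of discretionary.
    Illustrative sketch only, not the actual regulatory formula."""
    total_ratio = annual_loan_payment / total_earnings
    disc_ratio = annual_loan_payment / discretionary_earnings
    if total_ratio > 0.12 or disc_ratio > 0.30:
        return "fail"
    if total_ratio >= 0.08 or disc_ratio >= 0.20:
        return "danger zone"
    return "pass"
```

For example, a graduate paying $3,000 a year on $40,000 of total earnings (7.5%) but $15,000 of discretionary earnings (20%) would land the program in the danger zone under this reading.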

For those who speak of accountability, these regulations should seem quite reasonable. For those who like the free-market, the regulation’s target is the federal government: the goal is to prevent the government from dumping more taxpayer money into failing programs. Schools will need to earn this money by success.

However, this is not the first time that there has been an attempt to link federal money to success. In 2010 regulations were put in place that included a requirement that a school have at least 35% of its students actively repaying student loans. As might be guessed, for-profit schools are the leaders in loan defaults. In 2012, lobbyists for the for-profit schools brought a lawsuit in federal court. The judge agreed with them and struck down the requirement.

In November of 2014, an association of for-profit colleges brought a lawsuit against the current gainful employment requirements, presumably on the principle that it is better to pay lawyers and lobbyists than to address the problems with their educational model. If this lawsuit succeeds, which is likely, for-profits will be rather less accountable and this will serve to make things worse for their students.

Based on the numbers, you should definitely not attend the typical for-profit college. On average, it will cost you more, you will have more debt, and you will make less money. For the most education at the least cost, the two-year community college is the best deal. For a four-year degree, the public school will cost less, but private non-profits generally have more successful results. But, of course, much depends on you.



Augmented Soldier Ethics III: Pharmaceuticals

Steve Rogers’ physical transformation, from a reprint of Captain America Comics #1 (May 1941). Art by Joe Simon and Jack Kirby. (Photo credit: Wikipedia)

Humans have many limitations that make them less than ideal as weapons of war. For example, we get tired and need sleep. As such, it is no surprise that militaries have sought various ways to augment humans to counter these weaknesses. For example, militaries routinely make use of caffeine and amphetamines to keep their soldiers awake and alert. There have also been experiments with more potent enhancers.

In science fiction, militaries go far beyond these sorts of drugs and develop far more potent pharmaceuticals. These chemicals tend to split into two broad categories. The first consists of short-term enhancements (what gamers refer to as “buffs”) that address a human weakness or provide augmented abilities. In the real world, the above-mentioned caffeine and amphetamines are short-term drugs. In fiction, the classic sci-fi role-playing game Traveller featured the aptly (though generically) named combat drug. This drug would boost the user’s strength and endurance for about ten minutes. Other fictional drugs have far more dramatic effects, such as the Venom drug used by the super villain Bane. Given that militaries already use short-term enhancers, it is certainly reasonable to think they are and will be interested in more advanced enhancers of the sort considered in science fiction.

The second category is that of the long-term enhancers. These are chemicals that enable or provide long-lasting effects. An obvious real-world example is steroids: these allow the user to develop greater muscle mass and increased strength. In fiction, the most famous example is probably the super-soldier serum that was used to transform Steve Rogers into Captain America.

Since the advantages of improved soldiers are obvious, it seems reasonable to think that militaries would be rather interested in the development of effective (and safe) long-term enhancers. It does, of course, seem unlikely that there will be a super-soldier serum in the near future, but chemicals aimed at improving attention span, alertness, memory, intelligence, endurance, pain tolerance and such would be of great interest to militaries.

As might be suspected, these chemical enhancers do raise moral concerns that are certainly worth considering. While some might see discussing enhancers that do not yet (as far as we know) exist as a waste of time, there does seem to be a real advantage in considering ethical issues in advance—this is analogous to planning for a problem before it happens rather than waiting for it to occur and then dealing with it.

One obvious point of concern, especially given the record of unethical experimentation, is that enhancers will be used on soldiers without their informed consent. Since this is a general issue, I addressed it in its own essay and reached the obvious conclusion: in general, informed consent is morally required. As such, the following discussion assumes that the soldiers using the enhancers have been honestly informed of the nature of the enhancers and have given their consent.

When discussing the ethics of enhancers, it might be useful to consider real world cases in which enhancers are used. One obvious example is that of professional sports. While Major League Baseball has seen many cases of athletes using such enhancers, they are used worldwide and in many sports, from running to gymnastics. In the case of sports, one of the main reasons certain enhancers, such as steroids, are considered unethical is that they provide the athlete with an unfair advantage.

While this is a legitimate concern in sports, it does not apply to war. After all, there is no moral requirement for a fair competition in battle. Rather, one important goal is to gain every advantage over the enemy in order to win. As such, the fact that enhancers would provide an “unfair” advantage in war does not make them immoral. One can, of course, discuss the relative morality of the sides involved in the war, but this is another matter.

A second reason why the use of enhancers is regarded as wrong in sports is that they typically have rather harmful side effects. Steroids, for example, do rather awful things to the human body and brain. Given that even aspirin has potentially harmful side effects, it seems rather likely that military-grade enhancers will have various harmful side effects. These might include addiction, psychological issues, organ damage, death, and perhaps even new side effects yet to be observed in medicine. Given the potential for harm, a rather obvious way to approach the ethics of this matter is utilitarianism. That is, the benefits of the enhancers would need to be weighed against the harm caused by their use.

This assessment could be done with a narrow limit: the harms of the enhancer could be weighed against the benefits provided to the soldier. For example, an enhancer that boosted a combat pilot’s alertness and significantly increased her reaction speed while having the potential to cause short-term insomnia and diarrhea would seem to be morally (and pragmatically) fine given the relatively low harms for significant gains. As another example, a drug that greatly boosted a soldier’s long-term endurance while creating a significant risk of a stroke or heart attack would seem to be morally and pragmatically problematic.

The assessment could also be done more broadly by taking into account ever-wider considerations. For example, the harms of an enhancer could be weighed against the importance of a specific mission and the contribution the enhancer would make to the success of the mission. So, if a powerful drug with terrible side-effects was critical to an important mission, its use could be morally justified in the same way that taking any risk for such an objective can be justified. As another example, the harms of an enhancer could be weighed against the contribution its general use would make to the war. So, a drug that increased the effectiveness of soldiers, yet cut their life expectancy, could be justified by its ability to shorten a war. As a final example, there is also the broader moral concern about the ethics of the conflict itself. So, the use of a dangerous enhancer by soldiers fighting for a morally good cause could be justified by that cause (using the notion that the consequences justify the means).

There are, of course, those who reject using utilitarian calculations as the basis for moral assessment. For example, there are those who believe (often on religious grounds) that the use of pharmaceuticals is always wrong (be they used for enhancement, recreation or treatment). Obviously enough, if the use of pharmaceuticals is wrong in general, then their specific application in the military context would also be wrong. The challenge is, of course, to show that the use of pharmaceuticals is simply wrong, regardless of the consequences.

In general, it would seem that the military use of enhancers should be assessed morally on utilitarian grounds, weighing the benefits of the enhancers against the harm done to the soldiers.



Augmented Soldier Ethics II: Informed Consent

One general moral subject that is relevant to the augmentation of soldiers by such things as pharmaceuticals, biologicals or cybernetics is the matter of informed consent. While fiction abounds with tales of involuntary augmentation, real soldiers and citizens of the United States have been coerced or deceived into participating in experiments. As such, there do seem to be legitimate grounds for being concerned that soldiers and citizens could be involuntarily augmented as part of experiments or actual “weapon deployment.”

Assuming the context of a Western democratic state, it seems reasonable to hold that augmenting a soldier without her informed consent would be immoral. After all, the individual has rights against the democratic state and these include the right not to be unjustly coerced or deceived. Socrates, in the Crito, also advanced reasonable arguments that the obedience of a citizen required that the state not coerce or deceive the citizen into the social contract and this would certainly apply to soldiers in a democratic state.

It is certainly tempting to rush to the position that informed consent would make the augmentation of soldiers morally acceptable. After all, the soldier would know what she was getting into and would volunteer to undergo the process in question. In popular fiction, one example of this would be Steve Rogers volunteering for the super soldier conversion. Given his consent, such an augmentation would seem morally acceptable.

There are, of course, some cases where informed consent makes a critical difference in ethics. One obvious example is the moral difference between sex and rape—the difference is a matter of informed and competent consent. If Sam agrees to have sex with Sally, then Sally is not raping Sam. But if Sally drugs Sam and has her way, then that would be rape. Another obvious example is the difference between theft and receiving a gift—this is also a matter of informed consent. If Sam gives Sally a diamond ring, that is not theft. If Sally takes the ring by force or coercion, then that is theft—and presumably wrong.

Even when informed consent is rather important, there are still cases in which the consent does not make the action morally acceptable. For example, Sam and Sally might engage in consensual sex, but if they are siblings or one is the parent of the other, the activity could still be immoral. As another example, Sam might consent to give Sally an heirloom ring that has been in the family for untold generations, but it might still be the wrong thing to do—especially when Sally hocks the ring to buy heroin.

There are also cases in which informed consent is not relevant because of the morality of the action itself. For example, Sam might consent to join in Sally’s plot to murder Ashley (rather than being coerced or tricked) but this would not be relevant to the ethics of the murder. At best it could be said that Sally did not add to her misdeed by coercing or tricking her accomplices, but this would not make the murder itself less bad.

Turning back to the main subject of augmentation, even if the soldiers gave their informed consent, the above considerations show that there would still be the question of whether or not the augmentation itself is moral. For example, there are reasonable moral arguments against genetically modifying human beings. If these arguments hold up, then even if a soldier consented to genetic modification, the modification itself would be immoral. I will be addressing the ethics of pharmaceutical, biological and cybernetic augmentation in later essays.

While informed consent does seem to be a moral necessity, this position can be countered. One stock way to do this is to make use of a utilitarian argument: if the benefits gained from augmenting soldiers without their informed consent outweighed the harms, then the augmentation would be morally acceptable. For example, imagine that a war against a wicked enemy is going rather badly and that an augmentation method has been developed that could turn the war around. The augmentation is dangerous and has awful long term side-effects that would deter most soldiers from volunteering. However, losing to the wicked enemy would be worse—so it could thus be argued that the soldiers should be deceived so that the war could be won. As another example, a wicked enemy is not needed—it could simply be argued that the use of augmented soldiers would end the war faster, thus saving lives, albeit at the cost of those terrible side-effects.

Another stock approach is to appeal to the arguments used by democracies to justify conscription in time of war. If the state (or, rather, those who expect people to do what they say) can coerce citizens into killing and dying in war, then the state can surely coerce citizens to undergo augmentation. It is easy to imagine a legislature passing something called “the conscription and augmentation act” that legalizes coercing citizens into being augmented to serve in the military. Of course, there are those who are suspicious of democratic states so blatantly violating the rights of life and liberty. However, not all states are democratic.

While democratic states would seem to face some moral limits when it comes to involuntary augmentation, non-democratic states appear to have more options. For example, under fascism the individual exists to serve the state (that is, the bastards that think everyone else should do what they say). If this political system is morally correct, then the state would have every right to coerce or deceive the citizens for the good of the state. In fiction, these states tend to be the ones to crank out involuntary augmented soldiers (that still manage to lose to the good guys).

Naturally, even if the state has the right to coerce or deceive soldiers into becoming augmented, it does not automatically follow that the augmentation itself is morally acceptable—this would depend on the specific augmentations. These matters will be addressed in upcoming essays.




Augmented Soldier Ethics I: Exoskeletons

US-Army exoskeleton (Photo credit: Wikipedia)

One common element of military science fiction is the powered exoskeleton, also known as an exoframe, exosuit or powered armor. The basic exoskeleton is a powered framework that serves to provide the wearer with enhanced strength. In movies such as Edge of Tomorrow and video games such as Call of Duty Advanced Warfare the exoskeletons provide improved mobility and carrying capacity (which can include the ability to carry heavier weapons) but do not provide much in the way of armor. In contrast, the powered armor of science fiction provides the benefits of an exoskeleton while also providing a degree of protection. The powered armor of Starship Troopers, The Forever War, Armor and Iron Man all serve as classic examples of this sort of gear.

Because the exoskeletons of fiction provide soldiers with enhanced strength, mobility and carrying capacity, it is no surprise that militaries are very interested in exoskeletons in the real world. While exoskeletons have yet to be deployed, there are some ethical concerns about the augmentation of soldiers.

On the face of it, the use of exoskeletons in warfare seems to be morally unproblematic. The main reason is that an exoskeleton is analogous to any other vehicle, with the exception that it is worn rather than driven. A normal car provides the driver with enhanced mobility and carrying capacity and this is presumably not immoral. In terms of the military context, the exoskeleton would be comparable to a Humvee or a tank, both of which seem morally unproblematic as well.

It might be objected that the use of exoskeletons would give wealthier nations an unfair advantage in war. The easy and obvious response to this is that, unlike in sports and games, gaining an “unfair” advantage in war is not immoral. After all, there is not a moral expectation that combatants will engage in a fair fight rather than making use of advantages in such things as technology and numbers.

It might be objected that the advantage provided by exoskeletons would encourage countries that had them to engage in aggressions that they would not otherwise engage in. The easy reply to this is that despite the hype of video games and movies, any exoskeleton available in the near future would most likely not provide a truly spectacular advantage to infantry. This advantage would, presumably, be on par with existing advantages such as those the United States enjoys over almost everyone else in the world. As such, the use of exoskeletons would not seem morally problematic in this regard.

One point of possible concern is what might be called the “Iron Man Syndrome” (to totally make something up). The idea is that soldiers equipped with exoskeletons might become overconfident (seeing themselves as being like the superhero Iron Man) and thus put themselves and others at risk. After all, unless there are some amazing advances in armor technology that are unmatched by weapon technology, soldiers in powered armor will still be vulnerable to weapons capable of taking on light vehicle armor (which exist in abundance). However, this could be easily addressed by training and experience.

A second point of possible concern is what could be called the “ogre complex” (also totally made up). An exoskeleton that dramatically boosts a soldier’s strength might encourage some people to act as bullies and abuse civilians or prisoners. While this might be a legitimate concern, it can easily be addressed by proper training and discipline.

There are, of course, the usual peripheral issues associated with new weapons technology that could have moral relevance. For example, it is easy to imagine a nation wastefully spending money on exoskeletons, perhaps due to corruption. However, such matters are not specific to exoskeletons and would not be moral problems for the technology as such.

Given the above, it would seem that augmenting soldiers with exoskeletons poses no new moral concerns and is morally comparable to providing soldiers with Humvees, tanks and planes.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Should Two Year Colleges Be Free?

Tallahassee Community College Seal (Photo credit: Wikipedia)

While Germany has embraced free four-year college education for its citizens, President Obama has made a more modest proposal to make community college free for Americans. He is modeling his plan on that of Republican Governor Bill Haslam, who has made community college free for citizens of Tennessee, regardless of need or merit. Not surprisingly, Obama’s proposal has been attacked by both Democrats and Republicans. Having some experience in education, I will endeavor to assess this proposal in a rational way.

First, there is no such thing as a free college education (in this context). Rather, free education for a student means that the cost is shifted from the student to others. After all, the staff, faculty and administrators will not work for free. The facilities of the schools will not be maintained, improved and constructed for free. And so on, for all the costs of education.

One proposed way to make education free for students is to shift the cost onto “the rich”, a group which is easy to target but somewhat harder to define. As might be suspected, I think this is a good idea. One reason is that I believe that education is the best investment a person can make in herself and in society. This is why I am fine with paying property taxes that go to education, although I have no children of my own. In addition to my moral commitment to education, I also look at it pragmatically: money spent on education (which helps people advance) means having to spend less on prisons and social safety nets. Of course, there is still the question of why the cost should be shifted to the rich.

One obvious answer is that they, unlike the poor and what is left of the middle class, have the money. As economists have noted, an ongoing trend in the economy is that wages are staying stagnant while capital is doing well. This is manifested in the fact that while the stock market has rebounded from the crash, workers are, in general, doing worse than before the crash.

There is also the need to address the problem of income inequality. While one might reject arguments grounded in compassion or fairness, there are some purely practical reasons to shift the cost. One is that the rich need the rest of us to keep the wealth, goods and services flowing to them (they actually need us way more than we need them). Another is the matter of social stability. Maintaining a stable state requires that the citizens believe that they are better off with the way things are than they would be if they engaged in a revolution. While deceit and force can keep citizens in line for quite some time, there does come a point at which these fail. To be blunt, it is in the interest of the rich to help restore the faith of the middle class. One of the nastier alternatives is being put against the wall after the revolution.

Second, the reality of education has changed over the years. In the not so distant past, a high school education was sufficient to get a decent job. I am from a small town in Maine and remember well that people could get decent jobs with just that high school degree (or even without one). While there are still some decent jobs like that, they are increasingly rare.

While it might be a slight exaggeration, the two-year college degree is now the equivalent of the old high school degree. That is, it is roughly the minimum education needed to have a shot at a decent job. As such, the reasons that justify free (for students) public K-12 education would now justify free (for students) K-14 public education. And, of course, arguments against free (for the student) K-12 education would also apply.

While some might claim that the reason the two-year degree is the new high school degree is that education has been in decline, there is also the obvious reason that the world has changed. While I grew up during the decline of the manufacturing economy, we are now in the information economy (even manufacturing is high tech now) and more education is needed to operate in this new economy.

It could, of course, be argued that a better solution would be to improve K-12 education so that a high school degree would be sufficient for a decent job in the information economy. This would, obviously enough, remove the need to have free two-year college. This is certainly an option worth considering, though it does seem unlikely that it would prove viable.

Third, the cost of college has grown absurdly since I was a student. Rest assured, though, that this has not been because of increased pay for professors. This growth has been addressed by a complicated and sometimes bewildering system of financial aid and loans. However, free two-year college would certainly address this problem in a simple way.

That said, a rather obvious concern is that this would not actually reduce the cost of college—as noted above, it would merely shift the cost. A case can certainly be made that this will actually increase the cost of college (for those who are paying). After all, schools would have less incentive to keep their costs down if the state was paying the bill.

It can be argued that it would be better to focus on reducing the cost of public education in a rational way that focuses on the core mission of colleges, namely education. One major reason for the increase in college tuition is the massive administrative overhead that vastly exceeds what is actually needed to effectively run a school. Unfortunately, since the administrators are the ones who make the financial choices it seems unlikely that they will thin their own numbers. While state legislatures have often applied magnifying glasses to the academic aspects of schools, the administrative aspects seem to somehow get less attention—perhaps because of some interesting connections between the state legislatures and school administrations.

Fourth, while conservative politicians have been critical of the general idea of the state giving away free stuff to regular people rather than corporations and politicians, liberals have also been critical of the proposal. While liberals tend to favor the idea of the state giving people free stuff, some have taken issue with free stuff being given to everyone. After all, the proposal is not to make two-year college free for those who cannot afford it, but to make it free for everyone.

It is certainly tempting to be critical of this aspect of the proposal. While it would make sense to assist those in need, it seems unreasonable to expend resources on people who can pay for college on their own. That money, it could be argued, could be used to help people in need pay for four-year colleges. It can also be objected that the well-off would exploit the system.

One easy and obvious reply is that the same could be said of free (for the student) K-12 education. As such, the reasons that exist for free public K-12 education (even for the well-off) would apply to the two-year college plan.

In regards to the well-off, they can already elect to go to lower cost state schools. However, the wealthy tend to pick the more expensive schools and usually opt for four-year colleges. As such, I suspect that there would not be an influx of rich students into two-year programs trying to “game the system.” Rather, they will tend to continue to go to the most prestigious four year schools their money can buy.

Finally, while the proposal is for the rich to bear the cost of “free” college, it should be looked at as an investment. The rich “job creators” will benefit from having educated “job fillers.” Also, the college educated will tend to get better jobs which will grow the economy (most of which will go to the rich) and increase tax-revenues (which can help offset the taxes on the rich). As such, the rich might find that their involuntary investment will provide an excellent return.

Overall, the proposal for “free” two-year college seems to be a good idea, although one that will require proper implementation (which will be very easy to screw up).



A Bubble of Digits

A look back at the American (and world) economy shows a “pastscape” of exploded economic bubbles. The most recent was the housing bubble, but the less recent .com bubble serves as a relevant reminder that bubbles can be technological. This is a reminder well worth keeping in mind for we are, perhaps, blowing up a new bubble.

In “The End of Economic Growth?” Oxford’s Carl Frey discusses the new digital economy and presents some rather interesting numbers regarding the value of certain digital companies relative to the number of people they employ. One example is Twitch, which streams videos of people playing games (and people commenting on people playing games). Twitch was purchased by Amazon for $970 million. Twitch has 170 employees. The multi-billion dollar company Facebook had 8,348 employees as of September 2014. Facebook bought WhatsApp for $19 billion. WhatsApp employed 55 people at the time of this acquisition. In an interesting contrast, IBM employed 431,212 people in 2013.
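Frey’s contrast can be made vivid with a little arithmetic. The following sketch simply divides the purchase prices cited above by headcount; the figures are the ones quoted in this essay, and the comparison is purely illustrative.

```python
# Valuation-per-employee for the acquisitions cited above.
# Figures are those quoted in the essay (purchase price, employees at the time).
companies = {
    "Twitch": (970_000_000, 170),
    "WhatsApp": (19_000_000_000, 55),
}

for name, (value, employees) in companies.items():
    print(f"{name}: ${value / employees:,.0f} per employee")
# Twitch: $5,705,882 per employee
# WhatsApp: $345,454,545 per employee
```

By comparison, a labor-intensive firm such as IBM, with over 400,000 employees, cannot approach ratios like these no matter how highly it is valued.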

While it is tempting to explain the impressive value to employee ratio in terms of grotesque over-valuation (which does have its merits as a criticism), there are other factors involved. One, as Frey notes, is that the (relatively) new sort of digital businesses require relatively little capital. The above-mentioned WhatsApp started out with $250,000 and this was actually rather high for an app—the average cost to develop one is $6,453. As such, a relatively small investment can create a huge return.

Another factor is an old one, namely the efficiency of technology in replacing human labor. The development of the plow reduced the number of people required to grow food, the development of the tractor reduced it even more, and the refinement of mechanized farming has enabled the number of people required in agriculture to be reduced dramatically. While it is true that people have to do work to create such digital companies (writing the code, for example), much of the “labor” is automated and done by computers rather than people.

A third factor, which is rather critical, is the digital aspect. Companies like Facebook, Twitch and WhatsApp do not make physical products that need to be manufactured, shipped and sold. As such, they do not (directly) create jobs in these areas. These companies do make use of existing infrastructure: Facebook does need companies like Comcast to provide the internet connection and companies like Apple to make the devices. But, rather importantly, they do not employ the people who work for Comcast and Apple (and even these companies employ relatively few people).

One of the most important components of the digital aspect is the multiplier effect. To illustrate this, consider two imaginary businesses in the health field. One is a walk-in clinic which I will call Nurse Tent. The other is a health app called RoboNurse. If a patient goes to Nurse Tent, the nurse can only tend to one patient at a time and he can only work so many hours per day. As such, Nurse Tent will need to employ multiple nurses (as well as the support staff). In contrast, the RoboNurse app can be sold to billions of people and does not require the sort of infrastructure required by Nurse Tent. If RoboNurse takes off as a hot app, the developer could sell it for millions or even billions.

Nurse Tent could, of course, become a franchise (the McDonald’s of medicine). But, being very labor intensive and requiring considerable material outlay, it will not be able to have the value to employee ratio of a digital company like WhatsApp or Facebook. It would, however, employ more people, though the odds are that most of them would not be well paid. While the digital economy is producing millionaires and billionaires, wages for labor are rather lacking. This helps to explain why the overall economy is doing great, while the majority of workers are worse off than before the last bubble.

It might be wondered why this matters. There are, of course, the usual concerns about the terrible inequality of the economy. However, there is also the concern that a new bubble is being inflated, a bubble filled with digits. There are some good reasons to be concerned.

First, as noted above, the digital companies seem to be grotesquely overvalued. While the situation is not exactly like the housing bubble, overvaluation should be a matter of concern. After all, if the value of these companies is effectively just “hot digits” inflating a thin skin, then a bubble burst seems likely.

This can be countered by arguing that the valuation is accurate or even that all valuation is essentially a matter of belief and as long as we believe, all will be fine. Until, of course, it is no longer fine.

Second, the current digital economy increases the income inequality mentioned above, widening the gap between the rich and the poor. Laying aside the fact that such a gap historically leads to social unrest and revolution, there is the more immediate concern that the gap will cause the bubble to burst—the economy cannot, one would presume, endure without a solid middle and base to help sustain the top of the pyramid.

This can be countered by arguing that the new digital economy will eventually spread the wealth. Anyone can make an app, anyone can create a startup, and anyone can be a millionaire. While this does have an appeal to it, there is the obvious fact that while it is true that (almost) anyone can do these things, it is also true that most people will fail. One just needs to consider all the failed startups and the millions of apps that are not successful.

There is also the obvious fact that civilization requires more than WhatsApp, Twitch and Facebook and people need to work outside of the digital economy (which lives atop the non-digital economy). Perhaps this can be handled by an underclass of people beneath the digital (and financial) elite, who toil away at low wages to buy smartphones so they can update their status on Facebook and watch people play games via Twitch. This is, of course, just a digital variant on a standard sci-fi dystopian scenario.


Of Lies & Disagreements


When people disagree on controversial issues it is not uncommon for one person to accuse another of lying. In some cases this accusation is clearly warranted and in others it is clearly not. Discerning between these cases is a matter of legitimate concern. There is also some confusion about what should count as a lie and what should not.

While this might seem like a matter of mere semantics, the distinction between what is a lie and what is not actually matters. The main reason for this is that to accuse a person of lying is, in general, to lay a moral charge against the person. It is not merely to claim that the person is in error but to claim that the person is engaged in something that is morally wrong. While some people do use “lie” interchangeably with “untruth”, there is clearly a difference.

To use an easy and obvious example, imagine a student who is asked which year the United States dropped an atomic bomb on Hiroshima. The student thinks it was in 1944 and writes that down. She has made an untrue claim, but it would clearly not do for the teacher to accuse her of being a liar.

Now, imagine that one student, Sally, is asking another student, Jane, about when the United States bombed Hiroshima. Jane does not like Sally and wants her to do badly on her exam, so she tells her that the year was 1944, though she knows it was 1945. If Sally tells another student that it was 1944 and also puts that down on her test, Sally could not justly be accused of lying. Jane, however, can be fairly accused. While Sally is saying and writing something untrue, she believes the claim and is not acting with any malicious intent. In contrast, Jane believes she is saying something untrue and is acting from malice. This suggests some important distinctions between lying and making untrue claims.

One obvious distinction is that a lie requires that the person believe she is making an untrue claim. Naturally, there is the practical problem of determining whether a person really believes what she is claiming, but this is not relevant to the abstract distinction: if the person believes the claim, then she would not be lying when she makes that claim.

It can, of course, be argued that a person can be lying even when she believes what she claims—that what matters is whether the claim is true or not. The obvious problem with this is that the accusation of lying is not just a claim that the person is wrong; it is also a moral condemnation of wrongdoing. While “lie” could be taken to apply to any untrue claim, there would then be a need for a new word to convey not just a statement of error but also one of condemnation.

It can also be argued that a person can lie by telling the truth, but by doing so in such a way as to mislead a person into believing something untrue. This does have a certain appeal in that it includes the intent to deceive, but differs from the “stock” lie in that the claim is true (or at least believed to be true).

A second obvious distinction is that the person must have a malicious intent. This is a key factor that distinguishes the untruths of the fictions of movies, stories and shows from lies. When the actor playing Darth Vader says to Luke “No. I am your father.”, he is saying something untrue, yet it would be unfair to say that the actor is thus a liar. Likewise, the references to dragons, hobbits and elves in The Hobbit are all untrue—yet one would not brand Tolkien a liar for these words.

The obvious reply to this is that there is a category of lies that lack a malicious intent. These lies are often told with good intentions, such as a compliment about a person’s appearance that is not true or when parents tell their children about Santa Claus. As such, it would seem that there are lies that are not malicious—these are often called “white lies.” If intent matters, then this sort of lie would seem rather less bad than the malicious lie; although they do meet a general definition of “lie” which involves making an untrue claim with the intent to deceive. In this case, the deceit is supposed to be a positive one. Naturally, there are those who would argue that such deceits are still wrong, even if the intent is a good one. The matter is also complicated by the fact that there seem to be untrue claims aimed at deceit that intuitively seem morally acceptable. The classic case is, of course, misleading a person who is out to commit murder.

In some cases one person will accuse another of lying because the person disagrees with a claim made by the other person. For example, a person might claim that Obamacare will help Americans and be accused of lying about this by a person who is opposed to Obamacare.

In this sort of context, the accusation that the person is lying seems to rest on three points. The first is that the accuser thinks that the claim is untrue and that the person does not actually believe it. The second is that the accuser believes that the accused intends to deceive—that is, he expects people to believe him. The third is that the accuser thinks that the accused has some malicious intent. This might be limited merely to the intent to deceive, but it typically goes beyond this. For example, the proponent of Obamacare might be suspected of employing his alleged deceit to spread socialism and damage businesses. Or it might be that the person is trolling.

So, in order to be justified in accusing a person of lying, it needs to be shown that the person does not really believe his claim, that he intends to deceive and that there is some malicious intent. Arguing against the claim can show that it is untrue, but this would not be sufficient to show that the person is lying—unless one takes a lie to merely be a claim that is not true (so, if someone made a mistake in a math problem and got the wrong answer, he would be a liar). What would be needed would be adequate evidence that the person is insincere in his claim (that is, he believes he is saying the untrue), that he intends to deceive and that there is some malicious intent.

Naturally, effective criticism of a claim does not require showing that the person making the claim is a liar—this is a matter of arguing about the claim. In fact, the truth or falsity of a claim has no connection to the intent of the person making the claim or what he actually believes about it. An accusation of lying, rather, moves from the issue of whether the claim is true or not to a moral dispute about the character of the person making the claim. That is, whether he is a liar or not. It can, of course, be a useful persuasive device to call someone a liar, but it (by itself) does nothing to prove or disprove the claim under dispute.



Education & Gainful Employment

Table 3 from the August 4, 2010 GAO report. Randomly sampled For-Profit college tuition compared to Public and Private counterparts for similar degrees. (Photo credit: Wikipedia)

Over the years I have written various critical pieces about for-profit schools. As I have emphasized before, I have nothing against the idea of a for-profit school. As such, my criticisms have not been that such schools make money. Rather, I have been critical of the performance of such schools as schools, with their often predatory practices, and the fact that they rely so very heavily on federal funding for their profits. This article is, shockingly enough, also critical of these schools.

Assessment in and of higher education has become the new normal. Some of the assessment standards are set by the federal government, some by the states and some by the schools. At the federal level, one key standard is in the Higher Education Act and it states that career education programs “must prepare students for gainful employment in a recognized occupation.” If a school fails to meet this standard, then it can lose out on federal funds such as Pell Grants and federal loans. Since schools are rather fond of federal dollars, they are rather intent on qualifying under this standard.

One way to qualify is to see to it that students are suitably prepared. Another approach, one taken primarily by the for-profit schools (which rely extremely heavily on federal money for their profits) has been to lobby in order to get the standard set to their liking.  As it now stands, schools are ranked in three categories: passing, probationary, and failing. A passing program is such that its graduates’ annual loan payments are below 8% of their total earnings or below 20% of their discretionary incomes. A program is put on probation when the loan payments are in the 8-12% range of their total earnings or 20-30% of discretionary incomes. A program is failing when the loan payments are more than 12% of their total income or over 30% of their discretionary incomes. Students who do not graduate, which happens more often at for-profit schools than at private and public schools, are not counted in this calculation.
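The three-tier ranking just described can be sketched as a small classification function. This is an illustrative sketch rather than the regulation’s official formula: I am assuming that satisfying either debt-to-income ratio is enough to pass, and that failing requires exceeding both failing thresholds, with everything in between landing on probation.

```python
def rate_program(annual_loan_payment, total_earnings, discretionary_income):
    """Classify a career education program under the thresholds described
    above: pass at <= 8% of total earnings or <= 20% of discretionary
    income; fail above 12% and 30% respectively; probation in between."""
    earnings_ratio = annual_loan_payment / total_earnings
    discretionary_ratio = annual_loan_payment / discretionary_income

    if earnings_ratio <= 0.08 or discretionary_ratio <= 0.20:
        return "passing"
    if earnings_ratio > 0.12 and discretionary_ratio > 0.30:
        return "failing"
    return "probation"

# A graduate paying $2,000/year on $30,000 total earnings
# ($8,000 of it discretionary) yields a ratio of about 6.7%:
print(rate_program(2_000, 30_000, 8_000))  # prints: passing
```

Note that, as the essay observes, students who do not graduate never enter this calculation at all, which flatters schools with high dropout rates.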

A program is disqualified from receiving federal funds if it fails two out of any three consecutive years or it gets a ranking less than passing for four years in a row. This goes into effect in the 2015-2016 academic year.

Interestingly enough, it is a matter of common ideology in America that the for-profit, private sector is inherently superior to the public sector. As with many ideologies, this one falls victim to facts. While the assessment of schools in terms of how well they prepare students for gainful employment does not go into effect until 2015, data is already available (the 2012 data seems to be the latest available). Public higher education, which is routinely bashed in some quarters, is amazingly successful in this regard: 99.72% of the programs were rated as passing, 0.18% were rated as being on probation and 0.09% were ranked as failing. Private nonprofit schools also performed admirably, with 95.65% of their programs passing, 3.16% on probation and 1.19% failing. So, “A” level work for these schools. In stark contrast, the for-profit schools had 65.87% of their programs ranked as passing, 21.89% ranked as being on probation and 12.23% evaluated as failing. So, these schools would have a grade of “D” if they were students. It is certainly worth keeping in mind that the standards used are the ones that the private, for-profit school lobby pushed for—it seems likely they would do even worse if the more comprehensive standards favored by the AFT were used.

This data certainly seems to indicate that the for-profit schools are not as good a choice for students and for federal funding as the public and non-profit private schools. After all, using the pragmatic measure of student income relative to debt incurred for education, the public and private non-profits are the clear winners. One easy and obvious explanation for this is, of course, that the for-profit schools make a profit—as such, they typically charge considerably more (as I have discussed in other essays) than comparable public and non-profit private schools. Another explanation (as also discussed in other essays) is that such schools generally do a worse job preparing students for careers and placing students in jobs. So, a higher cost combined with an inferior ability to get students into jobs translates into that “D” grade. So much for the inherent superiority of the for-profit private sector.

It might be objected that there are other factors that explain the poor performance of the for-profit schools in a way that makes them look better. For example, perhaps students who enroll in such programs differ significantly from students in public and non-profit private schools and this helps explain the difference in a way that partially absolves the for-profit schools. As another example, perhaps the for-profit schools suffered from bad luck in terms of the programs they offered. Maybe salaries were unusually bad in these jobs or hiring was very weak. These and other factors are well worth considering. After all, to fail to consider alternative explanations would be poor reasoning indeed. If the for-profits can explain away their poor performance in this area in legitimate ways, then perhaps the standards would need to be adjusted to take into account these factors.

It is also worth considering that schools, public and private, do not have control over the economy. Given that short-term (1-4 year) vagaries of the market could result in programs falling into probation or failure by these standards when such programs are actually “good” in the longer term, it would seem that some additional considerations should be brought into play. Naturally, it can be countered that 3-4 years of probation or failure would not really be short term (especially for folks who think in terms of immediate profit) and that such programs would fully merit their rating.

That said, the latest economic meltdown was somewhat long term and the next one (our bubble based economy makes it almost inevitable) could be even worse. As such, it would seem sensible to consider the broader economy when holding programs accountable. After all, even a great program cannot make companies hire nor compel them to pay better wages.



Food Waste


“CLEAN YOUR PLATE…THERE’S NO FOOD TO WASTE” – NARA – 516248 (Photo credit: Wikipedia)

Like many Americans my age, I was cajoled by my parents to finish all the food on my plate because people were starving somewhere. When I got a bit older and thought about the matter, I realized that my eating (or not eating) the food on my plate would have no effect on the people starving in some far away part of the world. However, I did internalize two lessons. One was that I should not waste food. The other was that there is always someone starving somewhere.

While food insecurity is a problem in the United States, we Americans waste a great deal of food. It is estimated that about 21% of the food that is harvested and available to be consumed is not consumed. This food includes the unconsumed portions tossed into the trash at restaurants, spoiled tomatoes thrown out by families ($900 million worth), moldy leftovers tossed out when the fridge is cleaned and so on. On average, a family of four wastes about 1,160 pounds of food per year—which is a lot of food.

On the national level, it is estimated that one year of food waste (or loss, if one prefers) uses up 2.5% of the energy consumed in the U.S., about 25% of the fresh water used for agriculture, and about 300 million barrels of oil. The loss, in dollars, is estimated to be $115 billion.

The most obvious moral concern is with the waste. Intuitively, throwing away food and wasting it seems to be wrong—especially (as parents used to say) when people are starving. Of course, as I mentioned above, it is quite reasonable to consider whether or not less waste by Americans would translate into more food for other people.

On the one hand, it might be argued that less wasted food would surely make more food available to those in need. After all, there would be more food.

On the other hand, it seems obvious that less waste would not translate into more food for those who are in need. Going back to my story about cleaning my plate, my eating all the food on my plate would certainly not have helped starving people. After all, the food I eat does not help them. Also, if I did not eat the food, then they would not be harmed—they would not get less food because I threw away my Brussels sprouts.

To use another illustration, suppose that Americans conscientiously only bought the exact number of tomatoes that they would eat and wasted none of them. The most likely response is not that the extra tomatoes would be handed out to the hungry. Rather, farmers would grow fewer tomatoes and markets would stock fewer in response to the reduced demand.

For the most part, people go hungry not because Americans are wasting food and thus making it unavailable, but because they cannot afford the food they need. To use a metaphor, it is not that the peasants are starving because the royalty are tossing the food into the trash. It is that the peasants cannot afford the food that is so plentiful that the royalty can toss it away.

It could be countered that less waste would actually influence the affordability of food. Returning to the tomato example, farmers might keep on producing the same volume of tomatoes, but be forced to lower the prices because of lower demand and also to seek new markets.

It can also be countered that as the population of the earth grows, such waste will really matter—that food thrown away by Americans is, in fact, taking food away from people. If food does become increasingly scarce (as some have argued will occur due to changes in climate and population growth), then waste will really matter. This is certainly worth considering.

There is, as mentioned above, the intuition that waste is, well, just wrong. After all, “throwing away” all those resources (energy, water, oil and money) is certainly wasteful. There is, of course, also the obvious practical concern: when people waste food, they are wasting money.

For example, if Sally buys a mega meal and throws half of it in the trash, she would have been better off buying a moderate meal and eating all of it. As another example, Sam is throwing away money if he buys steaks and vegetables, then lets them rot. So, not wasting food would certainly make good economic sense for individuals. It would also make sense for businesses—at least to the degree that they do not profit from the waste.

Interestingly, some businesses do profit from the waste. To be specific, consider the snacks, meats, cheese, beverages and such that are purchased and never consumed. If people did not buy them, there would be fewer sales, and this would impact the economy all the way from the store to the field. While the exact percentage of food purchased and never consumed is not known, the evidence is that it is significant. So, if people did not overbuy, the food economy would shrink by that percentage—resulting in reduced profits and reduced employment. As such, food waste might actually be rather important for the American food economy (much as planned obsolescence is important in the tech fields). And, interestingly enough, the greater the waste, the greater its importance in maintaining the food economy.

If this sort of reasoning is sound, then it might be immoral to waste less food—after all, a utilitarian argument could be crafted showing that less waste would create more harm than good (putting supermarket workers and farmers out of work, for example). As such, waste might be good, at least within the context of the existing economic system, which might itself not be so good.



My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter