Tag Archives: barack obama

Should Two Year Colleges Be Free?


Tallahassee County Community College Seal (Photo credit: Wikipedia)

While Germany has embraced free four-year college education for its citizens, President Obama has made a more modest proposal: making community college free for Americans. He is modeling his plan on that of Republican Governor Bill Haslam, who has made community college free for citizens of Tennessee, regardless of need or merit. Not surprisingly, Obama’s proposal has been attacked by both Democrats and Republicans. Having some experience in education, I will endeavor to assess this proposal in a rational way.

First, there is no such thing as a free college education (in this context). Rather, free education for a student means that the cost is shifted from the student to others. After all, the staff, faculty and administrators will not work for free. The facilities of the schools will not be maintained, improved and constructed for free. And so on, for all the costs of education.

One proposed way to make education free for students is to shift the cost onto “the rich”, a group which is easy to target but somewhat harder to define. As might be suspected, I think this is a good idea. One reason is that I believe that education is the best investment a person can make in herself and in society. This is why I am fine with paying property taxes that go to education, although I have no children of my own. In addition to my moral commitment to education, I also look at it pragmatically: money spent on education (which helps people advance) means having to spend less on prisons and social safety nets. Of course, there is still the question of why the cost should be shifted to the rich.

One obvious answer is that they, unlike the poor and what is left of the middle class, have the money. As economists have noted, an ongoing trend in the economy is that wages are staying stagnant while capital is doing well. This is manifested in the fact that while the stock market has rebounded from the crash, workers are, in general, doing worse than before the crash.

There is also the need to address the problem of income inequality. While one might reject arguments grounded in compassion or fairness, there are some purely practical reasons to shift the cost. One is that the rich need the rest of us to keep the wealth, goods and services flowing to them (they actually need us way more than we need them). Another is the matter of social stability. Maintaining a stable state requires that the citizens believe that they are better off with the way things are than they would be if they engaged in a revolution. While deceit and force can keep citizens in line for quite some time, there does come a point at which these fail. To be blunt, it is in the interest of the rich to help restore the faith of the middle class. One of the nastier alternatives is being put against the wall after the revolution.

Second, the reality of education has changed over the years. In the not so distant past, a high-school education was sufficient to get a decent job. I am from a small town in Maine and remember well that people could get decent jobs with just that high school degree (or even without one). While there are still some decent jobs like that, they are increasingly rare.

While it might be a slight exaggeration, the two-year college degree is now the equivalent of the old high school degree. That is, it is roughly the minimum education needed to have a shot at a decent job. As such, the reasons that justify free (for students) public K-12 education would now justify free (for students) K-14 public education. And, of course, arguments against free (for the student) K-12 education would also apply.

While some might claim that the reason the two-year degree is the new high school degree is that education has been in decline, there is also the obvious reason that the world has changed. While I grew up during the decline of the manufacturing economy, we are now in the information economy (even manufacturing is high tech now) and more education is needed to operate in this new economy.

It could, of course, be argued that a better solution would be to improve K-12 education so that a high school degree would be sufficient for a decent job in the information economy. This would, obviously enough, remove the need to have free two-year college. This is certainly an option worth considering, though it does seem unlikely that it would prove viable.

Third, the cost of college has grown absurdly since I was a student. Rest assured, though, that this has not been because of increased pay for professors. This has been addressed by a complicated and sometimes bewildering system of financial aid and loans. However, free two-year college would certainly address this problem in a simple way.

That said, a rather obvious concern is that this would not actually reduce the cost of college—as noted above, it would merely shift the cost. A case can certainly be made that this will actually increase the cost of college (for those who are paying). After all, schools would have less incentive to keep their costs down if the state was paying the bill.

It can be argued that it would be better to focus on reducing the cost of public education in a rational way that focuses on the core mission of colleges, namely education. One major reason for the increase in college tuition is the massive administrative overhead that vastly exceeds what is actually needed to effectively run a school. Unfortunately, since the administrators are the ones who make the financial choices it seems unlikely that they will thin their own numbers. While state legislatures have often applied magnifying glasses to the academic aspects of schools, the administrative aspects seem to somehow get less attention—perhaps because of some interesting connections between the state legislatures and school administrations.

Fourth, while conservative politicians have been critical of the general idea of the state giving away free stuff to regular people rather than corporations and politicians, liberals have also been critical of the proposal. While liberals tend to favor the idea of the state giving people free stuff, some have taken issue with free stuff being given to everyone. After all, the proposal is not to make two-year college free for those who cannot afford it, but to make it free for everyone.

It is certainly tempting to be critical of this aspect of the proposal. While it would make sense to assist those in need, it seems unreasonable to expend resources on people who can pay for college on their own. That money, it could be argued, could be used to help people in need pay for four-year colleges. It can also be objected that the well-off would exploit the system.

One easy and obvious reply is that the same could be said of free (for the student) K-12 education. As such, the reasons that exist for free public K-12 education (even for the well-off) would apply to the two-year college plan.

In regards to the well-off, they can already elect to go to lower cost state schools. However, the wealthy tend to pick the more expensive schools and usually opt for four-year colleges. As such, I suspect that there would not be an influx of rich students into two-year programs trying to “game the system.” Rather, they will tend to continue to go to the most prestigious four year schools their money can buy.

Finally, while the proposal is for the rich to bear the cost of “free” college, it should be looked at as an investment. The rich “job creators” will benefit from having educated “job fillers.” Also, the college educated will tend to get better jobs which will grow the economy (most of which will go to the rich) and increase tax-revenues (which can help offset the taxes on the rich). As such, the rich might find that their involuntary investment will provide an excellent return.

Overall, the proposal for “free” two-year college seems to be a good idea, although one that will require proper implementation (which will be very easy to screw up).

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

A Bubble of Digits

A look back at the American (and world) economy shows a “pastscape” of exploded economic bubbles. The most recent was the housing bubble, but the less recent .com bubble serves as a relevant reminder that bubbles can be technological. This is a reminder well worth keeping in mind for we are, perhaps, blowing up a new bubble.

In “The End of Economic Growth?” Oxford’s Carl Frey discusses the new digital economy and presents some rather interesting numbers regarding the value of certain digital companies relative to the number of people they employ. One example is Twitch, which streams videos of people playing games (and people commenting on people playing games). Twitch was purchased by Amazon for $970 million. Twitch has 170 employees. The multi-billion dollar company Facebook had 8,348 employees as of September 2014. Facebook bought WhatsApp for $19 billion. WhatsApp employed 55 people at the time of this acquisition. In an interesting contrast, IBM employed 431,212 people in 2013.
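The contrast in these figures can be made concrete with a quick back-of-the-envelope calculation. The following sketch uses only the acquisition prices and headcounts cited above; the rounding and presentation are my own:

```python
# Valuation per employee, using the acquisition prices and
# headcounts cited in the text.
companies = {
    "Twitch": (970_000_000, 170),       # Amazon purchase price, employees
    "WhatsApp": (19_000_000_000, 55),   # Facebook purchase price, employees
}

for name, (value, employees) in companies.items():
    per_head = value / employees
    print(f"{name}: ${per_head:,.0f} per employee")
```

This works out to roughly $5.7 million per employee for Twitch and about $345 million per employee for WhatsApp, which gives a sense of just how extreme the ratio is compared to a traditional employer like IBM.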

While it is tempting to explain the impressive value to employee ratio in terms of grotesque over-valuation (which does have its merits as a criticism), there are other factors involved. One, as Frey notes, is that the (relatively) new sort of digital businesses require relatively little capital. The above-mentioned WhatsApp started out with $250,000 and this was actually rather high for an app—the average cost to develop one is $6,453. As such, a relatively small investment can create a huge return.

Another factor is an old one, namely the efficiency of technology in replacing human labor. The development of the plow reduced the number of people required to grow food, the development of the tractor reduced it even more, and the refinement of mechanized farming has enabled the number of people required in agriculture to be reduced dramatically. While it is true that people have to do work to create such digital companies (writing the code, for example), much of the “labor” is automated and done by computers rather than people.

A third factor, which is rather critical, is the digital aspect. Companies like Facebook, Twitch and WhatsApp do not make physical objects that need to be manufactured, shipped and sold. As such, they do not (directly) create jobs in these areas. These companies do make use of existing infrastructure: Facebook does need companies like Comcast to provide the internet connection and companies like Apple to make the devices. But, rather importantly, they do not employ the people who work for Comcast and Apple (and even these companies employ relatively few people).

One of the most important components of the digital aspect is the multiplier effect. To illustrate this, consider two imaginary businesses in the health field. One is a walk-in clinic which I will call Nurse Tent. The other is a health app called RoboNurse. If a patient goes to Nurse Tent, the nurse can only tend to one patient at a time and he can only work so many hours per day. As such, Nurse Tent will need to employ multiple nurses (as well as the support staff). In contrast, the RoboNurse app can be sold to billions of people and does not require the sort of infrastructure required by Nurse Tent. If RoboNurse takes off as a hot app, the developer could sell it for millions or even billions.

Nurse Tent could, of course, become a franchise (the McDonald’s of medicine). But, being very labor intensive and requiring considerable material outlay, it will not be able to have the value to employee ratio of a digital company like WhatsApp or Facebook. It would, however, employ more people. However, the odds are that most of the employees would not be well paid—while the digital economy is producing millionaires and billionaires, wages for labor are rather lacking. This helps to explain why the overall economy is doing great, while the majority of workers are worse off than before the last bubble.

It might be wondered why this matters. There are, of course, the usual concerns about the terrible inequality of the economy. However, there is also the concern that a new bubble is being inflated, a bubble filled with digits. There are some good reasons to be concerned.

First, as noted above, the digital companies seem to be grotesquely overvalued. While the situation is not exactly like the housing bubble, overvaluation should be a matter of concern. After all, if the value of these companies is effectively just “hot digits” inflating a thin skin, then a bubble burst seems likely.

This can be countered by arguing that the valuation is accurate or even that all valuation is essentially a matter of belief and as long as we believe, all will be fine. Until, of course, it is no longer fine.

Second, the current digital economy increases the income inequality mentioned above, widening the gap between the rich and the poor. Laying aside the fact that such a gap historically leads to social unrest and revolution, there is the more immediate concern that the gap will cause the bubble to burst—the economy cannot, one would presume, endure without a solid middle and base to help sustain the top of the pyramid.

This can be countered by arguing that the new digital economy will eventually spread the wealth. Anyone can make an app, anyone can create a startup, and anyone can be a millionaire. While this does have an appeal to it, there is the obvious fact that while it is true that (almost) anyone can do these things, it is also true that most people will fail. One just needs to consider all the failed startups and the millions of apps that are not successful.

There is also the obvious fact that civilization requires more than WhatsApp, Twitch and Facebook and people need to work outside of the digital economy (which lives atop the non-digital economy). Perhaps this can be handled by an underclass of people beneath the digital (and financial) elite, who toil away at low wages to buy smartphones so they can update their status on Facebook and watch people play games via Twitch. This is, of course, just a digital variant on a standard sci-fi dystopian scenario.


Of Lies & Disagreements


When people disagree on controversial issues it is not uncommon for one person to accuse another of lying. In some cases this accusation is clearly warranted and in others it is not. Discerning between these cases is a matter of legitimate concern. There is also some confusion about what should count as a lie and what should not.

While this might seem like a matter of mere semantics, the distinction between what is a lie and what is not actually matters. The main reason for this is that to accuse a person of lying is, in general, to lay a moral charge against the person. It is not merely to claim that the person is in error but to claim that the person is engaged in something that is morally wrong. While some people do use “lie” interchangeably with “untruth”, there is clearly a difference.

To use an easy and obvious example, imagine a student who is asked which year the United States dropped an atomic bomb on Hiroshima. The student thinks it was in 1944 and writes that down. She has made an untrue claim, but it would clearly not do for the teacher to accuse her of being a liar.

Now, imagine that one student, Sally, is asking another student, Jane, about when the United States bombed Hiroshima. Jane does not like Sally and wants her to do badly on her exam, so she tells her that the year was 1944, though she knows it was 1945. If Sally tells another student that it was 1944 and also puts that down on her test, Sally could not justly be accused of lying. Jane, however, can be fairly accused. While Sally is saying and writing something untrue, she believes the claim and is not acting with any malicious intent. In contrast, Jane believes she is saying something untrue and is acting from malice. This suggests some important distinctions between lying and making untrue claims.

One obvious distinction is that a lie requires that the person believe she is making an untrue claim. Naturally, there is the practical problem of determining whether a person really believes what she is claiming, but this is not relevant to the abstract distinction: if the person believes the claim, then she would not be lying when she makes that claim.

It can, of course, be argued that a person can be lying even when she believes what she claims—that what matters is whether the claim is true or not. The obvious problem with this is that the accusation of lying is not just a claim the person is wrong, it is also a moral condemnation of wrongdoing. While “lie” could be taken to apply to any untrue claim, there would be a need for a new word to convey not just a statement of error but also of condemnation.

It can also be argued that a person can lie by telling the truth, but by doing so in such a way as to mislead a person into believing something untrue. This does have a certain appeal in that it includes the intent to deceive, but differs from the “stock” lie in that the claim is true (or at least believed to be true).

A second obvious distinction is that the person must have a malicious intent. This is a key factor that distinguishes the untruths of the fictions of movies, stories and shows from lies. When the actor playing Darth Vader says to Luke “No. I am your father.”, he is saying something untrue, yet it would be unfair to say that the actor is thus a liar. Likewise, the references to dragons, hobbits and elves in The Hobbit are all untrue—yet one would not brand Tolkien a liar for these words.

The obvious reply to this is that there is a category of lies that lack a malicious intent. These lies are often told with good intentions, such as a compliment about a person’s appearance that is not true or when parents tell their children about Santa Claus. As such, it would seem that there are lies that are not malicious—these are often called “white lies.” If intent matters, then this sort of lie would seem rather less bad than the malicious lie; although they do meet a general definition of “lie” which involves making an untrue claim with the intent to deceive. In this case, the deceit is supposed to be a positive one. Naturally, there are those who would argue that such deceits are still wrong, even if the intent is a good one. The matter is also complicated by the fact that there seem to be untrue claims aimed at deceit that intuitively seem morally acceptable. The classic case is, of course, misleading a person who is out to commit murder.

In some cases one person will accuse another of lying because the person disagrees with a claim made by the other person. For example, a person might claim that Obamacare will help Americans and be accused of lying about this by a person who is opposed to Obamacare.

In this sort of context, the accusation of lying seems to rest on three points. The first is that the accuser thinks the claim is untrue and that the person does not actually believe it. The second is that the accuser believes that the accused intends to deceive—that is, he expects people to believe him. The third is that the accuser thinks that the accused has some malicious intent. This might be merely limited to the intent to deceive, but it typically goes beyond this. For example, the proponent of Obamacare might be suspected of employing his alleged deceit to spread socialism and damage businesses. Or it might be that the person is trolling.

So, in order to be justified in accusing a person of lying, it needs to be shown that the person does not really believe his claim, that he intends to deceive and that there is some malicious intent. Arguing against the claim can show that it is untrue, but this would not be sufficient to show that the person is lying—unless one takes a lie to merely be a claim that is not true (so, if someone made a mistake in a math problem and got the wrong answer, he would be a liar). What would be needed would be adequate evidence that the person is insincere in his claim (that is, he believes he is saying the untrue), that he intends to deceive and that there is some malicious intent.

Naturally, effective criticism of a claim does not require showing that the person making the claim is a liar—this is a matter of arguing about the claim. In fact, the truth or falsity of a claim has no connection to the intent of the person making the claim or what he actually believes about it. An accusation of lying, rather, moves from the issue of whether the claim is true or not to a moral dispute about the character of the person making the claim. That is, whether he is a liar or not. It can, of course, be a useful persuasive device to call someone a liar, but it (by itself) does nothing to prove or disprove the claim under dispute.

 


Education & Gainful Employment


English: Table 3 from the August 4, 2010 GAO report. Randomly sampled For-Profit college tuition compared to Public and Private counterparts for similar degrees. (Photo credit: Wikipedia)

Over the years I have written various critical pieces about for-profit schools. As I have emphasized before, I have nothing against the idea of a for-profit school. As such, my criticisms have not been that such schools make money. Rather, I have been critical of the performance of such schools as schools, with their often predatory practices, and the fact that they rely so very heavily on federal funding for their profits. This article is, shockingly enough, also critical of these schools.

Assessment in and of higher education has become the new normal. Some of the assessment standards are set by the federal government, some by the states and some by the schools. At the federal level, one key standard is in the Higher Education Act and it states that career education programs “must prepare students for gainful employment in a recognized occupation.” If a school fails to meet this standard, then it can lose out on federal funds such as Pell Grants and federal loans. Since schools are rather fond of federal dollars, they are rather intent on qualifying under this standard.

One way to qualify is to see to it that students are suitably prepared. Another approach, one taken primarily by the for-profit schools (which rely extremely heavily on federal money for their profits) has been to lobby in order to get the standard set to their liking.  As it now stands, schools are ranked in three categories: passing, probationary, and failing. A passing program is such that its graduates’ annual loan payments are below 8% of their total earnings or below 20% of their discretionary incomes. A program is put on probation when the loan payments are in the 8-12% range of their total earnings or 20-30% of discretionary incomes. A program is failing when the loan payments are more than 12% of their total income or over 30% of their discretionary incomes. Students who do not graduate, which happens more often at for-profit schools than at private and public schools, are not counted in this calculation.

A program is disqualified from receiving federal funds if it fails two out of any three consecutive years or it gets a ranking less than passing for four years in a row. This goes into effect in the 2015-2016 academic year.
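The three-tier rating described above can be sketched as a simple classification rule. Note an assumption here: the text gives the passing conditions as an “or” (either ratio below its threshold), so I read failing as requiring both ratios to exceed their thresholds, with probation as the remainder; the actual regulatory formula may differ in detail.

```python
def rate_program(annual_loan_payment, total_earnings, discretionary_income):
    """Rate a career-education program by graduates' annual loan
    payments as a share of total earnings or discretionary income,
    per the thresholds described above."""
    earnings_ratio = annual_loan_payment / total_earnings
    discretionary_ratio = annual_loan_payment / discretionary_income
    # Passing: either ratio is below its passing threshold.
    if earnings_ratio < 0.08 or discretionary_ratio < 0.20:
        return "passing"
    # Failing: both ratios exceed their failing thresholds (assumption).
    if earnings_ratio > 0.12 and discretionary_ratio > 0.30:
        return "failing"
    # Otherwise the program sits in the probationary middle band.
    return "probation"
```

For example, graduates paying $7,000 a year on $100,000 in earnings (7%) would put their program in the passing category, while $13,000 against the same earnings and $40,000 in discretionary income would fail.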

Interestingly enough, it is a matter of common ideology in America that the for-profit, private sector is inherently superior to the public sector. As with many ideologies, this one falls victim to facts. While the assessment of schools in terms of how well they prepare students for gainful employment does not go into effect until 2015, data is already available (the 2012 data seems to be the latest available). Public higher education, which is routinely bashed in some quarters, is amazingly successful in this regard: 99.72% of the programs were rated as passing, 0.18% were rated as being on probation and 0.09% were ranked as failing. Private nonprofit schools also performed admirably with 95.65% of their programs passing, 3.16% being ranked as being on probation and 1.19% rated as failing. So, “A” level work for these schools. In stark contrast, the for-profit schools had 65.87% of their programs ranked as passing, 21.89% ranked as being on probation and 12.23% evaluated as failing. So, these schools would have a grade of “D” if they were students. It is certainly worth keeping in mind that the standards used are the ones that the private, for-profit school lobby pushed for—it seems likely they would do even worse if the more comprehensive standards favored by the AFT were used.

This data certainly seems to indicate that the for-profit schools are not as good a choice for students and for federal funding as the public and non-profit private schools. After all, using the pragmatic measure of student income relative to debt incurred for education, the public and private non-profits are the clear winners. One easy and obvious explanation for this is, of course, that the for-profit schools make a profit—as such, they typically charge considerably more (as I have discussed in other essays) than comparable public and non-profit private schools. Another explanation (as discussed in other essays) is that such schools generally do a worse job preparing students for careers and with placing students in jobs. So, a higher cost combined with inferior ability to get students into jobs translates into that “D” grade. So much for the inherent superiority of the for-profit private sector.

It might be objected that there are other factors that explain the poor performance of the for-profit schools in a way that makes them look better. For example, perhaps students who enroll in such programs differ significantly from students in public and non-profit private schools and this helps explain the difference in a way that partially absolves the for-profit schools. As another example, perhaps the for-profit schools suffered from bad luck in terms of the programs they offered. Maybe salaries were unusually bad in these jobs or hiring was very weak. These and other factors are well worth considering. After all, to fail to consider alternative explanations would be poor reasoning indeed. If the for-profits can explain away their poor performance in this area in legitimate ways, then perhaps the standards would need to be adjusted to take into account these factors.

It is also worth considering that schools, public and private, do not have control over the economy. Given that short-term (1-4 year) vagaries of the market could result in programs falling into probation or failure by these standards when such programs are actually “good” in the longer term, it would seem that some additional considerations should be brought into play. Naturally, it can be countered that 3-4 years of probation or failure would not really be short term (especially for folks who think in terms of immediate profit) and that such programs would fully merit their rating.

That said, the latest economic meltdown was somewhat long term and the next one (our bubble based economy makes it almost inevitable) could be even worse. As such, it would seem sensible to consider the broader economy when holding programs accountable. After all, even a great program cannot make companies hire nor compel them to pay better wages.

 


Food Waste


“CLEAN YOUR PLATE…THERE’S NO FOOD TO WASTE” – NARA – 516248 (Photo credit: Wikipedia)

Like many Americans my age, I was cajoled by my parents to finish all the food on my plate because people were starving somewhere. When I got a bit older and thought about the matter, I realized that my eating (or not eating) the food on my plate would have no effect on the people starving in some far away part of the world. However, I did internalize two lessons. One was that I should not waste food. The other was that there is always someone starving somewhere.

While food insecurity is a problem in the United States, we Americans waste a great deal of food. It is estimated that about 21% of the food that is harvested and available to be consumed is not consumed. This food includes the unconsumed portions tossed into the trash at restaurants, spoiled tomatoes thrown out by families ($900 million worth), moldy leftovers tossed out when the fridge is cleaned and so on. On average, a family of four wastes about 1,160 pounds of food per year—which is a lot of food.

On the national level, it is estimated that one year of food waste (or loss, if one prefers) uses up 2.5% of the energy consumed in the U.S., about 25% of the fresh water used for agriculture, and about 300 million barrels of oil. The loss, in dollars, is estimated to be $115 billion.
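The per-family and national figures above can be scaled into more intuitive terms. A small sketch follows; the figure of roughly 115 million U.S. households is my own assumption for illustration, not a number from the text:

```python
# Scale the waste figures cited above into per-week and per-household
# terms. The household count is an illustrative assumption.
pounds_per_family_year = 1_160          # waste per family of four, from the text
national_loss_dollars = 115_000_000_000 # estimated annual loss, from the text
us_households = 115_000_000             # rough assumption, not from the text

weekly_waste_lbs = pounds_per_family_year / 52
loss_per_household = national_loss_dollars / us_households

print(f"~{weekly_waste_lbs:.1f} lb of food wasted per family per week")
print(f"~${loss_per_household:,.0f} lost per household per year")
```

On these numbers, a family of four discards a bit over twenty pounds of food a week, and the national loss works out to something on the order of a thousand dollars per household per year.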

The most obvious moral concern is with the waste. Intuitively, throwing away food and wasting it seems to be wrong—especially (as parents used to say) when people are starving. Of course, as I mentioned above, it is quite reasonable to consider whether or not less waste by Americans would translate into more food for other people.

On the one hand, it might be argued that less wasted food would surely make more food available to those in need. After all, there would be more food.

On the other hand, it seems obvious that less waste would not translate into more food for those who are in need. Going back to my story about cleaning my plate, my eating all the food on my plate would certainly not have helped starving people. After all, the food I eat does not help them. Also, if I did not eat the food, then they would not be harmed—they would not get less food because I threw away my Brussels sprouts.

To use another illustration, suppose that Americans conscientiously only bought the exact number of tomatoes that they would eat and wasted none of them. The most likely response is not that the extra tomatoes would be handed out to the hungry. Rather, farmers would grow fewer tomatoes and markets would stock fewer in response to the reduced demand.

For the most part, people go hungry not because Americans are wasting food and thus making it unavailable, but because they cannot afford the food they need. To use a metaphor, it is not that the peasants are starving because the royalty are tossing the food into the trash. It is that the peasants cannot afford the food that is so plentiful that the royalty can toss it away.

It could be countered that less waste would actually influence the affordability of food. Returning to the tomato example, farmers might keep on producing the same volume of tomatoes, but be forced to lower the prices because of lower demand and also to seek new markets.

It can also be countered that as the population of the earth grows, such waste will really matter—that food thrown away by Americans is, in fact, taking food away from people. If food does become increasingly scarce (as some have argued will occur due to changes in climate and population growth), then waste will really matter. This is certainly worth considering.

There is, as mentioned above, the intuition that waste is, well, just wrong. After all, “throwing away” all those resources (energy, water, oil and money) is certainly wasteful. There is, of course, also the obvious practical concern: when people waste food, they are wasting money.

For example, if Sally buys a mega meal and throws half of it in the trash, she would have been better off buying a moderate meal and eating all of it. As another example, Sam is throwing away money if he buys steaks and vegetables, then lets them rot. So, not wasting food would certainly make good economic sense for individuals. It would also make sense for businesses—at least to the degree that they do not profit from the waste.

Interestingly, some businesses do profit from the waste. To be specific, consider the snacks, meats, cheese, beverages and such that are purchased and never consumed. If people did not buy them, this would result in fewer sales, and this would impact the economy all the way from the store to the field. While the exact percentage of food purchased and not consumed is not known, the evidence is that it is significant. So, if people did not overbuy, then the food economy would be reduced by that percentage—resulting in reduced profits and reduced employment. As such, food waste might actually be rather important for the American food economy (much as planned obsolescence is important in the tech fields). And, interestingly enough, the greater the waste, the greater its importance in maintaining the food economy.

If this sort of reasoning is good, then it might be immoral to waste less food—after all, a utilitarian argument could be crafted showing that less waste would create more harm than good (putting supermarket workers and farmers out of work, for example). As such, waste might be good, at least in the context of the existing economic system, which might not be so good.

 

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Torture

John McCain official photo portrait. (Photo credit: Wikipedia)

In December of 2014 the US Senate issued its report on torture. While there has been some criticism of the report, the majority of pundits and politicians have not come out in defense of torture. However, there have been attempts to justify the use of torture and this essay will address some of these arguments.

One criticism of the report is not a defense of torture as such. The talking point is a question, typically of the form “why bring this up now?” The argument lurking behind this point seems to be that since the torture covered in the report occurred years ago, it should not be discussed now. This is similar to another stock remark made in response to old wrongs, namely “get over it.”

This does raise a worthwhile concern, namely the expiration date of moral concern. Or, to use an analogy to law, the matter of the moral statute of limitations on misdeeds. On the face of it, it is reasonable to accept that the passage of time can render a wrong morally irrelevant to today. While an exact line can probably never be drawn, a good rule of thumb is that when the morally significant consequences of the event have attenuated to insignificance, then the moral concern can be justly laid aside. In the case of the torture employed in the war on terror, that seems to be “fresh” enough to still be unexpired.

Interestingly, many of the same folks who insist that torture should not be brought up now still bring up 9/11 to justify the current war on terror. On the face of it, if 9/11 is still morally relevant, then so is the torture it was used to justify. I agree that 9/11 is still morally relevant, and therefore so is the torture.

One of the stock defenses of the use of torture is a semantic one: that the techniques used are not torture. One way to reply is to stick with the legal definitions, such as those in agreements the United States has signed and crimes it has prosecuted—especially the prosecution of German and Japanese soldiers after WWII. Many of the techniques used in the war on terror meet these definitions. As such, it seems clear that as a nation we accept that these acts are, in fact, torture. I will admit that there are gray areas—but we clearly crossed over into the darkness.

Perhaps the best moral defense of torture is a utilitarian one: while torture is harmful, if it produces good consequences that outweigh the harm, then it is morally acceptable. It has been claimed that the torture of prisoners produced critical information that could not have been acquired by other means.

However, the Senate report includes considerable evidence that this is not true—including information from the CIA itself regarding the ineffectiveness of torture as a means of gathering reliable intelligence. As John McCain said, “I know from personal experience that the abuse of prisoners will produce more bad than good intelligence. I know that victims of torture will offer intentionally misleading information if they think their captors will believe it. I know they will say whatever they think their torturers want them to say if they believe it will stop their suffering.”

As such, the utilitarian justification for torture fails on the grounds that torture does not work. It produces harms with no benefits, thus making it evil.

Another stock defense of torture is that the enemy is so bad that we can do anything to them.  No doubt the terrorists tell themselves the same thing when they murder innocent people. This justification is often combined with the utilitarian argument, otherwise it is just a defense of torture on the grounds of retaliation.

This notion is founded on a legitimate moral principle, namely that the actions of one’s enemy can justify actions against that enemy.  To use the easy and obvious example, if someone tries to unjustly kill me, I have a moral right to use lethal force in order to save my life.

However, the badness of one’s enemy is not sufficient to morally justify everything that might be done to that enemy. After all, while self-defense can be morally justified, there are still moral boundaries in regards to what one can do. This is especially important if we wish to claim that we are better than the terrorists. As McCain says, “the use of torture compromises that which most distinguishes us from our enemies, our belief that all people, even captured enemies, possess basic human rights.” He is right about this—if we claim that we are better, we must be better. If we claim that we are good, we must accept moral limits on what we will do. In short, we must not torture.

A final stock argument worth considering is the idea that America’s exceptionalism allows us to do anything, yet remain good. Or, as one pundit on Fox News put it, be “awesome.” The idea that such exceptionalism allows one to do terrible things while remaining righteous is a common one—terrorists typically also believe this about themselves.

This justification is, obviously enough, terrible. After all, being really good and exceptional means that one will not do awful things. That is what it is to be morally exceptional and awesome. The idea that one can be so good that one can be bad is obviously absurd.

I do agree that America is awesome. Part of what makes us awesome is that we (eventually) admit our sins and we take our moral struggles seriously. To the degree that we live up to our fine principles, we are awesome. As Churchill said, “You can always count on Americans to do the right thing—after they’ve tried everything else.”

 


The Slogan-Industrial Complex

University of South Florida Seal (Photo credit: Wikipedia)

Higher education in the United States has been pushed steadily towards the business model. One obvious example of this is the brand merchandising of schools. In 2011, schools licensed their names and logos for a total of $4.6 billion. Inspired by this sort of brand-based profit, schools started trademarking their slogans. Impressively, there are over 10,000 trademarked slogans.

These slogans include “project safety” (University of Texas), “ready to be heard” (Chatham University), “power” (University of North Dakota), “rise above” (University of the Rockies), “students with diabetes” (University of South Florida), “student life” (Washington University in St. Louis) and “resolve” (Lehigh University). Those not familiar with trademark law might be surprised by some of these examples. After all, “student life” seems to be such a common phrase on campuses that it would be insane for a school to be allowed to trademark it. But, one should never let sanity be one’s guide when considering how the law works.

While the rabid trademarking undertaken by schools might be seen as odd but harmless, the main purpose of a trademark is to give the owner an exclusive right to what is trademarked and the ability to sue others for using it. This is, of course, limited to certain contexts. So, for example, if I write about student life at Florida A&M University in a blog, Washington University would (I hope) not be able to sue me. However, in circumstances in which trademark protection applies, lawsuits are possible (and likely). For example, East Carolina University sued Cisco Systems because of Cisco’s use of the phrase “tomorrow begins here.”

One practical and moral concern about universities’ enthusiasm for trademarking is that it further pushes higher education into the realm of business. One foundation for this concern is that universities should be focused on education rather than being focused on business—after all, an institution that does not focus on its core mission tends to do worse at that mission. This would also be morally problematic, assuming that schools should (morally) focus on education.

An easy and obvious reply is that a university can wear many hats: educator, business, “professional in all but name” sport franchise and so on provided that each function is run properly and not operated at the expense of the core mission. Naturally, it could be added that the core mission of the modern university is not education, but business—branding, marketing and making money.

Another reply is that the trademarks protect the university brand and also allow the university to make money by merchandising its slogans and suing people for trademark violations. This money could then be used to support the core mission of the school.

There is, naturally enough, the worry that universities should not be focusing on branding and suing. While this can make them money, it is not what a university should be doing—which takes the conversation back to the questions of the core mission of universities as well as the question about whether schools can wear many hats without becoming jacks of all trades.

A second legal and moral concern is the impact such trademarks have on free speech. On the one hand, United States law is fairly clear about trademarks and the 1st Amendment. The gist is that noncommercial usage is protected by the 1st Amendment and this allows such things as using trademarked material in protests or criticism. So, for example, the 1st Amendment allows me to include the above slogans in this essay. Not surprisingly, commercial usage is subject to the trademark law. So, for example, I could not use the phrase “the power of independent thinking” as a slogan for my blog since that belongs to Wilkes University. In general, this seems reasonable. After all, if I created and trademarked a branding slogan for my blog, then I would certainly not want other people making use of my trademarked slogan. But, of course, I would be fine with people using the slogan when criticizing my blog—that would be acceptable use under freedom of expression.

On the other hand, trademark holders do endeavor to exploit their trademarks and people’s ignorance of the law to their advantage. For example, threats made involving claims of alleged trademark violations are sometimes used as a means of censorship and silencing critics.

The obvious reply is that this is not a problem with trademarks as such. It is, rather, a problem with people misusing the law. There is, of course, the legitimate concern that the interpretation of the law will change and that trademark protection will be allowed to encroach into the freedom of expression.

What might be a somewhat abstract point of concern is the idea that what seem to be stock phrases such as “the first year experience” (owned by University of South Carolina) can be trademarked and thus owned. This diminishes the public property that is language and privatizes it in favor of those with the resources to take over tracts of linguistic space. While the law currently still allows non-commercial use, this also limits the language other schools and businesses can legally use. It also requires that they research all the trademarks before using common phrases if they wish to avoid a lawsuit from a trademark holder.

The counter, which I mentioned above, is that trademarks have a legitimate function. The obvious response is that there is still a reasonable concern about essentially allowing private ownership over language and thus restricting freedom of expression. There is a need to balance the legitimate need to own branding slogans with the legitimate need to allow the use of stock and common phrases in commercial situations. The challenge is to determine the boundary between the two and where a specific phrase or slogan falls.

 


Protests, Peaceful & Otherwise


In response to the nighttime announcement of the Ferguson grand jury’s decision not to indict Officer Wilson, some people attacked the police and damaged property. Some experts have been critical of the decision to make the announcement at night, since the time of day does actually influence how people behave. In general, making such an announcement at night is a bad idea—unless one intends to increase the chances that people will respond badly.

Obviously enough, peacefully protesting is a basic right and in a democratic state the police should not interfere with that right. However, protests do escalate and violence can occur. In the United States it is all too common for peaceful protests to be marred by violence—most commonly damage to businesses and looting.

When considering reports of damage and looting during protests it is reasonable to consider whether or not the damage and looting is being done by actual protestors or by people who are opportunists using the protest as cover or an excuse. An actual protestor is someone whose primary motivation is a moral one—she is there to express her moral condemnation of something she perceives as wrong. Not all people who go to protests are actual protestors—some are there for other reasons, some of which are not morally commendable. Some people, not surprisingly, know that a protest can provide an excellent opportunity to engage in criminal activity—to commit violence, to damage property and to loot. Protests do, sadly, attract such people and often these are people who are not from the area.

Of course, actual protesters can engage in violence and damage property. Perhaps they can even engage in looting (though that almost certainly crosses a moral line). Anger and rage are powerful things, especially righteous anger. A protestor who is motivated by her moral condemnation of a perceived wrong can give in to her anger and do damage to others or their property. When people damage the businesses in their own community, this sort of behavior seems irrational—probably because it is. After all, setting a local gas station on fire is hardly morally justified by the alleged injustice of the grand jury’s verdict in regards to not indicting Officer Wilson for the shooting of Brown. However, anger tends to impede rationality. I, and I assume most people, have seen people angry enough to break their own property.

While I am not a psychologist, I do suspect that people do such damage when they are angry because they cannot actually reach the target of their anger. Alternatively, they might be damaging property to vent their rage in place of harming people. I have seen people do just that. For example, I saw a person hit a metal door frame (and break his hand) rather than hit the person he was mad at. Anger does summon up a need to express itself and this can easily take the form of property damage.

When a protest becomes destructive (or those using it for cover start destroying things), the police do have a legitimate role to play. While protests are intended to draw attention and often aim to do so by creating a disruption of the normal course of events, a state of protest does not grant protestors a carte blanche right to interfere with the legitimate rights of others. As such, the police have a legitimate right to prevent protestors from violating the rights of others and this can correctly involve the use of force.

That said, the role of rage needs to be considered. When property is destroyed during protests, some people immediately condemn the destruction and wonder why people are destroying their own neighborhoods. In some cases, as noted above, the people doing the damage might not be from the neighborhood at all and might be there to destroy rather than to protest. If such people can be identified, they should be dealt with as the criminals they are. What becomes somewhat more morally problematic are people who are driven to such destruction by moral rage—that is, they have been pushed to a point at which they believe they must use violence and destruction to express their moral condemnation.

When looked at from the cool and calm perspective of distance, such behavior seems irrational and unwarranted. And, I think, it usually is. However, it is well worth thinking of something that has caused the fire of righteous anger to ignite your soul. Think of that and consider how you might respond if you believed that you had been systematically denied justice. Over. And over. Again.

 


Leadership & Responsibility

Official image of Secretary of Veterans Affairs Eric Shinseki (Photo credit: Wikipedia)

The recent resignation of Eric Shinseki from his position as head of the Department of Veterans Affairs raised, once again, the issue of the responsibilities of a leader. While I will not address the specific case of Shinseki, I will use this opportunity to discuss leadership and responsibility in general terms.

Not surprisingly, people often assign responsibility based on ideology. For example, Democrats would be more inclined to regard a Republican leader as being fully responsible for his subordinates while being more forgiving of fellow Democrats. However, judging responsibility based on political ideology is obviously a poor method of assessment. What is needed is, obviously enough, some general principles that can be used to assess the responsibility of leaders in a consistent manner.

Interestingly (or boringly) enough, I usually approach the matter of leadership and responsibility using an analogy to the problem of evil. Oversimplified quite a bit, the problem of evil is the problem of reconciling God being all good, all knowing and all powerful with the existence of evil in the world. If God is all good, then He would tolerate no evil. If God is all powerful, He could prevent all evil. And if God is all knowing, then He would not be ignorant of any evil. Given God’s absolute perfection, He thus has absolute responsibility as a leader: He knows what every subordinate is doing, knows whether it is good or evil and has the power to prevent or cause any behavior. As such, when a subordinate does evil, God has absolute accountability. After all, the responsibility of a leader is a function of what he can know and the extent of his power.

In stark contrast, a human leader (no matter how awesome) falls rather short of God. Such leaders are clearly not perfectly good and they are obviously not all knowing or all powerful. These imperfections thus lower the responsibility of the leader.

In the case of goodness, no human can be expected to be morally perfect. As such, failures of leadership due to moral imperfection can be excusable—within limits. The challenge is, of course, sorting out the extent to which imperfect humans can legitimately be held morally accountable and to what extent our unavoidable moral imperfections provide a legitimate excuse. These standards should be applied consistently to leaders so as to allow for the highest possible degree of objectivity.

In the case of knowledge, no human can be expected to be omniscient—we have extreme limits on our knowledge. The practical challenge is sorting out what a leader can reasonably be expected to know and the responsibility of the leader should be proportional to that extent of knowledge. This is complicated a bit by the fact that there are at least two factors here, namely the capacity to know and what the leader is obligated to know. Obligations to know should not exceed the human capacity to know, but the capacity to know can often exceed the obligation to know. For example, the President could presumably have everyone spied upon (which is apparently what he did do) and thus could, in theory, know a great deal about his subordinates. However, this would seem to exceed what the President is obligated to know (as President) and probably exceeds what he should know.

Obviously enough, what a leader can know and what she is obligated to know will vary greatly based on the leader’s position and responsibilities. For example, as the facilitator of the philosophy & religion unit at my university, my obligation to know about my colleagues is very limited as is my right to know about them. While I have an obligation to know what courses they are teaching, I do not have an obligation or a right to know about their personal lives or whether they are doing their work properly on outside committees. So, if a faculty member skipped out on committee meetings, I would not be responsible for this—it is not something I am obligated to know about.

As another example, the chair of the department has greater obligations and rights in this regard. He has the right and obligation to know if they are teaching their classes, doing their assigned work and so on. Thus, when assessing the responsibility of a leader, sorting out what the leader could know and what she was obligated to know are rather important matters.

In regards to power (taken in a general sense), even the most despotic dictator’s powers are still finite. As such, it is reasonable to consider the extent to which a leader can utilize her authority or use up her power to compel subordinates to obey. As with knowledge, responsibility is proportional to power. After all, if a leader lacks the power (or authority) to compel obedience in regards to certain matters, then the leader cannot be accountable for failing to make the subordinates do or not do certain actions. Using myself as an example, my facilitator position has no power: I cannot demote, fire, reprimand or even put a mean letter into a person’s permanent record. The extent of my influence is limited to my ability to persuade—with no rewards or punishments to offer. As such, my responsibility for the actions of my colleagues is extremely limited.

There are, however, legitimate concerns about the ability of a leader to make people behave correctly, and this raises the question of the degree to which a leader is responsible for not being persuasive enough or not using enough power to make people behave. That is, the concern is whether bad behavior that persists despite applied authority or power is the fault of the leader or the fault of the resistor. This is similar to the concern about the extent to which responsibility for failing to learn falls upon the teacher and the extent to which it falls on the student. Obviously, even the best teacher cannot reach all students and it would seem reasonable to believe that even the best leader cannot make everyone do what they should be doing.

Thus, when assessing alleged failures of leadership it is important to determine where the failures lie (morality, knowledge or power) and the extent to which the leader has failed. Obviously, principled standards should be applied consistently—though it can be sorely tempting to damn the other guy while forgiving the offenses of one’s own guy.

 


Talking Points & Climate Change

Animated global map of monthly long term mean surface air temperature (Mollweide projection). (Photo credit: Wikipedia)

While science and philosophy are supposed to be about determining the nature of reality, politics is often aimed at creating perceptions that are alleged to be reality. This is why it is generally wiser to accept claims supported by science and reason over claims “supported” by ideology and interest.

The matter of climate change is a matter of both science (since the climate is an objective feature of reality) and politics (since perception of reality can be shaped by rhetoric and ideology). Ideally, the facts of climate change would be left to science and sorting out how to address it via policy would fall, in part, to the politicians. Unfortunately, politicians and other non-scientists have taken it on themselves to make claims about the science, usually in the form of unsupported talking points.

On the conservative side, there has been a general shifting in the talking points. Originally, there was one main talking point: there is no climate change and the scientists are wrong. This point was often supported by alleging that the scientists were motivated by ideology to lie about the climate. In contrast, those whose profits could be impacted if climate change were real were taken as objective sources.

In the face of mounting evidence and shifting public opinion, this talking point became the claim that while climate change is occurring, it is not caused by humans. This then shifted to the claim that climate change is caused by humans, but there is nothing we can (or should) do now.

In response to the latest study, certain Republicans have embraced three talking points. These points do seem to concede that climate change is occurring and that humans are responsible. These points do have a foundation that can be regarded as rational and each will be considered in turn.

One talking point is that the scientists are exaggerating the impact of climate change and that it will not be as bad as they claim. This does rest on a reasonable concern about any prediction: how accurate is the prediction? In the case of a scientific prediction based on data and models, the reasonable inquiry would focus on the accuracy of the data and how well the models serve as models of the actual world. To use an analogy, the reliability of predictions about the impact of a crash on a vehicle based on a computer model would hinge on the accuracy of the data and the model and both could be reasonable points of inquiry.

Since the climate scientists have the data and models used to make the predictions, to properly dispute the predictions would require showing problems with either the data or the models (or both). Simply saying they are wrong would not suffice—what is needed is clear evidence that the data or models (or both) are defective in ways that would show the predictions are excessive in terms of the predicted impact.

One indirect way to do this would be to find clear evidence that the scientists are intentionally exaggerating. However, if the scientists are exaggerating, then this would be provable by examining the data and plugging it into an accurate model. That is, the scientific method should be able to be employed to show the scientists are wrong.

In some cases people attempt to argue that the scientists are exaggerating because of some nefarious motivation—a liberal agenda, a hatred of oil companies, a desire for fame or some other wickedness. However, even if it could be shown that the scientists have a nefarious motivation, it does not follow that the predictions are wrong. After all, to dismiss a claim because of an alleged defect in the person making the claim is a fallacy. Being suspicious because of a possible nefarious motive can be reasonable, though. So, for example, the fact that the fossil fuel companies have a great deal at stake here does not prove that their claims about climate change are wrong. But the fact that they have considerable incentive to deny certain claims does provide grounds for suspicion regarding their objectivity (and hence credibility).  Naturally, if one is willing to suspect that there is a global conspiracy of scientists, then one should surely be willing to consider that fossil fuel companies and their fellows might be influenced by their financial interests.

One could, of course, hold that the scientists are exaggerating for noble reasons—that is, they are claiming it is worse than it will be in order to get people to take action. To use an analogy, parents sometimes exaggerate the possible harms of something to try to persuade their children not to try it. While this is nicer than ascribing nefarious motives to scientists, it is still not evidence against their claims. Also, even if the scientists are exaggerating, there is still the question about how bad things really would be—they might still be quite bad.

Naturally, if an objective and properly conducted study can be presented that shows the predictions are in error, then that is the study that I would accept. However, I am still waiting for such a study.

The second talking point is that the laws being proposed will not solve the problems. Interestingly, this certainly seems to concede that climate change will cause problems. This point does have a reasonable foundation in that it would be unreasonable to pass laws aimed at climate change that are ineffective in addressing the problems.

While crafting the laws is a matter of politics, sorting out whether such proposals would be effective does seem to fall in the domain of science. For example, if a law proposes to cut carbon emissions, there is a legitimate question as to whether or not that would have a meaningful impact on the problem of climate change. Showing this would require having data, models and so on—merely saying that the laws will not work is obviously not enough.

Now, if the laws will not work, then the people who confidently make that claim should be equally confident in providing evidence for their claim. It seems reasonable to expect that such evidence be provided and that it be suitable in nature (that is, based in properly gathered data, examined by impartial scientists and so on).

The third talking point is that the proposals to address climate change will wreck the American economy. As with the other points, this does have a rational basis—after all, it is sensible to consider the impact on the economy.

One way to approach this is on utilitarian grounds: that we can accept X environmental harms (such as coastal flooding) in return for Y (jobs and profits generated by fossil fuels). Assuming that one is a utilitarian of the proper sort and that one accepts this value calculation, then one can accept that enduring such harms could be worth the advantages. However, it is well worth noting that as usual, the costs will seem to fall heavily on those who are not profiting. For example, the flooding of Miami and New York will not have a huge impact on fossil fuel company profits (although they will lose some customers).

Making the decisions about this should involve openly considering the nature of the costs and benefits as well as who will be hurt and who will benefit. Vague claims about damaging the economy do not allow us to make a proper moral and practical assessment of whether the approach will be correct or not. It might turn out that staying the course is the better option—but this needs to be determined with an open and honest assessment. However, there is a long history of such assessments not occurring—so I am not optimistic.

It is also worth considering that addressing climate change could be good for the economy. After all, preparing coastal towns and cities for the (allegedly) rising waters could be a huge and profitable industry creating many jobs. Developing alternative energy sources could also be profitable as could developing new crops able to handle the new conditions. There could be a whole new economy created, perhaps one that might rival more traditional economic sectors and newer ones, such as the internet economy. If companies with well-funded armies of lobbyists got into the climate change countering business, I suspect that a different tune would be playing.

To close, the three talking points do raise questions that need to be answered:

  • Is climate change going to be as bad as it is claimed?
  • What laws (if any) could effectively and properly address climate change?
  • What would be the cost of addressing climate change and who would bear the cost?
