Incentives, Taxes, & Inequality

One stock argument against increasing taxes on the rich in order to address income inequality is a disincentive argument. The gist of the argument is that if taxes are raised on the rich, then they will lose the incentive to invest, innovate, create jobs and so on. Most importantly, with regard to addressing the income inequality problem, the consequences of this disincentive will have the greatest impact on those who are not rich. For example, it has been claimed that the job creators will create fewer jobs and pay lower wages if they are taxed more to address income inequality. As such, the tax increase will be both harmful and self-defeating: the less rich will be no better off than they were before (and perhaps even worse off). If so, there would seem to be good utilitarian moral grounds for not increasing taxes on the rich.

Naturally, there is the question of whether this disincentive effect would be warranted. If the rich simply retaliated from spite, then the moral argument would fall apart: while there would be negative consequences to such a tax increase, these consequences would be harms intentionally inflicted. As such, not increasing taxes out of fear of retaliation would be morally equivalent to paying protection money so that criminals elect not to break things in one's business or home.

If, however, the rich act because the tax increase is not fair, then the ethics of the situation would be different. To use an obvious analogy, if wealthy customers at a restaurant were forced by the management to pay some of the bills of the less wealthy customers, it would be hard to fault them for leaving smaller tips on the table. While the matter of what counts as a fair tax is rather controversial, it is certainly easy enough to accept that an unfair increase would be unfair by definition. One approach would be to define unfairness in terms of the taxes cutting too much into what the person is entitled to by dint of her efforts, ability and productivity relative to what she owes to the country. This seems reasonable in that it provides considerable room for argumentation and does not beg any obvious questions (after all, the amount one owes one's country could be as low as nothing).

Interestingly, the fairness argument would also apply to workers with regard to their salaries. When a worker produces value, the employer pays the worker some of that value and keeps some of it. What the employer keeps can be seen as analogous to the tax imposed by the state on the rich person. As with the taxes on the rich person, there is the general question of what it is fair to take from workers. Bringing in the disincentive argument, if it works to justify imposing only a fair tax on the rich, it should also do the same for the less rich. That is, those who argue against raising taxes on the rich to address income inequality by using the disincentive argument should also accept that the less rich should be paid in accord with the same principles used to judge how much income should be taken from the rich.

The obvious counter to this approach is to endeavor to break the analogy between the two situations: this would involve showing that the rich differ from the less rich in relevant ways or that taking income by taxes is relevantly different from taking money from employees. The challenge is, of course, to show that the differences really are relevant.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Incentive & Inequality

One of the stock arguments used to justify income inequality is the incentive argument. The gist is that income inequality is necessary as a motivating factor—crudely put, if people could not get (very) rich, then they would not have the incentive to do such things as work hard, innovate, invent and so on. The argument requires the assumption that hard work, innovation, inventing and so on are good; an assumption that has a certain general plausibility.

This argument does have considerable appeal. In terms of psychology, it is reasonable to make the descriptive claim that people are primarily motivated by the possibility of gain (and also glory). This view was held by Thomas Hobbes and numerous other thinkers on the grounds that it does match the observed behavior of many (but not all) people. If this view is correct, then achieving the goods of hard work, innovation, invention and so on would require income inequality.

There is, of course, the counter that some people seem to be strongly motivated by factors other than financial gain. Some are motivated by altruism, by a desire to improve, by curiosity, by the love of invention, by the desire to create things of beauty, to solve problems and so many other motives that do not depend on income. These sorts of motivations do suggest that income inequality is not necessary as a motivating factor, at least for some people.

Since this is a matter of fact regarding human psychology, it is something that can (in theory) be settled by the right sort of empirical research. It is well worth noting that even if income inequality is necessary as a motivating factor, there remain many other concerns, such as the question of how much income inequality is necessary (and also how much is morally acceptable).

Interestingly, the incentive argument is something of a two-edged sword: while it can be used to justify income inequality, it can also be used to argue against the sort of economic inequality that exists in the United States and almost all other countries. The argument is as follows.

While worker productivity has increased significantly in the United States (and other countries), income for workers has not kept pace with this productivity. This is a change from the past, when workers' income rose in closer proportion to increases in productivity. This explains, in part, why CEO (and upper management in general) salaries have seen a significant increase relative to the income of workers: the increased productivity of the workers generates more income for the upper management than it does for the workers doing the work.

If it is assumed that gain is necessary for motivation and that inequality is justified by the results (working harder, innovating, producing and so on), then the workers should receive a greater proportion of the returns on their productivity. After all, if high executive compensation is justified on the grounds of its motivation in regards to productivity, innovation and so on, then the same principle would also apply to the workers. They, too, should receive compensation proportional to their productivity, innovation and so on. If they do not, then the incentive argument would entail that they would not have the incentive to be as productive, etc.

It could, of course, be argued that top management earns its higher income by being primarily responsible for the increase in worker productivity; that is, the increase in worker productivity is not because of the workers but because of the leadership, which is motivated by its own prospect of gain. If this is the case, then the disparity would be fully justified by the incentive argument: the workers are more productive because the CEO is motivated to make them more productive so she can have even greater income.

However, if the increased productivity is due mainly to the workers, then this seems to counter the incentive argument: if workers are more productive than before with less relative compensation, then there does not seem to be that alleged critical connection between incentive and productivity required by the incentive argument. That is, if workers will increase productivity while receiving less compensation relative to their productivity, then the same would presumably hold for the top executives. While there are many other ways to warrant extreme income inequality, the incentive argument does seem to have a problem.

One possible response is to argue for important differences between the executives and workers such that executives need the incentive provided by the extreme inequality and workers are motivated sufficiently by other factors (like being able to buy food). It could also be contended that the workers are motivated by the extreme inequality as well—they would not be as productive if they did not have the (almost certainly false) belief that they will become rich.

 


David Bowie

David Bowie, the artist and actor, died on January 11, 2016. While I would not categorize myself as a fan of any artist, I do admit that I felt some sadness when I learned of his death. I must also confess that I listened to several Bowie songs today.

While Bowie’s art is clearly worthy of philosophical examination, I will instead focus on the philosophical subject of feeling for the death of a celebrity. I have written briefly about this in the past, on the occasion of the death of Michael Jackson. When Jackson died, many of his devoted fans were devastated by his death. The death of David Bowie has also caused a worldwide response, albeit of a somewhat different character.

People, obviously enough, simply feel what they do. However, there is still the question of whether the feeling is appropriate or not. That is, whether it is morally virtuous to feel in such a way and to such a degree. This view is, of course, taken from Aristotle: virtue involves having the right sort of feeling, in the right way, to the right degree, towards the right person, and so on through all the various factors considered by Aristotle.

In the case of the death of a celebrity, one (perhaps cynical) approach is to contend that overly strong emotional responses are not virtuous. Part of the reason is that virtue theorists generally endorse the view that the right way to feel is the mean between excess and deficiency. Another part of the reason is that the response should be in the right way towards the right person.

In the case of the death of a celebrity, it could be contended that a strong reaction, however sincere, is not morally appropriate. This assumes that the person responding lacks a two-way relationship with the celebrity. That is, that the person is not a relative or friend of the celebrity. In that case, the proper response would not be a matter of reacting to the death of a celebrity, but the death of a relative or friend. As such, what would be appropriate for David Bowie’s friends and relatives to feel is different from what would be appropriate for his fans to feel.

It could be contended that fans (who are not friends and relatives) do not have a meaningful connection with a celebrity as a person (a reciprocated relationship) and, as such, strong feelings upon the death of the celebrity would not be appropriate. From the standpoint of the fan, the celebrity is analogous to a fictional character in a book or movie—the fan observes the celebrity, but there is no reciprocity or true interaction. As such, to be unduly impacted by the death of a celebrity would not be a proper response—it would be similar to being unduly impacted by the death of a character in a movie.

One obvious response is that a celebrity is a real person and hence the death of a celebrity is real and not like the death of a fictional character: David Bowie is really dead. One cynical counter is that many thousands of real people have died today, people with whom the vast majority of the rest of us have no more of a personal relationship than we had with David Bowie. As such, the real death of a celebrity should warrant no more emotional response than the death of anyone we do not know personally. It is, of course, proper to feel some sadness upon hearing of the death of a person (who did not merit death). However, feeling each death strongly would destroy us, which is no doubt why we feel so little regarding the deaths of non-celebrities who are not connected to us.

Another option, which would require considerable development, is to argue that there can be proper emotional responses to the deaths of fictional characters—to be sad, for example, at the passing of Romeo and Juliet. This is, of course, exactly the sort of thing that Plato warned us about in the Republic.

A better reply is that a celebrity can have a meaningful impact on a person’s life, even when there is no actual personal interaction. In the case of David Bowie, people have been strongly affected by his music (and his acting) and this has played an important role in their lives. While a person might have never met Bowie, that person can be grateful for what Bowie created and his influence. As such, a person can justly and properly feel sadness at the death of a person they do not really know. That said, it could be contended that people do get to know an artist through the works. To use an analogy, it is similar to how one can know a long dead person through her writings (or writings about her). For example, one might develop a liking for Socrates by reading the Platonic dialogues and feel justly saddened by his death in the Apology. As such, people can feel justly sad about the death of a person they never met.

 


Occupying & Protesting

Ammon Bundy and fellow “militia” members occupied the Malheur National Wildlife Refuge in Oregon as a protest of federal land use policies. Ammon Bundy is the son of Cliven Bundy—the rancher who was involved in another armed stand-off with the federal government. Cliven Bundy still owes the American taxpayers over $1 million for grazing his cattle on public land—the sort of sponging off the public that would normally enrage conservatives. While that itself is an interesting issue, my focus will be on discussing the ethics of protest through non-violent armed occupation.

Before getting to the main issue, I will anticipate some concerns about the discussion. First, I will not be addressing the merits of the Bundy protest. Bundy purports to be protesting against the tyranny of the federal government in regards to its land-use policies. Some critics have pointed out that Bundy has benefitted from the federal government, something that seems a bit reminiscent of the infamous cry of “keep your government hands off my Medicare.” While the merit of a specific protest is certainly relevant to the moral status of the protest, my focus is on the general subject of occupation as a means of protest.

Second, I will not be addressing the criticism that if the federal land had been non-violently seized by Muslims protesting Donald Trump or Black Lives Matter activists protesting police treatment of blacks, then the response would have been very different. While the subject of race and protest is important, it is not my focus here. I now turn to the matter of protesting via non-violent armed occupation.

The use of illegal occupation is well established as a means of protest in the United States and was used during the civil rights movement. But, of course, an appeal to tradition is a fallacy—the mere fact that something is well-established does not entail that it is justified. As such, an argument is needed to morally justify occupation as a means of protest.

One argument for occupation as a means of protest is that protestors do not give up their rights simply because they are engaged in a protest. Assuming that they wish to engage in their protest where they would normally have the right to be, then it would seem to follow that they should be allowed to protest there.

 

One obvious reply to this argument is that people do not automatically have the right to engage in protest in all places they have a right to visit. For example, a public library is open to the public, but it does not follow that people thus have a right to occupy a public library and interfere with its operation. This is because the act of protest would violate the rights of others in a way that would seem to warrant not allowing the protest.

People also protest in areas that are not normally open to the public—or whose use by the public is restricted. This would include privately owned areas as well as public areas that have restrictions. In the case of the Bundy protest, public facilities are being occupied rather than private facilities. However, Bundy and his fellows are certainly using the area in a way that would normally not be allowed—people cannot, in the normal course of things, just take up residence in public buildings. This can also be regarded as a conflict of rights—the right of protest versus the right of private ownership or public use.

These replies can, of course, be overcome by showing that the protest does more good than harm or by showing that the right to protest outweighs the rights of others to use the area that is occupied. After all, to forbid protests simply because they might inconvenience or annoy people would be absurd. However, to accept protests regardless of the imposition on others would also be absurd. Being a protestor does not grant a person special rights to violate the rights of others, so a protestor who engages in such behavior would be acting wrongly and the protest would thus be morally wrong. After all, if an appeal to rights justifies the right to protest, then that same appeal provides a clear foundation for accepting the rights of those who would be imposed upon by the protest. If the protestor who is protesting tyranny becomes a tyrant to others, then the protest certainly loses its moral foundation.

This provides the theoretical framework for assessing whether the Bundy protest is morally acceptable or not: it is a matter of weighing the merit of the protest against the harm done to the rights of other citizens (especially those in the surrounding community).

The above assumes a non-violent occupation of the sort that can be classified as classic civil disobedience of the sort discussed by Thoreau. That is, non-violently breaking the rules (or law) in an act of disobedience intended to bring about change. This approach was also adopted by Gandhi and Dr. King. Bundy has added a new factor—while the occupation has (as of this writing) been peaceful, the “militia” on the site is well armed. It has been claimed that the weapons are for self-defense, which indicates that the “militia” is willing to escalate from non-violent (albeit armed) to violent occupation in response to the alleged tyranny of the federal government. This leads to the matter of the ethics of armed resistance as a means of protest.

Modern political philosophy does provide a justification of such resistance. John Locke, for example, emphasized the moral responsibilities of the state in regards to the good of the people. That is, he does not simply advocate obedience to whatever the laws happen to be, but requires that the laws and the leaders prove worthy of obedience. Laws or leaders that are tyrannical are not to be obeyed, but are to be defied and justly so. He provides the following definition of “tyranny”: “Tyranny is the exercise of power beyond right, which nobody can have a right to.  And this is making use of the power any one has in his hands, not for the good of those who are under it, but for his own private separate advantage.” When the state is acting in a tyrannical manner, it can be justly resisted—at least on Locke’s view. As such, Bundy does have a clear theoretical justification for armed resistance. However, for this justification to be actual, it would need to be shown that federal land use policies are tyrannical to a degree that warrants the use of violence as a means of resistance.

Consistency does, of course, require that the framework be applied to all relevantly similar cases of protests—be they non-violent occupations or armed resistance.

 


Utility & The Martian

I recently got around to watching The Martian, a science fiction film about the effort to rescue an astronaut from Mars. Matt Damon, who is often rescued in movies, plays “astrobotanist” Mark Watney. The discussion that follows contains some spoilers, so those who have yet to see the film might wish to stop reading now. Those who have seen the film might also wish to stop reading, albeit for different reasons.

At the start of the movie Watney is abandoned on Mars after the rest of his team believes he died during the evacuation of the expedition. The rest of the movie details his efforts at survival (including potato farming in space poop) and the efforts of NASA and the Chinese space agency to save him.

After learning that Watney is not dead, NASA attempts to send a probe loaded with food to Mars. The launch fails, strewing rocket chunks and incinerated food over a large area. The next attempt involves resupplying the returning main spaceship, the Hermes, using a Chinese rocket and sending the Hermes on a return trip to pick up Watney. This greatly extends the crew's mission time. Using a ship that NASA had already landed on Mars for a future mission, Watney blasts up into space and is dramatically rescued.

While this situation is science fiction, it does address a real moral concern about weighing the costs and risks of saving a life. While launch costs are probably cheaper in the fictional future of the movie, the lost resupply rocket and the successful Chinese resupply rocket presumably cost millions of dollars. The cached rocket Watney used was also presumably fairly expensive. There is also the risk undertaken by the crew of the Hermes.

Looked at from a utilitarian standpoint, a case can be made that the rescue was morally wrong. The argument for this is fairly straightforward: for the "generic" utilitarian, the right action is the one that generates the greatest utility for the beings that are morally relevant. While Watney is certainly morally relevant, the fictional future of the film is presumably a world that is still very similar to this world. As such, there are presumably still millions of people living in poverty, millions who need health care, and so on. That is, there are presumably millions of people who are at risk of dying and some of them could be saved by the expenditure of millions (or even billions) of dollars in resources.

Expending so many resources to save one person, Watney, would seem to be morally wrong: those resources could have been used to save many more people on earth and would thus have greater utility. As such, the right thing to do would have been to let Watney die—at least on utilitarian grounds.

There are, of course, many ways this argument could be countered on utilitarian grounds. One approach begins with how important Watney’s rescue became to the people of earth—the movie shows vast crowds who are very concerned about Watney. Letting Watney die would presumably make these people sad and angry, thus generating considerable negative consequences. This, of course, rests on the psychological difference between abstract statistics about people dying (such as many people dying due to lacking proper medical care) and the possible death of someone who has been made into a celebrity. As such, the emotional investment of the crowds could be taken as imbuing Watney with far greater moral significance relative to the many who could have been saved from death with the same monetary expenditure.

One obvious problem with this sort of view is that it makes moral worth dependent on fame and the feelings of others rather than on qualities intrinsic to the person. But, it could be replied, fame and the feelings of others do matter—at least when making a utilitarian calculation about consequences.

A second approach is to focus on the broader consequences: leaving Watney to die on Mars could be terribly damaging to the future of manned space exploration and humanity’s expansion into space. As such, while Watney himself is but a single person with only the moral value of one life, the consequences of not saving him would outweigh the consequences of not saving many others on earth. That is, Watney is not especially morally important as a person, but in terms of his greater role he has great significance. This would morally justify sacrificing the many (by not saving them) to save the one—as an investment in future returns. This does raise various concerns about weighing actual people against future consequences—but these are not unique to this situation.

There is also the meta-concern about the fact that Watney is played by Matt Damon—some have contended that this would justify leaving Watney to die on Mars. But, I will leave this to the film critics to settle.

(Apologies for falling behind on the blog; this was due to the holidays and surgery on my hand.)


Against accommodationism: How science undermines religion

Faith versus Fact
There is currently a fashion for religion/science accommodationism, the idea that there’s room for religious faith within a scientifically informed understanding of the world.

Accommodationism of this kind gains endorsement even from official science organizations such as, in the United States, the National Academy of Sciences and the American Association for the Advancement of Science. But how well does it withstand scrutiny?

Not too well, according to a new book by distinguished biologist Jerry A. Coyne.

Gould’s magisteria

The most famous, or notorious, rationale for accommodationism was provided by the celebrity palaeontologist Stephen Jay Gould in his 1999 book Rocks of Ages. Gould argues that religion and science possess separate and non-overlapping “magisteria”, or domains of teaching authority, and so they can never come into conflict unless one or the other oversteps its domain’s boundaries.

If we accept the principle of Non-Overlapping Magisteria (NOMA), the magisterium of science relates to “the factual construction of nature”. By contrast, religion has teaching authority in respect of “ultimate meaning and moral value” or “moral issues about the value and meaning of life”.

On this account, religion and science do not overlap, and religion is invulnerable to scientific criticism. Importantly, however, this is because Gould is ruling out many religious claims as being illegitimate from the outset even as religious doctrine. Thus, he does not attack the fundamentalist Christian belief in a young earth merely on the basis that it is incorrect in the light of established scientific knowledge (although it clearly is!). He claims, though with little real argument, that it is illegitimate in principle to hold religious beliefs about matters of empirical fact concerning the space-time world: these simply fall outside the teaching authority of religion.

I hope it’s clear that Gould’s manifesto makes an extraordinarily strong claim about religion’s limited role. Certainly, most actual religions have implicitly disagreed.

The category of “religion” has been defined and explained in numerous ways by philosophers, anthropologists, sociologists, and others with an academic or practical interest. There is much controversy and disagreement. All the same, we can observe that religions have typically been somewhat encyclopedic, or comprehensive, explanatory systems.

Religions usually come complete with ritual observances and standards of conduct, but they are more than mere systems of ritual and morality. They typically make sense of human experience in terms of a transcendent dimension to human life and well-being. Religions relate these to supernatural beings, forces, and the like. But religions also make claims about humanity’s place – usually a strikingly exceptional and significant one – in the space-time universe.

It would be naïve or even dishonest to imagine that this somehow lies outside of religion’s historical role. While Gould wants to avoid conflict, he creates a new source for it, since the principle of NOMA is itself contrary to the teachings of most historical religions. At any rate, leaving aside any other, or more detailed, criticisms of the NOMA principle, there is ample opportunity for religion(s) to overlap with science and come into conflict with it.

Coyne on religion and science

The genuine conflict between religion and science is the theme of Jerry Coyne’s Faith versus Fact: Why Science and Religion are Incompatible (Viking, 2015). This book’s appearance was long anticipated; it’s a publishing event that prompts reflection.

In pushing back against accommodationism, Coyne portrays religion and science as “engaged in a kind of war: a war for understanding, a war about whether we should have good reasons for what we accept as true.” Note, however, that he is concerned with theistic religions that include a personal God who is involved in history. (He is not, for example, dealing with Confucianism, pantheism or austere forms of philosophical deism that postulate a distant, non-interfering God.)

Accommodationism is fashionable, but that has less to do with its intellectual merits than with widespread solicitude toward religion. There are, furthermore, reasons why scientists in the USA (in particular) find it politically expedient to avoid endorsing any “conflict model” of the relationship between religion and science. Even if they are not religious themselves, many scientists welcome the NOMA principle as a tolerable compromise.

Some accommodationists argue for one or another very weak thesis: for example, that this or that finding of science (or perhaps our scientific knowledge base as a whole) does not logically rule out the existence of God (or the truth of specific doctrines such as Jesus of Nazareth’s resurrection from the dead). For example, it is logically possible that current evolutionary theory and a traditional kind of monotheism are both true.

But even if we accept such abstract theses, where does it get us? After all, the following may both be true:

1. There is no strict logical inconsistency between the essentials of current evolutionary theory and the existence of a traditional sort of Creator-God.

AND

2. Properly understood, current evolutionary theory nonetheless tends to make Christianity as a whole less plausible to a reasonable person.

If 1. and 2. are both true, it’s seriously misleading to talk about religion (specifically Christianity) and science as simply “compatible”, as if science – evolutionary theory in this example – has no rational tendency at all to produce religious doubt. In fact, the cumulative effect of modern science (not least, but not solely, evolutionary theory) has been to make religion far less plausible to well-informed people who employ reasonable standards of evidence.

For his part, Coyne makes clear that he is not talking about a strict logical inconsistency. Rather, incompatibility arises from the radically different methods used by science and religion to seek knowledge and assess truth claims. As a result, purported knowledge obtained from distinctively religious sources (holy books, church traditions, and so on) ends up being at odds with knowledge grounded in science.

Religious doctrines change, of course, as they are subjected over time to various pressures. Faith versus Fact includes a useful account of how they are often altered for reasons of mere expediency. One striking example is the decision by the Mormon church (as recently as the 1970s) to admit blacks into its priesthood. This was rationalised as a new revelation from God, which raises an obvious question as to why God didn't know from the start (and convey to his worshippers at an early time) that racial discrimination in the priesthood was wrong.

It is, of course, true that a system of religious beliefs can be modified in response to scientific discoveries. In principle, therefore, any direct logical contradictions between a specified religion and the discoveries of science can be removed as they arise and are identified. As I’ve elaborated elsewhere (e.g., in Freedom of Religion and the Secular State (2012)), religions have seemingly endless resources to avoid outright falsification. In the extreme, almost all of a religion’s stories and doctrines could gradually be reinterpreted as metaphors, moral exhortations, resonant but non-literal cultural myths, and the like, leaving nothing to contradict any facts uncovered by science.

In practice, though, there are usually problems when a particular religion adjusts. Depending on the circumstances, a process of theological adjustment can meet with internal resistance, splintering and mutual anathemas. It can lead to disillusionment and bitterness among the faithful. The theological system as a whole may eventually come to look very different from its original form; it may lose its original integrity and much of what once made it attractive.

All forms of Christianity – Catholic, Protestant, and otherwise – have had to respond to these practical problems when confronted by science and modernity.

Coyne emphasizes, I think correctly, that the all-too-common refusal by religious thinkers to accept anything as undercutting their claims has a downside for believability. To a neutral outsider, or even to an insider who is susceptible to theological doubts, persistent tactics to avoid falsification will appear suspiciously ad hoc.

To an outsider, or to anyone with doubts, those tactics will suggest that religious thinkers are not engaged in an honest search for truth. Rather, they are preserving their favoured belief systems through dogmatism and contrivance.

How science subverted religion

In principle, as Coyne also points out, the important differences in methodology between religion and science might (in a sense) not have mattered. That is, it could have turned out that the methods of religion, or at least those of the true religion, gave the same results as science. Why didn’t they?

Let’s explore this further. The following few paragraphs are my analysis, drawing on earlier publications, but I believe they’re consistent with Coyne’s approach. (Compare also Susan Haack’s non-accommodationist analysis in her 2007 book, Defending Science – within Reason.)

At the dawn of modern science in Europe – back in the sixteenth and seventeenth centuries – religious worldviews prevailed without serious competition. In such an environment, it should have been expected that honest and rigorous investigation of the natural world would confirm claims that were already found in the holy scriptures and church traditions. If the true religion’s founders had genuinely received knowledge from superior beings such as God or angels, the true religion should have been, in a sense, ahead of science.

There might, accordingly, have been a process through history by which claims about the world made by the true religion (presumably some variety of Christianity) were successively confirmed. The process might, for example, have shown that our planet is only six thousand years old (give or take a little), as implied by the biblical genealogies. It might have identified a global extinction event – just a few thousand years ago – resulting from a worldwide cataclysmic flood. Science could, of course, have added many new details over time, but not anything inconsistent with pre-existing knowledge from religious sources.

Unfortunately for the credibility of religious doctrine, nothing like this turned out to be the case. Instead, as more and more evidence was obtained about the world’s actual structures and causal mechanisms, earlier explanations of the appearances were superseded. As science has advanced, it has increasingly revealed religion as premature in its attempts to understand the world around us.

As a consequence, religion’s claims to intellectual authority have become less and less rationally believable. Science has done much to disenchant the world – once seen as full of spiritual beings and powers – and to expose the pretensions of priests, prophets, religious traditions, and holy books. It has provided an alternative, if incomplete and provisional, image of the world, and has rendered much of religion anomalous or irrelevant.

By now, the balance of evidence has turned decisively against any explanatory role for beings such as gods, ghosts, angels, and demons, and in favour of an atheistic philosophical naturalism. Regardless of what other factors were involved, the consolidation and success of science played a crucial role in this. In short, science has shown a historical, psychological, and rational tendency to undermine religious faith.

Not only the sciences!

I should add that the damage to religion’s authority has come not only from the sciences, narrowly construed, such as evolutionary biology. It has also come from work in what we usually regard as the humanities. Christianity and other theistic religions have especially been challenged by the efforts of historians, archaeologists, and academic biblical scholars.

Those efforts have cast doubt on the provenance and reliability of the holy books. They have implied that many key events in religious accounts of history never took place, and they’ve left much traditional theology in ruins. In the upshot, the sciences have undermined religion in recent centuries – but so have the humanities.

Coyne would not tend to express it that way, since he favours a concept of “science broadly construed”. He elaborates this as: “the same combination of doubt, reason, and empirical testing used by professional scientists.” On his approach, history (at least in its less speculative modes) and archaeology are among the branches of “science” that have refuted many traditional religious claims with empirical content.

But what is science? Like most contemporary scientists and philosophers, Coyne emphasizes that there is no single process that constitutes “the scientific method”. Hypothetico-deductive reasoning is, admittedly, very important to science. That is, scientists frequently make conjectures (or propose hypotheses) about unseen causal mechanisms, deduce what further observations could be expected if their hypotheses are true, then test to see what is actually observed. However, the process can be untidy. For example, much systematic observation may be needed before meaningful hypotheses can be developed. The precise nature and role of conjecture and testing will vary considerably among scientific fields.

Likewise, experiments are important to science, but not to all of its disciplines and sub-disciplines. Fortunately, experiments are not the only way to test hypotheses (for example, we can sometimes search for traces of past events). Quantification is also important… but not always.

However, Coyne says, a combination of reason, logic and observation will always be involved in scientific investigation. Importantly, some kind of testing, whether by experiment or observation, is important to filter out non-viable hypotheses.

If we take this sort of flexible and realistic approach to the nature of science, the line between the sciences and the humanities becomes blurred. Though they tend to be less mathematical and experimental, for example, and are more likely to involve mastery of languages and other human systems of meaning, the humanities can also be “scientific” in a broad way. (From another viewpoint, of course, the modern-day sciences, and to some extent the humanities, can be seen as branches from the tree of Greek philosophy.)

It follows that I don’t terribly mind Coyne’s expansive understanding of science. If the English language eventually evolves in the direction of employing his construal, nothing serious is lost. In that case, we might need some new terminology – “the cultural sciences” anyone? – but that seems fairly innocuous. We already talk about “the social sciences” and “political science”.

For now, I prefer to avoid confusion by saying that the sciences and humanities are continuous with each other, forming a unity of knowledge. With that terminological point under our belts, we can then state that both the sciences and the humanities have undermined religion during the modern era. I expect they’ll go on doing so.

A valuable contribution

In challenging the undeserved hegemony of religion/science accommodationism, Coyne has written a book that is notably erudite without being dauntingly technical. The style is clear, and the arguments should be understandable and persuasive to a general audience. The tone is rather moderate and thoughtful, though opponents will inevitably cast it as far more polemical and “strident” than it really is. This seems to be the fate of any popular book, no matter how mild-mannered, that is critical of religion.

Coyne displays a light touch, even while drawing on his deep involvement in scientific practice (not to mention a rather deep immersion in the history and detail of Christian theology). He writes, in fact, with such seeming simplicity that it can sometimes be a jolt to recognize that he’s making subtle philosophical, theological, and scientific points.

In that sense, Faith versus Fact testifies to a worthwhile literary ideal. If an author works at it hard enough, even difficult concepts and arguments can usually be made digestible. It won’t work out in every case, but this is one where it does. That’s all the more reason why Faith versus Fact merits a wide readership. It’s a valuable, accessible contribution to a vital debate.

Russell Blackford, Conjoint Lecturer in Philosophy, University of Newcastle

This article was originally published on The Conversation. Read the original article.

2016

Today marks the start of 2016. One tradition of New Year’s Eve is to make resolutions. One tradition of the New Year is to break those resolutions. I am proud to say that I kept my 2015 resolution and plan to keep my 2016 resolution. The secret to this success is, of course, accepting a very low standard for resolutions.

In 2015 I resolved to not die in 2015. Up until midnight, the year could have won by killing me. But, the year failed. Once again. As such, I have made my 2016 resolution: do not die in 2016. I plan to make this sort of resolution every year.

This resolution has many virtues, but I will only consider the two most important. The first is that I really do not need to do anything; not dying is a fairly automatic sort of thing. Mostly.

Obviously enough, one year (hopefully not 2016, but I’ve had a pretty good run) will end with my resolution being broken; or, rather, I will end before the year does. This is where the second virtue comes into play. When I finally break the resolution by dying, I will not be around to face the judgment of those who rather like to judge people for breaking New Year’s Resolutions. Also, even such judgmental folks might feel it a bit too harsh to judge a person for breaking a resolution to not die. Or maybe not. People can be rather harsh.

On a more positive note, here is to a great 2016. Still waiting for that moon base; I really want that before the robot apocalypse. So, Elon Musk, I am counting on you to get it done before Google’s kill bots kill us all.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Age of Awkwardness

Some ages get cool names, such as the Iron Age or the Gilded Age. Others are dubbed with word mantles less awesome. An excellent example of the latter is the designation of our time as the Awkward Age. Since philosophers are often willing to cash in on trends, it is not surprising that there is now a philosophy of awkwardness.

Various arguments have been advanced in support of the claim that this is the Awkward Age. Not surprisingly, a key argument is built on the existence of so many TV shows and movies that center on awkwardness. There is a certain appeal to this sort of argument, and the idea that art expresses the temper, spirit, and social conditions of its age is an old one. I recall this standard approach to art from an art history class I took as an undergraduate. For example, the massive works of the ancient Egyptians are supposed to reveal their views of the afterlife, just as the harmony of the Greek works is supposed to reveal the soul of ancient Greece.

Wilde, in his dialogue “The New Aesthetics” considers this very point. Wilde takes the view that “Art never expresses anything but itself.” Naturally enough, Wilde provides an account of why people think art is about the ages. His explanation is best put by Carly Simon: “You’re so vain, I’ll bet you think this song is about you.” Less lyrically, the idea is that vanity causes people to think that the art of their time is about them. Since the people of today were not around in the way back times of old, they cannot say that past art was about them—so they assert that the art of the past was about the people of the past. This does have the virtue of consistency.

While Wilde does not offer a decisive argument in favor of his view, it does have a certain appeal. It is also worth considering that it is problematic to draw an inference about the character of an age from what TV shows or movies happen to be in vogue with a certain circle (there are, after all, many shows and movies that are not focused on awkwardness). While it is reasonable to draw some conclusions about that specific circle, extending them to the general population and the entire age would be quite a leap; after all, there are many non-awkward shows and movies that could be presented as contenders for defining the age. It seems sensible to conclude that it is vanity on the part of the members of such a circle to regard what they like as defining the age. It could also be seen as a hasty generalization—people infer that what they regard as defining must also apply to the general population.

A second, somewhat stronger, sort of argument for this being the Awkward Age is based on claims about extensive social changes. To use an oversimplified example, consider the case of gender in the United States. The old social norms had two fairly clearly defined genders and sets of rules regarding interaction. Such rules included those that made it clear that the man asked the woman out on the date and that the man paid for everything. Now, or so the argument goes, the norms are in disarray or have been dissolved. Sticking with gender, Facebook now recognizes over 50 genders which rather complicates matters relative to the “standard” two of the past. Going with the dating rules once again, it is no longer clear who is supposed to do the asking and the paying.

In terms of how this connects to awkwardness, the idea is that when people do not have established social norms and rules to follow, ignorance and error can easily lead to awkward moments. For example, there could be an awkward moment on a date when the check arrives as the two people try to sort out who pays: Dick might be worried that he will offend Jane if he pays and Jane might be expecting Dick to pick up the tab—or she might think that each should pay their own tab.

To use an analogy, consider playing a new and challenging video game. When a person first plays, she will be trying to figure out how the game works and this will typically involve numerous failures. By analogy, when society changes, it is like being in a new game—one does not know the rules. Just as a person can look for guides to a new game online (like YouTube videos on how to beat tough battles), people can try to turn to guides to behavior. However, new social conditions mean that such guides are not yet available or, if they are, they might be unclear or conflict with each other. For example, a person who is new to contemporary dating might try to muddle through on her own or try to do some research—most likely finding contradictory guides to correct dating behavior.

Eventually, of course, the norms and rules will be worked out—as has happened in the past. This indicates a point well worth considering—today is obviously not the first time that society has undergone considerable change, thus creating opportunities for awkwardness. As Wilde noted, our vanity contributes to the erroneous belief that we are special in this regard. That said, it could be contended that people today are reacting to social change in a way that is different and awkward. That is, this is truly the Age of Awkwardness. My own view is that this is one of many times of awkwardness—what has changed is the ability and willingness to broadcast awkward events. Plus, of course, Judd Apatow.


Ethics (for Free)

The following provides links to my Ethics course, allowing a person to get some ethics for free. Also probably works well as a sleep aid.*

Notes & Readings

Practice Tests

PowerPoint


Class YouTube Videos

These are unedited videos from the Fall 2015 Ethics class. Spoiler: I do not die at the end.

Part One Videos: Introduction & Moral Reasoning

Video 1:  It covers the syllabus.

Video 2: It covers the introduction to ethics, value, and the dreaded spectrum of morality.

Video 3: It covers the case paper.

Video 4: No video. Battery failure.

Video 5: It covers inductive arguments and the analogical argument.

Video 6: It covers Argument by/from Example and Argument from Authority.

Video 7:  It covers Inconsistent Application and Reversing the Situation.

Video 8:  It covers Argument by Definition, Appeal to Intuition, and Apply a Moral Principle. The death of the battery cuts this video a bit short.

Video 9:  It covers Applying Moral Principles, Applying Moral Theories, the “Playing God” Argument and the Unnatural Argument.

Video 10: It covers Appeal to Consequences and Appeal to Rules.

Video 11:  It covers Appeal to Rights and Mixing Norms.

Part Two Videos: Moral Theories

Video 12:  It covers the introduction to Part II and the start of virtue theory.

Video 13: It covers Confucius and Aristotle.

Video 14:  This continues Aristotle’s virtue theory.

Video 15: It covers the intro to ethics and religion as well as the start of Aquinas’ moral theory.

Video 16: It covers St. Thomas Aquinas, divine command theory, and John Duns Scotus.

Video 17: It covers the end of religion & ethics and the beginning of consequentialism.

Video 18: It covers Thomas Hobbes and two of the problems with ethical egoism.

Video 19:  It covers the third objection to ethical egoism, the introduction to utilitarianism and the start of the discussion of J.S. Mill. Includes reference to Jeremy “Headless” Bentham.

Video 20: This video covers the second part of utilitarianism, the objections against utilitarianism and the intro to deontology.

Video 21: It covers the categorical imperative.

Part Three Videos: Why Be Good?, Moral Education & Equality

Video 22: It covers the question of “why be good?” and Plato’s Ring of Gyges.

Video 23: It covers the introduction to moral education and the start of Aristotle’s theory of moral education.

Video 24: It covers more of Aristotle’s theory of moral education.

Video 25: It covers the end of Rousseau and the start of equality.

Video 26: It covers the end of Rousseau and the start of equality.

Video 27:  It covers Mary Wollstonecraft’s Vindication of the Rights of Women.

Video 28:  This video covers the second part of Wollstonecraft and gender equality.

Video 29: It covers the start of ethics and race.

Video 30: It covers St. Thomas Aquinas’ discussion of animals and ethics.

Video 31: It covers Descartes’ discussion of animals. Includes reference to Siberian Huskies.

Video 32: It covers the end of Kant’s animal ethics and the utilitarian approach to animal ethics.

Part IV: Rights, Obedience & Liberty

Video 33: It covers the introduction to rights and a bit of Hobbes.

Video 34: It covers Thomas Hobbes’ view of rights and the start of John Locke’s theory of rights.

Video 35:  It covers John Locke’s state of nature and theory of natural rights.

Video 36:  It covers Locke’s theory of property and tyranny. It also covers the introduction to obedience and disobedience.

Video 37: It covers the Crito and the start of Thoreau’s theory of civil disobedience.

Video 38:  It covers the second part of Thoreau’s essay on civil disobedience.

Video 39:  It covers the end of Thoreau’s civil disobedience, Mussolini’s essay on fascism and the start of J.S. Mill’s theory of Liberty.

Video 40:  It covers Mill’s theory of liberty.

Narration YouTube Videos

These videos consist of narration over PowerPoint slides. Good for naptime.

Part One Videos

Part Two Videos

Part Three Videos

Part Four Videos

*This course has not been evaluated by the FDA as a sleep aid. Use at your own risk. Side effects might include Categorical Kidneys, Virtuous Spleen, and Spontaneous Implosion.

Owning Asteroids

While asteroid mining is still just science fiction, companies such as Planetary Resources are already preparing to mine the sky. While space mining sounds awesome, lawyers are already hard at work murdering the awesomeness with legalese. President Obama recently signed the U.S. Commercial Space Launch Competitiveness Act, which seems to make asteroid mining legal. The key part of the law is that “Any asteroid resources obtained in outer space are the property of the entity that obtained them, which shall be entitled to all property rights to them, consistent with applicable federal law and existing international obligations.” More concisely, the law makes it so that asteroid mining by U.S. citizens would not violate U.S. law.

While this would seem to open up the legal doors to asteroid mining, there are still legal barriers. The various space treaties, such as the Outer Space Treaty of 1967, do not give states sovereign rights in space. As such, there is no legal foundation for a state conferring space property rights to its citizens on the basis of its sovereignty. However, the treaties do not forbid private ownership in space—as such, any other nation could pass a similar law that allows its citizens to own property in space without violating the laws of that nation.

One obvious concern is that if multiple nations pass such laws and citizens from these nations start mining asteroids, then there will be the very real possibility of conflict over valuable resources. In some ways this will be a repeat of the past: the more technologically advanced nations engaged in a struggle to acquire resources in areas where they lacked sovereignty. These past conflicts tended to escalate into actual wars, which is something that must be considered in the final frontier.

One way to try to avoid war over asteroid resources is to work out new treaties governing the use of space resources. This is, obviously enough, a matter that will be handled by space lawyers, governments, and corporations. Unless, of course, the automated killing machines resolve it first.

While the legal aspects of space ownership are interesting, the moral aspects of ownership in space are also of considerable concern. While it might be believed that property rights in space are something entirely new, this is clearly not the case. While the location is different, the matter of space property matches the state of nature scenarios envisioned by thinkers like Hobbes and Locke. To be specific, there is an abundance of resources and an absence of authority. As it now stands, while no one can hear you scream in space, there is also no one who can arrest you for space thievery.

Using the state of nature model, it can be claimed that there are currently no rightful owners of the asteroids or it could be claimed that we are all the rightful owners (the asteroids are the common property of all of humanity).

If there are currently no rightful owners, then it would seem that the asteroids are there for the taking: an asteroid belongs to whoever can take and hold it. This is on par with Hobbes’ state of nature—practical ownership is a matter of possession. As Hobbes saw it, everyone has the right to all things, but this is effectively a right to nothing—other than what a person can defend from others. As Hobbes noted, in such a scenario profit is the measure of right and who is right is to be settled by the sword.

While this is practical, brutal and realistic, it does seem a bit morally problematic in that it would, as Hobbes also noted, lead to war. His solution, which would presumably work as well in space as on earth, would be to have sovereignty in space. This would shift the war of all against all in space (of the sort that is common in science fiction about asteroid mining) to a war of nations in space (which is also common in science fiction). The war could, of course, be a cold one fought economically and technologically rather than a hot one fought with mass drivers and lasers.

If the asteroids are regarded as the common property of humanity, then Locke’s approach could be taken. As Locke saw it, God gave everything to humans in common, but people have to acquire things from the common property to make use of it. Locke gives the terrestrial example of how a person needs to make an apple her own before she can benefit from it. In the case of space, a person would need to make an asteroid her own in order to benefit from the materials it contains.

Locke sketched out a basic labor theory of ownership—whatever a person mixes her labor with becomes her property. As such, if asteroid miners located an asteroid and started mining it, then the asteroid would belong to them. This does have some appeal: before the miners start extracting the minerals from the asteroid, it is just a rock drifting in space. Now it is a productive mine, improved from its natural state by the labor of the miners. If mining is profitable, then the miners would have a clear incentive to grab as many asteroids as they can, which leads to a rather important moral problem—the limits of ownership.

Locke does set limits on what people can take in his proviso: those who take from the common resources must leave as much and as good for others. When describing this to my students, I always use the analogy of food at a party: since the food is for everyone, everyone has a right to the food. However, taking it all or taking the very best would be wrong (and rude). While this proviso is ignored on earth, the asteroids provide us with a fresh start in regards to dividing up the common property of humanity. After all, no one has any special right to claim the asteroids—so we all have equally good claims to the resources they contain.

As with earth resources, some will probably contend that there is no obligation to leave as much and as good for others in space. Instead, those who get there first will contend that ownership should be on the principle of whoever grabs it first and can keep it is the “rightful” owner.

Those who take this view would probably argue that those who get their equipment into space would have done the work (or put up the money) and hence (as argued above) would be entitled to all they can grab and use or sell. Other people are free to grab what they can, provided that they have access to the resources needed to mine the asteroids. Naturally, the folks who lack the resources to compete will remain poor—their poverty will, in fact, disqualify them from owning any of the space resources much in the way poverty disqualifies people on earth from owning earth resources.

While the selfish approach is certainly appealing, arguments can be made for sharing asteroid resources. One reason is that those who will mine the asteroids did not create the means to do so from nothing on their own. Reaching the asteroids will be the result of centuries of human civilization that made such technology possible. As such, there would seem to be a general debt owed to human civilization and paying this off would involve also contributing to the general good of humanity. Naturally, this line of reasoning can be countered by arguing that the successful miners will benefit humanity when their profits “trickle down” from space.

Another way to argue for sharing the resources is to use an analogy to a buffet line. Suppose I am first in line at a buffet. This does not give me the right to devour everything I can with no regard for the people behind me. It also does not give me the right to grab whatever I cannot eat myself in order to sell it to those who had the misfortune to be behind me in line. As such, space resources should be treated in a similar manner, namely fairly and with some concern for those who are behind the first people in line.

Naturally, these arguments for sharing can be countered by the usual arguments in favor of selfishness. While it is tempting to think that the vastness of space will overcome selfishness (that is, there will be so much that people will realize that not sharing would be absurd and petty), this seems unlikely—the more there is, the greater the disparity between those who have and those who have not. On this pessimistic view we already have all the moral and legal tools we need for space—it is just a matter of changing the wording a bit to include “space.”
