Tag Archives: Philosophy

Discussing the Shape of Things (that might be) to Come

One stock criticism of philosophers is their uselessness: they address useless matters or address useful matters in a way that is useless. One interesting specific variation is to criticize a philosopher for philosophically discussing matters of what might be. For example, a philosopher might discuss the ethics of modifying animals to possess human levels of intelligence. As another example, a philosopher might present an essay on the problem of personal identity as it relates to cybernetic replacement of the human body. In general terms, these speculative flights can be dismissed as doubly useless: not only do they have the standard uselessness of philosophy, they also have the uselessness of talking about what is not and might never be. Since I have, at length and elsewhere, addressed the general charge of uselessness against philosophy, I will focus on this specific sort of criticism.

One version of this sort of criticism can be seen as practical: since the shape of what might be cannot be known, philosophical discussions involve a double speculation: the first speculation is about what might be and the second is the usual philosophical speculation. While the exact mathematics of the speculation (is it additive or exponential?) is uncertain, it can be argued that such speculation about speculation has little value—and this assumes that philosophy has value and speculation about the future has value (both of which can be doubted).

This sort of criticism is often used as the foundation for a second sort of criticism. This second criticism assumes that philosophy has value, and it is this assumption that provides its foundation. The basic idea is that philosophical speculation about what might be uses up resources that could be used to apply philosophy to existing problems. Naturally, someone who regards all philosophy as useless would regard philosophical discussion about what might be as being a waste of time—responding to this view would require a general defense of philosophy, which goes beyond the scope of this short essay. Now, to return to the matter at hand.

As an example, a discussion of the ethics of using autonomous, intelligent weapon systems in war could be criticized on the grounds that the discussion should have focused on the ethical problems regarding current warfare. After all, there is a multitude of unsolved moral problems in regards to existing warfare—there hardly seems any need to add more unsolved problems until either the existing problems are solved or the possible problems become actual problems.

This does have considerable appeal. To use an analogy, if a person has not completed the work in the course she is taking now, it does not make sense for her to spend her time trying to complete the work that might be assigned four semesters from now. To use another analogy, if a person has a hole in her roof, it would not be reasonable for her to spend time speculating about what sort of force-field roof technology she might have in the future. This is, of course, the classic “don’t you have something better to do?” problem.

As might be suspected, this criticism rests on the principle that resources should be spent effectively and less effective uses of resources are subject to criticism. As the analogies given above show, using resources effectively is certainly reasonable and ineffective use can be justly criticized. However, there is an obvious concern with this principle: to be consistent in its application it would need to be applied across the board so that a person is applying all her resources with proper utility. For example, a person who prepares a fancy meal when she could be working on addressing the problems presented by poverty is wasting time. As another example, a person who is reading a book for enjoyment should be out addressing the threat posed by terrorist groups. As a third example, someone who is developing yet another likely-to-fail social media company should be spending her time addressing prison reform. And so on. In fact, for almost anything a person might be doing, there will be something better she could be doing.

As others have argued, this sort of maximization would be counterproductive: a person would exhaust herself and her resources, thus (ironically) doing more harm than good. As such, the “don’t you have something better to do?” criticism should be used with due care. That said, it can be a fair criticism if a person really does have something better to do and what she is doing instead is detrimental enough to warrant correction.

In the case of philosophical discussions about what might be, it can almost always be argued that while a person could be doing something better (such as addressing current problems), such speculation would generally be harm free. That is, it is rather unlikely that the person would have solved the problem of war, poverty or crime if only she had not been writing about ethics and cyborgs. Of course, this just defends such discussion in the same way one might defend any other harmless amusement, such as playing a game of Scrabble or watching a sunset. It would be preferable to have a somewhat better defense of such philosophical discussions of the shape of things (that might be) to come.

A reasonable defense of such discussions can be based on the plausible notion that it is better to address a problem before it occurs than after it arrives in force. To use the classic analogy, it is much easier to address a rolling snowball than the avalanche that it will cause.

In the case of speculative matters that have ethical aspects, it seems that it would be generally useful to already have moral discussions in place ahead of time. This would provide the practical advantage of already having a framework and context in which to discuss the matter when (or if) it becomes a reality. One excellent illustration of this is the driverless car—it certainly seems to be a good idea to work out in advance the ethics of how the car should be programmed to “decide” what to hit and what to avoid when an accident is occurring. Another illustration is developing the moral guidelines for ever more sophisticated automated weapon systems. Since these are being developed at a rapid pace, what were once theoretical problems will soon be actual moral problems. As a final example, consider the moral concerns governing modifying and augmenting humans using technology and genetic modification. It would seem to be a good idea to have some moral guidance going into this brave new world rather than scrambling with the ethics after the fact.

Philosophers also like to discuss what might be in other contexts than ethics. Not surprisingly, the realm of what might be is rich ground for discussions of metaphysics and epistemology. While these fields are often considered the most useless aspects of philosophy, they have rather practical implications that matter—even (or even especially) in regards to speculation about what might be.

To illustrate this, consider the research being conducted in repairing, augmenting and preserving the human mind (or brain, if one prefers). One classic problem in metaphysics is the problem of personal identity: what is it to be a person, what is it to be distinct from all other things, and what is it to be that person across time? While this might seem to be a purely theoretical concern, it quickly becomes a very practical concern when one is discussing the above-mentioned technology. For example, consider a company that offers a special sort of life insurance: they claim they can back up a person to a storage system and, upon the death of the original body, restore the backup to a cloned (or robotic) body. While the question of whether that restored backup would be you or not is clearly a metaphysical question of personal identity, it is also a very practical question. After all, paying to ensure that you survive your bodily death is a rather different matter from paying so that someone who thinks they are you can go to your house and have sex with your spouse after you are dead.

There are, of course, numerous other examples that can be used to illustrate the value of such speculation of what might be—in fact, I have already written many of these in previous posts. In light of the above discussion, it seems reasonable to accept that philosophical discussions about what might be need not be a waste of time. In fact, such discussions can be useful in a practical sense.

 


Mistakes

If you have made a mistake, do not be afraid of admitting the fact and amending your ways.

-Confucius

 

I never make the same mistake twice. Unfortunately, there are an infinite number of mistakes. So, I keep making new ones. Fortunately, philosophy is rather helpful in minimizing the impact of mistakes and learning that crucial aspect of wisdom: not committing the same error over and over.

One key aspect to avoiding the repetition of errors is skill in critical thinking. While critical thinking has become something of a buzzword-bloated fad, the core of it remains as important as ever. The core is, of course, the methods of rationally deciding whether to accept a claim as true, reject it as false, or suspend judgment regarding it. Learning the basic mechanisms of critical thinking (which include argument assessment, fallacy recognition, credibility evaluation, and causal reasoning) is relatively easy—reading through the readily available quality texts on such matters will provide the basic tools. But, as with carpentry or plumbing, merely having a well-stocked tool kit is not enough. A person must also have the knowledge of when to use a tool and the skill with which to use it properly. Gaining knowledge and skill is usually difficult and, at the very least, takes time and practice. This is why people who merely grind through a class on critical thinking or flip through a book on fallacies do not suddenly become good at thinking. After all, no one would expect a person to become a skilled carpenter merely by reading a DIY book or watching a few hours of videos on YouTube.

Another key factor in avoiding the repetition of mistakes is the ability to admit that one has made a mistake. There are many “pragmatic” reasons to avoid admitting mistakes. Public admission to a mistake can result in liability, criticism, damage to one’s reputation and other such harms. While we have sayings that promise praise for those who admit error, the usual practice is to punish such admissions—and people are often quick to learn from such punishments. While admitting the error only to yourself will avoid the public consequences, people are often reluctant to do this. After all, such an admission can damage a person’s pride and self-image. Denying error and blaming others is usually easier on the ego.

The obvious problem with refusing to admit to errors is that this will tend to keep a person from learning from her mistakes. If a person recognizes an error, she can try to figure out why she made that mistake and consider ways to avoid making the same sort of error in the future. While new errors are inevitable, repeating the same errors over and over due to a willful ignorance is either stupidity or madness. There is also the ethical aspect of the matter—being accountable for one’s actions is a key part of being a moral agent. Saying “mistakes were made” is a denial of agency—to cast oneself as an object swept along by the river of fate rather than an agent rowing upon the river of life.

In many cases, a person cannot avoid the consequences of his mistakes. Those that strike, perhaps literally, like a pile of bricks, are difficult to ignore. Feeling the impact of these errors, a person might be forced to learn—or be brought to ruin. The classic example is the hot stove—a person learns from one touch because the lesson is so clear and painful. However, more complicated matters, such as a failed relationship, allow a person room to deny his errors.

If the negative consequences of his mistakes fall entirely on others and he is never called to task for these mistakes, a person can keep on making the same mistakes over and over. After all, he does not even get the teaching sting of pain trying to drive the lesson home. One good example of this is the political pundit—pundits can be endlessly wrong and still keep on expressing their “expert” opinions in the media. Another good example of this is in politics. Some of the people who brought us the Iraq war are part of Jeb Bush’s presidential team. Jeb, infamously, recently said that he would have gone to war in Iraq even knowing what he knows now. While he endeavored to awkwardly walk that back, it might be suspected that his initial answer was the honest one. Political parties can also embrace “solutions” that have never worked and relentlessly apply them whenever they get into power—other people suffer the consequences while the politicians generally do not directly reap consequences from bad policies. They do, however, routinely get in trouble for mistakes in their personal lives (such as affairs) that have no real consequences outside of that private sphere.

While admitting to an error is an important first step, it is not the end of the process. After all, merely admitting I made a mistake will not do much to help me avoid that mistake in the future. What is needed is an honest examination of the mistake—why and how it occurred. This needs to be followed by an honest consideration of what can be changed to avoid that mistake in the future. For example, a person might realize that his relationships ended badly because he made the mistake of rushing into a relationship too quickly—getting seriously involved without actually developing a real friendship.

To steal from Aristotle, merely knowing the cause of the error and how to avoid it in the future is not enough. A person must have the will and ability to act on that knowledge, and this requires the development of character. Fortunately, Aristotle presented a clear guide to developing such character in his Nicomachean Ethics. Put rather simply, a person must act as the person she wishes to be and stick with this until it becomes a matter of habit (and thus character). That is, a person must, as Aristotle argued, become a philosopher. Or be ruled by another who can compel correct behavior, such as the state.

 


Philosophy, Running, Gaming & the Quantified Self

“The unquantified life is not worth living.”

While the idea of quantifying one’s life is an old idea, one growing tech trend is the use of devices and apps to quantify the self. As a runner, I started quantifying my running life back in 1987—that is when I started keeping a daily running log. Back then, the smartest wearable was probably a Casio calculator watch, so I kept all my records on paper. In fact, I still do—as a matter of tradition.

I use my running log to track my distance, running route, time, conditions, how I felt during the run, the number of times I have run in the shoes and other data I feel like noting at the time. I also keep a race log and a log of my yearly mileage. So, like Ben Franklin, I was quantifying before it became cool. Like Ben, I have found this rather useful—looking at my records allows me to form hypotheses regarding what factors contribute to injury (high mileage, hill work and lots of racing) and what results in better race times (rest and speed work). As such, I am sold on the value of quantification—at least in running.

In addition to my ORD (Obsessive Running/Racing Disorder) I am also a nerdcore gamer—I started with the original D&D basic set and still have shelves (and now hard drive space) devoted to games. In the sort of games I play the most, such as Pathfinder, Call of Cthulhu and World of Warcraft, the characters are fully quantified. That is, the character is a set of stats such as strength, constitution, dexterity, hit points, and sanity. Such games also feature sets of rules for the effects of the numbers as well as clear optimization paths. Given this background in gaming, it is not surprising that I see the quantified self as an attempt by a person to create, in effect, a character sheet for herself. That is, to see all her stats and to look for ways to optimize this character that is a model of the self. As such, I get the appeal. Naturally, as a philosopher I do have some concerns about the quantified self and how that relates to the qualities of life—but that is a matter for another time. For now, I will focus on a brief critical look at the quantified self.

Two obvious concerns about the quantified data regarding the self (or whatever is being measured) are questions regarding the accuracy of the data and questions regarding the usefulness of the data. To use an obvious example about accuracy, there is the question of how well a wearable really measures sleep.  In regards to usefulness, I wonder what I would garner from knowing how long I chew my food or the frequency of my urination.

The accuracy of the data is primarily a technical or engineering problem. As such, accuracy problems can be addressed with improvements in the hardware and software. Of course, until the data is known to be reasonably accurate, then it should be regarded with due skepticism.

The usefulness of the data is partially a subjective matter. That is, what counts as useful data will vary from person to person based on their needs and goals. For example, knowing how many steps I have taken at work is probably not useful data for me—since I run about 60 miles per week, that little amount of walking is most likely insignificant in regards to my fitness. However, someone who has no other exercise might find such data very useful. As might be suspected, it is easy to be buried under an avalanche of data, and a serious challenge for anyone who wants to make use of the slew of apps and devices is to sort out the data that would actually be useful from the thousands or millions of data bits that would not be useful.

Another area of obvious concern is the reasoning applied to the data. Some devices and apps supply raw data, such as miles run or average heart rate. Others purport to offer an analysis of the data—that is, to engage in automated reasoning regarding the data. In any case, the user will need to engage in some form of reasoning to use the data.

In philosophy, the two main basic tools in regards to personal causal reasoning are derived from Mill’s classic methods. One method is commonly known as the method of agreement (or common thread reasoning). Using this method involves considering an effect (such as poor sleep or a knee injury) that has occurred multiple times (at least twice). The basic idea is to consider the factor or factors that are present each time the effect occurs and to sort through them to find the likely cause (or causes). For example, a runner might find that all her knee issues follow times when she takes up extensive hill work, thus suggesting the hill work as a causal factor.

The second method is commonly known as the method of difference. Using this method requires at least two situations: one in which the effect in question has occurred and one in which it has not. The reasoning process involves considering the differences between the two situations and sorting out which factor (or factors) is the likely cause. For example, a runner might find that when he does well in a race, he always gets plenty of rest the week before. When he does poorly, he is always poorly rested due to lack of sleep. This would indicate that there is a connection between the rest and race performance.
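To make the two methods concrete, here is a minimal sketch in Python (my own illustration, not anything offered by the apps or devices discussed above) that applies them to a few invented run-log entries. The log format, factor names and function names are hypothetical.

```python
# A minimal sketch of Mill's two methods applied to hypothetical run-log
# entries. The data, field names and factors are invented for illustration.

def method_of_agreement(cases):
    """Return the factors present in every case where the effect occurred."""
    effect_cases = [set(c["factors"]) for c in cases if c["effect"]]
    if not effect_cases:
        return set()
    return set.intersection(*effect_cases)

def method_of_difference(case_with_effect, case_without_effect):
    """Return the factors present when the effect occurred but absent when it did not."""
    return set(case_with_effect["factors"]) - set(case_without_effect["factors"])

# Each entry records the factors noted that week and whether the effect
# (here, a knee problem) occurred.
log = [
    {"factors": ["hill work", "high mileage", "new shoes"], "effect": True},
    {"factors": ["hill work", "racing"], "effect": True},
    {"factors": ["rest week", "speed work"], "effect": False},
]

print(method_of_agreement(log))              # {'hill work'} -- the common thread
print(method_of_difference(log[0], log[2]))  # factors distinguishing the bad week
```

Real log data is, of course, messier than this: factors come in degrees, several factors can work together, and the methods only suggest candidate causes rather than prove them.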

There are, of course, many classic causal fallacies that serve as traps for such reasoning. One of the best known is post hoc, ergo propter hoc (after this, therefore because of this). This fallacy occurs when it is inferred that A causes B simply because A is followed by B. For example, a person might note that her device showed that she walked more stairs during the week before doing well at a 5K and simply infer that walking more stairs caused her to run better. There could be a connection, but it would take more evidence to support that conclusion.

Other causal reasoning errors include the aptly named ignoring a common cause (thinking that A must cause B without considering that A and B might both be the effects of C), ignoring the possibility of coincidence (thinking A causes B without considering that it is merely coincidence) and reversing causation (taking A to cause B without considering that B might have caused A).  There are, of course, the various sayings that warn about poor causal thinking, such as “correlation is not causation” and these tend to correlate with named errors in causal reasoning.

People obviously vary in their ability to engage in causal reasoning and this would also apply to the design of the various apps and devices that purport to inform their users about the data they gather. Obviously, the better a person is at philosophical (in this case causal) reasoning, the better she will be able to use the data.

The takeaway, then, is that there are at least three important considerations regarding the quantification of the self in regards to the data. These are the accuracy of the data, the usefulness of the data, and the quality of the reasoning (be it automated or done by the person) applied to the data.

 


Information Immortality

Most people are familiar with the notion that energy cannot be destroyed. Interestingly, there is also a rule in quantum mechanics that forbids the destruction of information. This principle, called unitarity, is often illustrated by the example of burning a book: though the book is burned, the information still remains—although it would obviously be much harder to “read” a burned book. This principle has, in recent years, run into some trouble with black holes, which might or might not be able to destroy information. My interest here is not with this specific dispute, but rather with the question of whether or not the indestructibility of information has any implications for immortality.

On the face of it, the indestructibility of information seems rather similar to the conservation of energy. Long ago, when I was an undergraduate, I first heard the argument that because of the conservation of energy, personal immortality must be real (or at least possible). The basic line of reasoning was that a person is energy, energy cannot be destroyed, so a person will exist forever. While this has considerable appeal, the problem is obvious: while energy is conserved, it certainly need not be preserved in the same form. That is, even if a person is composed of energy it does not follow that the energy remains the same person (or even a person). David Hume was rather clear about the problem—an indestructible or immortal substance (or energy) does not entail the immortality of a person. When discussing the possibility of immortality, he claims that nature uses substance like clay: shaping it into various forms, then reshaping the matter into new forms so that the same matter can successively make up the bodies of living creatures.  By analogy, an immaterial substance could successively make up the minds of living creatures—the substance would not be created or destroyed, it would merely change form. However, the person would cease to be.

Prior to Hume, John Locke also noted the same sort of problem: even if, for example, you had the same soul (or energy) as Nestor, you would not be the same person as Nestor any more than you would be the same person as Nestor if, in an amazing coincidence, your body contained at this instant all the atoms that composed Nestor at a specific instant in time.

Hume and Locke certainly seem to be right about this—the indestructibility of the stuff that makes up a person (be it body or soul) does not entail the immortality of the person. If a person is eaten by a bear, the matter and energy that composed him will continue to exist—but the person did not survive being eaten by the bear. If there is a soul, the mere continuance of the soul would also not seem to suffice for the person to continue to exist as the same person (although this can obviously be argued). What would be needed would be the persistence of what makes up the person. This is usually taken to be something other than just stuff, be that stuff matter, energy, or ectoplasm. So, the conservation of energy does not seem to entail personal immortality—but the conservation of information might (or might not).

Put a bit crudely, Locke took this something other to be memory: personal identity extends backwards as far as the memory extends. Since people clearly forget things, Locke did accept the possibility of memory loss. Being consistent in this matter, he accepted that the permanent loss of memory would result in a corresponding failure of identity. Crudely put, if a person truly did not and could never remember doing something, then she was not the person who did it.

While there are many problems with the memory account of personal identity, it certainly suggests a path to quantum immortality through the conservation of information. One approach would be to argue that since information is conserved, the person is conserved even after the death and dissolution of the body. Just like the burned book whose information still exists, the person’s information would still exist.

One obvious reply to this is that a person is an active being and not just a collection of information. To use a rather rough analogy, a person could be seen as being like a computer program—to be is to be running. Or, to use a more artistic analogy, like a play: while the script would persist after the final curtain, the play itself is over. As such, while the person’s information would be conserved, the person would cease to be. This sort of “quantum immortality” is remarkably similar to Spinoza’s view of immortality. While he denied personal immortality, he claimed that “the human mind cannot be absolutely destroyed with the body, but something of it remains which is eternal.” Spinoza, of course, seemed to believe that this should comfort people. Perhaps some comfort should be taken in the fact that one’s information will be conserved (barring an unfortunate encounter with a black hole).

However, people would probably be more comforted by a reason to believe in an afterlife. Fortunately, the conservation of information does provide at least a shot at an afterlife. If information is conserved and all there is to a person can be conserved as information, then a person could presumably be reconstructed after his death. For example, imagine a person, Laz, who died in an accident and was buried. The remains could, in theory, be dug up and the information about the body could be recovered (to a point prior to death, of course). The body could, with suitably advanced technology, be reconstructed. The reconstructed brain could, in theory, have all the memories and such recovered and restored as well. This would be a technological resurrection in the flesh and the person would certainly seem to live again. Assuming that every piece of information was preserved, recovered and restored in the flesh, it would be the person—just as if a moment had passed rather than, say, a thousand years. This is, obviously, all in theory. Actual resurrection technology would presumably involve various flaws and limitations. But, the idea seems sound enough.

One potential problem is an old one for philosophers—if a person could be reconstructed from such information, she could also be duplicated from such information. To use the obvious analogy, this would be like 3D printing from a data file, except what would be printed would be a person. Or, to use another analogy, it would be like reconstructing an old computer and reloading all the software. There would certainly not be any reason to wait until the person died, unless there was some sort of copyright or patent held by the person on herself that expired a certain time after her death.

In closing, I leave you with this: some day in the far future, you might find that you (or someone like you) have just been reprinted. In 3D, of course.


A Philosopher’s Blog: 2014 Free on Amazon

A Philosopher’s Blog: 2014 Philosophical Essays on Many Subjects will be available as a free Kindle book on Amazon from 12/31/2014-1/4/2015. This book contains all the essays from the 2014 postings of A Philosopher’s Blog. The topics covered range from the moral implications of sexbots to the metaphysics of determinism. It is available on all the various national Amazons, such as in the US, UK, and India.

A Philosopher’s Blog: 2014 on Amazon US

A Philosopher’s Blog: 2014 on Amazon UK


 

The Corruption of Academic Research

Synthetic insulin crystals synthesized using recombinant DNA technology (Photo credit: Wikipedia)

STEM (Science, Technology, Engineering and Mathematics) fields are supposed to be the new darlings of the academy, so I was slightly surprised when I heard an NPR piece on how researchers are struggling for funding. After all, even the politicians devoted to cutting education funding have spoken glowingly of STEM. My own university recently split the venerable College of Arts & Sciences, presumably to allow more money to flow to STEM without risking that professors in the soft sciences and the humanities might inadvertently get some of the cash. As such I was somewhat curious about this problem, but mostly attributed it to a side-effect of the general trend of defunding public education. Then I read “Bad Science” by Llewellyn Hinkes-Jones. This article was originally published in issue 14, 2014 of Jacobin Magazine. I will focus on the ethical aspects of the matters Hinkes-Jones discussed in this article, which is centered on the Bayh-Dole Act.

The Bayh-Dole Act was passed in 1980 and was presented as having very laudable goals. Before the act was passed, universities were limited in regards to what they could do with the fruits of their scientific research. After the act was passed, schools could sell their patents or engage in exclusive licensing deals with private companies (that is, monopolies on the patents). Supporters asserted this act would be beneficial in three main ways. The first is that it would secure more private funding for universities because corporations would provide money in return for the patents or exclusive licenses. The second is that it would bring the power of the profit motive to public research: since researchers and schools could profit, they would be more motivated to engage in research. The third is that the private sector would be motivated to implement the research in the form of profitable products.

On the face of it, the act was a great success. Researchers at Columbia University patented the process of DNA cotransformation and added millions to the coffers of the school. A patent on recombinant DNA earned Stanford over $200 million. Companies, in turn, profited greatly. For example, researchers at the University of Utah created Myriad Genetics and took ownership of their patent on the BRCA1 and BRCA2 tests for breast cancer. The current cost of the test is $4,000 (in comparison, a full sequencing of human DNA costs $1,000) and the company has a monopoly on the test.

Given these apparent benefits, it is easy enough to advance a utilitarian argument in favor of the act and its consequences. After all, if it allows universities to fund their research and corporations to make profits, then its benefits would seem to be considerable, thus making it morally good. However, a proper calculation requires considering the harmful consequences of the act.

The first harm is that the current situation imposes a triple cost on the public. One cost is that the taxpayers fund the schools that conduct the research. The next is that, thanks to the monopolies on patents, the taxpayers have to pay whatever prices the companies wish to charge, such as the $4,000 for a test that should cost far less. In an actual free market there would be competition and lower prices—but what we have is a state controlled and regulated market. Ironically, those who are often crying the loudest against government regulation and for the value of competition are quite silent on this point. The final cost of the three is that the corporations can typically write off their contributions on their taxes, thus leaving other taxpayers to pick up their slack. These costs seem to be clear harms and do much to offset the benefits—at least when looked at from the perspective of the whole society and not just focusing on those reaping the benefits.

The second harm is that, ironically, this system makes research more expensive. Since processes, strains of bacteria and many other things needed for research are protected by monopolistic patents, the researchers who do not hold these patents have to pay to use them. The costs are usually quite high, so while the patent holders benefit, research in general suffers. In order to pay for these things, researchers need more funding, thus either imposing more cost on taxpayers or forcing them to turn to private funding (which will typically result in more monopolistic patents).

The third harm is the corruption of researchers. Researchers are literally paid to put their names on positive journal articles that advance the interests of corporations. They are also paid to promote drugs and other products while presenting themselves as researchers rather than paid promoters. If the researchers are not simply bought, the money is clearly a biasing factor. Since we are depending on these researchers to inform the public and policy makers about these products, this is clearly a problem and presents a clear danger to the public good.

A fourth harm is that even the honest researchers who have not been bought are under great pressure to produce “sexy science” that will attract grants and funding. While it has always been “publish or perish” in modern academics, the competition is even fiercer in the sciences now. As such, researchers are under great pressure to crank out publications. The effect has been rather negative, as evidenced by the fact that the percentage of scientific articles retracted for fraud is ten times what it was in 1975. Once-lauded studies and theories, such as those driving the pushing of antioxidants and omega-3, have been shown to be riddled with inaccuracies. Far from driving advances in science, the act has served as an engine of corruption, fraud and bad science. This would be bad enough, but there is also the impact on a misled and misinformed public. I must admit that I fell for the antioxidant and omega-3 “research”—I modified my diet to include more antioxidants and omega-3. While this bad science does get debunked, the debunking takes a long time and most people never hear about it. For example, how many people know that the antioxidant and omega-3 “research” is flawed and how many still pop omega-3 “fish oil pills” and drink “antioxidant teas”?

A fifth harm is that universities have rushed to cash in on the research, driven by the success of the research schools that have managed to score with profitable patents. However, setting up research labs aimed at creating million dollar patents is incredibly expensive. In most cases the investment will not yield the hoped for returns, thus leaving many schools with considerable expenses and little revenue.

To help lower costs, schools have turned to employing adjuncts to do the teaching and research, thus creating a situation in which highly educated but very low-paid professionals are toiling away to secure millions for the star researchers, the administrators and their corporate benefactors. It is, in effect, sweat-shop science.

This also shows another dark side to the push for STEM: as the number of STEM graduates increase, the value of the degrees will decrease and wages for the workers will continue to fall. This is great for the elite, but terrible for those hoping that a STEM degree will mean a good job and a bright future.

These harms would seem to outweigh the alleged benefits of the act, thus indicating it is morally wrong. Naturally, it can be countered that the costs are worth it. After all, one might argue, the incredible advances in science since 1980 have been driven by the profit motive and this has been beneficial overall. Without the profit motive, the research might have been conducted, but most of the discoveries would have been left on the shelves. The easy and obvious response is to point to all the advances that occurred due to public university research prior to 1980 as well as the research that began before then and came to fruition.

While solving this problem is a complex matter, there seem to be some easy and obvious steps. The first would be to restore public funding of state schools. In the past, the publicly funded universities drove America’s worldwide dominance in research and helped fuel massive economic growth while also contributing to the public good. The second would be replacing the Bayh-Dole Act with an act that would allow universities to benefit from the research, but prevent the licensing monopolies that have proven so damaging. Naturally, this would not eliminate patents but would restore competition to what is supposed to be a competitive free market by eliminating the creation of monopolies from public university research. The folks who complain about the state regulating business and who praise the competitive free market will surely get behind this proposal.

It might also be objected that the inability to profit massively from research will be a disincentive. The easy and obvious reply is that people conduct research and teach with great passion for very little financial compensation. The folks that run universities and corporations know this—after all, they pay such people very little yet still often get exceptional work. True, there are some people who are solely motivated by profit—but those are typically the folks who are making the massive profit rather than doing the actual research and work that makes it all possible.

 


Neutral Good

My previous essays on alignments have focused on the evil ones (lawful evil, neutral evil and chaotic evil). Patrick Lin requested this essay. He professes to be a devotee of Neutral Evil to such a degree that he regards being lumped in with Ayn Rand as an insult. Presumably because he thinks she was too soft on the good.

In the Pathfinder version of the game, neutral good is characterized as follows:

A neutral good character is good, but not shackled by order. He sees good where he can, but knows evil can exist even in the most ordered place.

A neutral good character does anything he can, and works with anyone he can, for the greater good. Such a character is devoted to being good, and works in any way he can to achieve it. He may forgive an evil person if he thinks that person has reformed, and he believes that in everyone there is a little bit of good.

In a fantasy campaign realm, the player characters typically encounter neutral good types as allies who render aid and assistance. Even evil player characters are quite willing to accept the assistance of the neutral good, knowing that the neutral good types are more likely to try to persuade them to the side of good than smite them with righteous fury. Neutral good creatures are not very common in most fantasy worlds—good types tend to polarize towards law and chaos.

Not surprisingly, neutral good types are also not very common in the real world. A neutral good person has no special commitment to order or lack of order—what matters is the extent to which a specific order or lack of order contributes to the greater good. For those devoted to the preservation of order, or its destruction, this can be rather frustrating.

While the neutral evil person embraces the moral theory of ethical egoism (that each person should act solely in her self-interest), the neutral good person embraces altruism—the moral view that each person should act in the interest of others. In more informal terms, the neutral good person is not selfish. It is not uncommon for the neutral good position to be portrayed as stupidly altruistic. This stupid altruism is usually cast in terms of the altruist sacrificing everything for the sake of others or being willing to help anyone, regardless of who the person is or what she might be doing. While a neutral good person is willing to sacrifice for others and willing to help people, being neutral good does not require a person to be unwise or stupid. So, a person can be neutral good and still take into account her own needs. After all, the neutral good person considers the interests of everyone and she is part of that everyone. A person can also be selective in her assistance and still be neutral good. For example, helping an evil person do evil things would not be a good thing and hence a neutral good person would not be obligated to help—and would probably oppose the evil person.

Since a neutral good person works for the greater good, the moral theory of utilitarianism tends to fit this alignment. For the utilitarian, actions are good to the degree that they promote utility (what is of value) and bad to the degree that they do the opposite. Classic utilitarianism (that put forth by J.S. Mill) takes happiness to be good and actions are assessed in terms of the extent to which they create happiness for humans and, as far as the nature of things permit, sentient beings. Put in bumper sticker terms, both the utilitarian and the neutral good advocate the greatest good for the greatest number.

This commitment to the greater good can present some potential problems. For the utilitarian, one classic problem is that what seems rather bad can have great utility. For example, Ursula K. Le Guin’s classic short story “The Ones Who Walk Away from Omelas” puts into literary form the question raised by William James:

Or if the hypothesis were offered us of a world in which Messrs. Fourier’s and Bellamy’s and Morris’s utopias should all be outdone, and millions kept permanently happy on the one simple condition that a certain lost soul on the far-off edge of things should lead a life of lonely torture, what except a specifical and independent sort of emotion can it be which would make us immediately feel, even though an impulse arose within us to clutch at the happiness so offered, how hideous a thing would be its enjoyment when deliberately accepted as the fruit of such a bargain?

In Le Guin’s tale, the splendor, health and happiness that is the land of Omelas depends on the suffering of a person locked away in a dungeon from all kindness. The inhabitants of Omelas know full well the price they pay and some, upon learning of the person, walk away. Hence the title.

For the utilitarian, this scenario would seem to be morally correct: a small disutility on the part of the person leads to a vast amount of utility. Or, in terms of goodness, the greater good seems to be well served.

Because the suffering of one person creates such an overabundance of goodness for others, a neutral good character might tolerate the situation. After all, benefiting some almost always comes at the cost of denying or even harming others. It is, however, also reasonable to consider that a neutral good person would find the situation morally unacceptable. Such a person might not free the sufferer because doing so would harm so many other people, but she might elect to walk away.

A chaotic good type, who is committed to liberty and freedom, would certainly oppose the imprisonment of the innocent person—even for the greater good. A lawful good type might face the same challenge as the neutral good type: the order and well-being of Omelas rests on the suffering of one person and this could be seen as a heroic sacrifice on the part of the sufferer. Lawful evil types would probably be fine with the scenario, although they would have some issues with the otherwise benevolent nature of Omelas. Truly subtle lawful evil types might delight in the situation and regard it as a magnificent case of self-delusion in which people think they are selecting the greater good but are merely choosing evil.

Neutral evil types would also be fine with it—provided that it was someone else in the dungeon. Chaotic evil types would not care about the sufferer, but would certainly seek to destroy Omelas. They might, ironically, try to do so by rescuing the sufferer and seeing to it that he is treated with kindness and compassion (thus breaking the conditions of Omelas’ exalted state).

 


Anyone Home?

Man coming out of coma. (Photo credit: Wikipedia)

As I tell my students, the metaphysical question of personal identity has important moral implications. One scenario I present is that of a human in what seems to be a persistent vegetative state. I say “human” rather than “person”, because the human body in question might no longer be a person. To use a common view, if a person is her soul and the soul has abandoned the shell, then the person is gone.

If the human is still a person, then it seems reasonable to believe that she has a different moral status than a mass of flesh that was once a person (or once served as the body of a person). This is not to say that a non-person human would have no moral status at all—I do not want to be interpreted as holding that view. Rather, my view is that personhood is a relevant factor in the morality of how an entity is treated.

To use a concrete example, consider a human in what seems to be a vegetative state. While the body is kept alive, people do not talk to the body and no attempt is made to entertain the body, such as playing music or audiobooks. If there is no person present or if there is a person present but she has no sensory access at all, then this treatment would seem to be acceptable—after all it would make no difference whether people talked to the body or not.

There is also the moral question of whether such a body should be kept alive—after all, if the person is gone, there would not seem to be a compelling reason to keep an empty shell alive. To use an extreme example, it would seem wrong to keep a headless body alive just because it can be kept alive. If the body is no longer a person (or no longer hosts a person), then this would be analogous to keeping the headless body alive.

But, if despite appearances, there is still a person present who is aware of what is going on around her, then the matter is significantly different. In this case, the person has been effectively isolated—which is certainly not good for a person.

In regards to keeping the body alive, if there is a person present, then the situation would be morally different. After all, the moral status of a person is different from that of a mass of merely living flesh. The moral challenge, then, is deciding what to do.

One option is, obviously enough, to treat all seemingly vegetative (as opposed to brain dead) bodies as if the person was still present. That is, the body would be accorded the moral status of a person and treated as such.

This is a morally safe option—it would presumably be better that some non-persons get treated as persons rather than risk persons being treated as non-persons. That said, it would still seem both useful and important to know.

One reason to know is purely practical: if people know that a person is present, then they would presumably be more inclined to take the effort to treat the person as a person. So, for example, if the family and medical staff know that Bill is still Bill and not just an empty shell, they would tend to be more diligent in treating Bill as a person.

Another reason to know is both practical and moral: should scenarios arise in which hard choices have to be made, knowing whether a person is present or not would be rather critical. That said, given that one might not know for sure that the body is not a person anymore it could be correct to keep treating the alleged shell as a person even when it seems likely that he is not. This brings up the obvious practical problem: how to tell when a person is present.

Most of the time we judge there is a person present based on appearance, using the assumption that a human is a person. Of course, there might be non-human people and there might be biological humans that are not people (headless bodies, for example). A somewhat more sophisticated approach is to use Descartes’s test: things that use true language are people. Descartes, being a smart person, did not limit language to speaking or writing—he included making signs of the sort used to communicate with the deaf. In a practical sense, getting an intelligent response to an inquiry can be seen as a sign that a person is present.

In the case of a body in an apparent vegetative state, applying this test is quite a challenge. After all, this state is marked by an inability to show awareness. In some cases, the apparent vegetative state is exactly what it appears to be. In other cases, a person might be in what is called “locked-in syndrome.” The person is conscious, but can be mistaken for being minimally conscious or in a vegetative state. Since the person cannot, typically, respond by giving an external sign, some other means is necessary.

One breakthrough in this area is due to Adrian M. Owen. Oversimplifying things considerably, he found that if a person is asked to visualize certain activities (playing tennis, for example), doing so will trigger different areas of the brain. This activity can be detected using the appropriate machines. So, a person can ask a question such as “did you go to college at Michigan State?” and request that the person visualize playing tennis for “yes” or visualize walking around her house for “no.” This method provides a way of determining that the person is still present with a reasonable degree of confidence. Naturally, a failure to respond would not prove that a person is not present—the person could still remain, yet be unable (or unwilling) to hear or respond.

One moral issue this method can help address is that of terminating life support. “Pulling the plug” on what might be a person without consent is, to say the least, morally problematic. If a person is still present and can be reached by Owen’s method, then this would allow the person to agree to or request that she be taken off life support. Naturally, there would be practical questions about the accuracy of the method, but this is distinct from the more abstract ethical issue.

It must be noted that the consent of the person would not automatically make termination morally acceptable—after all, there are moral objections to letting a person die in this manner even when the person is fully and clearly conscious. Once it is established that the method adequately shows consent (or lack of consent), the broader moral issue of the right to die would need to be addressed.

 


Neil deGrasse Tyson, Philosophy & Science


In March of 2014 popular astrophysicist and Cosmos host Neil deGrasse Tyson did a Nerdist Podcast. This did not garner much attention until May when some philosophers realized that Tyson was rather critical and dismissive of philosophy. As might be imagined, there was a response from the defenders of philosophy. Some critics went so far as to accuse him of being a philistine.

Tyson presents a not uncommon view of contemporary philosophy, namely that “asking deep questions” can cause a “pointless delay in your progress” in engaging “this whole big world of unknowns out there.” To avoid such pointless delays, Tyson advises scientists to respond to such questioners by saying, “I’m moving on, I’m leaving you behind, and you can’t even cross the street because you’re distracted by deep questions you’ve asked of yourself. I don’t have time for that.”

Since Tyson certainly seems to be a deep question sort of guy, it is tempting to consider that his remarks are not serious—that is, he is being sarcastic. Even if he is serious, it is also reasonable to consider that these remarks are off the cuff and might not represent his considered view of philosophy in general.

It is also worth considering that the claims made are his considered and serious position. After all, the idea that a scientist would regard philosophy as useless (or worse) is quite consistent with my own experiences in academics. For example, the politically fueled rise of STEM and the decline of the humanities has caused some in STEM to regard this situation as confirmation of their superior status, and on some occasions I have had to defuse conflicts instigated by STEM faculty making their views about the uselessness of non-STEM fields clear.

Whatever the case, the concern that the deep questioning of philosophy can cause pointless delays does actually have some merit and is well worth considering. After all, if philosophy is useless or even detrimental, then this would certainly be worth knowing.

The main bite of this criticism is that philosophical questioning is detrimental to progress: a scientist who gets caught in these deep questions, it seems, would be like a kayaker caught in a strong eddy: she would be spinning around and going nowhere rather than making progress. This concern does have significant practical merit. To use an analogy outside of science, consider a committee meeting aimed at determining the curriculum for state schools. This committee has an objective to achieve and asking questions is a reasonable way to begin. But imagine that people start raising deep questions about the meaning of terms such as “humanities” or “science” and become very interested in sorting out the semantics of various statements. This sort of sidetracking will result in a needlessly long meeting and little or no progress. After all, the goal is to determine the curriculum and deep questions will merely slow down progress towards this practical goal. Likewise, if a scientist is endeavoring to sort out the nature of the cosmos, deep questions can be a similar sort of trap: she will be asking ever deeper questions rather than gathering data and doing math to answer her less deep questions.

Philosophy, as Socrates showed by deploying his Socratic method, can endlessly generate deep questions. Questions such as “what is the nature of the universe?”, “what is time?”, “what is space?”, “what is good?” and so on. Also, as Socrates showed, for each answer given, philosophy can generate more questions. It is also often claimed that this shows that philosophy really has no answers since every alleged answer can be questioned or raises even more questions. Thus, philosophy seems to be rather bad for the scientist.

A key assumption seems to be that science is different from philosophy in at least one key way—while it raises questions, proper science focuses on questions that can be answered or, at the very least, gets down to the business of answering them and (eventually) abandons a question should it turn out to be a distracting deep question. Thus, science provides answers and makes progress. This, obviously enough, ties into another stock criticism of philosophy: philosophy makes no progress and is useless.

One rather obvious reason that philosophy is regarded as not making progress and as being useless is that when enough progress is made on a deep question, it is perceived as being a matter for science rather than philosophy. For example, ancient Greek philosophers, such as Democritus, speculated about the composition of the universe and its size (was it finite or infinite?) and these were considered deep philosophical questions. Even Newton considered himself a natural philosopher. He has, of course, been claimed by the scientists (many of whom conveniently overlook the role of God in his theories). These questions are now claimed by physicists, such as Tyson, who regard them as scientific rather than philosophical questions.

Thus, it is rather unfair to claim that philosophy does not solve problems or make progress, since whenever excellent progress is made, that area of inquiry is relabeled as science and no longer considered philosophy. However, the progress would obviously have been impossible without the deep questions that set people in search of answers and the work done by philosophers before the field was claimed as a science. To use an analogy, to claim that philosophy has made no progress or contributions would be on par with a student taking the work done by another, adding to it, claiming the whole as his own work and then deriding the other student as “useless.”

At this point, some might grudgingly concede that philosophy did make some valuable contributions in the past (perhaps on par with the contribution of the workers who dragged the marble for Michelangelo’s David), but insist that philosophy is now an eddy rather than the current of progress.

Interestingly enough, philosophy has been here before: back in the days of Socrates, the Sophists contended that philosophical speculation was valueless and that people should focus on getting things done, that is, on achieving success. Fortunately for contemporary science, philosophy survived and philosophers kept asking those deep questions that seemed so valueless at the time.

While philosophy’s day might be done, it seems worth considering that some of the deep, distracting philosophical questions now being asked are well worth pursuing, if only because they might lead to great things, much as Democritus’ deep questions led to the astrophysics that a fellow named Neil loves so much.

 


Why is the Universe the Way it is?


One of the fundamental questions shared by science, philosophy and theology is the question of why the universe is the way it is. Over the centuries, the answers have fallen into two broad camps. The first is that of teleology. This is the view that the universe is the way it is because it has a purpose, goal or end for which it aims. The second is the non-teleological camp, which is the denial of the teleological view. Members of this camp often embrace purposeless chance as the “reason” why things are as they are.

Both camps agree on many basic matters, such as the view that the universe seems to be finely tuned. Theorists vary a bit in their views on what a less finely tuned universe would be like: on some views, the universe would just be slightly different, while on other views small differences would have significant results, such as an uninhabitable universe. Because of this apparent fine-tuning, one main concern for philosophers and physicists is explaining why this is the case.

The dispute over this large question nicely mirrors the dispute over a smaller question, namely why living creatures are the way they are. The division into camps follows the same pattern: on one side is the broad camp inhabited by those who embrace teleology, and on the other side dwell those who reject it. Interestingly, it might be possible to give different types of answers to these questions. For example, the universe could have been created by a deity (a teleological universe) who decides to let natural selection rather than design sort out life forms (non-teleological). That said, the smaller question does provide some interesting ways to answer the larger question.

As noted above, the teleological camp is very broad. In the United States, perhaps the best known form of teleology is Christian creationism. This view answers both the large and the small question with God: He created the universe and its inhabitants. There are many other religious teleological views; the creation stories of various other cultures and faiths are examples of these. There are also non-religious views, among which the best known are probably those of Plato and Aristotle. For Plato, roughly put, the universe is the way it is because of the Forms (and behind them all is the Good). Aristotle did not put any god in charge of the universe, but he regarded reality as eminently teleological. Views that posit laws governing reality also seem, to some, to fall within the teleological camp. As such, the main division in the teleological camp tends to be between religious theories and non-religious theories.

Obviously enough, teleological accounts have largely fallen out of favor in the sciences—the big switch took place during the Modern era as philosophy and science transitioned away from Aristotle (and Plato) towards a more mechanistic and materialistic view of reality.

The non-teleological camp is at least as varied as the teleological camp, and it is just as old. The pre-Socratic Greek philosophers considered what would now be called natural selection, and the idea of a chance-based, purposeless universe is ancient.

One non-teleological way to answer the question of why the universe is the way it is would be to take an approach similar to Spinoza’s, only without God. This would be to claim that the universe is what it is as a matter of necessity: it could not be any different from what it is. However, this might be seen as unsatisfactory, since one can easily ask why it is necessarily the way it is.

The opposite approach is to reject necessity and embrace a random universe: it was just pure chance that the universe turned out as it did, and things could have been very different. So, the answer to the question of why the universe is the way it is would be blind chance. The universe plays dice with itself.

Another approach is to take the view that the universe is the way it is, and seems finely tuned, because it has “settled down” into a stable state. Crudely put, the universe worked things out without any guidance or purpose. To use an analogy, think of sticks and debris washed by a flood into a stable “structure.” The universe could be like that, with the flood being the big bang or whatever got it going.

One variant on this would be to claim that the universe contains distinct zones; the zone we are in happened to be “naturally selected” to be stable and hospitable to life. Other zones could be rather different, perhaps so different that they are beyond our epistemic abilities. Or perhaps these zones “died,” which allows an interesting possibility for fiction about the ghosts of dead zones haunting the cosmic night. Perhaps the fossils of dead universes drift around us, awaiting their discovery.

Another option is to expand things from a single universe to a multiverse. This allows a rather close comparison to natural selection: in place of a multitude of species, there is a multitude of universes. Some “survive” the selection while others do not. Just as we are supposed to be a species that has so far survived the natural selection of evolution, we live in a universe that has so far survived cosmic selection. If the model of evolution and natural selection is intellectually satisfying in biology, it would seem reasonable to find cosmic selection intellectually satisfying as well, although it would be radically different from natural selection in many obvious ways.

 
