
Are E-Athletes Really Athletes?


While professional video game competitions have been around for some time, it is only fairly recently that competitive gamers have been dubbed “e-athletes.” Some colleges now offer athletic scholarships to e-athletes and field sports teams that compete in games like the famous League of Legends. As with some other college sports, these e-athletes can go pro and earn large paychecks for playing video games competitively.

While regarding video games as sports and gamers as e-athletes can be seen as harmless, there are some grounds for believing that these designations are not accurate. Intuitively, playing a video game, even competitively, is not a sport, and working a keyboard or controller (even very well) does not seem very athletic. Since I am both an athlete (college varsity in track and cross country, and I still compete in races) and a gamer (I started with Pong and currently play Destiny), I have some insight into this matter.

Before properly starting the game, there is the question of why this matter is even worth considering. After all, why should anyone care whether e-athletes are considered athletes or not? Why would it matter whether video game competitions are sports or not? One reason (which might not be a good one) is a matter of pride. Athletes tend to regard being athletes as a point of pride and see it as an accomplishment that sets them apart from others. As such, they tend to be concerned about what counts as being an athlete. This is, some would say, supposed to be an earned title and not one to be appropriated by just anyone.

To use an obvious analogy, consider the matter of being a musician. Like athletes, musicians often take pride in being set apart from others on the basis of their defining activity. It matters to them who is and is not considered a musician. Sticking with the analogy, to many athletes the idea that a video gamer who plays League of Legends or Starcraft is an athlete would be comparable to saying to a musician that someone who plays Rock Band or Guitar Hero is a musician just like them.

Naturally, it could be argued that this is all just a matter of vanity and that such distinctions have no real significance. If e-athletes want to think of themselves in the same category as Jesse Owens, or if people who play music video games want to think they keep company with Hendrix or Clapton, then so be it.

While that sort of egalitarianism has a certain appeal, there is also the matter of the usefulness of categories. On the face of it, the category of athlete does seem to be a useful and meaningful category, just as the category of musician also seems useful and meaningful. As such, it seems worth maintaining some distinctions in regards to these classifications.

Turning back to the matter of whether or not e-athletes are athletes, the obvious point of concern is determining the conditions under which a person is (and is not) an athlete. This will, I believe, prove to be far trickier to sort out than it would first appear.

One obvious starting point is the matter of competition. Athletes typically compete, and competitive video games obviously involve competition. However, being involved in competition does not appear to be a necessary or sufficient condition for being an athlete. After all, there are many competitions (such as spelling bees and art shows) that are not athletic in nature. Also, there are people who clearly seem to be athletes who do not compete. For example, I have known, and still know, many runners who do not compete in races, although they run many miles. There are also people who practice martial arts, bike, swim and so on and never compete. However, they seem to be athletes. As such, this factor does not settle the matter. The discussion does, however, seem to indicate that being an athlete is a physical sort of thing, which raises another factor.

When distinguishing an athlete from, for example, a mathlete or chess player, the key difference seems to lie in the nature of the activity. Athletics is primarily physical in nature (although the mental is very significant) while being something like a mathlete or chess player is primarily mental. This seems to suggest a legitimate ground of distinction, though this must be discussed further.

Those who claim that video gaming is a sport and that e-athletes are athletes tend to focus on the similarities between sports and video games. One similarity is that both require certain skills and abilities.

Competitive video gaming clearly requires physical skills and abilities. Gamers need good reflexes, the ability to make tactical or strategic judgments and so on. These are skills that are also possessed by paradigm cases of athletes, such as tennis players and baseball players. However, they are also skills and abilities that are possessed by non-athletes. For example, these skills are used by people who drive, pilot planes, and operate heavy machinery. Intuitively, I am not an athlete because I am able to drive my truck competently, so being able to play Destiny competently should not qualify me as an athlete.

Specifying the exact difference is rather difficult, but a reasonable suggestion is that in athletics the application of skill involves a more substantial engagement of the physical body than does driving a car or playing a video game. A nice illustration is comparing a tennis video game with the real thing. The video game requires many of the reflex skills of real tennis, but a key difference is that the real player is fully engaged in body rather than merely pushing buttons. That is, the real tennis player has to run, swing and backpedal for real. The video game player has all this done for her at the push of a button. This seems to be an important difference.

To use an analogy, consider the difference between a person who creates a drawing from a photo and someone who merely uses a Photoshop filter to transform a photo into what looks like a drawing. One person is acting as an artist; the other is just pushing a button.

It might be objected that it is the skill that makes video gamers athletes. In reply, operating complex industrial equipment, programming a computer or other such things also require skills, but I would not call a programmer an athlete. Nor would I call a surgeon an athlete, despite the skill required and the challenges she faces trying to save lives.

Sticking with gaming, playing a board game like Star Fleet Battles or classic tabletop war games also requires skills and involves competition. Some games even require fast reflexes. However, when I am pushing a plastic Federation heavy cruiser around a map and rolling dice to hit Klingon D7 battle cruisers with imaginary photon torpedoes, it seems rather evident that this does not make me an athlete. Even if I am really good at it and I am competing in a tournament. Likewise, if I am pushing around a virtual warrior in a video game competition, I am not an athlete because of this. I’m a gamer.

This is not to look down on gaming—after all, I am a gamer and I take my gaming almost as seriously as I do my running. Rather, it is just to argue what seems obvious: video gaming is not an athletic activity and video gamers are not athletes. They are gamers and there seems to be no reason to come up with a new category, that of e-athlete. I do not, however, have any issue with people getting scholarships for being college gamers. I would have loved to have received a D&D or Call of Cthulhu scholarship when I went to college. I’d have worn my letter jacket with pride, too.

 


Evidence: a love-story

Philosophers! I have a proposition to put to you. Nowadays, we would-be rational members of the public, the intellectually-minded, many citizens, are too in love with the concept of evidence.
Perhaps this surprises you. Maybe you’re thinking: if only! If only enough attention were paid to the massive evidence that dangerous climate change is happening, and that it’s human-triggered. Or: if only the epidemiological evidence marshalled by Wilkinson and Pickett — that more inequality makes society worse in almost every conceivable way — were acted upon.
But actually, even in cases like these, I think that my proposition is still true. Take human-triggered climate-change. Yes, the evidence is strong; but a ‘sceptic’ can always ask for more/better evidence, and thus delay action. There is something stronger than evidence: the concept of precaution.
A sceptic, unconvinced by climate-models, ought to be more cautious than the rest of us about bunging unprecedented amounts of potential-pollutants into the atmosphere! For any uncertainty over the evidence increases our exposure to risk, our fragility.
The climate-sceptics exploit any scientific uncertainty to seek to undermine our confidence in the evidence at our disposal. So far as it goes, this move is correct. But: our exposure to risk is higher, the greater the uncertainty in the science. Uncertainty undermines evidence, but it doesn’t undermine the need for precaution: it underscores it! For remember how high the stakes are.
Think back to the great precedent for the climate issue: smoking and cancer. For decades, tobacco companies prevaricated so as to delay action against the epidemic of lung cancer. How? They demanded incontrovertible evidence that smoking caused cancer, and they claimed that until we had such evidence there was nothing to be said against smoking, health-wise. They deliberately evaded the precautionary principle, which would have warned that, in the absence of such evidence, it was still unsafe to pump your lungs full of smoke and associated chemicals, day in day out, in a manner without natural precedent.
We ought to have relied more on precaution and less on evidence in relation to the smoking-cancer connection. The same goes for climate. (Only: the stakes are much higher, and so the case for precaution is much stronger still.)
And for inequality: Wilkinson and Pickett are merely confirming what we all already ought to have known anyway: that it’s reckless to raise inequality to unprecedented levels, and so to fragilise society itself (for how can one have a society at all, when levels of trust and of commingling are ever-decreasing?).
The same goes for advertising targeted at children: It’s outrageous to demand evidence that dumping potential-toxins into the mental environment actually is dangerous; we just need to exercise precautious care with regard to our children’s fragile, malleable minds.
And for geo-engineering: There’s no evidence at all that geoengineering does any harm, because (thankfully!) it hasn’t been carried out yet: in this case we must be precautious, or risk nemesis, for by the time any evidence was in, it would be too late.
The same goes for GM crops: There is little evidence of harm, to date, from GM, but evidence is the wrong place to look (http://blog.talkingphilosophy.com/?p=8071): one ought to focus on the new uncertainties and untold exposures to grave risk that are inevitably consequent upon taking genes from fish and putting them into tomatoes, or on creating ‘terminator’ genes, etc. The absence of evidence that GM is harmful must not be confused with evidence of the absence of potential harm from GM. We lack the latter, and thus we are direly exposed to the risk of what my philosophical colleague Nassim Taleb (see http://www.fooledbyrandomness.com/pp2.pdf for our joint work in this area) calls a ‘black swan’ event: a massive known or even unknown unknown.
Our love-affair with science, that I’ve criticised previously on this blog (see e.g. http://blog.talkingphilosophy.com/?p=8071 ), is at the root of this. Science-worship, scientism, is responsible for the extreme privileging of evidence over other things that are often even more important. So: let’s end our irrational, dogmatic love-affair with evidence. Yes, being ‘evidence-based’ is usually (though not always!) better than nothing. But there’s usually, when the stakes are highest, something better still: being precautious. (And what’s more: being precautious makes it easier to win, and quicker.)
To end with, here are a couple of my favourite quotes from Wittgenstein, on topic:
1) Science: enrichment and impoverishment. The one method elbows all others aside. Compared with this they all seem paltry, preliminary stages at best. [Wittgenstein, Culture and Value p.69]
2) “Our craving for generality has [as one key] source … our preoccupation with the method of science. I mean the method of reducing the explanation of natural phenomena to the smallest possible number of primitive natural laws; and, in mathematics, of unifying the treatment of different topics by using a generalization. Philosophers constantly see the method of science before their eyes, and are irresistibly tempted to ask and answer in the way science does. This tendency is the real source of metaphysics, and leads the philosopher into complete darkness. I want to say here that it can never be our job to reduce anything to anything, or to explain anything. Philosophy really is ‘purely descriptive.’” – Wittgenstein, Blue and Brown Books, p. 23.
I’ll be elaborating on these quotes, and on the case made here, in opening and closing plenaries at a Conference in Oxford this Saturday, in case anyone happens to be in the area… http://www.stx.ox.ac.uk/happ/events/wittgenstein-and-physics-one-day-conference
Meanwhile, thanks for your attention…

The Teenage Mind & Decision Making


One of the stereotypes regarding teenagers is that they are poor decision makers and engage in risky behavior. This stereotype is usually explained in terms of the teenage brain (or mind) being immature and lacking the reasoning abilities of adults. Of course, adults often engage in poor decision-making and risky behavior.

Interestingly enough, there is research that shows teenagers use basically the same sort of reasoning as adults and that they even overestimate risks (that is, regard something as more risky than it is). So, if kids use the same processes as adults and also overestimate risk, then what needs to be determined is how teenagers differ, in general, from adults.

Currently, one plausible hypothesis is that teenagers differ from adults in terms of how they assess the value of a reward. The main difference, or so the theory goes, is that teenagers place higher value on rewards (at least certain rewards) than adults. If this is correct, it certainly makes sense that teenagers are more willing than adults to engage in risk taking. After all, the rationality of taking a risk is typically a matter of weighing the (perceived) risk against the (perceived) value of the reward. So, a teenager who places higher value on a reward than an adult would be acting rationally (to a degree) if she were willing to take more risk to achieve that reward.
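To make the weighing concrete, here is a minimal sketch in Python. All the numbers are hypothetical, invented purely for illustration (nothing here comes from the research mentioned above); the point is only that, with identical perceived odds and costs, placing a higher subjective value on the reward can flip the verdict from “not worth it” to “worth it.”

```python
# A minimal sketch of weighing (perceived) risk against (perceived) reward.
# All values are hypothetical, chosen only for illustration.

def expected_value(p_success: float, reward: float, cost_of_failure: float) -> float:
    """Expected value of a risky act: reward and cost, each weighted by probability."""
    return p_success * reward - (1 - p_success) * cost_of_failure

p_success = 0.6          # same perceived odds of success for both
cost_of_failure = 50.0   # same perceived cost of failure for both

teen_reward = 100.0      # hypothetical: the teen values the reward more highly
adult_reward = 30.0

print(expected_value(p_success, teen_reward, cost_of_failure))   # 40.0: risk looks worth taking
print(expected_value(p_success, adult_reward, cost_of_failure))  # -2.0: risk looks not worth taking
```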

Obviously enough, adults also vary in their willingness to take risks and some of this difference is, presumably, a matter of the value the adults place on the rewards relative to the risks. So, for example, if Sam values the enjoyment of sex more than Sally, then Sam will (somewhat) rationally accept more risks in regards to sex than Sally. Assuming that teenagers generally value rewards more than adults do, then the greater risk taking behavior of teens relative to adults makes considerable sense.

It might be wondered why teenagers place more value on rewards relative to adults. One current theory is based on the workings of the brain. On this view, the sensitivity of the human brain to dopamine and oxytocin peaks during the teenage years. Dopamine is a neurotransmitter that is supposed to trigger the “reward” mechanisms of the brain. Oxytocin is another neurotransmitter, one that is also linked with the “reward” mechanisms as well as social activity. Assuming that the teenage brain is more sensitive to the reward-triggering chemicals, then it makes sense that teenagers would place more value on rewards. This is because they do, in fact, get a greater reward than adults. Or, more accurately, they feel more rewarded. This, of course, might be one and the same thing—perhaps the value of a reward is a matter of how rewarded a person feels. This does raise an interesting subject, namely whether the value of a reward is a subjective or objective matter.

Adults are often critical of what they regard as irrationally risky behavior by teens. While my teen years are well behind me, I have looked back on some of my decisions that seemed like good ideas at the time. They really did seem like good ideas, yet my adult assessment is that they were not good decisions. However, I am weighing these decisions in terms of my adult perspective and in terms of the later consequences of these actions. I also must consider that the rewards that I felt in the past are now naught but faded memories. To use the obvious analogy, it is rather like eating an entire cake. At the time, the sugar rush and taste are quite rewarding and it seems like a good idea while one is eating the cake. But once the sugar rush gives way to the sugar crash and the cake, as my mother would say, “went right to the hips”, then the assessment might be rather different. The food analogy is especially apt: as you might well recall from your own youth, candy and other junk food tasted so good then. Now it is mostly just…junk. This also raises an interesting subject worthy of additional exploration, namely the assessment of value over time.

Going back to the cake, eating the whole thing was enjoyable and seemed like a great idea at the time. Yes, I have eaten an entire cake. With ice cream. But, in my defense, I used to run 95-100 miles per week. Looking back from the perspective of my older self, that seems to have been a bad idea and I certainly would not do that (or really enjoy doing so) today. But, does this change of perspective show that it was a poor choice at the time? I am tempted to think that, at the time, it was a good choice for the kid I was. But, my adult self now judges my kid self rather harshly and perhaps unfairly. After all, there does seem to be considerable relativity to value and it seems to be mere prejudice to say that my current evaluation should be automatically taken as being better than the evaluations of the past.

 


Determinism, Order & Chaos


As science and philosophy explained ever more of the natural world in the Modern Era, there arose the philosophical idea of strict determinism. Strict determinism, as often presented, includes both metaphysical and epistemic aspects. In regards to the metaphysics, it is the view that each event follows from previous events by necessity. In negative terms, it is a denial of both chance and free will. A religious variant on this is predestination, which is the notion that all events are planned and set by a supernatural agency (typically God). The epistemic aspect is grounded in the metaphysics: if each event follows from other events by necessity, then someone who knew all the relevant facts about the state of a system at a time, and who had enough intellectual capability, could correctly predict the future of that system.

Philosophers and scientists who are metaphysical determinists typically claim that the world seems undetermined to us because of our epistemic failings. In short, we believe in choice or chance because we are unable to always predict what will occur. But, for the determinist, this is a matter of ignorance and not metaphysics. For those who believe in choice or chance, our inability to predict is taken as being the result of a universe in which choice or chance is real. That is, we cannot always predict because the metaphysical nature of the universe is such that it is unpredictable. Because of choice or chance, what follows from one event is not a matter of necessity.

One rather obvious problem for choosing between determinism and its alternatives is that given our limited epistemic abilities, a deterministic universe seems the same to us as a non-deterministic universe. If the universe is deterministic, our limited epistemic abilities mean that we often make predictions that turn out to be wrong. If the universe is not deterministic, our limited epistemic abilities and the non-deterministic nature of the universe mean that we often make predictions that are in error. As such, the fact that we make prediction errors is consistent with deterministic and non-deterministic universes.

It can be argued that as we get better and better at predicting we will be able to get a better picture of the nature of the universe. However, until we reach a state of omniscience we will not know whether our errors are purely epistemic (events are unpredictable because we are not perfect predictors) or are the result of metaphysics (that is, the events are unpredictable because of choice or chance).

Interestingly, one feature of reality that often leads thinkers to reject strict determinism is what could be called chaos. To use a concrete example, consider the motion of the planets in our solar system. In the past, the motion of the planets was presented as a sign of the order of the universe—a clockwork solar system in God’s clockwork universe. While the planets might seem to move like clockwork, Newton realized that the planets’ gravity affects one another’s motion, but he also realized that calculating the interactions was beyond his ability. In the face of problems in his physics, Newton famously used God to fill in the gaps. With the development of powerful computers, scientists have been able to model the movements of the planets, and the generally accepted view is that they are not parts of a deterministic divine clock. To be less poetical, the view is that chaos seems to be a factor. For example, some scientists believe that the gas giant Jupiter’s gravity might perturb Mercury’s orbit enough that it collides with Venus or Earth. This certainly suggests that the solar system is not a clockwork machine of perfect order. Because of this sort of thing (which occurs at all levels in the world) some thinkers take the universe to include chaos and infer from the lack of perfect order that strict determinism is false. While this inference is certainly tempting, it is not as solid as some might think.

It is, of course, reasonable to infer that the universe lacks a strict and eternal order from such things as the chaotic behavior of the planets. However, strict determinism is not the same thing as strict order. Strict order is a metaphysical notion that a system will work in the same way, without any variation or change, for as long as it exists. The idea of an eternally ordered clockwork universe is an excellent example of this sort of system: it works like a perfect clock, each part relentlessly following its path without deviation. While a deterministic system would certainly be consistent with such an orderly system, determinism is not the same thing as strict order. After all, to accept determinism is to accept that each event follows by necessity from previous events. This is consistent with a system that changes over time and changes in ways that seem chaotic.

Returning to the example of the solar system, suppose that Jupiter’s gravity will cause Mercury’s orbit to change enough so that it hits the earth. This is entirely consistent with that event being necessarily determined by past events such that things could not have been different. To use an analogy, it is like a clockwork machine built with a defect that will inevitably break the machine. Things cannot be otherwise, yet to those ignorant of the defect, the machine will seem to fall into chaos. However, if one knew the defect and had the capacity to process the data, then this breakdown would be completely predictable. To use another analogy, it is like a scripted performance of madness by an actor: it might seem chaotic, but the script determines it. That is, it merely seems chaotic because of our ignorance. As such, the appearance of chaos does not disprove strict determinism, because determinism is not the same thing as changelessness.
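A small illustration of this point (my own, not from the post): the logistic map below is strictly deterministic, since each state follows from the previous one by necessity, yet a tiny bit of ignorance about the starting state ruins long-range prediction, so the system looks chaotic to a limited observer.

```python
# The logistic map: a strictly deterministic rule that can look chaotic.
# Each state follows from the previous one by necessity (x -> r*x*(1-x)),
# yet trajectories from nearly identical starting points quickly diverge.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the deterministic rule x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Determinism: the same start state always yields the same future.
assert logistic_trajectory(0.2, 30) == logistic_trajectory(0.2, 30)

# Apparent chaos: a tiny "epistemic" error in the start state wrecks prediction.
a = logistic_trajectory(0.200000, 30)
b = logistic_trajectory(0.200001, 30)
print(abs(a[-1] - b[-1]))  # the two "predictions" differ wildly by step 30
```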

 


The Corruption of Academic Research

Synthetic insulin crystals synthesized using recombinant DNA technology (Photo credit: Wikipedia)

STEM (Science, Technology, Engineering and Mathematics) fields are supposed to be the new darlings of the academy, so I was slightly surprised when I heard an NPR piece on how researchers are struggling for funding. After all, even the politicians devoted to cutting education funding have spoken glowingly of STEM. My own university recently split the venerable College of Arts & Sciences, presumably to allow more money to flow to STEM without risking that professors in the soft sciences and the humanities might inadvertently get some of the cash. As such I was somewhat curious about this problem, but mostly attributed it to a side-effect of the general trend of defunding public education. Then I read “Bad Science” by Llewellyn Hinkes-Jones. This article was originally published in issue 14, 2014 of Jacobin Magazine. I will focus on the ethical aspects of the matters Hinkes-Jones discussed in this article, which is centered on the Bayh-Dole Act.

The Bayh-Dole Act was passed in 1980 and was presented as having very laudable goals. Before the act was passed, universities were limited in regards to what they could do with the fruits of their scientific research. After the act was passed, schools could sell their patents or engage in exclusive licensing deals with private companies (that is, granting monopolies on the patents). Supporters asserted the act would be beneficial in three main ways. The first is that it would secure more private funding for universities, because corporations would provide money in return for the patents or exclusive licenses. The second is that it would bring the power of the profit motive to public research: since researchers and schools could profit, they would be more motivated to engage in research. The third is that the private sector would be motivated to implement the research in the form of profitable products.

On the face of it, the act was a great success. Researchers at Columbia University patented the process of DNA cotransformation and added millions to the coffers of the school. A patent on recombinant DNA earned Stanford over $200 million. Companies, in turn, profited greatly. For example, researchers at the University of Utah created Myriad Genetics and took ownership of their patent on the BRCA1 and BRCA2 tests for breast cancer. The current cost of the test is $4,000 (in comparison, a full sequencing of human DNA costs $1,000) and the company has a monopoly on the test.

Given these apparent benefits, it is easy enough to advance a utilitarian argument in favor of the act and its consequences. After all, if it allows universities to fund their research and corporations to make profits, then its benefits would seem to be considerable, thus making it morally good. However, a proper calculation requires considering the harmful consequences of the act.

The first harm is that the current situation imposes a triple cost on the public. One cost is that the taxpayers fund the schools that conduct the research. The next is that, thanks to the monopolies on patents, the taxpayers have to pay whatever prices the companies wish to charge, such as the $4,000 for a test that should cost far less. In an actual free market there would be competition and lower prices—but what we have is a state-controlled and regulated market. Ironically, those who cry the loudest against government regulation and for the value of competition are quite silent on this point. The final cost of the three is that the corporations can typically write off their contributions on their taxes, thus leaving other taxpayers to pick up the slack. These costs seem to be clear harms and do much to offset the benefits—at least when looked at from the perspective of the whole society rather than just those reaping the benefits.

The second harm is that, ironically, this system makes research more expensive. Since processes, strains of bacteria and many other things needed for research are protected by monopolistic patents, researchers who do not hold these patents have to pay to use them. The costs are usually quite high, so while the patent holders benefit, research in general suffers. In order to pay for these things, researchers need more funding, thus either imposing more costs on taxpayers or forcing researchers to turn to private funding (which will typically result in more monopolistic patents).

The third harm is the corruption of researchers. Researchers are literally paid to put their names on positive journal articles that advance the interests of corporations. They are also paid to promote drugs and other products while presenting themselves as researchers rather than paid promoters. If the researchers are not simply bought, the money is clearly a biasing factor. Since we are depending on these researchers to inform the public and policy makers about these products, this is clearly a problem and presents a clear danger to the public good.

A fourth harm is that even the honest researchers who have not been bought are under great pressure to produce “sexy science” that will attract grants and funding. While it has always been “publish or perish” in modern academics, the competition is even fiercer in the sciences now. As such, researchers are under great pressure to crank out publications. The effect has been rather negative, as evidenced by the fact that the percentage of scientific articles retracted for fraud is ten times what it was in 1975. Once-lauded studies and theories, such as those driving the promotion of antioxidants and omega-3, have been shown to be riddled with inaccuracies. Far from driving advances in science, the act has served as an engine of corruption, fraud and bad science. This would be bad enough, but there is also the impact on a misled and misinformed public. I must admit that I fell for the antioxidant and omega-3 “research”—I modified my diet to include more antioxidants and omega-3. While this bad science does get debunked, the debunking takes a long time and most people never hear about it. For example, how many people know that the antioxidant and omega-3 “research” is flawed, and how many still pop omega-3 “fish oil pills” and drink “antioxidant teas”?

A fifth harm is that universities have rushed to cash in on research, driven by the success of the schools that have managed to score profitable patents. However, setting up research labs aimed at creating million-dollar patents is incredibly expensive. In most cases the investment will not yield the hoped-for returns, thus leaving many schools with considerable expenses and little revenue.

To help lower costs, schools have turned to employing adjuncts to do the teaching and research, thus creating a situation in which highly educated but very low-paid professionals are toiling away to secure millions for the star researchers, the administrators and their corporate benefactors. It is, in effect, sweat-shop science.

This also shows another dark side to the push for STEM: as the number of STEM graduates increase, the value of the degrees will decrease and wages for the workers will continue to fall. This is great for the elite, but terrible for those hoping that a STEM degree will mean a good job and a bright future.

These harms would seem to outweigh the alleged benefits of the act, thus indicating that it is morally wrong. Naturally, it can be countered that the costs are worth it. After all, one might argue, the incredible advances in science since 1980 have been driven by the profit motive and this has been beneficial overall. Without the profit motive, the research might have been conducted, but most of the discoveries would have been left on the shelves. The easy and obvious response is to point to all the advances that occurred due to public university research prior to 1980, as well as the research that began before then and came to fruition afterwards.

While solving this problem is a complex matter, there seem to be some easy and obvious steps. The first would be to restore public funding of state schools. In the past, the publicly funded universities drove America’s worldwide dominance in research and helped fuel massive economic growth while also contributing to the public good. The second would be replacing the Bayh-Dole Act with an act that would allow universities to benefit from the research, but prevent the licensing monopolies that have proven so damaging. Naturally, this would not eliminate patents but would restore competition to what is supposed to be a competitive free market by eliminating the creation of monopolies from public university research. The folks who complain about the state regulating business and who praise the competitive free market will surely get behind this proposal.

It might also be objected that the inability to profit massively from research will be a disincentive. The easy and obvious reply is that people conduct research and teach with great passion for very little financial compensation. The folks that run universities and corporations know this—after all, they pay such people very little yet still often get exceptional work. True, there are some people who are solely motivated by profit—but those are typically the folks who are making the massive profit rather than doing the actual research and work that makes it all possible.

 


Factions & Fallacies


In general, human beings readily commit to factions and then engage in very predictable behavior: they regard their own factions as right, good and truthful while casting opposing factions as wrong, evil and deceitful. While the best known factions tend to be political or religious, people can form factions around almost anything, ranging from sports teams to video game consoles.

While there can be rational reasons to form and support a faction, factionalism tends to be fed and watered by cognitive biases and fallacies. The core cognitive bias of factionalism is what is commonly known as in-group bias. This is the psychological tendency to easily form negative views of those outside of the faction. For example, Democrats often regard Republicans in negative terms, casting them as uncaring, sexist, racist and fixated on money. In turn, Republicans typically look at Democrats in negative terms and regard them as fixated on abortion, obsessed with race, eager to take from the rich, and desiring to punish success. This obviously occurs outside of politics as well, with competing religious groups regarding each other as heretics or infidels. It even extends to games and sports, with the battle of #gamergate serving as a nice illustration.

The flip side of this bias is that members of a faction regard their fellows and themselves in a positive light and are thus inclined to attribute to themselves positive qualities. For example, Democrats see themselves as caring about the environment and being concerned about social good. As another example, Tea Party folks cast themselves as true Americans who get what the founding fathers really meant.

This bias is often expressed in terms of, and fuelled by, stereotypes. For example, critics of the sexist aspects of gaming will make use of the worst stereotypes of male gamers (dateless, pale misogynists who spew their rage around a mouthful of Cheetos). As another example, Democrats will sometimes cast the rich as uncaring and out-of-touch plutocrats. These stereotypes are sometimes taken to the extreme of demonizing: presenting the other faction’s members as not merely wrong or bad but evil in the extreme.

Such stereotypes are easy to accept and many are based on another bias, known as the fundamental attribution error. This is a psychological tendency to fail to realize that the behavior of other people is as much limited by circumstances as our behavior would be if we were in their shoes. For example, a person who was born into a well-off family and enjoyed many advantages in life might fail to realize the challenges faced by people who were not so lucky in their birth. Because of this, she might demonize those who are unsuccessful and attribute their failure to pure laziness.

Factionalism is also strengthened by various common fallacies. The most obvious of these is the appeal to group identity. This fallacy occurs when a person accepts her pride in being in a group as evidence that a claim is true. Roughly put, a person believes it because her faction accepts it as true. The claim might actually be true; the mistake is that the basis of the belief is not rational. For example, a devoted environmentalist might believe in climate change because of her membership in that faction rather than on the basis of evidence (which actually does show that climate change is occurring). This method of belief “protects” group members from evidence and arguments because such beliefs are based on group identity rather than evidence and arguments. While a person can overcome this fallacy, faction-based beliefs tend to change only when the faction changes or the person leaves the faction.

The above-mentioned biases also tend to lean people towards fallacious reasoning. The negative biases tend to motivate people to accept straw man reasoning, which occurs when a person simply ignores another person’s actual position and substitutes a distorted, exaggerated or misrepresented version of that position. Politicians routinely make straw men out of the views they oppose, and their faction members typically embrace these. The negative biases also make ad hominem fallacies common. An ad hominem is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of, or the person presenting, the claim or argument. Typically, this fallacy involves two steps. First, an attack is made against the character of the person making the claim, her circumstances, or her actions (or the character, circumstances, or actions of the person reporting the claim). Second, this attack is taken to be evidence against the claim or argument the person in question is making (or presenting). For example, opponents of a feminist critic of gaming might reject her claims by claiming that she is only engaged in the criticism so as to become famous and make money. While it might be true that she is doing just that, this does not disprove her claims. The guilt by association fallacy, in which a person rejects a claim simply because it is pointed out that people she dislikes accept the claim, both arises from and contributes to factionalism.

The negative views and stereotypes are also often fed by fallacies that involve poor generalizations. One is misleading vividness, a fallacy in which a very small number of particularly dramatic events are taken to outweigh a significant amount of statistical evidence. For example, a person in a faction holding that gamers are violent misogynists might point to the recent death threats against a famous critic of sexism in games as evidence that most gamers are violent misogynists. Misleading vividness is, of course, closely related to hasty generalization, a fallacy in which a person draws a conclusion about a population based on a sample that is not large enough to justify that conclusion. For example, a Democrat might believe that all corporations are bad based on the behavior of BP and Wal-Mart. Biased generalizations also occur; this fallacy is committed when a person draws a conclusion about a population based on a sample that is biased or prejudiced in some manner. This tends to be fed by the confirmation bias—the tendency people have to seek and accept evidence for their view while avoiding or ignoring evidence against it. For example, a person might base his view that the poor want free stuff for nothing on visits to web sites featuring YouTube videos selected to show poor people expressing that view.

The positive biases also contribute to fallacious reasoning, often taking the form of a positive ad hominem. A positive ad hominem occurs when a claim is accepted on the basis of some irrelevant fact about the author or person presenting the claim or argument. Typically, this fallacy involves two steps. First, something positive (but irrelevant) is noted about the character of the person making the claim, her circumstances, or her actions. Second, this is taken to be evidence for the claim in question. For example, a Democrat might accept what Bill Clinton says as true, just because he really likes Bill.

Not surprisingly, factionalism is also supported by factional variations on the appeal to belief (it is true/right because my faction believes it is so), the appeal to common practice (it is right because my faction does it), and the appeal to tradition (it is right because my faction has “always done this”).

Factionalism is both fed by and contributes to such biases and poor reasoning. This is not to say that group membership is a bad thing, just that it is wise to be on guard against the corrupting influence of factionalism.


Lessons from Gaming #2: Random Universe

Call of Cthulhu (role-playing game) (Photo credit: Wikipedia)

My experiences as a tabletop and video gamer have taught me numerous lessons that are applicable to the real world (assuming there is such a thing). One key skill in getting about in reality is the ability to model reality. Roughly put, this is the ability to get how things work and thus make reasonably accurate predictions. This ability is rather useful: getting how things work is a big step on the road to success.

Many games, such as Call of Cthulhu, D&D, Pathfinder and Star Fleet Battles make extensive use of dice to model the vagaries of reality. For example, if your Call of Cthulhu character were trying to avoid being spotted by the cultists of Hastur as she spies on them, you would need to roll under your Sneak skill on percentile dice. As another example, if your D-7 battle cruiser were firing phasers and disruptors at a Kzinti strike cruiser, you would roll dice and consult various charts to see what happened. Video games also include the digital equivalent of dice. For example, if you are playing World of Warcraft, the damage done by a spell or a weapon will be random.
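For those unfamiliar with the mechanic, here is a minimal sketch of the percentile “roll under your skill” check in Python (the real Call of Cthulhu rules have more wrinkles; this is just the core idea):

```python
# A bare-bones "roll under your skill on percentile dice" check.
import random

def skill_check(skill):
    """Roll d100; succeed if the roll is at or under the skill rating."""
    return random.randint(1, 100) <= skill

# A character with Sneak 65 slips past the cultists about 65% of the time.
trials = 100_000
successes = sum(skill_check(65) for _ in range(trials))
print(successes / trials)  # roughly 0.65
```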

Being a gamer, it is natural for me to look at reality as also being random—after all, if a random model (gaming system) nicely fits aspects of reality, then that suggests the model has things right. As such, I tend to think of this as being a random universe in which God (or whatever) plays dice with us.

Naturally, I do not know if the universe is random (contains elements of chance). After all, we tend to attribute chance to the unpredictable, but this unpredictability might be a matter of ignorance rather than chance. After all, the fact that we do not know what will happen does not entail that it is a matter of chance.
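Software “dice” illustrate this possibility nicely: a video game’s random number generator is a deterministic algorithm whose output merely looks like chance to anyone ignorant of its seed. A quick Python demonstration:

```python
# Pseudorandom "dice" are deterministic: same seed, same rolls.
# The apparent chance is just ignorance of the underlying state.
import random

rng1 = random.Random(42)
rng2 = random.Random(42)

rolls1 = [rng1.randint(1, 20) for _ in range(5)]
rolls2 = [rng2.randint(1, 20) for _ in range(5)]

print(rolls1)             # looks random...
print(rolls1 == rolls2)   # True: the "rolls" were fully determined by the seed
```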

People also seem to believe in chance because they think things could have been different: the die roll might have been a 1 rather than a 20, or I might have won the lottery rather than not. However, even if things could have been different, it does not follow that chance is real. After all, chance is not the only thing that could make a difference. Also, there is the rather obvious question of proving that things could have been different. This would seem to be impossible: while it might be believed that conditions could be recreated perfectly, one factor can never be duplicated, namely time. Recreating an event will be a recreation. If the die comes up 20 on the first roll and 1 on the second, this does not show that it could have been a 1 the first time. All it shows is that it was 20 the first time and 1 the second.

If someone had a TARDIS and could pop back in time to witness the roll again and if the time traveler saw a different outcome this time, then this might be evidence of chance. Or evidence that the time traveler changed the event.

Even traveling to a possible or parallel world would not be of help. If the TARDIS malfunctions and pops us into a world like our own right before the parallel me rolled the die and we see it come up 1 rather than 20, this just shows that he rolled a 1. It tells us nothing about whether my roll of 20 could have been a 1.

Of course, the flip side of the coin is that I can never know that the world is non-random: aside from some sort of special knowledge about the working of the universe, a random universe and a non-random universe would seem exactly the same. Whether my die roll is random or not, all I get is the result—I do not perceive either chance or determinism. However, I go with a random universe because, to be honest, I am a gamer.

If the universe is deterministic, then I am determined to do what I do. If the universe is random, then chance is a factor. However, a purely random universe would not permit actual decision-making: it would be determined by chance. In games, there is apparently the added element of choice—I choose for my character to try to attack the dragon, and then roll dice to determine the result. As such, I also add choice to my random universe.

Obviously, there is no way to prove that choice occurs—as with chance versus determinism, without simply knowing the brute fact about choice there is no way to know whether the universe allows for choice or not. I go with a choice universe for the following reason: If there is no choice, then I go with choice because I have no choice. So, I am determined (or chanced) to be wrong. I could not choose otherwise. If there is choice, then I am right. So, choosing choice seems the best choice. So, I believe in a random universe with choice—mainly because of gaming. So, what about the lessons from this?

One important lesson is that decisions are made in uncertainty: because of chance, the results of any choice cannot be known with certainty. In a game, I do not know if the sword strike will finish off the dragon. In life, I do not know if the investment will pay off. In general, this uncertainty can be reduced and this shows the importance of knowing the odds and the consequences: such knowledge is critical to making good decisions in a game and in life. So, know as much as you can for a better tomorrow.

Another important lesson is that things can always go wrong. Or well. In a game, there might be a 1 in 100 chance that a character will be spotted by the cultists, overpowered and sacrificed to Hastur. But it could happen. In life, there might be a 1 in 100 chance that a doctor who takes precautions will catch Ebola from a patient. But it could happen. Because of this, the possibility of failure must always be considered, and it is wise to take steps to minimize the chances of failure and to minimize the consequences.
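The force of “but it could happen” can be made precise: a 1-in-100 risk is small for a single attempt, but the chance of at least one bad outcome grows quickly with repeated exposure. A short calculation (my own illustration, not from the post):

```python
# Chance of at least one occurrence of a 1-in-100 event over n independent tries.

def at_least_once(p, n):
    return 1 - (1 - p) ** n

for n in (1, 10, 50, 100):
    print(n, round(at_least_once(0.01, n), 3))
# 1 0.01
# 10 0.096
# 50 0.395
# 100 0.634
```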

Keeping in mind the role of chance also helps a person be more understanding, sympathetic and forgiving. After all, if things can fail or go wrong because of chance, then it makes sense to be more forgiving and understanding of failure—at least when the failure can be attributed in part to chance. It also helps in regards to praising success: knowing that chance plays a role in success is also important. For example, there is often the assumption that success is entirely deserved because it must be the result of hard work, virtue and so on. However, if success involves chance to a significant degree, then that should be taken into account when passing out praise and making decisions. Naturally, the role of chance in success and failure should be considered when planning and creating policies. Unfortunately, people often take the view that both success and failure are mainly a matter of choice—so the rich must deserve their riches and the poor must deserve their poverty. However, an understanding of chance would help our understanding of success and failure and would, hopefully, influence the decisions we make. There is an old saying “there, but for the grace of God, go I.” One could also say “there, but for the luck of the die, go I.”

 


#Gamergate, Video Game Wars, & Evil


As a gamer, philosopher and human being, I was morally outraged when I learned of the latest death threats against Anita Sarkeesian. Sarkeesian, who is well known as a moral critic of the misogynistic rot defiling gaming, was scheduled to speak at Utah State University. Emails were sent threatening a mass shooting if her talk was not cancelled. For legal reasons, the University was not able to prevent people from bringing weapons to the talk, so Sarkeesian elected to cancel because of concerns for the safety of the audience.

This incident is just the latest in an ongoing outpouring of threats against women involved in gaming and those who are willing to openly oppose sexism and misogyny in the gaming world (and in the real world). Sadly, this sort of behavior is not surprising and it is part of two larger problems: internet trolling and misogyny.

As a philosopher, I am in the habit of arguing for claims. However, there seems to be no need to argue that threatening women with violence, rape or death because they oppose misogyny in gaming and favor more inclusivity is morally wicked. It is also base cowardice in many cases: those making the threats often hide behind anonymity and spew their vile secretions from the shadows of the internet. That such people are cowards is not a shock: courage is a virtue and these are clearly people who are strangers to virtue. When they engage in such behavior on the internet, they are aptly named trolls. Gamers know the classic troll as a chaotic evil creature of great rage and little intellect, which tends to fit the internet troll reasonably well. But the internet troll can often be a person who is not actually committed to the claims he is making. Rather, his goal is typically to goad others and get emotional responses. As such, the troll will pick his tools with an eye to the strongest emotional impact, and these tools will thus include racism, sexism and threats. There are those who go beyond mere trolling—they are the people who truly believe in the racist and sexist claims they make. They are not using misogynist and racist claims as tools—they are speaking from their rotten souls. Perhaps these creatures should be called demons rather than trolls.

While the moral right to free expression does include the saying of awful and evil things, a person should not say such things. Such speech should not be punishable by law (in most cases), but it should be regarded as immoral. Matters change when threats are involved. Good sense should be used when assessing threats. After all, people tweet and post from unthinking anger and without true intent. There are also plenty of expressions that seem to promise violence but are really just expressions of anger. For example, people say “I could kill you” even when they actually have no intent of doing so. However, people do make threats with real intent behind them. While the person might not actually intend to commit the threatened act (such as murder or rape), there can be an intent to psychologically harm and harass the target, and this can do real harm. When I contributed my work on fallacies to a site devoted to responding to Holocaust deniers, I received a few random threats. I was not too worried, but did feel a cold anger when I read the emails. My ex-wife, who was a feminist philosopher, received occasional threats, and I was certainly worried for her. As such, I have some very limited understanding of what it is like to receive threats and how this can impact a person’s life. Inflicting such harm on an individual is wrong and legal sanctions should be taken in such cases. There is a right to express ideas, but not a right to threaten, abuse and harass. Especially in a cowardly manner from the shadows.

As might be suspected, I am in support of increasing the involvement of women in gaming and I favor removing sexism from games. My main reason for supporting more involvement of women in gaming is the same reason I would encourage anyone to game: I think it is fun and I want to share my beloved hobby with people. There is also the moral motivation: such exclusion is morally repugnant and unjustified. If there are any good arguments against women being more involved in playing and creating games, I would certainly be interested in seeing them. But, I am quite sure there are none—if there were, people would be presenting those rather than screeching hateful threats from their shadowed caves.

As far as removing sexism from video games, the argument for that is easy and obvious. Sexism is morally wrong and games that include it would thus be morally wrong. Considering the matter as a gamer and an author of tabletop RPG adventures, I would contend that the removal of sexist elements would improve games and certainly not diminish their quality. True, doing so might rob the sexists and misogynists of whatever enjoyment they get from such things, but this is not a loss that is even worthy of consideration. In this regard, it is analogous to removing racist elements from games—the racist has no moral grounds to complain that he has been wronged by the denial of his opportunity to enjoy his racism.

I do, of course, want to distinguish between sexual elements and sexism. A game can have sexual elements without being sexist—although there can be a fine line between the two. I am also quite aware that games set in sexist times might require sexist elements when recreating those times. So, for example, a WWII game that has just male generals need not be sexist (although it would be reflecting the sexism of the time). Also, games can legitimately feature sexist non-player characters, just as they can legitimately include racist characters and other sorts of evil traits. After all, villains need to be, well, villains.

 


Asteroid Mining & Death from Above


Having written before on the ethics of asteroid mining, I thought I would return to this subject and address an additional moral concern, namely the potential dangers of asteroid (and comet) mining. My concern here is not with the dangers to the miners (though that is obviously a matter of concern) but with dangers to the rest of us.

While the mining of asteroids and comets is currently the stuff of science fiction, such mining is certainly possible and might even prove to be economically viable. One factor worth considering is the high cost of getting material into space from earth. Given this cost, constructing things in space using material mined in space might be cost effective. As such, we might someday see satellites built right in space from material harvested from asteroids. It is also worth considering that the cost of mining materials in space and shipping them to earth might be low enough to make space mining for this purpose viable. If the material is expensive to mine or has limited availability on earth, then space mining could be viable or even necessary.

If material mined in space is to be used on earth, the obvious problem is how to get the material to the surface safely and as cheaply as possible. One approach is to move an asteroid close to the earth to facilitate mining and transportation—it might be more efficient to move the asteroid rather than send mining vessels back and forth. One obvious moral concern about moving an asteroid close to earth is that something could go wrong and the asteroid could strike the earth, perhaps in a populated area. Another obvious concern is that the asteroid could be intentionally used as a weapon—perhaps by a state or by non-state actors (such as terrorists). An asteroid could do considerable damage and would provide a “clean kill”; that is, it could do a lot of damage without radioactive fallout or chemical or biological residue. An asteroid might even “accidentally on purpose” be dropped on a target, thus allowing the attacker to claim that it was an accident (something harder to do when using actual weapons).

Given the dangers posed by moving asteroids into earth orbit, this is clearly something that would need to be carefully regulated. Of course, given humanity’s track record, accidents and intentional misuse are guaranteed.

Another matter of concern is the transport of material from space to earth. The obvious approach is to ship material to the surface using some sort of vehicle, perhaps constructed in orbit from materials mined in space. Such a vehicle could be relatively simple—after all, it would not need a crew and would just have to ensure that the cargo landed in roughly the right area. Another approach would be to just drop material from orbit—perhaps by surrounding valuable materials with materials intended to ablate during the landing and with a parachute system for some basic braking.

The obvious concern is the danger posed by such transport methods. While such vehicles or rock-drops would not do the sort of damage that an asteroid would, if one crashed hard into a densely populated area (intentionally or accidentally) it could do considerable damage. While such crashes will almost certainly occur, there does seem to be a clear moral obligation to try to minimize their likelihood. The obvious problem is that such safety measures would tend to increase cost and decrease convenience. For example, having the landing zones in unpopulated areas would reduce the risk of a crash into an urban area, but would involve the need to transport the materials from these areas to places where they can be processed (unless the processing plants are built in the zone). As another example, payload sizes might be limited to reduce the damage done by crashes. As a final example, the vessels or drop-rocks might be required to have safety systems, such as backup parachutes. Given that people will cut costs and corners and suffer lapses of attention, accidents are probably inevitable. But they should be made less likely by developing rational regulations. Also of concern is the fact that the vessels and drop-rocks could be used as weapons (as a rule, any technology that can be used to kill people will be used to kill people). As such, there will need to be safeguards against this. It would, for example, be rather bad if terrorists were able to get control of the drop system and start dropping vessels or drop-rocks onto a city.
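To get a rough sense of the stakes, consider the kinetic energy of a falling payload. The figures below are my own assumptions, picked purely for illustration, not numbers from the post:

```python
# Back-of-the-envelope kinetic energy of a drop-rock (illustrative numbers only).

def kinetic_energy_joules(mass_kg, speed_m_s):
    return 0.5 * mass_kg * speed_m_s ** 2

JOULES_PER_TON_TNT = 4.184e9

mass = 10_000.0   # assumed: a 10-tonne payload
speed = 3_000.0   # assumed: impact speed if atmospheric braking largely fails (m/s)

e = kinetic_energy_joules(mass, speed)
print(f"{e:.2e} J, about {e / JOULES_PER_TON_TNT:.0f} tons of TNT")
# ~4.50e+10 J, about 11 tons of TNT equivalent
```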

Despite the risks, if there is profit to be made in mining space, it will almost certainly be done. Given that the resources on earth are clearly limited, access to the bounty of the solar system could be good for (almost) everyone. It could also be another step for humanity away from earth and towards the stars.

 


A Philosopher’s Blog: 2012-2013

My latest book, A Philosopher’s Blog 2012-2013, will be free on Amazon from October 8, 2014 to October 12, 2014.

Description: “This book contains select essays from the 2012-2013 postings of A Philosopher’s Blog. The topics covered range from economic justice to defending the humanities, plus some side trips into pain pills and the will.”
