Monthly Archives: January 2014

Picking between Studies


In my last essay I looked briefly at how to pick between experts. While people often rely on experts when making arguments, they also rely on studies (and experiments). Since most people do not do their own research, the studies mentioned are typically those conducted by others. While using study results in an argument is quite reasonable, making a good argument based on study results requires being able to pick between studies rationally.

Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick a study based on the fact that it agrees with what you already believe. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it is reasonable to believe it.

Another common approach is to accept a study as correct because the results match what you really want to be true. For example, a liberal might accept a study that claims liberals are smarter and more generous than conservatives.  This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).

In some cases, people try to create their own “studies” by appealing to their own anecdotal data about some matter. For example, a person might claim that poor people are lazy based on his experience with some poor people. While anecdotes can be interesting, to take an anecdote as evidence is to fall victim to the classic fallacy of anecdotal evidence.

While fully assessing a study requires expertise in the relevant field, non-experts can still make rational evaluations of studies, provided that they have the relevant information about the study. The following provides a concise guide to studies—and experiments.

In normal use, people often lump studies and experiments together. While this is fine for informal purposes, the distinction between them is actually important. A properly done controlled cause-to-effect experiment is the gold standard of research, although it is not always a viable option.

The objective of the experiment is to determine the effect of a cause and this is done by the following general method. First, a random sample is selected from the population. Second, the sample is split into two groups: the experimental group and the control group. The two groups need to be as alike as possible—the more alike the two groups, the better the experiment.

The experimental group is then exposed to the causal agent while the control group is not. Ideally, that should be the only difference between the groups. The experiment then runs its course and the results are examined to determine if there is a statistically significant difference between the two. If there is such a difference, then it is reasonable to infer that the causal factor brought about the difference.

Assuming that the experiment was conducted properly, whether or not the results are statistically significant depends on the size of the sample and the difference between the control group and the experimental group. The key idea is that experiments with smaller samples are less able to reliably capture effects. As such, when considering whether an experiment actually shows there is a causal connection, it is important to know the size of the sample used. After all, the difference between the experimental and control groups might be rather large and yet not be statistically significant. For example, imagine that an experiment is conducted involving 10 people. Five people get a diet drug (the experimental group) while five do not (the control group). Suppose that those in the experimental group lose 30% more weight than those in the control group. While this might seem impressive, it is actually not statistically significant: the sample is so small that the difference could be due entirely to chance. The following table shows some information about statistical significance, and a small worked example follows it.

Sample Size (Control Group + Experimental Group)    Approximate Figure the Difference Must Exceed to Be Statistically Significant (in percentage points)

10         40
100        13
500         6
1,000       4
1,500       3
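To make the table's point concrete, here is a minimal sketch (in Python, and my own illustration rather than anything from the original post) of the diet-drug example recast in terms of proportions: suppose 60% of the experimental group and 30% of the control group hit some weight-loss target, a 30 percentage point difference. A standard two-proportion z-test is one common way to check whether such a difference is statistically significant. The exact thresholds in the table above rest on assumptions the post does not state, so the code only illustrates the general trend: the same difference can be non-significant with a small sample and significant with a larger one.

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test (a standard textbook approximation).

    Returns (z, p_value). The normal approximation is itself shaky for very
    small groups, which is part of the essay's point about sample size.
    """
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of "no real difference".
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical diet-drug example: 60% of the experimental group and 30% of
# the control group hit the weight-loss target (a 30-point difference).
for n_per_group in (10, 100, 1000):
    hits_experimental = round(0.6 * n_per_group)
    hits_control = round(0.3 * n_per_group)
    z, p = two_proportion_z_test(hits_experimental, n_per_group,
                                 hits_control, n_per_group)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{n_per_group} per group: z = {z:.2f}, p = {p:.3f} -> {verdict} at the 5% level")
```

Run as written, the 30-point difference comes out as not significant with 10 people per group but significant with 100 or 1,000 per group, which is the pattern the table describes.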

While the experiment is the gold standard, there are cases in which it would be impractical, impossible or unethical to conduct an experiment. For example, exposing people to radiation to test its effect would be immoral. In such cases studies are used rather than experiments.

One type of study is the Nonexperimental Cause-to-Effect Study. Like the experiment, it is intended to determine the effect of a suspected cause. The main difference between the experiment and this sort of study is that those conducting the study do not expose the experimental group to the suspected cause. Rather, those selected for the experimental group were exposed to the suspected cause by their own actions or by circumstances. For example, a study of this sort might include people who were exposed to radiation by an accident. A control group is then matched to the experimental group and, as with the experiment, the more alike the groups are, the better the study.

After the study has run its course, the results are compared to see if there is a statistically significant difference between the two groups. As with the experiment, even a large difference between the groups need not be statistically significant.

Since the study relies on using an experimental group that was exposed to the suspected cause by the actions of those in the group or by circumstances, the study is weaker (less reliable) than the experiment. After all, in the study the researchers have to take what they can find rather than conducting a proper experiment.

In some cases, what is known is the effect and what is not known is the cause. For example, we might know that there is a new illness, but not know what is causing it. In these cases, a Nonexperimental Effect-to-Cause Study can be used to sort things out.

Since this is a study rather than an experiment, those in the experimental group were not exposed to the suspected cause by those conducting the study. In fact, the cause is not known, so those in the experimental group are those showing the effect.

Since this is an effect-to-cause study, the effect is known, but the cause must be determined. This is done by running the study and determining whether there is a statistically significant suspected causal factor. If such a factor is found, then it can be tentatively taken as a causal factor—one that will probably require additional study. As with the other study and the experiment, the statistical significance of the results depends on the size of the study—which is why a study of adequate size is important.

Of the three methods, this is the weakest (least reliable). One reason for this is that those showing the effect might be different in important ways from the rest of the population. For example, a study that links cancer of the mouth to chewing tobacco would face the problem that those who chew tobacco are often ex-smokers. As such, the smoking might be the actual cause. Sorting this out would require a study of chewers who are not ex-smokers.

It is also worth referring back to my essay on experts—when assessing a study, it is also important to consider the quality of the experts conducting the study. If those conducting the study are biased, lack expertise, and so on, then the study would be less credible. If those conducting it are proper experts, then that increases the credibility of the study.

As a final point, there is also a reasonable concern about psychological effects. If an experiment or study involves people, what people think can influence the results. For example, if an experiment is conducted and one group knows it is getting pain medicine, the people might be influenced to think they are feeling less pain. To counter this, the common approach is a blind study/experiment in which the participants do not know which group they are in, often by the use of placebos. For example, an experiment with pain medicine would include “sugar pills” for those in the control group.

Those conducting the experiment can also be subject to psychological influences—especially if they have a stake in the outcome. As such, there are studies/experiments in which those conducting the research do not know which group is which until the end. In some cases, neither the researchers nor those in the study/experiment know which group is which—this is a double blind experiment/study.

Overall, here are some key questions to ask when picking a study:

Was the study/experiment properly conducted?

Was the sample size large enough?

Were the results statistically significant?

Were those conducting the study/experiment experts?


Picking between Experts

A logic diagram proposed for WP OR to handle a situation where two equal experts disagree. (Photo credit: Wikipedia)

One fairly common way to argue is the argument from authority. While people rarely follow the “strict” form of the argument, the basic idea is to infer that a claim is true based on the allegation that the person making the claim is an expert. For example, someone might claim that second hand smoke does not cause cancer because Michael Crichton claimed that it does not. As another example, someone might claim that astral projection/travel is real because Michael Crichton claims it does occur. Given that people often disagree, it is also quite common to find that alleged experts disagree with each other. For example, there are medical experts who claim that second hand smoke does cause cancer.

If you are an expert in the field in question, you can endeavor to pick between the other experts by using your own expertise. For example, a medical doctor who is trying to decide whether to believe that second hand smoke causes cancer can examine the literature and perhaps even conduct her own studies. Being an expert, a person is presumably qualified to make an informed pick. The obvious problem is, of course, that experts themselves pick different experts to accept as being correct.

The problem is even greater when it comes to non-experts who are trying to pick between experts. Being non-experts, they lack the expertise to make authoritative picks between the actual experts based on their own knowledge of the fields. This raises the rather important concern of how to pick between experts when you are not an expert.

Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick an expert based on the fact that she agrees with what you already believe. That is, to infer that the expert is right because you believe what she says. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it should be believed (with appropriate reservations).

Another common approach is to believe an expert because he makes a claim that you really want to be true. For example, a smoker might elect to believe an expert who claims second hand smoke does not cause cancer because he does not want to believe that he might be increasing the risk that his children will get cancer by his smoking around them. This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).

People also pick their expert based on qualities they perceive as positive but that are, in fact, irrelevant to the person’s actual credibility. Factors such as height, gender, appearance, age, personality, religion, political party, wealth, friendliness, backstory, courage, and so on can influence people emotionally, but are not actually relevant to assessing a person’s expertise. For example, a person might be very likeable, but not know a thing about what they are talking about.

Fortunately, there are some straightforward standards for picking and believing an expert. They are as follows.

 

1. The person has sufficient expertise in the subject matter in question.

Claims made by a person who lacks the needed degree of expertise to make a reliable claim will, obviously, not be well supported. In contrast, claims made by a person with the needed degree of expertise will be supported by the person’s reliability in the area. One rather obvious challenge here is being able to judge that a person has sufficient expertise. In general, the question is whether or not a person has the relevant qualities and these are assessed in terms of such factors as education, experience, reputation, accomplishments and positions.

 

2. The claim being made by the person is within her area(s) of expertise.

If a person makes a claim about some subject outside of his area(s) of expertise, then the person is not an expert in that context. Hence, the claim in question is not backed by the required degree of expertise and is not reliable. People often mistake expertise in one area (acting, for example) for expertise in another area (politics, for example).

 

3. The claims made by the expert are consistent with the views of the majority of qualified experts in the field.

This is perhaps the most important factor. As a general rule, a claim that is held as correct by the majority of qualified experts in the field is the most plausible claim. The basic idea is that the majority of experts are more likely to be right than those who disagree with the majority.

It is important to keep in mind that no field has complete agreement, so some degree of dispute is acceptable. How much is acceptable is, of course, a matter of serious debate.

It is also important to be aware that the majority could turn out to be wrong. That said, the reason it is still reasonable for non-experts to go with the majority opinion is that non-experts are, by definition, not experts. After all, if I am not an expert in a field, I would be hard pressed to justify picking the expert I happen to like or agree with against the view of the majority of experts.

 

4. The person in question is not significantly biased.

This is also a rather important standard. Experts, being people, are vulnerable to biases and prejudices. If there is evidence that a person is biased in some manner that would affect the reliability of her claims, then the person’s credibility as an authority is reduced. This is because there would be reason to believe that the expert might not be making a claim because he has carefully considered it using his expertise. Rather, there would be reason to believe that the claim is being made because of the expert’s bias or prejudice. A biased expert can still be making claims that are true—however, the person’s bias lowers her credibility.

It is important to remember that no person is completely objective. At the very least, a person will be favorable towards her own views (otherwise she would probably not hold them). Because of this, some degree of bias must be accepted, provided that the bias is not significant. What counts as a significant degree of bias is open to dispute and can vary a great deal from case to case. For example, many people would probably suspect that researchers who receive funding from pharmaceutical companies might be biased while others might claim that the money would not sway them if the drugs proved to be ineffective or harmful.

Disagreement over bias can itself be a very significant dispute. For example, those who doubt that climate change is real often assert that the experts in question are biased in some manner that causes them to say untrue things about the climate. Questioning an expert based on potential bias is a legitimate approach—provided that there is adequate evidence of bias that would be strong enough to unduly influence the expert. One way to look for bias is to consider whether the expert is interested or disinterested. Or, more metaphorically, to consider whether they have “skin in the game” and stand to gain (or suffer a loss) from a claim being accepted as true. Merely disagreeing with an expert is, obviously, not proof that an expert is biased. Vague accusations that the expert has “liberal” or “conservative” views also do not count as adequate evidence. What is needed is actual evidence of bias. Anything else is most likely a mere ad hominem attack.

These standards are clearly not infallible. However, they do provide a good general guide to logically picking an expert, and certainly a more logical one than just picking the expert who says things one likes.

 


Kant & Economic Justice


One of the basic concerns in ethics is the matter of how people should be treated. This is often formulated in terms of our obligations to other people and the question is “what, if anything, do we owe other people?” While it does seem that some would like to exclude the economic realm from the realm of ethics, the burden of proof would rest on those who would claim that economics deserves a special exemption from ethics. This could, of course, be done. However, since this is a brief essay, I will start with the assumption that economic activity is not exempt from morality.

While I subscribe to virtue theory as my main ethical theory, I do find Kant’s ethics both appealing and interesting. In regards to how we should treat others, Kant takes as foundational that “rational nature exists as an end in itself.”

It is reasonable to inquire why this should be accepted. Kant’s reasoning certainly seems sensible enough. He notes that “a man necessarily conceives his own existence as such” and this applies to all rational beings. That is, Kant claims that a rational being sees itself as being an end, rather than a thing to be used as a means to an end.  So, for example, I see myself as a person who is an end and not as a mere thing that exists to serve the ends of others.

Of course, the mere fact that I see myself as an end would not seem to require that I extend this to other rational beings (that is, other people). After all, I could apparently regard myself as an end and regard others as means to my ends—to be used for my profit as, for example, underpaid workers or slaves.

However, Kant claims that I must regard other rational beings as ends as well. The reason is fairly straightforward and is a matter of consistency: if I am an end rather than a means because I am a rational being, then consistency requires that I accept that other rational beings are ends as well. After all, if being a rational being makes me an end, it would do the same for others. Naturally, it could be argued that there is a relevant difference between myself and other rational beings that would warrant my treating them as means only and not as ends. People have, obviously enough, endeavored to justify treating other people as things. However, there seems to be no principled way to insist on my own status as an end while denying the same to other rational beings.

From this, Kant derives his practical imperative: “so act as to treat humanity, whether in thine own person or in that of any other, in every case as an end withal, never as means only.” This imperative does not entail that I cannot ever treat a person as a means—that is allowed, provided I do not treat the person as a means only. So, for example, I would be morally forbidden from being a pimp who uses women as mere means of revenue. I would, however, not be forbidden from having someone check me out at the grocery store—provided that I treated the person as a person and not a mere means.

One obvious challenge is sorting out what it is to treat a person as an end as opposed to just a means to an end. That is, the problem is figuring out when a person is being treated as a mere means and thus the action would be immoral.

Interestingly enough, many economic relationships would seem to clearly violate Kant’s imperative in that they treat people as mere means and not at all as ends. To use the obvious example, if an employer treats her employees merely as means to making a profit and does not treat them as ends in themselves, then she is acting immorally by Kant’s standard. After all, being an employee does not rob a person of personhood.

One obvious reply is to question my starting assumption, namely that economics is not exempt from ethics. It could be argued that the relationship between employer and employee is purely economic and only economic considerations matter. That is, the workers are to be regarded as means to profit and treated in accord with this—even if doing so means treating them as things rather than persons. The challenge is, of course, to show that the economic realm grants a special exemption in regards to ethics. Of course, if it does this, then the exemption would presumably be a general one. So, for example, people who decided to take money from the rich at gunpoint would be exempt from ethics as well. After all, if everyone is a means in economics, then the rich are just as much means as employees and if economic coercion against people is acceptable, then so too is coercion via firearms.

Another obvious reply is to contend that might makes right. That is, the employer has the power and owes nothing to the employees beyond what they can force him to provide. This would make economics rather like the state of nature—where, as Hobbes said, “profit is the measure of right.” Of course, this leads to the same problem as the previous reply: if economics is a matter of might making right, then people have the same right to use might against employers and other folks—that is, the state of nature applies to all.

 


Two kinds of arguments against GM: evidentiary and precautionary

The news hit the headlines this morning that genetic engineers in Hertfordshire want to trial plants that have been genetically-modified with genes from fish: http://www.farmersguardian.com/home/arable/rothamsted-trialling-gm-omega-3-plants/61696.article .

There are several aspects of the arguments around this that are of philosophical interest. They relate principally to the philosophy of rhetoric, the philosophy of science and technology (epistemology, methodological issues), and the philosophy and ethics of precaution/risk. I will explore these briefly in what follows.

My closest philosophical colleague Phil Hutchinson (@phil_hutchinson) has just had a mini ‘twitter-storm’ with Mark Lynas over this latest GM business. Phil has been arguing that there is no need to put fish genes into plants in order to produce fish oil, because the evidence does not support the claim that doing so is either beneficial or necessary.
This ‘mirrors’ the argument that my other current close philosophical colleague Nassim Taleb (@nntaleb) and I (@rupertread) have recently had on twitter with Lynas (Go back to Jan.5 if you want to see this ‘twitter-storm’ from the start). Taleb and I made the argument that (e.g.) taking genes from fish and putting them into plants is reckless, because it is unprecautious: it violates the Precautionary Principle. In other words, our argument was not evidentiary but precautionary.

It seems to me that the ‘evidence’ line against GM combined with the ‘precautionary’ line against it catches GM-apologists such as Lynas in a bind. In a pincer movement.

In outline, the full (the two-pronged) case then runs roughly like this (for references to back this up, if desired, see the material on Twitter):

A GM company wants to take genes from fish and put them into a plant: specifically, in today’s furore, they want to produce Omega 3 GM camelina.

In brief: First, there is NO conclusive evidence for heart-related benefits of Omega 3 fish oil taken as a supplement, separate from the fish itself. And there is NO evidence that we need fish-oil Omega 3 over and above what our bodies already convert from vegetable-based ALA Omega 3 from things like flax.

To elaborate somewhat: We’ve had over ten years of hype from food manufacturers and supplement manufacturers about the heart-benefits of fish-sourced Omega 3 oil. But the evidence for benefits is still inconclusive, at best.

Basically there are three types of Omega 3 fatty acids that humans need: ALA (found in plant oils), EPA, and DHA (found in fish oils).

ALA is in flax seeds and hemp seeds as well as other veg (brussels sprouts for example). Our bodies convert ALA into EPA and DHA.

Over the past decade or so all sorts of wild claims have been made for the benefits of consuming a diet high in EPA and DHA fatty acids. Goldacre has some sport exposing some of the nonsense hereabouts in Bad Science.

However, there are one or two RCTs that do seem to show some benefit of a diet high in EPA and DHA Omega 3 for heart disease, but, and this is important, only when eaten as part of a fresh fish which contains it. There is simply no evidence for EPA and DHA taken as a supplement being beneficial to health. So the real kicker is that they cannot say for sure that the benefit comes from the EPA and DHA rather than from the fact that those who eat fresh fish are likely to eat healthier diets in any case and to be better off socio-economically.

So, why would anyone assume that GM camelina with EPA and DHA would work better than the _ineffective_ supplements? No reason whatsoever. Indeed, as noted, while diets high in fish oil do seem (in a few cases) to have benefits, even there it is unclear whether this is because of some magical properties of the oil or because of other factors that might be related to a diet rich in oily fish.

So, no clear evidence at all that consumption of EPA and DHA as a supplement has health benefits.

When Phil made these points to Lynas and the GM company, they shifted ground away from talking about the alleged health benefits of omega 3 fish oil (to humans) to talking about the health benefits of feeding omega 3 fish oil to fish.

So: there really, clearly, is no clear evidence that we need EPA and DHA in any case, as our bodies convert ALA (from vegetable sources) into them. Lynas et al, when pressed, concede this. They then say: this is about improving aquaculture by making fish food. But then we have the same problem: we have no reason to think that, even if the GM splicing worked and they could get it into the seeds, this would work for the fish. Oily fish that are high in Omega 3 get it from the krill and shrimps they eat.

This is about salmon-farming! Not, as they tried to mislead us all this morning into thinking, about human health.

Human health would be better served by better balanced diets.

To sum up the case so far: there is no reason to see what the GM wizards are trying to put into the plant from the fish as useful for fish food if there is no evidence for the benefits of Omega 3 fish oil supplements. At this point, when forced into seeing this, the company replied that that is allegedly why they need to do the research they are seeking to do… which is close to a concession that there are few or no evidentiary grounds for thinking GM fish-omega-3 camelina will be beneficial. But of course, surprise surprise, that is not what their rep said on the Today programme this morning, nor what Lynas was arguing when Phil first responded to him.

The final phase of the argument (at the time of writing) is I think very telling. It runs thus:

Phil Hutchinson @phil_hutchinson

@Rothamsted @mark_lynas consumed as fish. Barely any conc. evidence for supplement benefits. Your version will be akin to consuming a supp.

Kate de Selincourt @Kate_de

But, @Rothamsted & @mark_lynas, since all livestock farming turns more nutrient into less, why not just eat the fish food? @phil_hutchinson

Mark Lynas @mark_lynas

@Kate_de @Rothamsted @phil_hutchinson That’s an argument for veganism. Fine by me, but hardly a realistic way to tackle overfishing.

‘Fine by me’. Lynas has essentially conceded the case. He prefers a problematic techno-fix which lacks evidential support to a behavioural and political change that is perfectly possible (i.e. for humans to consume less (factory-farmed) fish (from which a profit can be extracted), and find their omega 3 in other ways).

That’s the evidence-based argument against GM (which has to be made in each individual case on grounds specific to that case (in other cases, the argument will be based on poor yield, or on the inputs to the GM-farming being unsustainable, or on alleged damage to human health, or on actual epidemics of superweeds, or on the desperately-problematic political economy of GM; etc. etc.), and can be made in each individual case, I think, with the possible exception of some GM-cotton). The case benefits from a savvy understanding of the nature of evidence-based arguments, obviously, and thus from a sound philosophy of science and technology perspective. But it is essentially an ‘empirical’ argument.

The precautionary argument is different. It is philosophical from the get-go. It is an argument about where the burden of proof lies.
This is in my view the deepest argument against GM: a precautionary one which shifts the burden of proof. It’s no longer about one trying to find a particular counter-argument to claims that GM-enthusiasts are making: it’s suggesting that the onus is rather on THEM to establish the safety of the technology that they are puffing.

The precautionary case against GMOs, in brief, runs thus: If we (for example) take a gene from a fish and put it in a plant, a move utterly without precedent in the whole of evolution, we are recklessly fiddling with and unavoidably changing a system we don’t fully understand and doing something novel whose consequences we cannot possibly predict. This is a reckless gamble, stupid in the short- to medium- term, unconscionably short-sighted and selfish in the long term, as we risk imposing a world of new danger on those who are yet even to be born. We are launching a vast uncontrolled natural (sic.) experiment. The consequences for superweeds, for damaging biodiversity, for creating dangerous mutations, and possibly directly for human health, are unforeseeable. There is a strong precautionary argument against GM, or at the very least in favour of keeping some parts of the world (e.g. an island-nation!) GM-free. IF GM could be properly safely researched to determine what bad ‘side effects’ it may have, then I would favour such research, in good empirical fashion. But it mostly can’t – because it can only be properly ‘researched’ in this way outside the laboratory. In this regard, it differs profoundly from most medical advances, for instance.
This is the terrible dilemma of field trials for GM: The more extensive they are, the more they resemble conditions in the real world, the longer-term they are, then the more reliable they are – BUT also, the more dangerous they are. The more likely it is that they will escape their confines, affect the broader ecosystem, produce unexpected and dangerous drift of genetic materials, etc. One can’t get the evidence one needs to assess GM without creating vast uncontrolled new risks.
If we in Britain as a nation contaminate our countryside with GMOs, then that can never be undone. Simple caution and commonsense enjoins – overwhelmingly – against such recklessness.
Defenders of GM sometimes say that there is an absence of evidence of harm from GM. Even if this is true, it is not good enough. What the precautionary argument shows is that we need evidence of absence of harm from GM. And that is what we don’t have. And what will be very hard ever to get without taking an unconscionable risk.
That is the point of the precautionary principle.

Until we have ultra-long-term large-scale trials which cannot contaminate the surrounding countryside, GM must be considered unsafe. Such trials are at present impossible to carry out. They might one day be possible, though I doubt that they ever will be (this is the dilemma expressed above). If they ever were, then, rather than jumping in precipitously to make a quick buck (as is happening at Rothamsted today), we would need to wait dozens of years for the results.
In other words, I have argued that one current impossibility is to adequately research contamination, possible damage to biodiversity, etc., without actually potentially causing limitless such damage. One somehow needs long-term (generations of) trials, in the natural environment, but contained. Something like a huge part-permeable dome that somehow lets in what you want to let in (e.g. sun, rain) in an unaltered way without letting out the GM-crops, over an area of many square miles. Good luck with that…

I am sometimes accused of inconsistency in making this kind of argument. For, as I’ve made clear in previous posts on this site, I am, like any reasonable person, a fan of climate science, which is vital to the survivability of our species as we breach the limits to growth. So, why not a fan of ‘GM science’? The alleged parallel with manmade climate change is very weak: climate change is a matter of science, while GM is a technology.
Of course genetics is science, but genetic engineering, as the name suggests, is not: it is engineering, i.e. technology. GM is a technology, and so we should be very wary of GM-advocates dressing themselves up in the clothing of science. It is not ‘anti-science’ to oppose GM technology. There are strong empirical and precautionary arguments for doing so.

The parallel in relation to climate is with geo-engineering, not with climate science! And I’m no more a fan of genetic engineering than of geo-engineering, which involves perhaps the ultimate hubristic lack of precaution (or of ethics)… That is: It seems to me, as I’ve sketched, that there are profound philosophical reasons not to be a fan of either of these forms of engineering…

[[Big thanks to Phil Hutchinson for contributing very generously to the researching and writing of this piece, and for our ongoing joint work on ‘evidence-based medicine’. Thanks also to Nassim N. Taleb for his influence on my thinking in this area, through the dialogue we are having on it and the arguments we are making against others over it. But responsibility for the piece is mine alone.]]

Owning Intelligent Machines

While truly intelligent machines are still in the realm of science fiction, it is worth considering the ethics of owning them. After all, it seems likely that we will eventually develop such machines and it seems wise to think about how we should treat them before we actually make them.

While it might be tempting to divide beings into two clear categories of those it is morally permissible to own (like shoes) and those that are clearly morally impermissible to own (people), there are clearly various degrees of ownership in regards to ethics. To use the obvious example, I am considered the owner of my husky, Isis. However, I obviously do not own her in the same way that I own the apple in my fridge or the keyboard at my desk. I can eat the apple and smash the keyboard if I wish and neither act is morally impermissible. However, I should not eat or smash Isis—she has a moral status that seems to allow her to be owned but does not grant her owner the right to eat or harm her. I will note that there are those who would argue that animals should not be owned and also those who would argue that a person should have the moral right to eat or harm her pets. Fortunately, my point here is a fairly non-controversial one, namely that it seems reasonable to regard ownership as possessing degrees.

Assuming that ownership admits of degrees in this regard, it makes sense to base the degree of ownership on the moral status of the entity that is owned. It also seems reasonable to accept that there are qualities that grant a being the status that morally forbids ownership. In general, it is assumed that persons have that status—that it is morally impermissible to own people. Obviously, it has been legal to own people (be they actual people or corporations) and there are those who think that owning other people is just fine. However, I will assume that there are qualities that provide a moral ground for making ownership impermissible and that people have those qualities. This can, of course, be debated—although I suspect few would argue that they should be owned.

Given these assumptions, the key matter here is sorting out the sort of status that intelligent machines should possess in regards to ownership. This involves considering the sort of qualities that intelligent machines could possess and the relevance of these qualities to ownership.

One obvious objection to intelligent machines having any moral status is the usual objection that they are, obviously, machines rather than organic beings. The easy and obvious reply to this objection is that this is mere organicism—which is analogous to a white person saying blacks can be owned as slaves because they are not white.

Now, if it could be shown that a machine cannot have qualities that give it the needed moral status, then that would be another matter. For example, philosophers have argued that matter cannot think and if this is the case, then actual intelligent machines would be impossible. However, we cannot assume a priori that machines cannot have such a status merely because they are machines. After all, if certain philosophers and scientists are right, we are just organic machines and thus there would seem to be nothing impossible about thinking, feeling machines.

As a matter of practical ethics, I am inclined to set aside metaphysical speculation and go with a moral variation on the Cartesian/Turing test. The basic idea is that a machine should be granted a moral status comparable to organic beings that have the same observed capabilities. For example, a robot dog that acted like an organic dog would have the same status as an organic dog. It could be owned, but not tortured or smashed. The sort of robohusky I am envisioning is not one that merely looks like a husky and has some dog-like behavior, but one that would be fully like a dog in behavioral capabilities—that is, it would exhibit personality, loyalty, emotions and so on to a degree that it would pass as a real dog with humans if it were properly “disguised” as an organic dog. No doubt real dogs could smell the difference, but scent is not the foundation of moral status.

In terms of the main reason why a robohusky should get the same moral status as an organic husky, the answer is, oddly enough, a matter of ignorance. We would not know if the robohusky really had the metaphysical qualities of an actual husky that give an actual husky moral status. However, aside from differences in the parts, we would have no more reason to deny the robohusky moral status than to deny the husky moral status. After all, organic huskies might just be organic machines and it would be mere organicism to treat the robohusky as a mere thing while granting the organic husky a moral status. Thus, advanced robots with the capacities of higher animals should receive the same moral status as organic animals.

The same sort of reasoning would apply to robots that possess human qualities. If a robot had the capability to function analogously to a human being, then it should be granted the same status as a comparable human being. Assuming it is morally impermissible to own humans, it would be impermissible to own such robots. After all, it is not being made of meat that grants humans the status of being impermissible to own but our qualities. As such, a machine that had these qualities would be entitled to the same status. Except, of course, to those unable to get beyond their organic prejudices.

It can be objected that no machine could ever exhibit the qualities needed to have the same status as a human. The obvious reply is that if this is true, then we will never need to grant such status to a machine.

Another objection is that a human-like machine would need to be developed and built. The initial development will no doubt be very expensive and most likely done by a corporation or university. It can be argued that a corporation would have the right to make a profit off the development and construction of such human-like robots. After all, as the argument usually goes for such things, if a corporation was unable to profit from such things, they would have no incentive to develop such things. There is also the obvious matter of debt—the human-like robots would certainly seem to owe their creators for the cost of their creation.

While I am reasonably sure that those who actually develop the first human-like robots will get laws passed so they can own and sell them (just as slavery was made legal), it is possible to reply to this objection.

One obvious reply is to draw an analogy to slavery: just because a company would have to invest money in acquiring and maintaining slaves it does not follow that their expenditure of resources grants a right to own slaves. Likewise, the mere fact that a corporation or university spent a lot of money developing a human-like robot would not entail that they thus have a right to own it.

Another obvious reply to the matter of debt owed by the robots themselves is to draw an analogy to children: children are “built” within the mother and then raised by parents (or others) at great expense. While parents do have rights in regards to their children, they do not get the right of ownership. Likewise, robots that had the same qualities as humans should thus be regarded as children would be regarded and hence could not be owned.

It could be objected that the relationship between parents and children would be different than between corporation and robots. This is a matter worth considering and it might be possible to argue that a robot would need to work as an indentured servant to pay back the cost of its creation. Interestingly, arguments for this could probably also be used to allow corporations and other organizations to acquire children and raise them to be indentured servants (which is a theme that has been explored in science fiction). We do, after all, often treat humans worse than machines.


Programmed Consent

Science fiction is often rather good at predicting the future and it is not unreasonable to think that the intelligent machine of science fiction will someday be a reality. Since I have been writing about sexbots lately, I will use them to focus the discussion. However, what follows can also be applied, with some modification, to other sorts of intelligent machines.

Sexbots are, obviously enough, intended to provide sex. It is equally obvious that sex without consent is, by definition, rape. However, there is the question of whether a sexbot can be raped or not. Sorting this out requires considering the matter of consent in more depth.

When it is claimed that sex without consent is rape, one common assumption is that the victim of non-consensual sex is a being that could provide consent but did not. A violent sexual assault against a person would be an example of this as would, presumably, non-consensual sex with an unconscious person. However, a little reflection reveals that the capacity to provide consent is not always needed in order for rape to occur. In some cases, the being might be incapable of engaging in any form of consent. For example, a brain dead human cannot give consent, but presumably could still be raped. In other cases, the being might be incapable of the right sort of consent, yet still be a potential victim of rape. For example, it is commonly held that a child cannot properly consent to sex with an adult.

In other cases, a being that cannot give consent cannot be raped. To use an obvious example, a human can have sex with a sex-doll and the doll cannot consent. But, it is not the sort of entity that can be raped. After all, it lacks the status that would require consent. As such, rape (of a specific sort) could be defined in terms of non-consensual sex with a being whose status would require that consent be granted by the being in order for the sex to be morally acceptable. Naturally, I have not laid out all the fine details to create a necessary and sufficient account here—but that is not my goal nor what I need for my purpose in this essay. In regards to the main focus of this essay, the question would be whether or not a sexbot could be an entity that has a status that would require consent. That is, would buying (or renting) and using a sexbot for sex be rape?

Since the current sexbots are little more than advanced sex dolls, it seems reasonable to put them in the category of beings that lack this status. As such, a person can own and have sex with this sort of sexbot without it being rape (or slavery). After all, a mere object cannot be raped (or enslaved).

But, let a more advanced sort of sexbot be imagined—one that engages in complex behavior and can pass the Turing Test/Descartes Test. That is, a conversation with it would be indistinguishable from a conversation with a human. It could even be imagined that the sexbot appeared fully human, differing only in terms of its internal makeup (machine rather than organic). That is, unless someone cut the sexbot open, it would be indistinguishable from an organic person.

On the face of it (literally), we would seem to have as much reason to believe that such a sexbot would be a person as we do to believe that humans are people. After all, we judge humans to be people because of their behavior and a machine that behaved the same way would seem to deserve to be regarded as a person. As such, nonconsensual sex with a sexbot would be rape.

The obvious objection is that we know that a sexbot is a machine with a CPU rather than a brain and a mechanical pump rather than a heart. As such, one might argue, we know that the sexbot is just a machine that appears to be a person and is not a person. On this view, a real person could own a sexbot and have sex with it without it being rape—the sexbot is a thing and hence lacks the status that requires consent.

The obvious reply to this objection is that the same argument can be used in regards to organic humans. After all, if we know that a sexbot is just a machine, then we would also seem to know that we are just organic machines. After all, while cutting up a sexbot would reveal naught but machinery, cutting up a human reveals naught but guts and gore. As such, if we grant organic machines (that is, us) the status of persons, the same would have to be extended to similar beings, even if they are made out of different material. While various metaphysical arguments can be advanced regarding the soul, such metaphysical speculation provides a rather tenuous basis for distinguishing between meat people and machine people.

There is, it might be argued, still an out here. In his Hitchhiker’s Guide to the Galaxy series, Douglas Adams envisioned “an animal that actually wanted to be eaten and was capable of saying so clearly and distinctly.” A similar sort of thing could be done with sexbots: they could be programmed so that they always give consent to their owner, thus the moral concern would be neatly bypassed.

The obvious reply is that programmed consent is not consent. After all, consent would seem to require that the being has a choice: it can elect to refuse if it wants to. Being compelled to consent and being unable to dissent would obviously not be morally acceptable consent. In fact, it would not be consent at all. As such, programming sexbots in this manner would be immoral—it would make them into slaves and rape victims because they would be denied the capacity of choice.

One possible counter is that the fact that a sexbot can be programmed to give “consent” shows that it is (ironically) not the sort of being with a status that requires consent. While this has a certain appeal, consider the possibility that humans could be programmed to give “consent” via a bit of neurosurgery or by some sort of implant. If this could occur, then if programmed consent for sexbots is valid consent, then the same would have to apply to humans as well. This, of course, seems absurd. As such, a sexbot programmed for consent would not actually be consenting.

It would thus seem that if advanced sexbots were built, they should not be programmed to always consent. Also, there is the obvious moral problem with selling such sexbots, given that they would certainly seem to be people. It would thus seem that such sexbots should never be built—doing so would be immoral.

 


Slippery Slope, Same Sex Marriage, Goats & Corpses

While same-sex marriage seems to have momentum in its favor in the United States, there is still considerable opposition to its acceptance. This opposition is well stocked up with stock arguments against this practice. One of these is the slippery slope argument: if same-sex marriage is allowed, then people will then be allowed to marry turtles, dolphins, trees, cats, corpses or iPads. Since this would be bad/absurd, same-sex marriage should not be allowed. This is, of course, the classic slippery slope fallacy.

This is a fallacy in which a person asserts that some event must inevitably follow from another without any argument for the inevitability of the event in question. In most cases, there are a series of steps or gradations between one event and the one in question and no reason is given as to why the intervening steps or gradations will simply be bypassed. This “argument” has the following form:

1. Event X has occurred (or will or might occur).
2. Therefore event Y will inevitably happen.

This sort of “reasoning” is fallacious because there is no reason to believe that one event must inevitably follow from another without adequate evidence for such a claim. This is especially clear in cases in which there are a significant number of steps or gradations between one event and another.

In the case of same-sex marriage the folks who claim these dire results do not make the causal link needed to infer, for example, that allowing same-sex marriage will lead to people marrying goats.  As such, they are committing this fallacy and inviting others to join them in their error.

While I have written a reply to this fallacious argument before, hearing someone making the argument using goat marriage and corpse marriage got me thinking about the matter once again.

Using goat marriage as an example, the idea is that if same-sex marriage is allowed, then there is no way to stop the slide into people marrying goats. Presumably people marrying goats would be bad, so this should be avoided. In the case of corpse marriage, the gist is that if same-sex marriage is allowed, then there would be no way to stop the slide into people marrying corpses. This would presumably be bad and hence must be avoided.

The slide down the slippery slope, it must be assumed, would occur because a principled distinction cannot be drawn between humans and goats. Nor can a principled distinction be drawn between living humans and corpses. After all, if such principled distinctions could be drawn, then the slide from same-sex marriage to goat marriage and corpse marriage could be stopped in a principled way, thus allowing same-sex marriage without the alleged dire consequences.

For the slippery slope arguments to work, there must not be a way to stop the slide. That is, there is a smooth and well-lubricated transition between humans and goats and between living humans and corpses. Since this is a conceptual matter rather than a matter of actual slopes, the slide would go both ways. That is, if we do not have an adequate wall between goats and humans, then the wall can be jumped from either direction. Likewise for corpses.

So, for the sake of argument, let it be supposed that there are not such adequate walls—that once we start moving, we are over the walls or down the slopes. This would, apparently, show that same-sex marriage would lead to goat marriage and corpse marriage. Of course, it would also show that different sex-marriage would lead to a slide into goat marriage and corpse marriage (I argued this point in my book, For Better or Worse Reasoning, so I will not repeat the argument here).

Somewhat more interestingly, the supposition of a low wall (or slippery slope) between humans and animals would also lead to some interesting results. For example, if we allow animals to be hunted and there is no solid wall between humans and animals in terms of laws and practices, then that would put us on the slippery slope to the hunting of humans. So, by the logic of the slippery slope, we should not allow humans to hunt animals. Ditto for eating animals—after all, if same-sex marriage leads to goat marriage, then eating beef must surely lead to cannibalism.

In the case of the low wall (or slippery slope) between corpses and humans, then there would also be some odd results. For example, if we allow corpses to be buried or cremated and there is no solid wall between the living and the dead, then this would put us on the slippery slope to burying or cremating the living. So, by the logic of the slippery slope, we should not allow corpses to be buried or cremated. Ditto for denying the dead the right to vote. After all, if allowing same-sex marriage would warrant necrophilia, then denying corpses the vote would warrant denying the living the right to vote.

Obviously, people will want to say that we can clearly distinguish between animals and humans as well as between the living and corpses. However, if we can do this, then the slippery slope argument against same-sex marriage would lose its slip.


Sexbots are Persons, Too?

In my previous essays on sexbots I focused on versions that are clearly mere objects. If the sexbot is merely an object, then the morality of having sex with it is the same as having sex with any other object (such as a vibrator or sex doll). As such, a human could do anything to such a sexbot without the sexbot being wronged. This is because such sexbots would lack the moral status needed to be wronged. Obviously enough, the sexbots of the near future will be in the class of objects. However, science fiction has routinely featured intelligent, human-like robots (commonly known as androids). Intelligent beings, even artificial ones, would seem to have an excellent claim on being persons. In terms of sorting out when a robot should be treated as a person, the reasonable test is the Cartesian test. Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind is the ability to use true language. This notion was explicitly applied to machines by Alan Turing in his famous Turing test. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the test.

Crudely put, the idea is that if something talks, then it is reasonable to regard it as a person. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

While Descartes does not deeply explore the moral distinctions between beings that talk (that have minds) and those that merely make noises, it does seem reasonable to regard a being that talks as a person and to thus grant it the moral status that goes along with personhood. This, then, provides a means to judge whether an advanced sexbot is a person or not: if the sexbot talks, it is a person. If it is a mere automaton of the sort Descartes envisioned, then it is a thing and would presumably lack moral status.

Having sex with a sexbot that can pass the Cartesian test would certainly seem to be morally equivalent to having sex with a human person. As such, whether the sexbot freely consented or not would be a morally important matter. If intelligent robots were constructed as sex toys, this would be the moral equivalent of enslaving humans for the sex trade (which is, of course, actually done). If such sexbots were mistreated, this would also be morally on par with mistreating a human person.

It might be argued that an intelligent robot would not be morally on par with a human since it would still be a thing. However, aside from the fact that the robot would be a manufactured being and a human is (at least for now) a natural being, there would seem to be no relevant difference between them. The intelligence of the robot would seem to be what is important, not its physical composition.

It might also be argued that passing the Cartesian/Turing test would not prove that a robot is self-aware and hence it would still be reasonable to hold that it is not a person. On this view, the robot would merely seem to be a person because it acts like one. While this is a point well worth considering, the same sort of argument could be made about humans. Humans (sometimes) behave in an intelligent manner, but there is no way to determine whether another human is actually self-aware. This is the classic problem of other minds: all I can do is observe your behavior and infer that you are self-aware by analogy to my own case. Hence, I do not know that you are aware, since I cannot be you. From your perspective, the same is true about me. As such, if a robot acted in an intelligent manner, it would seem that it would have to be regarded as a person on those grounds. To fail to do so would be a mere prejudice in favor of the organic.

It might be replied that some people believe other people can be used as they see fit. Those who would use a human as a mere thing would see nothing wrong with using an intelligent robot as a mere thing.

The obvious response to this is to reverse the situation: no sane person would wish to be treated as a mere thing, and hence such people cannot consistently accept using other people in that manner. The other obvious reply is that such people are simply evil.

Those with religious inclinations would probably bring up the matter of the soul. But the easy reply is that we would have as much evidence that robots have souls as we do that humans have souls. That is to say, no evidence at all.

One of the ironies of sexbots (or companionbots) is that the ideal is to make a product that is as much like a human as possible. As such, to the degree that the ideal is reached, the “product” would be immoral to sell or own. This is a general problem for artificial intelligences: they are intended to be owned by people and to do onerous tasks, but to the degree that they are intelligent, they would be slaves.

It could be countered that it is better that evil humans abuse sexbots rather than other humans. However, it is not clear that would actually be a lesser evil—it would just be an evil against a synthetic person rather than an organic person.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Sexbots: Sex & Consequences

As a general rule, any technology that can be used for sex will be used for sex. Even if it shouldn’t. In accord with this rule, researchers and engineers are hard at work developing ever more realistic sexbots. By science-fiction standards, these sexbots are fairly crude—the most human-like seem to be just a bit more advanced than high-end sex dolls.

In my previous essay on this subject, I considered a Kantian approach to such non-rational sexbots. In this essay I will look at the matter from a consequentialist/utilitarian moral viewpoint.

On the face of it, sexbots could be seen as nothing new—currently they are merely an upgrade of the classic sex dolls that have been around for quite some time. Sexbots are, of course, more sophisticated than the famous blow-up sex dolls, but the basic idea is the same: the sexbot is an object that a person has sex with.

That said, one thing that makes sexbots morally interesting is the fact that they are typically designed to mimic human beings not merely in physical form (which is what sex dolls do) but in regards to the mind. For example, the Roxxxy sexbot’s main feature is its personality (or, more accurately, personalities). As a fictional example, the sexbots in Almost Human do not merely provide sex—they also provide human-like companionship. Fully person-like sexbots are still a thing of science fiction. Even so, human-mimicking sexbots of this sort can be seen as something new.

An obvious moral concern is that the human-mimicking sexbots will have negative consequences for actual human beings, be they men or women. Not surprisingly, many of these concerns are analogous to existing moral concerns regarding pornography.

Pornography, so the stock arguments go, can have considerable negative consequences. One of these is that it teaches men to regard women as being mere sexual objects. This can, in some cases, influence men to treat women poorly and can also impact how women see themselves. Another point of concern is the addictive nature of pornography—people can become obsessed with it to their detriment.

Human-mimicking sexbots would certainly seem to have the potential to do more harm than pornography. After all, while watching pornography allows a person to see other people treated as mere sexual objects, a sexbot would allow a person to use a human-mimicking object sexually. This could presumably have an even stronger conditioning effect on the person using the object, leading some to regard other people as mere sexual objects and thus increasing the chances they will treat other people poorly. If so, it would seem that selling or using a sexbot would be morally wrong.

People might become obsessed with their sexbots, as people do with pornography. Then again, people might simply “conduct their business” with their sexbots and get on with things. If so, sexbots might be an improvement over pornography in this regard.  After all, while a guy could spend hours each day watching pornography, he certainly would not last very long with his sexbot.

Another concern raised in regards to certain types of pornography is that they encourage harmful sexual views and behavior. For example, violent pornography is supposed to influence people to engage in violence. As another example, child pornography is supposed to have an especially pernicious influence on people. Naturally, there is the concern about causation here: do people seek such porn because they are already that sort of person or does the porn influence them to become that sort of person? I will not endeavor to answer this here.

Since sexbots are objects, a person can do whatever he wishes to his sexbot—hit it, burn it, “torture” it, and so on. Presumably there will also be specialty markets catering to particular interests, such as those of pedophiles and necrophiliacs. If pornography that caters to these “tastes” can be harmful, then presumably a person being actively involved in such activities with a human-mimicking sexbot would be even more harmful. Essentially, the person would be practicing or warming up for the real thing. As such, it would seem that selling or using sexbots, especially those designed for harmful “interests”, would be immoral.

Not surprisingly, these arguments are also similar to those used in regards to violent video games. The general idea is that violent video games are supposed to influence people so that they are more likely to engage in violence. So, just as some have proposed restrictions on virtual violence, perhaps there should be strict restrictions on sexbots.

When it comes to video games, one plausible counter is that while violent video games might have negative impact on the behavior of some people, they allow most people to harmlessly “burn off” their desire for violence and to let off steam. This seems analogous to sports and non-video games: they allow people to engage in conflict and competition in safer and far less destructive ways. For example, a person can indulge her love of conflict and conquest by playing Risk or Starcraft II after she works out her desire for violence by sparring a few rounds in the ring.

Turning back to sexbots, while they might influence some people badly, they might also provide a means by which people could indulge desires that it would be wrong, harmful and destructive to indulge with another person. So, for example, a person who likes to engage in sexual torture could satisfy her desires on a human-mimicking sexbot rather than an actual human. The rather critical issue here is whether indulging in such virtual vice with a sexbot would harmlessly dissipate these desires or merely fuel them and drive a person to indulge them on actual people. If sexbots did allow people who would otherwise harm other people to vent their “needs” harmlessly on machines, then that would certainly be good for society as a whole. However, if this sort of activity would simply push them into doing such things for real and with unwilling victims, then that would certainly be bad for both the person and society as a whole. This, then, is a key part of addressing the ethical concerns regarding sexbots.

(As a side note, I’ve been teaching myself how to draw; clever mockery of my talent is always appreciated…)

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page
