Androids, Autonomy & Agency

Blade Runner (Photo credit: Wikipedia)

Philosophers have long speculated about autonomy and agency, but the rise of autonomous systems has made these speculations ever more important. Keeping things fairly simple, an autonomous system is one capable of operating independently of direct control. Autonomy comes in degrees, in terms of both the extent of the independence and the complexity of the operations. It is, obviously, the capacity for independent operation that distinguishes autonomous systems from those controlled externally.

Simple toys provide basic examples of the distinction. A wind-up mouse toy has a degree of autonomy: once wound and released, it can operate on its own until it runs down. A puppet, in contrast, has no autonomy—a puppeteer must control it. Robots provide examples of rather more complex autonomous systems. Google’s driverless car is an example of a relatively advanced autonomous machine—once programmed and deployed, it will be able to drive itself to its destination. A normal car is an example of a non-autonomous system—the driver controls it directly. Some machines allow for both autonomous and non-autonomous operation. For example, there are drones that follow a program guiding them to a target and then an operator can take direct control.

Autonomy, at least in this context, is quite distinct from agency. Autonomy is the capacity to operate (in some degree) independently of direct control. Agency, at least in this context, is the capacity to be morally responsible for one’s actions. There is clearly a connection between autonomy and moral agency: moral agency requires autonomy. After all, an entity whose actions are completely controlled externally would not be responsible for what it was made to do. A puppet is, obviously, not accountable for what the puppeteer makes it do.

While autonomy seems necessary for agency, it is clearly not sufficient—while all agents have some autonomy, not all autonomous entities are moral agents. A wind-up toy has a degree of autonomy, but no agency. A robot drone following a pre-programmed flight plan has a degree of autonomy, but would lack agency—if it collided with a plane, it would not be morally responsible. The usual reason such a machine would not be an agent is that it lacks the capacity to decide. Or, put another way, it lacks freedom. Since it cannot do otherwise, it is no more morally accountable than an earthquake or a supernova.

One obvious problem with basing agency on freedom (especially metaphysical freedom of the will) is that there is considerable debate about whether or not such freedom exists. There is also the epistemic problem of how one would know if an entity has such freedom.

As a practical matter, it is usually assumed that people have the freedom needed to make them into agents. Kant, rather famously, took this approach. What he regarded as the best science of his day indicated a deterministic universe devoid of metaphysical freedom. However, he contended that such freedom was needed for morality—so it should be accepted for this reason.

While humans are willing (generally) to attribute freedom and agency to other humans, there seem to be good reasons to not attribute freedom and agency to autonomous machines—even those that might be as complex as (or even more complex than) a human. The usual line of reasoning is that since such machines would be built and programmed by humans they would do what they do because they are what they are. This would be in clear contrast to the agency of humans: humans, it is alleged, do what they do because they choose to do what they do.

This distinction between humans and suitably complex machines would seem to be a mere prejudice favoring organic machines over mechanical machines. If a human were in a convincing robot costume and credibly presented as a robot while acting like a normal human, people would be inclined to deny that “it” had freedom and agency. If a robot were made to look and act just like a human, people would be inclined to grant it agency—at least until they learned it was “just” a machine. Then there would probably be an inclination to regard it as a very clever but unfree machine. But, of course, it would not really be known whether the human or the machine had the freedom allegedly needed for agency. Fortunately, it is possible to have agency even without free will (but with a form of freedom).

The German philosopher Leibniz held the view that what each person will do is pre-established by her inner nature. On the face of it, this would seem to entail that there is no freedom: each person does what she does because of what she is—and she cannot do otherwise. Interestingly, Leibniz takes the view that people are free. However, he does not accept the common view that freedom requires actions that are unpredictable and spontaneous. Leibniz rejects this view in favor of the position that freedom is unimpeded self-development.

For Leibniz, lacking metaphysical freedom would involve being controlled from the outside—like a puppet controlled by a puppeteer or a vehicle being operated by remote control. In contrast, freedom is acting from one’s values and character (what Leibniz and Taoists call “inner nature”). If a person acts from this inner nature rather than from external coercion—that is, if the actions result from character—then that is all that can be meant by freedom. This view, which attempts to blend determinism and freedom, is known as compatibilism. On this sort of view, humans do have agency because they have the needed degree of freedom and autonomy.

If this model works for humans, it could also be applied to autonomous machines. To the degree that a machine operates in accord with its “inner nature” and is not operating under the control of outside factors, it would have agency.

An obvious objection is that an autonomous machine, however complex, would have been built and programmed (in the broad sense of the term) by humans. As such, it would be controlled and not free. The easy and obvious reply is that humans are “built” by other humans (by mating) and are “programmed” by humans via education and socialization. As such, if humans can be moral agents, then it would seem that a machine could also be a moral agent.

From a moral standpoint, I would suggest a Moral Descartes’ Test (or, for those who prefer, a Moral Turing Test). Descartes argued that the sure proof of a being having a mind is its capacity to use true language. Turing later proposed a similar sort of test involving the ability of a computer to pass as human via text communication. In the moral test, the test would be a judgment of moral agency—can the machine be as convincing as a human in regards to its possession of agency? Naturally, a suitable means of concealing the fact that the being is a machine would be needed in order to prevent mere prejudice from infecting the judgment. The movie Blade Runner featured something similar, the Voight-Kampff test, aimed at determining whether the subject was a replicant or a human. That test was based on the differences between humans and replicants in regards to emotions. In the case of moral agency, the test would have to be crafted to determine agency rather than to distinguish a human from a machine, since the issue is not whether a machine is human but whether it has agency. A moral agent might have rather different emotions, etc. than a human. The challenge is, obviously enough, developing a proper test for moral agency. It would, of course, be rather interesting if humans could not pass it.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Review of Dungeons & Dragons and Philosophy

Dungeons & Dragons and Philosophy

Christopher Robichaud (Editor), $17.95, August 2014

As a professional philosopher, I am often wary of “pop philosophy”, mainly because it is rather like soda pop: it is intended for light consumption. But, like soda, some of it is quite good and some of it is just sugary junk that will do little but rot your teeth (or mind). As a professional author in the gaming field, I am generally wary of attempts by philosophers to write philosophically about a game. While a philosopher might be adept at philosophy and might even know how to read a d4, works trying to jam gaming elements into philosophy (or vice versa) are often like trying to jam an ogre into full plate made for a Halfling: it will not be a good fit and no one is going to be happy with the results.

Melding philosophy and gaming also has a rather high challenge rating, mainly because it is difficult to make philosophy interesting and comprehensible to folks outside of philosophy, such as gamers who are not philosophers. After all, gamers usually read books that are game books: sourcebooks adding new monsters and classes, adventures (or modules as they used to be called), and rulebooks. There is also a comparable challenge in making the gaming aspects comprehensible and interesting to those who are not gamers. As such, this book faces some serious obstacles. So, I shall turn now to how the book fares in its quest to get your money and your eyeballs.

Fortunately for the authors of this anthology of fifteen essays, many philosophers are quite familiar with Dungeons & Dragons and gamers are often interested in philosophical issues. So, there is a ready-made audience for the book. There are, however, many more people who are interested in philosophy but not gaming and vice versa. So, I will discuss the appeal of the book to these three groups.

If you are primarily interested in philosophy and not familiar with Dungeons & Dragons, this book will probably not appeal to you—while the essays do not assume a complete mastery of the game, many assume considerable familiarity with the game. For example, the ethics of using summoned animals in combat is not an issue that non-gamers worry about or probably even understand. That said, the authors do address numerous standard philosophical issues, such as free will, and generally provide enough context so that a non-gamer will get what is going on.

If you are primarily a gamer and not interested in philosophy, this book will probably not be very appealing—it is not a gaming book and does not provide any new monsters, classes, or even background material. That said, it does include the sort of game discussions that gamers might not recognize as philosophical, such as handling alignments. So, even if you are not big on philosophy, you might find the discussions interesting and familiar.

For those interested in both philosophy and gaming, the book has considerable appeal. The essays are clear, competent and well-written on the sort of subjects that gamers and philosophers often address, such as what actions are evil. The essays are not written at the level of journal articles, which is a good thing: academic journals tend to be punishing reading. As such, people who are not professional philosophers will find the philosophy approachable. Those who are professional philosophers might find it less appealing because there is nothing really groundbreaking here, although the essays are interesting.

The subject matter of the book is fairly diverse within the general context. The lead essay, by Greg Littmann, considers the issue of free will within the context of the game. Another essay, by Matthew Jones and Ashley Brown, looks at the ethics of necromancy. While (hopefully) not relevant to the real world, it does raise an issue that gamers have often discussed, especially when the cleric wants to have an army of skeletons but does not want the paladin to smite him in the face. There is even an essay on gender in the game, ably written by Shannon M. Mussett.

Overall, the essays provide an interesting philosophical read that will appeal to gamers, be they serious or casual. Those who are interested in neither philosophy nor gaming will probably not find the book worth buying with their hard-earned coppers.

For those doing gift shopping for a friend or relative who is interested in philosophy and gaming, this would be a reasonable choice for a present. Especially if accompanied by a bag of dice. As a great philosopher once said, “there is no such thing as too many dice.”

As a disclaimer, I received a free review copy from the publisher. I do not know any of the authors or the editor and was not asked to contribute to the book.

Rhetorical Overkill

As part of my critical thinking class, I teach a section on rhetoric. While my main concern is teaching students how to defend against it, I also discuss how to use it. One of the points I make is that a risk with certain forms of rhetoric is what I call rhetorical overkill. This is most commonly done with hyperbole, which is, by definition, an extravagant overstatement.

One obvious risk with hyperbole is that if it is too over the top, it can be ineffective or even counterproductive. If a person is trying to use positive hyperbole, going too far can create the impression that the person is claiming the absurd or even mocking the subject in question. For example, think of the over-the-top infomercials where the product is claimed to do everything but cure cancer. If the person is trying to use negative hyperbole, going too far can undercut the attack by making it seem ridiculous. For example, calling a person a Nazi because he favors laws requiring people to use seat belts would seem rather absurd.

Another risk is that hyperbole can create an effect somewhat like crying “wolf.” In that tale, the boy cried “wolf” so often that no one believed him when the wolf actually came. In the case of rhetorical overkill, the problem is that it can create what might be dubbed “hyperbolic fatigue.” If matters are routinely blown out of proportion, this will tend to numb people to extreme language. On a related note, if politicians and pundits routinely cry “Hitler” or “apocalypse” over lesser matters, what words will they have left when a situation truly warrants such terms?

In some ways, this is like swearing. While I am not a prude, I prefer to keep my swear words in reserve for situations that actually merit them. I’ve noticed that many people tend to use swear words in everyday conversations, and I found this a bit confusing at first. After all, I have a “hierarchy of escalation” when it comes to words, and swear words are at the top. But, for many folks today, swear words are just part of everyday conversation (even in the classroom). So, when someone swears at me now, I pause to see if they are just talking normally or actually trying to start trouble.

While I rarely swear, I do resent the fact that swear words have become so diluted, and hence less useful for making a point quickly and directly. The same applies to extreme language: if we do not reserve it for extreme circumstances, then we diminish our language by robbing extreme words of their corresponding significance.

So, what the f@ck do you think?

Darwin & Cameron

Kirk Cameron, formerly of the American sitcom Growing Pains, has lent his skills to the defense of creationism against Darwinism. He is currently involved in handing out a version of Darwin’s On the Origin of Species with a new introduction. Not surprisingly, the introduction is highly critical of Darwin.

While there are some reasonable criticisms of evolution, and it is quite possible to give reasonable arguments in favor of teleology (see, for example, Plato, Aristotle, and Aquinas), this introduction seems to focus primarily on ad hominem attacks against Darwin. To be specific, the main criticisms seem to be allegations that Darwin’s theory influenced Hitler, that Darwin was a racist, and that Darwin was a misogynist.

The logical response to these charges is quite easy: even if these claims were true, they have no bearing whatsoever on the correctness or incorrectness of Darwin’s claims. After all, these are mere ad hominem attacks.

To see that this sort of reasoning is flawed, simply consider this: Adolf Hitler believed that 2+2=4. Obviously the fact that Hitler was a wicked man has no bearing on the truth of that view. Likewise, even racists believe that fire burns and to say that this makes the claim about fire untrue is obviously false.

To use another example, it has been argued that Hitler was influenced by Christianity. However, it would be a logical error to infer that Christianity is flawed because a wicked person was influenced by it (or believed in it).

Interestingly enough, certain atheists attack religions in the same manner that Darwin is being attacked here: by noting that people who did terrible things were Christians/influenced by Christianity (such as the impact of Christian antisemitism on the Holocaust). Obviously, this sort of tactic is based on a fallacy whether it is used against Darwin’s theory or against a religious view.