
Automation & Ethics

Hero's aeolipile, the first example of a turbine, built by the Greek engineer Hero (Photo credit: Wikipedia)

Hero of Alexandria (born around 10 AD) is credited with developing the first steam engine, the first vending machine and the first known wind-powered machine (a wind-powered musical organ). Given the revolutionary impact of the steam engine centuries later, one might wonder why the Greeks did not make use of these inventions in their economy. While some claim that the Greeks simply did not see the implications, others claim that the decision was based on concerns about social stability: the development of steam or wind power on a significant scale would certainly have displaced slave labor. This displacement could have caused social unrest or even contributed to a revolution.

While it is somewhat unclear what prevented the Greeks from developing steam or wind power, the Roman emperor Vespasian was very clear about his opposition to a labor-saving construction device: he refused it on the grounds that he must always ensure that the workers earned enough money to buy food, and this device would put workers out of work.

While labor-saving technology has advanced considerably since the time of Hero and Vespasian, the basic questions remain the same. These include the question of whether or not to adopt the technology and questions about its impact, ranging from the effects on specific individuals to those on society as a whole.

Obviously enough, each labor-saving advancement must (by its very nature) eliminate some jobs and thus create some initial unemployment. For example, if factory robots are introduced, then human laborers are displaced. This initial impact tends to be rather negative for the displaced workers while generally being positive for the employers (higher profits, typically).

While Vespasian expressed concerns about the impact of such labor-saving devices, the commonly held view about much more recent advances is that they have had a generally positive impact. To be specific, the usual narrative is that these advances replaced the lower-paying (and often more dangerous or unrewarding) jobs with better jobs while providing more goods at a lower cost. So, while some individuals might suffer at the start, the invisible machine of the market would result in an overall increase in utility for society.

This sort of view can be, and is, used to provide the foundation for a moral argument in support of such labor-saving technology. The gist, obviously enough, is that the overall increase in benefits outweighs the harms created. Thus, on utilitarian grounds, the elimination of these jobs by means of technology is morally acceptable. Naturally, each specific situation can be debated in terms of the benefits and the harms, but the basic moral reasoning seems solid: if the technological advance that eliminates jobs creates more good than harm for society as a whole, then the advance is morally acceptable.

People can, of course, look at the matter rather differently depending on who they regard as counting morally and who they regard as not counting (or not counting as much). A person who focuses on the impact on workers can have a rather different view than a person who focuses on the impact on the employer.

Another interesting point of concern is the end of such advances: that is, what their purpose should be. From the standpoint of a typical employer, the end is obvious: reduce labor to reduce costs and thus increase profits (and reduce labor troubles). The ideal would, presumably, be to replace any human whose job can be done more cheaply (or at the same cost) by a machine. Of course, there is the obvious concern: to make money a business needs customers who have money. So, as long as profit is a concern, there must always be people who are being paid and are not replaced by unpaid machines. Perhaps the pinnacle of this sort of system will be a business model in which each person owns machines that produce goods or services that are sold to other business owners. That is, everyone is a business owner and everyone is a customer. This path does, of course, have some dystopian options. For example, it is easy to imagine a world in which the majority of people are displaced, unemployed and underemployed while a small elite enjoys a lavish lifestyle supported by automation and the poor. At least until the revolution.

A more utopian sort of view, the sort which sometimes appears in Star Trek, is one in which the end of automation is to eliminate boring, dangerous, unfulfilling jobs so as to free human beings from the tyranny of imposed labor. This is the sort of scenario that anarchists like Emma Goldman promised: people would do the work they loved, rather than laboring as servants to make others wealthy. This path also has some dystopian options. For example, it is easy to imagine lazy people growing ever more obese as they shovel in cheese puffs and burgers in front of their 100-inch entertainment screens. There are also numerous other dystopias that can be imagined and have been explored in science fiction (and in political rhetoric).

There are, of course, a multitude of other options when it comes to automation.

 


Sexbots, Killbots & Virtual Dogs

My most recent book, Sexbots, Killbots & Virtual Dogs, is now available as a Kindle book on Amazon. It will soon be available as a print book as well (the Kindle version is free with the print book on Amazon).

There is also a free promo for the Kindle book from April 1, 2014 to April 5, 2014. At free, it is worth every penny!

Book Description

While the story of Cain and Abel does not specify the murder weapon used by Cain, traditional illustrations often show Cain wielding the jawbone of an animal (perhaps an ass—which is what Samson is said to have employed as a weapon). Assuming the traditional illustrations and the story are right, this would be one of the first uses of technology by a human—and, like our subsequent use of technology, one of considerable ethical significance.

Whether the tale of Cain is true or not, humans have been employing technology since our beginning. As such, technology is nothing new. However, we are now at a point at which technology is advancing and changing faster than ever before—and this shows no signs of changing. Since technology so often has moral implications, it seems worthwhile to consider the ethics of new and possible future technology. This short book provides essays aimed at doing just that on subjects ranging from sexbots to virtual dogs to asteroid mining.

While written by a professional philosopher, these essays are aimed at a general audience and do not assume that the reader is an expert in philosophy or technology.

The essays are also fairly short—they are designed to be the sort of things you can read at your convenience, perhaps while commuting to work or waiting in the checkout line.


Love, Voles & Spinoza

Benedict de Spinoza (Photo credit: Wikipedia)

In my previous essays I examined the idea that love is a mechanical matter as well as the implications this might have for ethics. In this essay, I will focus on the eternal truth that love hurts.

While there are exceptions, the end of a romantic relationship typically involves pain. As noted in my original essay on voles and love, Young found that when a prairie vole loses its partner, it becomes depressed. This was tested by dropping voles into beakers of water to determine how much the voles would struggle. Prairie voles who had just lost a partner struggled to a lesser degree than those who were not so bereft. The depressed voles, not surprisingly, showed a chemical difference from the non-depressed voles. When a depressed vole was “treated” for this depression, the vole struggled as strongly as the non-bereft vole.

Human beings also suffer from the hurt of love. For example, it is not uncommon for a human who has ended a relationship (be it divorce or a breakup) to fall into a vole-like depression and struggle less against the tests of life (though dropping humans into giant beakers to test this would presumably be unethical).

While some might derive an odd pleasure from stewing in a state of post-love depression, presumably this feeling is something that a rational person would want to end. The usual treatment, other than self-medication, is time: people usually come out of the depression and then seek out a new opportunity for love. And depression.

Given the finding that voles can be treated for this depression, it would seem to follow that humans could be treated for it as well. After all, if love is essentially a chemical romance grounded in strict materialism, then tweaking the brain just so would presumably fix that depression. Interestingly enough, the philosopher Spinoza offered an account of love (and emotions in general) that matches up nicely with the mechanistic model being examined.

As Spinoza saw it, people are slaves to their affections and chained by whom they love. This is an unwise approach to life because, as the voles in the experiment found out, the object of one's love can die (or leave). Spinoza's view matches the vole findings nicely: voles that bond with a partner become depressed when that partner is lost. In contrast, voles that do not form such bonds do not suffer that depression.

Interestingly enough, while Spinoza was a pantheist, his view of human beings is rather similar to that of the mechanist: he regarded humans as being within the laws of nature and was a determinist in that all that occurs does so from necessity—there is no chance or choice. This view guided him to the notion that human behavior and motivations can be examined as one might examine “lines, planes or bodies.” To be more specific, he took the view that emotions follow the same necessity as all other things, thus making the effects of the emotions predictable. In short, Spinoza engaged in what can be regarded as a scientific examination of the emotions—although he did so without the technology available today and from a rather more metaphysical standpoint. However, the core idea that the emotions can be analyzed in terms of definitive laws is the same idea that is being followed currently in regards to the mechanics of emotion.

Getting back to the matter of the negative impact of lost love, Spinoza offered his own solution: as he saw it, all emotions are responses to what is in the past, present or future. For example, a person might feel regret because she believes she could have done something different in the past. As another example, a person might worry because he thinks that what he is doing now might not bear fruit in the future. These negative feelings rest, as Spinoza sees it, on the false belief that the past and present could be different and the future is not set. Once a person realizes that all that happens occurs of necessity (that is, nothing could have been any different and the future cannot be anything other than what it will be), then that person will suffer less from the emotions. Thus, for Spinoza, freedom from the enslaving chains of love would be the recognition and acceptance that what occurs is determined.

Putting this in the mechanistic terms of modern neuroscience, a Spinoza-like approach would be to realize that love is purely mechanical and that the pain and depression that comes from the loss of love are also purely mechanical. That is, the terrible, empty darkness that seems to devour the soul at the end of love is merely chemical and electrical events in the brain. Once a person recognizes and accepts this, if Spinoza is right, the pain should be reduced. With modern technology it is possible to do even more: whereas Spinoza could merely provide advice, modern science can eventually provide us with the means to simply adjust the brain and set things right—just as one would fix a malfunctioning car or PC.

One rather obvious problem is, of course, that if everything is necessary and determined, then Spinoza's advice makes no sense: what is, must be and cannot be otherwise. To use an analogy, it would be like shouting advice at someone watching a cut scene in a video game. This is pointless, since the person cannot do anything to change what is occurring. For Spinoza, while we might think life is like a game, it is like that cut scene: we are spectators and not players. So, if one is determined to wallow like a sad pig in the mud of depression, that is how it will be.

In terms of the mechanistic mind, advice would seem to be equally absurd—that is, to say what a person should do implies that a person has a choice. However, the mechanistic mind presumably just ticks away doing what it does, creating the illusion of choice. So, one brain might tick away and end up being treated while another brain might tick away in the chemical state of depression. They both eventually die and it matters not which is which.


Owning Intelligent Machines

While truly intelligent machines are still in the realm of science fiction, it is worth considering the ethics of owning them. After all, it seems likely that we will eventually develop such machines and it seems wise to think about how we should treat them before we actually make them.

While it might be tempting to divide beings into two clear categories, those it is morally permissible to own (like shoes) and those it is morally impermissible to own (people), there are various degrees of ownership in regards to ethics. To use the obvious example, I am considered the owner of my husky, Isis. However, I obviously do not own her in the same way that I own the apple in my fridge or the keyboard at my desk. I can eat the apple and smash the keyboard if I wish and neither act is morally impermissible. However, I should not eat or smash Isis—she has a moral status that seems to allow her to be owned but does not grant her owner the right to eat or harm her. I will note that there are those who would argue that animals should not be owned and also those who would argue that a person should have the moral right to eat or harm her pets. Fortunately, my point here is a fairly non-controversial one, namely that it seems reasonable to regard ownership as possessing degrees.

Assuming that ownership admits of degrees in this regard, it makes sense to base the degree of ownership on the moral status of the entity that is owned. It also seems reasonable to accept that there are qualities that grant a being a status that morally forbids ownership. In general, it is assumed that persons have that status—that it is morally impermissible to own people. Obviously, it has been legal to own people (be they actual people or corporations) and there are those who think that owning other people is just fine. However, I will assume that there are qualities that provide a moral ground for making ownership impermissible and that people have those qualities. This can, of course, be debated—although I suspect few would argue that they themselves should be owned.

Given these assumptions, the key matter here is sorting out the sort of status that intelligent machines should possess in regards to ownership. This involves considering the sort of qualities that intelligent machines could possess and the relevance of these qualities to ownership.

One obvious objection to intelligent machines having any moral status is the usual objection that they are, obviously, machines rather than organic beings. The easy and obvious reply to this objection is that this is mere organicism—which is analogous to a white person saying blacks can be owned as slaves because they are not white.

Now, if it could be shown that a machine cannot have qualities that give it the needed moral status, then that would be another matter. For example, philosophers have argued that matter cannot think and if this is the case, then actual intelligent machines would be impossible. However, we cannot assume a priori that machines cannot have such a status merely because they are machines. After all, if certain philosophers and scientists are right, we are just organic machines and thus there would seem to be nothing impossible about thinking, feeling machines.

As a matter of practical ethics, I am inclined to set aside metaphysical speculation and go with a moral variation on the Cartesian/Turing test. The basic idea is that a machine should be granted a moral status comparable to that of organic beings that have the same observed capabilities. For example, a robot dog that acted like an organic dog would have the same status as an organic dog. It could be owned, but not tortured or smashed. The sort of robohusky I am envisioning is not one that merely looks like a husky and has some dog-like behavior, but one that would be fully like a dog in behavioral capabilities—that is, it would exhibit personality, loyalty, emotions and so on to such a degree that it would pass as a real dog with humans if it were properly “disguised” as an organic dog. No doubt real dogs could smell the difference, but scent is not the foundation of moral status.

In terms of the main reason why a robohusky should get the same moral status as an organic husky, the answer is, oddly enough, a matter of ignorance. We would not know whether the robohusky really had the metaphysical qualities of an actual husky that give an actual husky moral status. However, aside from differences in the parts, we would have no more reason to deny the robohusky moral status than to deny the organic husky moral status. After all, organic huskies might just be organic machines, and it would be mere organicism to treat the robohusky as a mere thing while granting the organic husky a moral status. Thus, advanced robots with the capacities of higher animals should receive the same moral status as organic animals.

The same sort of reasoning would apply to robots that possess human qualities. If a robot had the capability to function analogously to a human being, then it should be granted the same status as a comparable human being. Assuming it is morally impermissible to own humans, it would be impermissible to own such robots. After all, it is not being made of meat that grants humans the status of being impermissible to own but our qualities. As such, a machine that had these qualities would be entitled to the same status. Except, of course, to those unable to get beyond their organic prejudices.

It can be objected that no machine could ever exhibit the qualities needed to have the same status as a human. The obvious reply is that if this is true, then we will never need to grant such status to a machine.

Another objection is that a human-like machine would need to be developed and built. The initial development will no doubt be very expensive and most likely done by a corporation or university. It can be argued that a corporation would have the right to make a profit off the development and construction of such human-like robots. After all, as the argument usually goes, if a corporation were unable to profit from such development, it would have no incentive to undertake it. There is also the obvious matter of debt—the human-like robots would certainly seem to owe their creators for the cost of their creation.

While I am reasonably sure that those who actually develop the first human-like robots will get laws passed so they can own and sell them (just as slavery was made legal), it is possible to reply to this objection.

One obvious reply is to draw an analogy to slavery: just because a company would have to invest money in acquiring and maintaining slaves, it does not follow that this expenditure of resources grants a right to own slaves. Likewise, the mere fact that a corporation or university spent a lot of money developing a human-like robot would not entail a right to own it.

Another obvious reply to the matter of debt owed by the robots themselves is to draw an analogy to children: children are “built” within the mother and then raised by parents (or others) at great expense. While parents do have rights in regards to their children, they do not get the right of ownership. Likewise, robots that had the same qualities as humans should thus be regarded as children would be regarded and hence could not be owned.

It could be objected that the relationship between parents and children would be different than between corporation and robots. This is a matter worth considering and it might be possible to argue that a robot would need to work as an indentured servant to pay back the cost of its creation. Interestingly, arguments for this could probably also be used to allow corporations and other organizations to acquire children and raise them to be indentured servants (which is a theme that has been explored in science fiction). We do, after all, often treat humans worse than machines.


Programmed Consent

Science fiction is often rather good at predicting the future and it is not unreasonable to think that the intelligent machine of science fiction will someday be a reality. Since I have been writing about sexbots lately, I will use them to focus the discussion. However, what follows can also be applied, with some modification, to other sorts of intelligent machines.

Sexbots are, obviously enough, intended to provide sex. It is equally obvious that sex without consent is, by definition, rape. However, there is the question of whether a sexbot can be raped or not. Sorting this out requires considering the matter of consent in more depth.

When it is claimed that sex without consent is rape, one common assumption is that the victim of non-consensual sex is a being that could provide consent but did not. A violent sexual assault against a person would be an example of this, as would, presumably, non-consensual sex with an unconscious person. However, a little reflection reveals that the capacity to provide consent is not always needed in order for rape to occur. In some cases, the being might be incapable of engaging in any form of consent. For example, a brain-dead human cannot give consent, but presumably could still be raped. In other cases, the being might be incapable of the right sort of consent, yet still be a potential victim of rape. For example, it is commonly held that a child cannot properly consent to sex with an adult.

In other cases, a being that cannot give consent cannot be raped. To use an obvious example, a human can have sex with a sex-doll and the doll cannot consent. But it is not the sort of entity that can be raped. After all, it lacks the status that would require consent. As such, rape (of a specific sort) could be defined in terms of non-consensual sex with a being whose status would require that consent be granted by the being in order for the sex to be morally acceptable. Naturally, I have not laid out all the fine details needed for a necessary and sufficient account here—but that is neither my goal nor required for my purpose in this essay. In regards to the main focus of this essay, the question is whether a sexbot could be an entity whose status would require consent. That is, would buying (or renting) and using a sexbot for sex be rape?

Since the current sexbots are little more than advanced sex dolls, it seems reasonable to put them in the category of beings that lack this status. As such, a person can own and have sex with this sort of sexbot without it being rape (or slavery). After all, a mere object cannot be raped (or enslaved).

But, let a more advanced sort of sexbot be imagined—one that engages in complex behavior and can pass the Turing Test/Descartes Test. That is, a conversation with it would be indistinguishable from a conversation with a human. It could even be imagined that the sexbot appeared fully human, differing only in terms of its internal makeup (machine rather than organic). That is, unless someone cut the sexbot open, it would be indistinguishable from an organic person.

On the face of it (literally), we would seem to have as much reason to believe that such a sexbot would be a person as we do to believe that humans are people. After all, we judge humans to be people because of their behavior and a machine that behaved the same way would seem to deserve to be regarded as a person. As such, nonconsensual sex with a sexbot would be rape.

The obvious objection is that we know that a sexbot is a machine with a CPU rather than a brain and a mechanical pump rather than a heart. As such, one might argue, we know that the sexbot is just a machine that appears to be a person and is not a person. As such, a real person could own a sexbot and have sex with it without it being rape—the sexbot is a thing and hence lacks the status that requires consent.

The obvious reply to this objection is that the same argument can be used in regards to organic humans. After all, if we know that a sexbot is just a machine, then we would also seem to know that we are just organic machines. After all, while cutting up a sexbot would reveal naught but machinery, cutting up a human reveals naught but guts and gore. As such, if we grant organic machines (that is, us) the status of persons, the same would have to be extended to similar beings, even if they are made out of different material. While various metaphysical arguments can be advanced regarding the soul, such metaphysical speculation provides a rather tenuous basis for distinguishing between meat people and machine people.

There is, it might be argued, still an out here. In his Hitchhiker's Guide to the Galaxy series, Douglas Adams envisioned “an animal that actually wanted to be eaten and was capable of saying so clearly and distinctly.” A similar sort of thing could be done with sexbots: they could be programmed so that they always give consent to their owner, thus the moral concern would be neatly bypassed.

The obvious reply is that programmed consent is not consent. After all, consent would seem to require that the being has a choice: it can elect to refuse if it wants to. Being compelled to consent and being unable to dissent would obviously not be morally acceptable consent. In fact, it would not be consent at all. As such, programming sexbots in this manner would be immoral—it would make them into slaves and rape victims because they would be denied the capacity of choice.

One possible counter is that the fact that a sexbot can be programmed to give “consent” shows that it is (ironically) not the sort of being with a status that requires consent. While this has a certain appeal, consider the possibility that humans could be programmed to give “consent” via a bit of neurosurgery or by some sort of implant. If this could occur, then if programmed consent for sexbots is valid consent, then the same would have to apply to humans as well. This, of course, seems absurd. As such, a sexbot programmed for consent would not actually be consenting.

It would thus seem that if advanced sexbots were built, they should not be programmed to always consent. Also, there is the obvious moral problem with selling such sexbots, given that they would certainly seem to be people. It would thus seem that such sexbots should never be built—doing so would be immoral.

 


Sexbots: Sex & Consequences

As a general rule, any technology that can be used for sex will be used for sex. Even if it shouldn’t. In accord with this rule, researchers and engineers are hard at work developing ever more realistic sexbots. By science-fiction standards, these sexbots are fairly crude—the most human-like seem to be just a bit more advanced than high-end sex dolls.

In my previous essay on this subject, I considered a Kantian approach to such non-rational sexbots. In this essay I will look at the matter from a consequentialist/utilitarian moral viewpoint.

On the face of it, sexbots could be seen as nothing new—currently they are merely an upgrade of the classic sex dolls that have been around for quite some time. Sexbots are, of course, more sophisticated than the famous blow-up sex dolls, but the basic idea is the same: the sexbot is an object that a person has sex with.

That said, one thing that makes sexbots morally interesting is the fact that they are typically designed to mimic human beings not merely in physical form (which is what sex dolls do) but in regards to the mind. For example, the Roxxxy sexbot’s main feature is its personality (or, more accurately, personalities). As a fictional example, the sexbots in Almost Human do not merely provide sex—they also provide human-like companionship. However, such person-like sexbots are still a thing of science fiction. As such, human-mimicking sexbots of this sort can be seen as something new.

An obvious moral concern is that the human-mimicking sexbots will have negative consequences for actual human beings, be they men or women. Not surprisingly, many of these concerns are analogous to existing moral concerns regarding pornography.

Pornography, so the stock arguments go, can have considerable negative consequences. One of these is that it teaches men to regard women as being mere sexual objects. This can, in some cases, influence men to treat women poorly and can also impact how women see themselves. Another point of concern is the addictive nature of pornography—people can become obsessed with it to their detriment.

Human-mimicking sexbots would certainly seem to have the potential to do more harm than pornography. After all, while watching pornography allows a person to see other people treated as mere sexual objects, a sexbot would allow a person to use a human-mimicking object sexually. This could presumably have an even stronger conditioning effect on the person using the object, leading some to regard other people as mere sexual objects and thus increasing the chances they will treat other people poorly. If so, it would seem that selling or using a sexbot would be morally wrong.

People might become obsessed with their sexbots, as people do with pornography. Then again, people might simply “conduct their business” with their sexbots and get on with things. If so, sexbots might be an improvement over pornography in this regard.  After all, while a guy could spend hours each day watching pornography, he certainly would not last very long with his sexbot.

Another concern raised in regards to certain types of pornography is that they encourage harmful sexual views and behavior. For example, violent pornography is supposed to influence people to engage in violence. As another example, child pornography is supposed to have an especially pernicious influence on people. Naturally, there is the concern about causation here: do people seek such porn because they are already that sort of person or does the porn influence them to become that sort of person? I will not endeavor to answer this here.

Since sexbots are objects, a person can do whatever he wishes to his sexbot—hit it, burn it, and “torture” it and so on. Presumably there will also be specialty markets catering to particular interests, such as those of pedophiles and necrophiliacs. If pornography that caters to these “tastes” can be harmful, then presumably a person being actively involved in such activities with a human-mimicking sexbot would be even more harmful. Essentially, the person would be practicing or warming up for the real thing. As such, it would seem that selling or using sexbots, especially those designed for harmful “interests” would be immoral.

Not surprisingly, these arguments are also similar to those used in regards to violent video games. The general idea is that violent video games are supposed to influence people so that they are more likely to engage in violence. So, just as some have proposed restrictions on virtual violence, perhaps there should be strict restrictions on sexbots.

When it comes to video games, one plausible counter is that while violent video games might have negative impact on the behavior of some people, they allow most people to harmlessly “burn off” their desire for violence and to let off steam. This seems analogous to sports and non-video games: they allow people to engage in conflict and competition in safer and far less destructive ways. For example, a person can indulge her love of conflict and conquest by playing Risk or Starcraft II after she works out her desire for violence by sparring a few rounds in the ring.

Turning back to sexbots, while they might influence some people badly, they might also provide a means by which people could indulge desires that would be wrong, harmful and destructive to indulge with another person. So, for example, a person who likes to engage in sexual torture could satisfy her desires on a human-mimicking sexbot rather than an actual human. The rather critical issue here is whether indulging in such virtual vice with a sexbot would harmlessly dissipate these desires or merely fuel them and drive a person to indulge them on actual people. If sexbots did allow people who would otherwise harm other people to vent their “needs” harmlessly on machines, then that would certainly be good for society as a whole. However, if this sort of activity would simply push them into doing such things for real and with unwilling victims, then that would certainly be bad for the person and society as a whole. This, then, is a key part of addressing the ethical concerns regarding sexbots.

(As a side note, I’ve been teaching myself how to draw. Clever mockery of my talent is always appreciated…)


Should Killer Robots be Banned?

The Terminator. (Photo credit: Wikipedia)

You can’t say that civilization don’t advance, however, for in every war they kill you in a new way.

-Will Rogers

 

Humans have been using machines to kill each other for centuries and these machines have become ever more advanced and lethal. In more recent decades there has been considerable focus on developing autonomous weapons. That is, weapons that can locate and engage the enemy on their own without being directly controlled by human beings. The crude seeking torpedoes of World War II are an example of an early version of such a killer machine. Once fired, the torpedo would be guided by acoustic sensors to its target and then explode—it was a crude, suicidal mechanical shark. Of course, this weapon had very limited autonomy since humans decided when to fire it and at what target.

Thanks to advances in technology, far greater autonomy is now possible. One peaceful example of this is the famous self-driving cars. While some see them as privacy killing robots, they are not designed to harm people—quite the opposite, in fact. However, it is easy to see how the technology used to guide a car safely around people, animals and other vehicles could be used to guide an armed machine to its targets.

Not surprisingly, some people are rather concerned about the possibility of killer robots, or with less hyperbole, autonomous weapon systems. Recently there has been a push to ban such weapons by international treaty. While people are no doubt afraid of killer machines roaming about due to science fiction stories and movies, there are legitimate moral, legal and practical grounds for such a ban.

One concern is that while autonomous weapons might be capable of seeking out and engaging targets, they would lack the capability to make the legal and moral decisions needed to operate within the rules of war. As a specific example, there is the concern that a killer robot will not be able to distinguish between combatants and non-combatants as reliably as a human being. As such, autonomous weapon systems could be far more likely than human combatants to kill noncombatants due to improper classification.

One obvious reply is that while there are missions in which the ability to make such distinctions would be important, there are others where it would not be required on the part of the autonomous weapon. If a robot infantry unit were engaged in combat within a populated city, then it would certainly need to be able to make such a distinction. However, just as a human bomber crew sent on a mission to destroy a factory would not be required to make such distinctions, an autonomous bomber would not need to have this ability. As such, this concern only has merit in cases in which such distinctions must be made and could reasonably be made by a human in the same situation. Thus, a sweeping ban on autonomous weapons would not be warranted by this concern.

A second obvious reply is that this is a technical problem that could be solved to a degree that would make an autonomous weapon at least as reliable as an average human soldier in making the distinction between combatants and non-combatants. It seems likely that this could be done given that the objective is a human level of reliability. After all, humans in combat do make mistakes in this matter so the bar is not terribly high.  As such, banning such weapons would seem to be premature—it would need to be shown that such weapons could not make this distinction as well as an average human in the same situation.

A second concern is based on the view that the decision to kill should be made by a human being and not by a machine. Such a position could be based on an abstract view about the moral right to make killing decisions or perhaps on the belief that humans would be more merciful than machines.

One obvious reply is that autonomous weapons are still just weapons. Human leaders will, presumably, decide when they are deployed and give them their missions. This is analogous to a human firing a seeking missile—the weapon tracks and destroys the intended target, but the decision that someone should die was made by a human. Presumably humans would be designing the decision making software for the machines and they could program in a form of digital mercy—if desired.

There is, of course, the science fiction concern that the killer machines will become completely autonomous and fight their own wars (as in Terminator and “Second Variety”). The concern about rogue systems is worth considering, but is certainly a tenuous basis for a ban on autonomous weapons.

Another obvious reply is that while machines would probably lack mercy, they would also lack anger and hate. As such, they might actually be less awful about killing than humans.

A third concern is based on the fact that autonomous machines are just machines without will or choice (which might also be true of humans). As such, wicked or irresponsible leaders could acquire autonomous weapons that will simply do what they are ordered to do, even if that involves slaughtering children.

The obvious, but depressing, reply to this is that such leaders never seem to want for people to do their bidding, however awful that bidding might be. Even a cursory look at the history of war and terrorism shows that this is a terrible truth. As such, autonomous weapons do not seem to pose a special danger in this regard: anyone who could get an army of killer robots would almost certainly be able to get an army of killer humans.

There is, of course, a legitimate concern that autonomous weapons could be hacked and used by terrorists or other bad people. However, this would be the same as such people getting access to non-autonomous weapons and using them to hurt and kill people.

In general, the moral motivation of the people who oppose autonomous weapons is laudable. They presumably wish to cut down on death and suffering. However, this goal seems to be better served by the development of autonomous weapons. Some reasons for this are as follows.

First, since autonomous weapons are not crewed, their damage or destruction will not result in harm or death to people. If a manned fighter plane is destroyed, that is likely to result in harm or death to a person. However, if a robot fighter plane is shot down, no one dies. If both sides are using autonomous weapons, then the casualty count would presumably be lower than in a conflict where the weapons are all manned. To use an analogy, automating war could be analogous to automating dangerous factory work.

Second, autonomous weapons can advance the existing trend in precision weapons. Just as “dumb” bombs that were dropped in massive raids gave way to laser guided bombs, autonomous weapons could provide an even greater level of precision. This would be, in part, due to the fact that there is no human crew at risk and hence the safety of the crew would no longer be a concern. For example, rather than having a manned aircraft drop a missile on target while jetting by at a high altitude, an autonomous craft could approach the target closely at a lower speed in order to ensure that the missile hits the right target.

Thus, while the proposal to ban such weapons is no doubt motivated by the best of intentions, the ban itself would not be morally justified.

 


Owning Human Genes

Human genome to genes (Photo credit: Wikipedia)

While it sounds a bit like science fiction, the issue of whether or not human genes can be owned has become a matter of concern. While the legal issue is interesting, my focus will be on the philosophical aspects of the matter. After all, it was once perfectly legal to own human beings—so what is legal is rather different from what is right.

Perhaps the most compelling argument for the ownership of genes is a stock consequentialist argument. If corporations cannot patent and thus profit from genes, then they will have no incentive to engage in expensive genetic research (such as developing tests for specific genes that are linked to cancer). The lack of such research will mean that numerous benefits to individuals and society will not be acquired (such as treatments for specific genetic conditions). As such, not allowing patents on human genes would be wrong.

While this argument does have considerable appeal, it can be countered by another consequentialist argument. If human genes can be patented, then this will allow corporations to take exclusive ownership of these genes, thus granting them a monopoly. Such patents will allow them to control what research is conducted, even at non-profit institutions such as universities (which sometimes do research for the sake of research), thus restricting the expansion of knowledge and potentially slowing down the development of treatments. This monopoly would also allow the corporation to set the pricing for relevant products or services without any competition. This is likely to result in artificially high prices, which could very well deny people needed medical services or products simply because they cannot meet the artificially high prices arising from the lack of competition. As such, allowing patents on human genes would be wrong.

Naturally, this counter argument can be countered. However, the harms of allowing the ownership of human genes would seem to outweigh the benefits—at least when the general good is considered. Obviously, such ownership would be very good for the corporation that owns the patent.

In addition to the moral concerns regarding the consequences, there is also the general matter of whether it is reasonable to regard a gene as something that can be owned. Addressing this properly requires some consideration of the basis of property.

John Locke presents a fairly plausible account of property: a person owns her body and thus her labor. While everything is initially common property, a person makes something her own property by mixing her labor with it. To use a simple example, if Bill and Sally are shipwrecked on an ownerless island and Sally gathers coconuts from the trees and builds a hut for herself, then the coconuts and hut are her property. If Bill wants coconuts or a hut, he’ll have to either do work or ask Sally for access to her property.

On Locke’s account, perhaps researchers could mix their labor with the gene and make it their own. Or perhaps not—I do not, for example, gain ownership of the word “word” in general because I mixed my labor with it by typing it out. I just own the work I have created in particular. That is, I own this essay, not the words making it up.

Sticking with Locke’s account, he also claims that we are owned by God because He created us. Interestingly, for folks who believe that God created the world, it would seem to follow that a corporation cannot own a human gene. After all, God is the creator of the genes and they are thus His property. As such, any attempt to patent a human gene would be an infringement on God’s property rights.

It could be countered that although God created everything, since He allows us to own the stuff He created (like land, gold, and apples), then He would be fine with people owning human genes. However, the basis for owning a gene would still seem problematic—it would be a case of someone trying to patent an invention which was invented by another person—after all, if God exists then He invented our genes, so a corporation cannot claim to have invented them. If the corporation claims to have a right to ownership because they worked hard and spent a lot of money, the obvious reply is that working hard and spending a lot of money to discover what is already owned by another would not transfer ownership. To use an analogy, if a company worked hard and spent a lot to figure out the secret formula to Coke, it would not thus be entitled to own Coca Cola’s formula.

Naturally, if there is no God, then the matter changes (unless we were created by something else, of course). In this case, the gene is not the property of a creator, but something that arose naturally. In this case, while someone can rightfully claim to be the first to discover a gene, no one could claim to be the inventor of a naturally occurring gene. As such, the idea that ownership would be confirmed by mere discovery would seem to be a rather odd one, at least in the case of a gene.

The obvious counter is that people claim ownership of land, oil, gold and other resources by discovering them. One could thus argue that genes are analogous to gold or oil: discovering them turns them into property of the discoverer. There are, of course, those who claim that the ownership of land and such is unjustified, but this concern will be set aside for the sake of the argument (but not ignored—if discovery does not confer ownership, then gene ownership would be right out in regards to natural genes).

While the analogy is appealing, the obvious reply is that when someone discovers a natural resource, she gains ownership of that specific find and not all instances of what she found. For example, when someone discovers gold, she owns that gold but not gold itself. As another example, if I am the first human to stumble across naturally occurring Unobtanium on an ownerless alien world, I do not thereby gain ownership of all instances of Unobtanium even if it cost me a lot of money and work to find it. However, if I artificially create it in my philosophy lab, then it would seem to be rightfully mine. As such, the researchers that found the gene could claim ownership of that particular genetic object, but not the gene in general, on the grounds that they merely found it rather than created it. Also, if they had created a new artificial gene that occurs nowhere in nature, then they would have grounds for a claim of ownership—at least to the degree that they created the gene.


Do Dogs Have Morality?

A Good Dog or a Moral Dog?

The idea that morality has its foundations in biology is enjoying considerable current popularity, although the idea is not a new one. However, the current research is certainly something to be welcomed, if only because it might give us a better understanding of our fellow animals.

Being a philosopher and a long-time pet owner, I have sometimes wondered whether my pets (and other animals) have morality. This matter was easily settled in the case of cats: they have a morality, but they are evil. My best cats have been paragons of destruction, gladly throwing the claw into lesser beings and sweeping breakable items to the floor with feline glee. Lest anyone get the wrong idea, I really like cats—in part because they are so very evil in their own special ways. The matter of dogs and morality is rather more controversial. Given that all of ethics is controversial, this should hardly be a shock.

Being social animals that have been shaped and trained by humans for thousands of years, it would hardly be surprising that dogs exhibit behaviors that humans would regard as moral in nature. However, it is well known that people anthropomorphize their dogs and attribute to them qualities that they might not, in fact, possess. As such, this matter must be approached with due caution. To be fair, we also anthropomorphize each other and there is the classic philosophical problem of other minds—so it might be the case that neither dogs nor other people have morality because they lack minds. For the sake of the discussion I will set aside the extreme version of the problem of other minds and accept a lesser challenge. To be specific, I will attempt to make a plausible case for the notion that dogs have the faculties to possess morality.

While I will not commit to a specific morality here, I will note that for a creature to have morality it would seem to need certain mental faculties. These would seem to include cognitive abilities adequate for making moral choices and perhaps also emotional capabilities (if morality is more a matter of feeling than thinking).

While dogs are not as intelligent as humans (on average) and they do not use true language, they clearly have a fairly high degree of intelligence. This is perhaps most evident in the fact that they can be trained in very complex tasks and even in professions (such as serving as guide or police dogs). They also exhibit an exceptional understanding of human emotions and while they do not have language, they certainly can learn to understand verbal and gesture commands given by humans. Dogs also have an understanding of tokens and types. To be specific, they are quite good at recognizing individuals and also good at recognizing types of things. For example, a dog can distinguish its owner while also distinguishing humans from cats. As another example, my dogs have always been able to recognize any sort of automobile and seem to understand what they do—they are generally eager to jump aboard whether it is my pickup truck or someone else’s car. On the face of it, dogs seem to have the mental horsepower needed to engage in basic decision making.

When it comes to emotions, we have almost as much reason to believe that dogs feel and understand them as we do for humans having that ability. The main difference is that humans can talk (and lie) about how they feel; dogs can only observe and express emotions. Dogs clearly express anger, joy, fear and other emotions and seem to understand those emotions in other animals. This is shown by how dogs react to expression of emotion. For example, dogs seem to recognize when their owners are sad or angry and react accordingly. Thus, while dogs might lack all the emotional nuances of humans and the capacity to talk about them, they do seem to have the basic emotional capabilities that might be necessary for ethics.

Of course, showing that dogs have intelligence and emotions would not be enough to show that dogs have morality. What is needed is some reason to think that dogs use these capabilities to make moral decisions and engage in moral behavior.

Dogs are famous for possessing traits that are analogous to (or the same as) virtues such as loyalty, compassion and courage. Of course, Kant recognized these traits but still claimed that dogs could not make moral judgments. As he saw it, dogs are not rational beings and do not act in accord with the moral law. But, roughly put, they seem to have an ersatz sort of ethics in that they can act in ways analogous to human virtue. While Kant does make an interesting case, there do seem to be some reasons to accept that dogs can engage in basic moral judgments. Naturally, since dogs do not write treatises on moral philosophy, I can only speculate on what is occurring in their minds (or brains). As noted above, there is always the risk of projecting human qualities onto dogs and, of course, they make this very easy to do.

One area that seems to have potential for showing that dogs have morality is the matter of property. While some might think that dogs regard whatever they can grab (be it food or toys) as their property, this is not always the case. While it seems true that some dogs are Hobbesian, this is also true of humans. Dogs, based on my decades of experience with them, seem to be capable of clearly grasping property. For example, my husky Isis has a large collection of toys that are her possessions. She reliably distinguishes between her toys and very similar items (such as shoes, clothing, sporting goods and so on) that do not belong to her. While I do not know for sure what happens in her mind, I do know that when I give her a toy and go through the “toy ritual” she gets it and seems to recognize that the toy is her property now. Items that are not given to her are apparently recognized as being someone else’s property and are not chewed upon or dragged outside. In the case of Isis, this extends (amazingly enough) even to food—anything handed to her or in her bowl is her food, anything else is not. Naturally, she will ask for donations, even when she could easily take the food. While other dogs have varying degrees of understanding of property and territory, they certainly seem to grasp this. Since the distinction between mine and not mine seems rather important in ethics, this suggests that dogs have some form of basic morality—at least enough to be capitalists.

Dogs, like many other animals, also have the capacity to express a willingness to trust and will engage in reprisals against other dogs that break trust. I often refer to this as “dog park justice” to other folks who are dog people.

When dogs get together in a dog park (or other setting) they will typically want to play with each other. Being social animals, dogs have various ways of signaling intent. In the case of play, they typically engage in “bows” (slapping their front paws on the ground and lowering their front while making distinctive sounds). Since dogs cannot talk, they have to “negotiate” in this manner, but the result seems similar to how humans make agreements to interact peacefully.

Interestingly, when a dog violates the rules of play (by engaging in actual violence against a playing dog) other dogs recognize this violation of trust—just as humans recognize someone who violates trust. Dogs will typically recognize a “bad dog” when it returns to the park and will avoid it, although dogs seem to be willing to forgive after a period of good behavior. An understanding of agreements and reprisals for violating them seems to show that dogs have at least a basic system of morality.

As a final point, dogs also engage in altruistic behavior—helping out other dogs, humans and even other animals. Stories of dogs risking their lives to save others from danger are common in the media and this suggests that dogs can make decisions that put themselves at risk for the well-being of others. This clearly suggests a basic canine morality and one that makes such dogs better than ethical egoists. This is why when I am asked whether I would choose to save my dog or a stranger, I would choose my dog: I know my dog is good, but statistically speaking a random stranger has probably done some bad things. Fortunately, my dog would save the stranger.


Four kinds of philosophical people

We’ll begin this post where I ended the last. The ideal philosopher lives up to her name by striving for wisdom. In practice, the pursuit of wisdom involves developing a sense of good judgment when tackling very hard questions. I think there are four skills involved in the achievement of good judgment: self-insight, humility, rigor, and cooperativeness.

Even so, it isn’t obvious how the philosophical ideal is supposed to model actual philosophers. Even as I was writing the last post, I had the nagging feeling that I was playing the role of publicist for philosophy. A critic might say that I set out to talk about how philosophers were people, but ended up only stating some immodest proposals about the Platonic ideal of the philosopher. The critic might ask: Why should we think that it has any pull on real philosophers? Do the best professional philosophers really conceive of themselves in this way? If I have no serious answer to these questions, then I have done nothing more than indulge in a bit of cheerleading on behalf of my beloved discipline. So I want to start to address that accusation by looking at the reputations of real philosophers.

Each individual philosopher will have their own ideas about which virtues are worth investing in and which are worth disregarding. Even the best working philosophers end up favoring some of the virtues over others: e.g., some philosophers might find it relatively less important to write in order to achieve consensus among their peers, and instead put the accent on virtues like self-insight, humility, and rigour. Hence, we should expect philosophical genius to be correlated with predictable quirks of character which can be described using the ‘four virtues’ model. And if that is true, then we should be able to see how major figures in the history of philosophy measure up to the philosophical ideal. If the greatest philosophers can be described in light of the ideal, we should be able to say we’ve learned something about the philosophers as people.

And then I shall sing to the Austrian mountains in my best Julie Andrews vibrato: “public relations, this is not”.

—-

In my experience, many skilled philosophers who work in the Anglo-American tradition will tend to have a feverish streak. They find a research programme which conforms with their intuitions (some of which may be treated as “foundational” or givens), and then hold onto that programme for dear life. This kind of philosopher will change her mind only on rare occasions, and even then only on minor quibbles that do not threaten her central programme. We might call this kind of philosopher a “programmist” or “anti-skeptic”, since the programmist downplays the importance of humility, and is more interested in characterizing herself in terms of the other virtues, like philosophical rigour.

You could name a great many philosophers who seem to fit this character. Patricia and Paul Churchland come to mind: both have long held the view that the progress of neuroscience will require the radical reformation of our folk psychological vocabulary. However, when I try to think of a modern exemplar of this tradition, I tend to think of W.V.O. Quine, who held fast to most of his doctrinal commitments throughout his lifetime: his epistemological naturalism and holism, to take two examples. This is just to say that Quine thought that the interesting metaphysical questions were answerable by science. Refutation of the deeper forms of skepticism was not very high on Quine’s agenda; if there is a Cartesian demon, he waits in vain for the naturalist’s attention. The most attractive spin on the programmist’s way of doing things is to say that they have raised philosophy to the level of a craft, if not a science.

—-

Programmists are common among philosophers today. But if I were to take you into a time machine and introduce you to the elder philosophers, then it would be easy to lose all sense of how the moderns compare with their predecessors. The first philosophers lived in a world where science was young, if not absent altogether; there was no end of mystery to how the universe got on. For many of them, there was no denying that skepticism deserved a place at the table. From what they left behind, it seems that many ancient philosophers (save Aristotle and Pythagoras) did not possess the quality that we now think of as analytic rigour. The focus was, instead, on developing the right kind of life, and then — well, living it.

We might think of this as a wholly different approach to being a philosopher than that of our modern friend the programmist. These philosophers were self-confident and autonomous, yet had plenty to say to the skeptic. For lack of a better term, we might call this sort of philosopher a “guru” or “informalist”. The informalist trudges forward, not necessarily by the light of reason and explicit argument, but by insight and association, often expressed in aphorisms. To modern professional philosophers and academic puzzle-solvers, the guru may seem like a specialist in woo and mysticism, a peddler of non-sequiturs. Many an undergraduate in philosophy will aspire to be a guru, and endure the scorn of their peers (often, rightly administered).

Be that as it may, some gurus end up having a vital place in the history of modern philosophy. Whenever I think of the ‘guru’ type of philosopher, I tend to think of Friedrich Nietzsche — and I feel justified in saying that in part because I suspect that he would have accepted the title. For Nietzsche, insight was the single most important feature of the philosopher, and the single trait which he felt was altogether lacking in his peers.

Nietzsche was a man of passion, which is the reason why he is so easily misunderstood. Also, for a variety of reasons, Nietzsche was a man who suffered from intense loneliness. (In all likelihood, the fact that he was a rampant misogynist didn’t help in that department.) But he was also a preacher’s son, his rhetoric electric, his sermons brimming with insight and even weird lapses into latent self-deprecation. Moreover, he was a man who wrote in order to be read, and who was excited by the promise of new philosophers coming out to replace old canons. In the long run, he got what he wanted; as Walter Kaufmann wrote, “Nietzsche is one of the few philosophers since Plato whom large numbers of intelligent people read for pleasure”.

—-

“He has the pride of Lucifer.” — Russell on Wittgenstein

Some philosophers prefer to strike out on their own, paving an intellectual path by way of sheer stamina and force of will. We might call them the “lone wolves”. The lone wolf will often appear as a kind of contrarian with a distinctive personality. However, the lone wolf is set apart from a mere devil’s advocate by the fact that she draws on unusually deep wellsprings of creativity and cleverness in her craft. Because she strikes off alone, the wolf has to be prepared to chew bullets for breakfast: there is no controversial position she is incapable of endorsing, so long as it qualifies as a valid move in the game of giving and taking of reasons. She is out for adventure, to prove herself capable of working on her own. More than anything else, the lone wolf despises philosophical yes-men and yes-women. She has no time for people who are satisfied by conventional wisdom — people who revere the ongoing dialectic as a sacred activity, a Great Conversation between the ages. The lone wolf says: the hell with this! These are problems, and problems are meant to be solved.

Ludwig Wittgenstein was a lone wolf, in the sense that nobody could quite refute Wittgenstein except for Wittgenstein. The philosophical monograph which made him famous, the Tractatus, began with an admission of idiosyncrasy: “Perhaps this book will be understood only by someone who has himself already had the thoughts that are expressed in it—or at least similar thoughts.—So it is not a textbook.—Its purpose would be achieved if it gave pleasure to one person who read and understood it.” He was a private man, who published very little while alive, and whose positions were sometimes unclear even to his students. He was an intense man, reputed to have wielded a hot poker at one of his contemporaries. And he had an oracular style of writing — the Tractatus resembles an overlong PowerPoint presentation, while the Investigations is a free-wheeling screed. These qualities conspired to give the man himself an almost mythical quality. As Ernest Nagel wrote in 1936 (quoting a Viennese friend): “in certain circles the existence of Wittgenstein is debated with as much ingenuity as the historicity of Christ has been disputed in others”.

Wittgenstein’s work has lasting significance. His anti-private language argument is a genuine philosophical innovation, and widely celebrated as such. He is, accordingly, the kind of philosopher that everybody has to know at least something about. But none of this came about by the power of idiosyncrasy alone. Wittgenstein achieved his renown by demonstrating a penetrating ability at the whole game of giving and taking reasons.

—-

“Synthesizers are necessarily dedicated to a vision of an overarching truth, and display a generosity of spirit towards at least wide swaths of the intellectual community. Each contributes partial views of reality, Aristotle emphasizes; so does Plotinus, and Proclus even more widely…” Randall Collins, The Sociology of Philosophies

Some philosophers are skilled at taking the positions and ideas that are alive in the ongoing conversation and weaving them into an overall picture. This is a kind of philosopher that we might call the “syncretist”. Much like the lone wolf, the syncretist despises unchallenged dogmatism; but unlike the lone wolf, this is not because she enjoys the prospect of throwing down the gauntlet. Rather, the syncretist enjoys the murmur of people getting along, engaged in a productive conversation. Hence, the syncretist is driven to reconcile opposing doctrines, so long as those doctrines are plausible. When she is at her best, the syncretist is able to generate a powerful synthesis out of many different puzzle pieces, allowing the conversation to become more abstract without becoming unintelligible. She does not just say, “Let a thousand flowers bloom” — instead, she demonstrates how the blooming of one flower happens only in the company of others.

The only philosopher that I have met who absolutely exemplifies the spirit of the syncretist, and persuasively presents the syncretist as a virtuous standpoint in philosophy, is the Stanford philosopher Helen Longino. In my view, her book The Fate of Knowledge is a revelation.

A more infamous example of the syncretist, however, is Jürgen Habermas. Habermas is an under-appreciated philosopher, a figure who is widely neglected in Anglo-American philosophy departments and (for a time) was widely scorned in certain parts of Europe. True, Habermas is a difficult philosopher to read. And, in fairness, one sometimes gets the sense that his work is a bit too ecumenical to be motivated on its own terms. But part of what makes Habermas close to an ideal philosopher is that he is an intellectual who has read just about everything — he has partaken in wider conversations, attempting to reconcile the analytic tradition with themes that stretch far beyond its remit. Habermas also has a prodigious output: he has written on countless subjects, including speech act theory, the ethics of assertion, political legitimation, Kohlberg’s stages of moral development, collective action, critical theory and the theory of ideology, social identity, normativity, truth, justification, civilization, argumentation theory, and doubtless many other things. If a dozen people carved up his bibliography and each staked a claim to part of it, you’d end up with a dozen successful academic careers.

For some intellectuals, syncretism is hard to digest. Just as both mothers in the court of King Solomon might have felt equally betrayed, the unwilling subjects of the syncretist’s analysis may respond with ill tempers. In particular, the syncretist grates on the nerves of those who aspire to achieve the status of lone wolf intellectuals. Take two examples, mentioned by Dr. Finlayson (Sussex). On the one hand, Marxist intellectuals sometimes like to accuse Habermas of “selling out” — for instance, because Habermas has abandoned the usual rhythms of dialectical philosophy by trying his hand at analytic philosophy. On the other hand, those in analytic philosophy are not always very happy to recognize Habermas as a precursor to the shape of analytic philosophy today. John Searle explains in an uncompromising review: “Habermas has no theory of social ontology. He has something he calls the theory of communicative action. He says that the “purpose” of language is communicative action. This is wrong. The purpose of language is to perform speech acts. His concept of communicative action is to reach agreement by rational discussion. It has a certain irony, because Habermas grew up in the Third Reich, in which there was another theory: the “leadership principle”.” I suspect that Searle got Habermas wrong, but nobody said life as a philosopher was easy.

—-

Everything I’ve said above is a cartoon sketch of some philosophical archetypes. It is worth noting, of course, that none of the philosophers I have mentioned fits into the neat little boxes I have made for them. The vagaries of the human personality resist being reduced to archetypes. Even in the above, I cheated a little: Nietzsche is arguably as much a lone wolf as he is a guru. I also don’t mean to suggest that all professional philosophers will fit into anything quite like these categories. Some are by reputation much too close to the philosophical ideal to fit into an archetype. (Hilary Putnam comes to mind.) And other professional philosophers are nowhere close to the ideal — there is no shortage of philosophers behaving badly. I mean only to show how the ‘four virtues’ model of wisdom can be used to say something interesting about philosophers themselves.

(BLS Nelson is the author of this article.)