
The Trolling Test

One interesting philosophical problem is known as the problem of other minds. The basic idea is that although I know I have a mind (I think, therefore I think), I need a method by which to know that other entities have (or are) minds. This problem can also be recast in less metaphysical terms by focusing on the problem of determining whether an entity thinks or not.

Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language.

Crudely put, the idea is that if something talks, then it is reasonable to regard it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

This Cartesian approach was explicitly applied to machines by Alan Turing in his famous Turing test. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test.

Not surprisingly, technological advances have resulted in computers that can engage in behavior that appears to involve using language in ways that might pass the test. Perhaps the best known example is IBM’s Watson—the computer that won at Jeopardy. Watson recently upped his game by engaging in what seemed to be a rational debate regarding violence and video games.

In response to this, I jokingly suggested a new test to Patrick Lin: the trolling test. In this context, a troll is someone “who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a forum, chat room, or blog) with the deliberate intent of provoking readers into an emotional response or of otherwise disrupting normal on-topic discussion.”

While trolls are apparently truly awful people (a hateful blend of Machiavellianism, narcissism, sadism and psychopathy) and trolling is certainly undesirable behavior, the trolling test does seem worth considering.

In the abstract, the test would work like the Turing test, but would involve a human troll and a computer attempting to troll. The challenge would be for the computer troll to successfully pass as a human troll.

Obviously enough, a computer can easily be programmed to post random provocative comments from a database. However, the real meat (or silicon) of the challenge comes from the computer being able to engage in (ironically) relevant trolling. That is, the computer would need to engage the other commentators in true trolling.

As a controlled test, the trolling computer (“mechatroll”) would “read” and analyze a selected blog post. The post would then be commented on by human participants—some engaging in normal discussion and some engaging in trolling. The mechatroll would then endeavor to troll the human participants (and, for bonus points, to troll the trolls) by analyzing the comments and creating appropriately trollish comments.

Another option is to have an actual live field test. A specific blog site would be selected that is frequented by human trolls and non-trolls. The mechatroll would then endeavor to engage in trolling on that site by analyzing the posts and comments.

In either test scenario, if the mechatroll were able to troll in a way indistinguishable from the human trolls, then it would pass the trolling test.
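How would "indistinguishable" be scored? The natural criterion, as with the Turing test, is statistical: judges should do no better than chance at spotting the machine. Here is a minimal sketch in Python of such scoring; the judge verdicts are simulated, and the 0.05 threshold is just a conventional choice, not anything the test itself dictates.

```python
import math
import random

# Simulated trial: each comment truly came from a human troll or the
# mechatroll, and a blinded judge guesses its origin. In a real run of
# the test these verdicts would come from people reading mixed threads.
random.seed(1)
origins = [random.choice(["human", "machine"]) for _ in range(200)]

# A competent mechatroll should leave judges barely better than chance;
# here we simulate judges who guess correctly 55% of the time.
def judge(origin: str) -> str:
    if random.random() < 0.55:
        return origin
    return "human" if origin == "machine" else "machine"

guesses = [judge(o) for o in origins]
correct = sum(g == o for g, o in zip(guesses, origins))
n = len(origins)

# Exact one-sided binomial test: how likely is a score this high if the
# judges were merely flipping coins (p = 0.5)?
p_value = sum(math.comb(n, k) for k in range(correct, n + 1)) / 2 ** n

print(f"judges correct on {correct}/{n} comments, p = {p_value:.3f}")
print("trolling test passed" if p_value > 0.05 else "trolling test failed")
```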

While “stupid mechatrolling”, such as just posting random hateful and irrelevant comments, is easy, true mechatrolling would be rather difficult. After all, the mechatroll would need to be able to analyze the original posts and comments to determine the subjects and the direction of the discussion. The mechatroll would then need to make comments that are trollishly relevant, which would require generating comments indistinguishable from those of a narcissistic, Machiavellian, psychopathic, and sadistic human.
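To make the gap between stupid and true mechatrolling concrete, here is a deliberately crude sketch of the "relevance" part in Python. Everything in it is invented for illustration (the stopword list, the template, the sample thread); it merely finds the most frequent content word in a thread and aims a canned provocation at it, which falls far short of what genuine trollish relevance would demand.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "but", "is", "are", "was",
             "were", "to", "of", "in", "on", "for", "that", "this", "it",
             "you", "not", "be", "as", "with", "have", "has", "at", "by"}

def hot_topic(comments: list[str]) -> str:
    """Guess the thread's subject: the most frequent non-stopword."""
    words = re.findall(r"[a-z']+", " ".join(comments).lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    topic, _ = counts.most_common(1)[0]
    return topic

def troll_reply(comments: list[str]) -> str:
    """Aim a canned provocation at whatever the thread is about."""
    topic = hot_topic(comments)
    return (f"Amazing how many people here pretend to understand {topic}. "
            f"Anyone who has actually studied {topic} knows you are all wrong.")

thread = [
    "Violent video games clearly desensitize players to violence.",
    "The studies linking games and violence are mixed at best.",
    "Games are art; blaming them for violence is lazy.",
]
print(troll_reply(thread))
```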

While creating a mechatroll would be a technological challenge, it might be suspected that doing so would be undesirable. After all, there are far too many human trolls already and they serve no valuable purpose—so why create a computerized addition? One reasonable answer is that modeling such behavior could provide useful insights into human trolls and the traits that make them trolls. As for a practical application, such a system could be developed into a troll-filter to help control the troll population.
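The detection side of a troll-filter is, at least in outline, a stock text-classification task. Here is a minimal sketch, assuming scikit-learn is available; the four training comments and their labels are invented, and a real filter would need a large labeled corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = troll, 0 = not troll.
comments = [
    "You are all sheep; wake up and admit you are wrong.",
    "Anyone who disagrees with me is an idiot, period.",
    "I think the evidence on this question is genuinely mixed.",
    "Good post; the second argument could use a citation.",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a linear classifier: the standard
# baseline for text classification, here standing in for a troll-filter.
troll_filter = make_pipeline(TfidfVectorizer(), LogisticRegression())
troll_filter.fit(comments, labels)

new_comment = "Only a complete moron would believe this nonsense."
prob = troll_filter.predict_proba([new_comment])[0][1]
print(f"estimated troll probability: {prob:.2f}")
```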

As a closing point, it might be a bad idea to create a system with such behavior—just imagine a Trollnet instead of Skynet—the trollinators would slowly troll people to death rather than just quickly shooting them.

 


Carpe Diem and the Longer Now

So here’s the thing: I like utilitarianism. No matter what I do, no matter what I read, I always find that I am stuck in a utility-shaped box. (Here’s one reason: it is hard for me to applaud moral convictions if they treat rights as inviolable even when the future of the right itself is at stake.) But trapped in this box as I am, sometimes I put my ear to the wall and hear what people outside the box are doing. And the voices outside tell me that utilitarianism is alienating and overly demanding.

I’m going to argue that act-utilitarianism is only guilty of these things if fatalism is incorrect. If fatalism is right, then integrity is nothing more than the capacity to make sense of a world when we are possessed with limited information about the consequences of actions. If I am right, then integrity does not have any other role in moral deliberation.

~

Supposedly, one of the selling points of act-utilitarianism is that it requires us to treat people impartially, by forcing us to examine a situation from some third-person standpoint and apply the principle of utility in a disinterested way. But if it were possible to do a definitive ‘moral calculus’, then we would be left with no legitimate moral choices to make. Independent judgment would be supplanted with each click of the moral abacus. It is almost as if one would need to be a Machiavellian psychopath in order to remain so impartial.
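To see why a definitive calculus would leave nothing to decide, consider a toy moral abacus in Python (the actions and utility figures are invented for illustration): once every consequence has a known value, the "right" act is just an argmax.

```python
# Toy moral abacus: with perfect information, each action's total utility
# across everyone affected is simply known. (Numbers invented.)
actions = {
    "keep the money": 10,
    "donate it to charity": 85,
    "split the difference": 40,
}

# The principle of utility then dictates the single permissible act;
# independent judgment never enters into it.
obligatory = max(actions, key=actions.get)
print(f"the calculus dictates: {obligatory}")
```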

One consequence of being robbed of legitimate moral alternatives is that you might be forced to do a lot of stuff you don’t want to do. For instance, it looks as though detachment from our integrity could force us into the squalor of excessive altruism, where we must give away anything and everything we own and hold dear. Our mission would be to maximize utility by doing good works in a way that keeps our own happiness just above some subsistence minimum while improving the lot of people who are far away. Selfless asceticism would be the order of the day.

In short, it seems as though act-utilitarianism is a sanctimonious schoolteacher that not only overrides our capacity for independent moral judgment, but also obliges us to sacrifice almost all of our more immediate interests for interests that are more remote: the people of the future, and the people geographically distant.

The longer now is a harsh mistress.

Friedrich Nietzsche, Samuel Scheffler, Bernard Williams: here are some passionate critics who have argued against utility in the above-stated ways. And hey, they’re not wrong. The desire to protect oneself, one’s friends, and one’s family from harm cannot simply be laughed away. Nietzsche can always be called upon to provide a mendacious quote: “You utilitarians, you, too, love everything useful only as a vehicle for your inclinations; you, too, really find the noise of its wheels insufferable?”

Well, it’s pretty clear that at least one kind of act-utilitarianism has noisy wheels. One might argue that nearer goods must be considered to have the same value as farther goods; today is just as important as tomorrow. When stated as a piece of practical wisdom, this makes sense; grown-ups need to have what Bentham called a ‘firmness of mind’, meaning that they should be able to delay gratification in order to get the most happiness out of life. But a naive utilitarian might take this innocent piece of wisdom and twist it into a pure principle of moral conduct, and hence produce a major practical disaster.

Consider the sheer number of things you need to do in order to make far-away people happy. You need to clamp down on all possible unintended consequences of your actions, and spend the bulk of your time on altruistic projects. Now, consider the limited number of things you can do to make the small number of people closest to you happy. You can do your damnedest to seize the day, but presumably, you can only do so much to make your friends and loved ones happy without making yourself miserable in the process. So, all things considered, it would seem as though the naive utilitarian has to insist that we all turn into slaves to worlds other than our own; the table is tilted in favor of the greatest number. We would have to demand that we give up on today for the sake of the longer now.

But that’s not to say that the utilitarian has been reduced to such absurdities. Kurt Baier and Henry Sidgwick are two philosophers who have explicitly defended a form of short-term egoism, on the grounds that individuals are better judges of their own desires. Maybe utilitarianism isn’t such an abusive schoolteacher after all.

Nietzscheans one and all.

Why does act-utilitarianism seem so onerous? Well, if you’ve ever taken an introductory ethics class, you’ve heard some variation on the same story. First you’ll be presented with a variety of exotic and implausible scenarios, involving threats to the wellbeing of conspecifics that are caught in a deadly Rube Goldberg machine (trolleys, organ harvesting, fat potholers, ill-fated hobos, etc.). When the issue is act-utilitarianism, the choice will always come down to two options: either you kill one person, or a greater number of others will die. In the thought-experiment, you are possessed with the power to avert disaster, and are by hypothesis acquainted with perfect knowledge of the outcomes of your choices. You’ll then be asked for your intuitions about what counts as the right thing to do. Despite all the delicious variety of these philosophical horror stories, there is always one constant: they tell you that you are absolutely sure that certain consequences will follow if you perform this-or-that action. So, e.g., you know for sure that the trolley will kill the one and save the five, you know for sure that the forced transplant of the hobo’s organs will save the souls in the waiting room (and that the police will never charge you with murder), and so on.

This all sounds pretty ghoulish. And here’s the upshot: it is not intuitively obvious that the right answer in each case is to kill the one to save the five. It seems as though there is a genuine moral choice to be made.

Yet when confronted with such thought-experiments, squadrons of undergraduates have moaned: ‘Life is not like this. Choices are not so clear. We do not know the consequences.’ Sophomores are in a privileged position to see what has gone wrong with academic moralizing, since they are able to view the state of play with fresh eyes. For it is a morally important fact about the human condition that we don’t know much about the future. By imagining ourselves in a perfect state of information, we alienate ourselves from our own moral condition.

Once you see the essential disconnect between yourself and your hypothetical actor in the thought-experiment, blinders ought to fall from your eyes. It is true that I may dislike pulling the switch to change the trolley’s track, but my moral feelings should not necessarily bear on the question of what my more perfect alternate would need to do. Our so-called ‘moral intuitions’ only make a difference to the actual morality of the matter on the assumption that our judgments can reliably track the intuitions of our theoretical alternate, who by hypothesis knows the outcomes right on down to the bone. But then, this assumption is a thing that needs to be argued for, not assumed.

While we know a lot about what human beings need, our most specific knowledge about what people want is limited to our friends and intimates. That knowledge makes the moral path all the more clear. When dealing with strangers, the range of moral options is much wider than the range of options at home; after all, people are diverse in temperament and knowledge, scowl and shoe size. Moral principles arise out of uncertainty about the best means of achieving the act-utilitarian goal. Strike out uncertainty about the situation, and the range of your moral choices whittles down to a nub.

So if we had perfect information, then there is no doubt that integrity should go by the boards. But then, that’s not the fault of act-utilitarianism. After all, if we knew everything about the past and the future, then any sense of conscious volition would be impossible. This is just what fatalism tells us: free will and the angst of moral choice are byproducts of limited information, and without a sense of volition the very idea of integrity could not even arise.

Perhaps all this fatalism sounds depressing. But here’s the thing — our limited information has bizarrely romantic implications for us, understood as the creatures we are. For if we truly are modest in our ability to know and process information, and the rest of the above holds, then it is absurd to say, as Nietzsche does, that “whatever is done from love always occurs beyond good and evil”. It is hard to conceive of a statement that could be more false. For whatever is done from love, from trust and familiarity, is the clearest expression of both good and evil.

~

Look. Trolley-style thought-experiments do not show us that act-utilitarianism is demanding. Rather, they show us that increased knowledge entails increased responsibility. Since we are the limited sorts of creatures that we are, we need integrity, personal judgment, and moral rules to help us navigate the wide world of moral choice. If the consequentialist is supposed to be worried about anything, the argument against them ought to be that we need the above-stated things for reasons other than as a salve to heal the gaps in what we know.
