One interesting philosophical problem is known as the problem of other minds. The basic idea is that although I know I have a mind (I think, therefore I think), I need some method by which to know that other entities have (or are) minds. This problem can also be recast in less metaphysical terms by focusing on the problem of determining whether an entity thinks or not.
Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language.
Crudely put, the idea is that if something talks, then it is reasonable to regard it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:
How many different automata or moving machines can be made by the industry of man [...] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.
This Cartesian approach was explicitly applied to machines by Alan Turing in his famous Turing test. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test.
Not surprisingly, technological advances have resulted in computers that can engage in behavior that appears to involve using language in ways that might pass the test. Perhaps the best known example is IBM’s Watson—the computer that won at Jeopardy. Watson recently upped his game by engaging in what seemed to be a rational debate regarding violence and video games.
In response to this, I jokingly suggested a new test to Patrick Lin: the trolling test. In this context, a troll is someone “who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a forum, chat room, or blog) with the deliberate intent of provoking readers into an emotional response or of otherwise disrupting normal on-topic discussion.”
While trolls are apparently truly awful people (a hateful blend of Machiavellianism, narcissism, sadism and psychopathy) and trolling is certainly undesirable behavior, the trolling test does seem worth considering.
In the abstract, the test would work like the Turing test, but would involve a human troll and a computer attempting to troll. The challenge would be for the computer troll to pass as a human troll.
Obviously enough, a computer can easily be programmed to post random provocative comments from a database. However, the real meat (or silicon) of the challenge comes from the computer being able to engage in (ironically) relevant trolling. That is, the computer would need to engage the other commentators in true trolling.
As a controlled test, the trolling computer (“mechatroll”) would “read” and analyze a selected blog post. The post would then be commented on by human participants—some engaging in normal discussion and some engaging in trolling. The mechatroll would then endeavor to troll the human participants (and, for bonus points, to troll the trolls) by analyzing the comments and creating appropriately trollish comments.
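To make the controlled test a bit more concrete, here is a minimal sketch of how a mechatroll's comment loop might be structured. Everything in it is hypothetical: the Comment class, analyze_topics, and generate_troll_comment are illustrative stand-ins for whatever analysis and language-generation machinery a real system would need, not references to any existing software.

```python
from dataclasses import dataclass
from collections import Counter
import re

@dataclass
class Comment:
    author: str
    text: str

def analyze_topics(post: str, comments: list[Comment], top_n: int = 3) -> list[str]:
    """Crude stand-in for topic analysis: pick the most frequent content words."""
    stopwords = {"the", "a", "an", "and", "or", "of", "to", "is", "are", "that", "this", "in", "it"}
    text = (post + " " + " ".join(c.text for c in comments)).lower()
    words = re.findall(r"[a-z']+", text)
    counts = Counter(w for w in words if w not in stopwords and len(w) > 3)
    return [w for w, _ in counts.most_common(top_n)]

def generate_troll_comment(topics: list[str], target: Comment) -> str:
    """Template-based provocation aimed at a specific commenter; a real mechatroll
    would need far more sophisticated generation to be mistaken for a human troll."""
    topic = topics[0] if topics else "this"
    return (f"@{target.author}: only someone who has never actually read anything "
            f"about {topic} could post something that wrong.")

if __name__ == "__main__":
    post = "Violent video games do not cause real-world violence, according to recent studies."
    comments = [
        Comment("alice", "The studies on video games and violence seem pretty convincing to me."),
        Comment("bob", "I think violence in games is a moral panic, nothing more."),
    ]
    topics = analyze_topics(post, comments)
    for c in comments:
        print(generate_troll_comment(topics, c))
```

Even this toy version makes the gap visible: frequency counts and templates give only "stupid mechatrolling", while passing the test would require replies that track the actual direction of the discussion.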
Another option is to have an actual live field test. A specific blog site would be selected that is frequented by human trolls and non-trolls. The mechatroll would then endeavor to engage in trolling on that site by analyzing the posts and comments.
In either test scenario, if the mechatroll were able to troll in a way indistinguishable from the human trolls, then it would pass the trolling test.
While “stupid mechatrolling”, such as just posting random hateful and irrelevant comments, is easy, true mechatrolling would be rather difficult. After all, the mechatroll would need to be able to analyze the original posts and comments to determine the subjects and the direction of the discussion. The mechatroll would then need to make comments that are trollishly relevant, and this would require generating comments indistinguishable from those produced by a narcissistic, Machiavellian, psychopathic, and sadistic human.
While creating a mechatroll would be a technological challenge, it might be suspected that doing so would be undesirable. After all, there are far too many human trolls already and they serve no valuable purpose, so why create a computerized addition? One reasonable answer is that modeling such behavior could provide useful insights into human trolls and the traits that make them trolls. As for a practical application, such a system could be developed into a troll-filter to help control the troll population.
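As one way to picture the troll-filter idea, here is a minimal sketch of a comment classifier using scikit-learn's standard text-classification pipeline. The tiny labeled dataset and the feature choices are purely illustrative assumptions; a usable filter would need a large corpus of real comments labeled as trolling or not.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; a real troll-filter would need thousands of labeled comments.
comments = [
    "Interesting post, I hadn't considered that objection before.",
    "Thanks for the link to the study, it clarifies the argument.",
    "Only an idiot would believe anything written here.",
    "Everyone commenting on this thread should be embarrassed to exist.",
]
labels = [0, 0, 1, 1]  # 0 = ordinary comment, 1 = trolling

# Bag-of-words features feeding a simple linear classifier.
troll_filter = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
troll_filter.fit(comments, labels)

new_comment = "You are all too stupid to understand the actual research."
probability = troll_filter.predict_proba([new_comment])[0][1]
print(f"Estimated probability of trolling: {probability:.2f}")
```

The same modeling work that would let a mechatroll imitate trolls could, in this way, be turned around to flag them.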
As a closing point, it might be a bad idea to create a system with such behavior: just imagine a Trollnet instead of Skynet, with trollinators slowly trolling people to death rather than just quickly shooting them.