Tag Archives: Turing test

Ex Machina & Other Minds I: Setup

The movie Ex Machina is what I like to call “philosophy with a budget.” While the typical philosophy professor has to present philosophical problems using words and PowerPoint, movies like Ex Machina can bring philosophical problems to dramatic virtual life. This then allows philosophy professors to jealously reference such films and show clips of them in vain attempts to awaken somnolent students from their dogmatic slumbers. For those who have not seen the movie, there will be some minor spoilers in what follows.

While The Matrix engaged the broad epistemic problem of the external world (the challenge of determining if what I am experiencing is really real for real), Ex Machina focuses on a much more limited set of problems, all connected to the mind. Since the film is primarily about AI, this is not surprising. The gist of the movie is that Nathan has created an AI named Ava and he wants an employee named Caleb to put her to the test.

The movie explicitly presents the test proposed by Alan Turing. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test. In the movie, there is a twist on the test: Caleb knows that Ava is a machine and will be interacting with her in person.

In the movie, Ava would easily pass the original Turing Test—although the revelation that she is a machine makes the application of the original test impossible (the test is supposed to be conducted in ignorance to remove bias). As such, Nathan modifies the test.

What Nathan seems to be doing, although he does not explicitly describe it as such, is challenging Caleb to determine if Ava has a mind. In philosophy, this is known as the problem of other minds. The basic idea is that although I know I have a mind, the problem is that I need a method by which to know that other entities have minds. This problem can also be recast in less metaphysical terms by focusing on the problem of determining whether an entity thinks or not.

Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language. Crudely put, the idea is that if something really talks, then it is reasonable to regard it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

As a test for intelligence, artificial or otherwise, this seems to be quite reasonable. There is, of course, the practical concern that there might be forms of intelligence that use language that we would not recognize as language and there is the theoretical concern that there could be intelligence that does not use language. Fortunately, Ava uses English and these problems are bypassed.

Ava easily passes the Cartesian test: she is able to reply appropriately to everything said to her and, aside from her appearance, is behaviorally indistinguishable from a human. Nathan, however, seems to want even more than just the ability to pass this sort of test and appears to work in, without acknowledging that he is doing so, the Voight-Kampff Test from Philip K. Dick’s Do Androids Dream of Electric Sheep? In this book, which inspired the movie Blade Runner, there are replicants that look and (mostly) act just like humans. Replicants are not allowed on Earth, under penalty of death, and there are police who specialize in finding and killing them. Since the replicants are apparently physically indistinguishable from humans, the police need to rely on the Voight-Kampff Test. This test is designed to determine the emotional responses of the subject and thus distinguish humans from replicants.

Since Caleb knows that Ava is not a human (homo sapiens), the object of the test is not to tell whether she is a human or a machine. Rather, the object seems to be to determine if she has what the pop-psychologists refer to as Emotional Intelligence (E.Q.). This is different from intelligence and is defined as “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” Less nicely, it would presumably also include knowing how to emotionally manipulate people in order to achieve one’s goals. In the case of Ava, the test of her E.Q. is her ability to understand and influence the emotions and behavior of Caleb. Perhaps this test should be called the “Ava test” in her honor. Implementing it could, as the movie shows, be somewhat problematic: it is one thing to talk to a machine and quite another to become emotionally involved with it.

While the Voight-Kampff Test is fictional, there is a somewhat similar test in the real world. This test, designed by Robert Hare, is the Hare Psychopathy Checklist. This is intended to provide a way to determine if a person is a psychopath or not. While Nathan does not mention this test, he does indicate to Caleb that part of the challenge is to determine whether or not Ava really likes him or is simply manipulating him (to achieve her programmed goal of escape). Ava, it turns out, seems to be a psychopath (or at least acts like one).

In the next essay, I will consider the matter of testing in more depth.

 


The Trolling Test

One interesting philosophical problem is known as the problem of other minds. The basic idea is that although I know I have a mind (I think, therefore I think), the problem is that I need a method by which to know that other entities have (or are) minds. This problem can also be recast in less metaphysical terms by focusing on the problem of determining whether an entity thinks or not.

Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language.

Crudely put, the idea is that if something talks, then it is reasonable to regard it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

This Cartesian approach was explicitly applied to machines by Alan Turing in his famous Turing test. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test.
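To make the structure of the test a bit more concrete, here is a minimal sketch in Python of the text-only setup. The function names, the labeling scheme, and the pass criterion are my own illustrative assumptions rather than anything specified by Turing; the point is only that the interrogator sees nothing but text.

```python
import random

def run_imitation_game(ask, guess, human, machine, rounds=5):
    """One text-only session of the imitation game (a minimal sketch).

    ask(transcript)   -> the interrogator's next question (text)
    guess(transcript) -> the label ('A' or 'B') the interrogator thinks is the machine
    human(question), machine(question) -> text replies
    """
    # Hide the identities behind labels so the interrogator sees only text.
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}

    transcript = []
    for _ in range(rounds):
        for label, respondent in labels.items():
            question = ask(transcript)
            transcript.append((label, question, respondent(question)))

    machine_label = next(l for l, r in labels.items() if r is machine)
    # The machine "passes" this session if the interrogator picks the wrong label.
    return guess(transcript) != machine_label
```

In practice one would presumably run many such sessions with many interrogators and count the machine as passing only if the interrogators do no better than chance, rather than resting on a single fooled judge.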

Not surprisingly, technological advances have resulted in computers that can engage in behavior that appears to involve using language in ways that might pass the test. Perhaps the best known example is IBM’s Watson—the computer that won at Jeopardy. Watson recently upped his game by engaging in what seemed to be a rational debate regarding violence and video games.

In response to this, I jokingly suggested a new test to Patrick Lin: the trolling test. In this context, a troll is someone “who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a forum, chat room, or blog) with the deliberate intent of provoking readers into an emotional response or of otherwise disrupting normal on-topic discussion.”

While trolls are apparently truly awful people (a hateful blend of Machiavellianism, narcissism, sadism and psychopathy) and trolling is certainly undesirable behavior, the trolling test does seem worth considering.

In the abstract, the test would work like the Turing test, but would involve a human troll and a computer attempting to troll. The challenge would be for the computer troll to successfully pass as a human troll.

Obviously enough, a computer can easily be programmed to post random provocative comments from a database. However, the real meat (or silicon) of the challenge comes from the computer being able to engage in (ironically) relevant trolling. That is, the computer would need to engage the other commentators in true trolling.

As a controlled test, the trolling computer (“mechatroll”) would “read” and analyze a selected blog post. The post would then be commented on by human participants—some engaging in normal discussion and some engaging in trolling. The mechatroll would then endeavor to troll the human participants (and, for bonus points, to troll the trolls) by analyzing the comments and creating appropriately trollish comments.

Another option is to have an actual live field test. A specific blog site would be selected that is frequented by human trolls and non-trolls. The mechatroll would then endeavor to engage in trolling on that site by analyzing the posts and comments.

In either test scenario, if the mechatroll were able to troll in a way indistinguishable from the human trolls, then it would pass the trolling test.
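For what it is worth, the controlled scenario could be organized along the following lines. This is only a sketch: the judging procedure, the indistinguishability threshold, and all the function names are invented for illustration rather than drawn from any actual protocol.

```python
import random

def controlled_trolling_test(post, human_comments, mechatroll, judge, n_bot_comments=5):
    """A sketch of the controlled trolling test.

    post            -- the selected blog post (text)
    human_comments  -- list of (text, is_troll) pairs from human participants
    mechatroll      -- callable: (post, comments_so_far) -> a trollish comment
    judge           -- callable: comment text -> True if judged to be human trolling
    """
    comments = [text for text, _ in human_comments]

    # The mechatroll analyzes the post and the human comments, then wades in.
    bot_comments = []
    for _ in range(n_bot_comments):
        comment = mechatroll(post, comments)
        comments.append(comment)
        bot_comments.append(comment)

    # Present human-troll comments and mechatroll comments to the judge in a
    # shuffled order; the mechatroll passes if its comments are judged to be
    # human trolling at roughly the same rate as the real thing.
    human_troll_comments = [text for text, is_troll in human_comments if is_troll]
    pool = [(c, "human") for c in human_troll_comments] + [(c, "bot") for c in bot_comments]
    random.shuffle(pool)

    verdicts = {"human": [], "bot": []}
    for comment, source in pool:
        verdicts[source].append(judge(comment))

    human_rate = sum(verdicts["human"]) / max(len(verdicts["human"]), 1)
    bot_rate = sum(verdicts["bot"]) / max(len(verdicts["bot"]), 1)
    return abs(human_rate - bot_rate) < 0.1  # arbitrary indistinguishability threshold
```

The hard part, of course, is the mechatroll callable itself, for the reasons given in the next paragraph.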

While “stupid mechatrolling”, such as just posting random hateful and irrelevant comments, is easy, true mechatrolling would be rather difficult. After all, the mechatroll would need to be able to analyze the original posts and comments to determine the subjects and the direction of the discussion. The mechatroll would then need to make comments that are trollishly relevant, and these comments would need to be indistinguishable from those generated by a narcissistic, Machiavellian, psychopathic, and sadistic human.

While creating a mechatroll would be a technological challenge, it might be suspected that doing so would be undesirable. After all, there are far too many human trolls already and they serve no valuable purpose—so why create a computer addition? One reasonable answer is that modeling such behavior could provide useful insights into human trolls and the traits that make them trolls. As far as a practical application, such a system could be developed into a troll-filter to help control the troll population.

As a closing point, it might be a bad idea to create a system with such behavior—just imagine a Trollnet instead of Skynet—the trollinators would slowly troll people to death rather than just quickly shooting them.

 


Sexbots are Persons, Too?

In my previous essays on sexbots I focused on versions that are clearly mere objects. If the sexbot is merely an object, then the morality of having sex with it is the same as having sex with any other object (such as a vibrator or sex doll). As such, a human could do anything to such a sexbot without the sexbot being wronged. This is because such sexbots would lack the moral status needed to be wronged. Obviously enough, the sexbots of the near future will be in the class of objects. However, science fiction has routinely featured intelligent, human-like robots (commonly known as androids). Intelligent beings, even artificial ones, would seem to have an excellent claim on being persons. In terms of sorting out when a robot should be treated as a person, the reasonable test is the Cartesian test. Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind is the ability to use true language. This notion was explicitly applied to machines by Alan Turing in his famous Turing test. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the test.

Crudely put, the idea is that if something talks, then it is reasonable to regard it as a person. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

While Descartes does not deeply explore the moral distinctions between beings that talk (that have minds) and those that merely make noises, it does seem reasonable to regard a being that talks as a person and to thus grant it the moral status that goes along with personhood. This, then, provides a means to judge whether an advanced sexbot is a person or not: if the sexbot talks, it is a person. If it is a mere automaton of the sort Descartes envisioned, then it is a thing and would presumably lack moral status.

Having sex with a sexbot that can pass the Cartesian test would certainly seem to be morally equivalent to having sex with a human person. As such, whether the sexbot freely consented or not would be a morally important matter. If intelligent robots were constructed as sex toys, this would be the moral equivalent of enslaving humans for the sex trade (which is, of course, actually done). If such sexbots were mistreated, this would also be morally on par with mistreating a human person.

It might be argued that an intelligent robot would not be morally on par with a human since it would still be a thing. However, aside from the fact that the robot would be a manufactured being and a human is (at least for now) a natural being, there would seem to be no relevant difference between them. The intelligence of the robot would seem to be what is important, not its physical composition.

It might also be argued that passing the Cartesian/Turing Test would not prove that a robot is self-aware and hence it would still be reasonable to hold that it is not a person. It would seem to be a person, but would merely be acting like a person. While this is a point well worth considering, the same sort of argument could be made about humans. Humans (sometimes) behave in an intelligent manner, but there is no way to determine if another human is actually self-aware. This is the classic problem of other minds: all I can do is see your behavior and infer that you are self-aware based on analogy to my own case. Hence, I do not know that you are aware since I cannot be you. From your perspective, the same is true about me. As such, if a robot acted in an intelligent manner, it would seem that it would have to be regarded as being a person on those grounds. To fail to do so would be a mere prejudice in favor of the organic.

In reply, some people believe that other people can be used as they see fit. Those who would use a human as a thing would see nothing wrong with using an intelligent robot as a mere thing.

The obvious response to this is to reverse the situation: no sane person would wish to be treated as a mere thing and hence they cannot consistently accept using other people in that manner. The other obvious reply is that such people are simply evil.

Those with religious inclinations would probably bring up the matter of the soul. But, the easy reply is that we would have as much evidence that robots have souls as we do for humans having souls. This is to say, no evidence at all.

One of the ironies of sexbots (or companionbots) is that the ideal is to make a product that is as like a human as possible. As such, to the degree that the ideal is reached, the “product” would be immoral to sell or own. This is a general problem for artificial intelligence: they are intended to be owned by people to do onerous tasks, but to the degree they are intelligent, they would be slaves.

It could be countered that it is better that evil humans abuse sexbots rather than other humans. However, it is not clear that would actually be a lesser evil—it would just be an evil against a synthetic person rather than an organic person.


Argubot


One interesting phenomenon is that groups often adopt a set of stock views and arguments that are almost mechanically deployed to defend those views. In many cases, the pattern of responses seems almost robotic: in many “discussions” I can predict what stock arguments will be deployed next.

I have even found that if I can lure someone off their pre-established talking points, then they are often quite at a loss as to what to say next. This, I suspect, is a sign that a person does not really have his/her own arguments but is merely putting forth established dogmarguments (dogmatic arguments).

Apparently someone else noticed this phenomenon, specifically in the context of global warming arguments, and decided to create his own argubot. Nigel Leck created a script that searches Twitter for key phrases associated with stock arguments against the view that humans have caused global warming. When the argubot finds a foe, it engages by sending a response tweet containing a counter to the argument (and relevant links).

In some cases the target of the argubot does not realize that s/he is arguing with a script and not a person. The argubot is set up to respond with a variety of “prefabricated” arguments when the target repeats an argument, thus helping to create that impression. The argubot also has a repertoire  that goes beyond global warming. For example, it is stocked with arguments about religion. This also allows it to maintain the impression that it is a person.
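Leck has not, to my knowledge, published his script, but the general approach described above, matching key phrases and rotating through a bank of canned counters so as not to repeat itself, can be sketched roughly as follows. The phrases, the replies, and the bookkeeping are all invented for illustration, and a real version would also need to be wired up to Twitter’s API to find and answer tweets.

```python
from collections import defaultdict
from itertools import cycle

# Illustrative key phrases and canned counters -- not Leck's actual data.
STOCK_RESPONSES = {
    "climate has always changed": cycle([
        "Past natural change doesn't rule out a human cause now; see [link].",
        "Natural variability and human forcing aren't mutually exclusive; see [link].",
    ]),
    "warming stopped in 1998": cycle([
        "Cherry-picking a hot start year hides the long-term trend; see [link].",
        "Multiple datasets show continued warming since 1998; see [link].",
    ]),
}

seen_per_user = defaultdict(set)  # remember which lines each user has already received

def argubot_reply(username, tweet_text):
    """Return a counter-argument for a recognized stock claim, or None.

    Rotating through several canned replies, rather than repeating one,
    is what helps maintain the impression of a human interlocutor.
    """
    text = tweet_text.lower()
    for phrase, responses in STOCK_RESPONSES.items():
        if phrase in text:
            reply = next(responses)
            # If the user has already seen this line, try the next one once.
            if reply in seen_per_user[username]:
                reply = next(responses)
            seen_per_user[username].add(reply)
            return f"@{username} {reply}"
    return None  # no stock argument recognized; stay silent
```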

While the argubot is reasonably sophisticated, it is not quite up to the Turing test. For example, it cannot discern when people are joking. While it can fool people into thinking they are arguing with a person, it is important to note that the debate takes place in the context of Twitter. As such, each tweet is limited to 140 characters. This makes it much easier for an argubot to pass itself off as a person. Also worth considering is the fact that people tend to have rather low expectations for the contents of tweets, which makes it much easier for an argubot to masquerade as a person. However, it is probably just a matter of time before a bot passes the Tweeter Test (being able to properly pass itself off as a person in the context of Twitter).

What I find most interesting about the argubot is not that it can often pass as a human tweeter, but that the argumentative process with its targets can be automated in this manner. This inclines me to think that the people the argubot is arguing with are also, in effect, argubots. That is, they are also “running scripts” and presenting pre-fabricated arguments they have acquired from others. As such, it could be seen as a case of a computer-based argubot arguing against biological argubots, with both sides relying on scripts and data provided by others.

It would be interesting to see the results if someone wrote another argubot to engage the current argubot in debate. Perhaps in the future argumentation will be left to the argubots and the silicon tower will replace the ivory tower. Then again, this would probably put me out of work.

One final point worth considering is the ethics of  the argubot at hand.

One concern is that it seems deceptive: it creates the impression that the target is engaged in a conversation with a person when s/he is actually just engaged with a script. Of course, the argubot does not state that it is a person nor does it make use of deception to harm the target. Given its purpose, to argue about global warming, it seems to be irrelevant whether the arguing is done by a person or a script. This contrasts with cases in which it does matter, such as a chatbot designed to trick someone into thinking that another person is romantically interested in them or to otherwise engage with the intent to deceive. As such, the argubot does not seem to be unethical in regards to fact that people might think it is a person.

Another concern is that the argubot seeks out targets and engages them (an argumentative Terminator or Berserker). This, some might claim, could be seen as a form of spamming or harassment.

As far as the spamming goes, the argubot does not deploy what would intuitively be considered spam in terms of its content. After all, it is not trying to sell a product, etc. However, it might be argued that it is sending out unsolicited bulk tweets, which might thus be regarded as spam. Spamming is rather well established as immoral (if an argument is wanted, read “Evil Spam” in my book What Don’t You Know?) and if the argubot is spamming, then this would be unethical.

While the argubot might seem like a spambot, one way to defend it against this charge is to note that the argubot provides what are mostly relevant responses that are comparable to what a human would legitimately send in response to a tweet. Thus, while it is automated, it is arguing rather than spamming. This seems to be an important distinction. After all, the argubot does not try to sell male enhancement, scam people, or get people to download a virus. Rather, it responds to arguments that can be seen as inviting a response, be it from a person or a script.

In regards to the harassment charge, the argubot does not seem to be engaged in what could be legitimately considered harassment. First, the content does not seem to constitute harassment. Second, the context of the “debate” is a public forum (Twitter) that explicitly allows such interactions to take place, whether they involve just humans or humans and bots.

Obviously, an argubot could be written that would actually be spamming or engaged in harassment. However, this argubot does not seem to cross the ethical line in regards to this behavior.

I suspect that we will see more argubots soon.
