[Image: Robot Monster... now it turns out that the robots... Image by Javier Piragauta via Flickr]

One interesting phenomenon is that groups often adopt a set of stock views and arguments that are almost mechanically deployed in their defense. In many cases, the pattern of responses seems almost robotic; in many “discussions” I can predict which stock arguments will be deployed next.

I have even found that if I can lure someone off their pre-established talking points, then they are often quite at a loss as to what to say next. This, I suspect, is a sign that a person does not really have his/her own arguments but is merely putting forth established dogmarguments (dogmatic arguments).

Apparently someone else noticed this phenomenon, specifically in the context of global warming arguments, and decided to create his own argubot. Nigel Leck created a script that searches Twitter for key phrases associated with stock arguments against the view that humans have caused global warming. When the argubot finds a foe, it engages by sending a response tweet containing a counter to the argument (and relevant links).

In some cases the target of the argubot does not realize that s/he is arguing with a script rather than a person. The argubot is set up to respond with a variety of “prefabricated” arguments when the target repeats an argument, thus helping to create that impression. The argubot also has a repertoire that goes beyond global warming. For example, it is stocked with arguments about religion. This also allows it to maintain the impression that it is a person.
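Mechanically, the matching described above can be approximated with a simple keyword-to-reply lookup. The following is a hypothetical Python sketch, not Leck's actual script; the table entries, keywords, and function names are all illustrative assumptions:

```python
# Hypothetical sketch of a keyword-matching argubot core.
# The stock claims and canned replies below are illustrative only,
# not taken from the real bot's database.
STOCK_REPLIES = {
    "sunspot": ("Sunspot activity is at record lows, yet 2010 is on "
                "track to be the hottest year recorded. <link>"),
    "neptune": ("Neptune's orbit is 164 years, so its warming is "
                "seasonal, and the sun is cooling. <link>"),
}

def find_reply(tweet_text):
    """Return a canned counter-argument for a recognized stock claim,
    or None so a human can review the tweet instead."""
    text = tweet_text.lower()
    for keyword, reply in STOCK_REPLIES.items():
        if keyword in text:
            return reply
    return None
```

The design point is simply that nothing resembling understanding is required: a stock argument is a fixed string pattern, so a dictionary lookup suffices to "counter" it.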

While the argubot is reasonably sophisticated, it is not quite up to the Turing test. For example, it cannot discern when people are joking. While it can fool people into thinking they are arguing with a person, it is important to note that the debate takes place in the context of Twitter. As such, each tweet is limited to 140 characters. This makes it much easier for an argubot to pass itself off as a person. Also worth considering is the fact that people tend to have rather low expectations for the contents of tweets, which makes it much easier for an argubot to masquerade as a person. However, it is probably just a matter of time before a bot passes the Tweeter Test (being able to properly pass itself off as a person in the context of Twitter).

What I find most interesting about the argubot is not that it can often pass as a human tweeter, but that the argumentative process with its targets can be automated in this manner. This inclines me to think that the people with whom the argubot argues are also, in effect, argubots. That is, they are also “running scripts” and presenting prefabricated arguments they have acquired from others. As such, it could be seen as a case of a computer-based argubot arguing against biological argubots, with both sides relying on scripts and data provided by others.

It would be interesting to see the results if someone wrote another argubot to engage the current argubot in debate. Perhaps in the future argumentation will be left to the argubots and the silicon tower will replace the ivory tower. Then again, this would probably put me out of work.

One final point worth considering is the ethics of the argubot at hand.

One concern is that it seems deceptive: it creates the impression that the target is engaged in a conversation with a person when s/he is actually just engaged with a script. Of course, the argubot does not state that it is a person, nor does it make use of deception to harm the target. Given its purpose, to argue about global warming, it seems to be irrelevant whether the arguing is done by a person or a script. This contrasts with cases in which it does matter, such as a chatbot designed to trick someone into thinking that another person is romantically interested in them, or to otherwise engage with the intent to deceive. As such, the argubot does not seem to be unethical in regards to the fact that people might think it is a person.

Another concern is that the argubot seeks out targets and engages them (an argumentative Terminator or Berserker). This, some might claim, could be seen as a form of spamming or harassment.

As far as the spamming goes, the argubot does not deploy what would intuitively be considered spam in terms of its content. After all, it is not trying to sell a product, etc. However, it might be argued that it is sending out unsolicited bulk tweets, which might thus be regarded as spam. Spamming is rather well established as immoral (if an argument is wanted, read “Evil Spam” in my book What Don’t You Know?) and if the argubot is spamming, then this would be unethical.

While the argubot might seem like a spambot, one way to defend it against this charge is to note that the argubot provides what are mostly relevant responses that are comparable to what a human would legitimately send in response to a tweet. Thus, while it is automated, it is arguing rather than spamming. This seems to be an important distinction. After all, the argubot does not try to sell male enhancement, scam people, or get people to download a virus. Rather, it responds to arguments that can be seen as inviting a response, be it from a person or a script.

In regards to the harassment charge, the argubot does not seem to be engaged in what could be legitimately considered harassment. First, the content does not seem to constitute harassment. Second, the context of the “debate” is a public forum (Twitter) that explicitly allows such interactions to take place, whether they involve just humans or humans and bots.

Obviously, an argubot could be written that would actually be spamming or engaged in harassment. However, this argubot does not seem to cross the ethical line in regards to this behavior.

I suspect that we will see more argubots soon.



  1. Yep, I know it’s walking a fine line. The bot only ever responds to tweets that the other person has made, and I took the line that they are publishing something like “GW is caused by the Sunspot”, so I thought it was quite reasonable to respond by saying “Sun spot activity is at record lows in 2010, yet globally we are on track for the hottest year recorded.”

  2. 1. Deception. I can’t go along with “Given its purpose, to argue about global warming, it seems to be irrelevant whether the arguing is done by a person or a script.”

    When I argue with another human, I am forming a relationship – however distant, transient, or ill-tempered – through which each of us can change. That relationship is less deep than a romantic relationship, but it is no less real.

    The deception of interacting with a script about ideology is less serious, but no less a deception.

    2. Spam. I think the spam argument is moot. Since Twitter, Inc. owns the service end to end, it’s their house and their house rules. Twitter can define spam on their service as they wish – and they do. Their ToS forbid @reply automated messages based on keyword searches, and they have specified an array of factors they may take into account when considering whether an account is spamming. If enough people flag or block an account, they will take action, but it’s their decision.

    On a practical level, I don’t want every electronic discussion everywhere about religion, ideology, politics or Windows vs. Linux to be spammed by bots interjecting themselves into presumed human-to-human discussions with “my link can beat up your link” arguments.

  3. Not sure if it’s really deception. It’s called “Turing test” and it has a picture of a robot eye, both rather large clues.

    My argument would be: as much as you don’t like auto-reply corrections, I don’t like willful misinformation on any subject.

  4. 1. Deception. I can’t go along with “Given its purpose, to argue about global warming, it seems to be irrelevant whether the arguing is done by a person or a script.”

    I agree. The bot is incapable of considering arguments put to it and can only reply with ‘canned’ responses. While a person might behave like this, it is possible (I would hope likely) that they would not. Because that possibility does not exist with the bot, it is deceptive.

  5. The point is not to “argue” but to correct known falsehoods (in so far as we can know anything to be false).

    Take the argument that “must be the sun as Neptune is also warming”; this is factually wrong and the bot will respond with “Neptune’s orbit is 164 years, so that is seasonal, and the sun is cooling.”

    If a new argument is made then the bot will not respond at all and I’ll look into it, but no, I can’t let this meme continue without being corrected.

    The “real” debate should be left up to humans. The goal of this bot was to counter the memes that have long ago been answered.

  6. NL: ‘If a new argument is made then the bot will not respond at all and I’ll look into it but no, I can’t let this meme to continue without being corrected.’

    Apparently you believe that you are uniquely capable of deciding what is true and what is false. Your statement ‘continue without being corrected’, combined with the statement that you will ‘look into’ new arguments, implies this.

    In fact, all your bot does is present, apparently as arguments coming from a real person, statements which you have decided are ‘correct’.

    I regard this as deceptive. Of course, I wouldn’t try to have any kind of serious debate by twitter anyway as the constraints make this pointless.

  7. Yes, with this debate there are some things that are beyond reasonable doubt. The main arguments are here:

    See the posting below from the WattsUpWithThat forum. Yet no one has taken up the challenge:

    “I would be extremely interested if some or any of these arguments could be shown to be false. I mean you’ll need to do a little better than “Leck is also an atheist..” but seriously if the current best scientific understanding is counter to any of these arguments then I would disable those arguments and if enough were shown to be false I would disable the bot and publicly apologize. There’s a challenge for you or anyone who is up to it.

    PS. Anomalies a rule does not make. So if ten peer-reviewed papers say one thing and one says the opposite, sorry I’m going with the ten unless you can give good cause.”

  8. One question discussed by Mike is whether or not the bot is engaging in deceptive behaviour.

    I do not need to show that any, or all, of the statements the bot makes are false to argue that the actions of the bot are deceptive.

    Nor can you defend the running of the bot by arguing that the statements it makes are true.

    PS In science, one contrary instance is sufficient to call into question a theory. Science is not done by consensus or vote. So if one paper says the opposite, then the evidence or arguments in that paper have to be shown to be false before the other ten can be accepted as valid.

  9. Where is the deception? Do you think that when you type in a search request there is a bunch of people looking it up for you?

    All I’m doing is automating the correction of false claims. If that person then fires off another false claim in defense of the first, then the bot will correct that too. This process can repeat for quite a while.

    Nowhere am I claiming to be a “young blond” or anything like that.

  10. “Where is the deception? Do you think that when you type in a search request there is a bunch of people looking it up for you?”

    No, of course not. If the people to whom the bot replied had wanted an automated answer, they would have gone to a search engine. If these answers had been written into a search engine, clearly labeled as such on a web page, no question of deception would arise.

    However, when I write an e-mail, or post to a forum or blog, I expect any answer to come from a human. Would a reasonable Twitter user have the same expectation? If so, and if the bot doesn’t declare itself as such, then it is deception.

    Of course, if Twitter accepted this, and bots sprang up spouting replies anytime someone mentions a subject their programmers feel strongly about, then users would expect chatbots, the norm would change, and there would be no question of deception.

  11. Jim: I think my point is that I was doing this manually anyway, which is a fairly silly thing for a web developer to do.

    I have to believe that “well informed reasonable people will come to reasonable conclusions” so it’s the misinformation that must be addressed.

  12. The fact that you were doing it manually does not negate the criticism that the practice is deceptive.

    And you have provided no clear argument why the practice is not deceptive.

    Everyone knows that searches are automated but, as Jim says, most people will expect a tweet to be coming from a person, not a bot. Unless you make it clear that the responses you provide are generated automatically, you are engaged in a practice which is deceptive.

    Consider this. Suppose you made it absolutely clear to readers that the tweets they were receiving were coming from a bot responding automatically to certain key words or phrases. How many do you expect would either (a) respond or (b) take any notice of the tweet? Not many, is my guess.

  13. Is willfully propagating misinformation to suit a political agenda deceptive?

    If so, would trying to counter that misinformation be a good or a bad thing?

    The bot is trying to respond to arguments that are known to be false.

  14. Nigel, I conclude that you recognise that the behaviour of your bot IS deceptive as you (again) fail to provide an argument why it is not.

    Your reply is a simple ‘two wrongs make a right’ argument and that requires a considerably better defence than you have presented.

    (On the issue of ‘misinformation’ you might want to read this recent report and see whether it qualifies:

  15. It’s hard to argue a negative. Can you prove that the flying spaghetti monster doesn’t exist?

    Is it correct to deny a person their freedom? No. Then why do we lock people up? Simple answer: it’s the lesser of two evils.

    Cool, I read your Daily Mail article; now please read my link to the Geological Society. Which has more weight?

  16. As far as the claim that the bot is deceptive, this can be countered in various ways.

    First, deception entails an intent to mislead. The mere fact that people are mistaken when they think they are arguing with a human does not count as deception. To use an analogy, if someone thinks I am a French citizen because of my last name, they are mistaken but not deceived. Nigel does not seem to intend to deceive people in regards to his bot; after all, as he points out, the “Turing test” name and HAL 9000 eye are rather good evidence that it is a bot. While some folks might not know about Turing or HAL, this is hardly an act of deceit on Nigel’s part.

    Second, as far as the expectation that there is a human replying, this does not seem to be the case. Automated responses are standard on the net and, for example, I do not think I am being deceived when an Amazon script informs me of my order.

    Third, even if it were granted (incorrectly) that Nigel wanted people to think his bot was a person, the real concern is whether the arguments and information presented by the bot are deceptive or not. That is, whether or not Nigel has intentionally provided the bot with deceitful claims and arguments.

    While the claims can be questioned and the arguments challenged, there seems to be no evidence to accuse Nigel of engaging in such deceit: he seems to be presenting information he regards as true and arguments he regards as reasonable. Also important is the fact that his intent seems to be to inform people rather than trick or dupe people.

  17. Perhaps the word you are looking for is ‘trolling’. The consequences of such a bot would be horrendous.

    You could argue that having a bot that produces a series of set responses and prompts arguments that can never reach a conclusion (because a bot would be incapable of moving beyond that argument) is nearly identical to a troll who uses set phrases so as to cause arguments just to annoy people.

    While I am sure one could attribute some kind of intelligence and intent to a troll, I would rather choose not to. 🙂

  18. C: If the person responds with a new/interesting argument then I will manually research and respond. If they just flick back something like “hasn’t warmed since 1998”, well, I’m sorry, they are getting the automated response they deserve, as that is factually wrong and has been answered so many times now.

  19. Well, I personally would find it difficult to decide about what people ‘deserve’ because they are factually wrong.

    I was going to say a few things, but I suspect more instructive would be an example I unfortunately encountered.

    A particular person solicited our department to give a talk, on the face of it there was nothing particularly odd about his ‘topic’, and he seemed legitimate after a fashion. Other than he came from nowhere and had apparently worked on this in his spare time.

    Anyway without going into much detail, it became very quickly apparent that he was mis-representing nearly every branch of science – physiology, psychology, physics and philosophy. Basically all the Ps.

    Now, as soon as it started going into wave-particle duality somehow meeting with eye physiology and generating psychological phenomena of depth and distance (yes, he really was off the chart) we did confront him with numerous pieces of scientific fact and some general philosophy.

    The best part (and the most relevant to the argubot) is that he had stated what he described as ‘a reflex problem’, which he claimed was that you cannot undermine a theory using a theory that has previously been undermined by it.

    Basically he believed he had undermined science, and no matter what, you cannot therefore use science to undermine his idea. Which is a nice summation of how people can and do think: you are already wrong, so you can’t use something I already know to be wrong to prove me wrong.

    The point is this: if you create arguments with people who have no desire to listen (other than to state their stock view), all you may end up doing is 1. antagonizing them (and perhaps reinforcing the original prejudice) or 2. boosting their confidence in their own ability to argue by getting them to state the argument.

    I mean, don’t get me wrong, I do see the importance of confronting someone (and also of understanding the nature and dynamics of stock arguments), but the difficulty with bots is that they cannot make judgments with regard to the mental state of the other person.

    The other, and potentially worse, problem is that, especially with something like trolling, the most rational thing to do is not respond; but if people who are not going to accept scientific evidence learn that the best response is simply not to debate, then you are making them more immune to listening to the debate in real life.

  20. Sorry about the lack of commas in the last sentence, bit ponderous, should have re-read it before I posted.

  21. C: I agree with your first two points, and you may be right with the conclusion, but out of sheer frustration I can’t let known falsehoods be allowed to stand without at least trying to correct them.

    The bot isn’t as robotic as you may think; when someone engages in an actual debate I do step in and answer myself (time permitting, accounting for timezones etc.). It never happens that someone says “oh, gee, you’re right, I was wrong”, but people don’t generally like being factually wrong, and when corrected they do move on to new, more nuanced arguments. You’ll never get a Glenn Beck to see reason, but other people, yes, with time and effort.

    There are many examples of people progressing the arguments. Often the argument goes from “It’s not warming” -> “It’s warming but it’s not us” -> “It’s warming and partially caused by us, but who doesn’t like it warmer?” and so on. All these arguments are recorded and I could give a number of examples.

  22. I agree with Nigel: a canned argument really only deserves a canned argument in response.

  23. To what extent is the bot (or Nigel) responsible for the target’s (probably erroneous) expectations of its personhood?

    They may also expect him/it to be American or male. There are a lot of things which may be relevant to the debate that are not usually revealed. One participant may have a learning difficulty, or be a famous GW denier, or have some other vested interest in denying human-caused GW.

    Much is implicitly hidden in Internet debates and this is expected. Given the simplicity of the stock responses, the fact that it is a bot responding seems less relevant than the potential facts I mentioned above.

  24. This is really interesting stuff, given my own forays onto the internet. So forgive the late comment.

    My hunch is there is something at least prima facie wrong about an argubot. Another hunch is many people’s motives for defending it are that it fools people (climate change sceptics) that have views they dislike. (Of course, it might be that annoying/’cheating’ against people who are dangerously wrong about stuff is a good, and further a good that justifies the evil of arguebot, but bracket that).

    In a purely ‘philosophical quest for the truth’ sense, it should make no difference whether your interlocutor is automated or not. If the programmer had a good point and the reply is on target, then you surely benefit from the dialogue.

    I suspect there are three main planks to the case that it is wrong, though.

    1) was mentioned above by Jim. We have an anticipation of ‘genuine’ interaction with another human being. The fact that the arguebot never says it isn’t a bot doesn’t mean it isn’t riding on people’s presumptions that other tweeters are humans.

    A good example (ironically, from someone I’ve criticized for just the sort of talking-point use mentioned in the OP): suppose I join online chess communities and have automated chess bots that give/receive challenges, or I manually input their moves into a chess computer and relay its results back. I think my ‘opponents’ have a right to be pissed off. They (for whatever reason) want to play real people, and they probably have access to chess computers themselves if they wanted one. In a similar way, foisting an automated ‘opponent’ on people arguing on Twitter is also bad.

    2) Denial of an audience. Most of the time (especially about hot-button social issues) we’re probably arguing to try and further our cause, and that means persuading our opponent to see things our way. Although embittered partisans are hard to budge, they are still more likely to budge than an automaton.

    3) It’s offensive. Although no doubt the arguebot is clever, I’d be pretty offended if someone thought I was so rubbish he could beat me with some code. The very fact that you can design a computer program that can ‘argue’ with the other side implies some not-very-flattering things about them: that you know what they’ll say in advance, that all they’ve got is talking points, etc.

    There are more issues here, but I think I’ll blog those instead – or start coding myself… 😉

  25. I would hope all the arguments made are in line with the current best scientific understanding. I have offered to disable any argument that doesn’t meet this criterion, but no one has pointed out any such argument; I guess that’s the whole point.

    The arguebot will automatically respond to a KNOWN person if it thinks it knows the answer; if the person is unknown or the bot doesn’t know the answer, the tweet goes into a queue, which is then examined, and I’ll manually teach the bot the correct answer.

    I find the bot is a better person than me; it doesn’t get upset or frustrated, it always has an even temper 😉

    The queuing of unknown people is a change that has been made since this article was written and has got rid of the vast bulk of the misfires.

  26. Nigel,
    Interesting point: a bot does have the virtue of not being angry or spiteful. The infinitely patient debater.

    Of course, it might be an interesting exercise to create a bot that mimics human frustration and anger (as well as recovery from such states). Of course, really angry people are probably easy to mimic with AngryBots.

  27. …. if ten peer-reviewed papers say one thing and one says the opposite, sorry I’m going with the ten ….

    That’s your choice, to be carried along by mob rule. But it’s not science and does not impress the independently minded.

    Galileo was but one and stood against every Power That Be.

    And the settled science.

    Then there is the so-called settled science that murderously drives DDT’s banning and kills millions every year.

    And the International Association for Population Science’s 20th-century settled science has so far been responsible for the deaths of 200-odd million.

    Or perhaps “settled science” is just another example of the kinds of weasel words used by those whose profit comes from lying to others also too damned stupid to know they’re being lied to — and/or are also too damned mean spirited and/or greedy to care?

  28. Brain, ok then please name one national/international scientific institution that doesn’t accept AGW.

    Sure have an open mind but not so much that your brains fall out.

  29. I was going to talk about Lister and Ignaz Philipp Semmelweis, but I instead suggested that Galileo was one man against every other and the “settled science”, and compared Mann-made “global warming” with the 25% Hard-Left’s other couple hundred years of serial scientific frauds — and you ask me to name an organization that doesn’t like AGW?

    Too flippant with my comparisons, methinks.

    And although there are many I can name, (NASA, for example, disdains Hansen’s narcissistically-corrupt, self-aggrandizing and self-enriching machinations) for your question to have any real validity, you must first name one national/international “scientific” institution that does not directly profit from its promulgation of the fascist fraud morphed into a Jim Jones’ cult-like mass hysteria that, during the past 40-odd years has been named The Coming Ice Age, Environmentalism, Global warming, climate change and global climate disruption — and by any other name.

    To properly understand “AGW” one must “get it” that the whole point of that USD$15-Trillion exercise is the confiscation of as much of the West’s wealth as possible and its transfer to the Third World’s bums and dictators, to their pathologically-ingrate post-Judeo-Christian/Western/Human Civilization Euro-peon enablers — and to America’s traitorous 20% Hard Left.

    And then it is important to go back a day or two in America’s History and to re-visit the United States of America’s president, Dwight David Eisenhower’s farewell speech, in which, having noted that so many of America’s obscenely war-profiteering Left – the Kennedy Crime Family preeminent – had co-opted to itself America’s “scientists” and were maneuvering to continue to enrich themselves and their serially sycophantic peer-reviewing “scientists” to see to their being able to continue to loot the American taxpayer.

    Among what Mr Eisenhower said was the following:


    “…. Akin to, and largely responsible for the sweeping changes in our industrial-military (scientific) posture, has been the technological revolution during recent decades.

    “In this revolution, research has become central; it also becomes more formalized, complex, and costly. A steadily increasing share is conducted for, by and/or at the direction of the Federal government.

    “Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity ….

    “The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present — and is gravely to be regarded.

    “Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite …. ”


    United States of America’s then president, Ronald Wilson Reagan, recognized that the same Kennedy-Crime-Family dominated war-profiteering military-industrial complex Lefties were still waxing Trillions of Cold War Dollars fat. As were the couple of generations and legions of faux “scientists” that had glommed on to that Lefty Gang and had swapped integrity and intellectual curiosity for government grants and contracts and moral bankruptcy and their snouts into the public trough. And so America’s greatest modern-era president ended the Cold War — and broke modern-era “science’s” rice bowl.

    And, as quickly as the pathologically parasitical Socialist International and its Frankensteinian UN and NATO and Europeon Neo-Soviet cohorts, in “AGW,” found themselves a new vehicle?

    So did the parasites-upon-parasites “scientific” prostitutes find themselves a new fraudulent freeload.

    However, I am a fair man, so how’s about you name me one national/international scientific institution that doesn’t wax fat from its cooked-books’ promotion of “AGW?”

  30. Brain, although that was a very long answer you seem to have avoided the question completely. I assume that means you concede that the overwhelming majority of scientific institutions accept AGW.

    You then go on to claim that all of these scientific institutions are bought, or too stupid to see the truth, or something.

    Surely if all these scientists have it so obviously wrong, it should be quite easy for you to point to why CO2 levels aren’t going up despite the burning of fossil fuels, or why an increased level of CO2 doesn’t cause a warming effect even though we can demonstrate this in the lab.

  31. All I said was “Monckton is an idiot” and the bot responded.

  32. Bob: You sent that directly to the bot… so it didn’t matter much what Monckton said, and it responded.

    For the bot to respond to anything it first needed to work out which side you’re on, as it did on 29 April 10:20… Did it miscategorize?

  33. I shouldn’t have paraphrased. Here’s what I said that kicked it off:
    “@Dr_Aust_PhD Now “Lord” Monckton is in trouble, the confederation of dunces needs a new head.”

    So the single phrase “Lord Monckton” kicked it off with a weird reference to something odd? I’m not interested in how I am categorized. I doubt semantic analysis is up to doing it right.

    In April I had a very odd conversation with a young scientist in the UK that conflates Gavin’s group with old farts that deny his grants. I don’t see Gavin as part of “the establishment” as he is 25 years younger than I. It was not a typical conversation by a long shot.

  34. Bob,

    What did the bot say?

    You can see for yourself by looking at the feed.
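An editorial aside on the mechanics discussed in the comments above: the dispatch rule Nigel describes (auto-reply only when both the account and the argument are known, otherwise queue the tweet for manual review) amounts to a small gatekeeping function. Here is a hedged Python sketch; the names and structure are assumptions for illustration, not his actual code:

```python
from collections import deque

# Illustrative sketch of the known-user/known-argument dispatch rule
# described in the comments; all names here are assumptions.
review_queue = deque()

def dispatch(user, tweet_text, known_users, find_reply):
    """Auto-reply only when the user is known and the argument is
    recognized; otherwise queue the tweet for manual review."""
    reply = find_reply(tweet_text)
    if user in known_users and reply is not None:
        return reply  # send the canned counter-argument
    review_queue.append((user, tweet_text))  # a human looks at it later
    return None
```

Routing unknown accounts and unrecognized arguments to a human-reviewed queue is what Nigel credits with eliminating "the vast bulk of the misfires."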
