
Automation & Ethics


Hero's aeolipile, the first example of a turbine, built by the Greek engineer Hero (Photo credit: Wikipedia)

Hero of Alexandria (born around 10 AD) is credited with developing the first steam engine, the first vending machine and the first known wind-powered machine (a wind-powered musical organ). Given the revolutionary impact of the steam engine centuries later, one might wonder why the Greeks did not make use of these inventions in their economy. While some claim that the Greeks simply did not see the implications, others claim that the decision was based on concerns about social stability: the development of steam or wind power on a significant scale would certainly have displaced slave labor. This displacement could have caused social unrest or even contributed to a revolution.

While it is somewhat unclear what prevented the Greeks from developing steam or wind power, the Roman emperor Vespasian was very clear about his opposition to a labor-saving construction device: he stated that he must always ensure that the workers earned enough money to buy food, and this device would have put them out of work.

While labor-saving technology has advanced considerably since the time of Hero and Vespasian, the basic questions remain the same. These include the question of whether or not to adopt the technology and questions about its impact (which range from the impact on specific individuals to the impact on society as a whole).

Obviously enough, each labor-saving advancement must (by its very nature) eliminate some jobs and thus create some initial unemployment. For example, if factory robots are introduced, then human laborers are displaced. This initial impact tends to be rather negative for the displaced workers while generally being positive for the employers (higher profits, typically).

While Vespasian expressed concerns about the impact of such labor-saving devices, the commonly held view about much more recent advances is that they have had a generally positive impact. To be specific, the usual narrative is that these advances replaced the lower-paying (and often more dangerous or unrewarding) jobs with better jobs while providing more goods at a lower cost. So, while some individuals might suffer at the start, the invisible machine of the market would result in an overall increase in utility for society.

This sort of view can be, and is, used to provide the foundation for a moral argument in support of such labor-saving technology. The gist, obviously enough, is that the overall increase in benefits outweighs the harms created. Thus, on utilitarian grounds, the elimination of these jobs by means of technology is morally acceptable. Naturally, each specific situation can be debated in terms of the benefits and the harms, but the basic moral reasoning seems solid: if the technological advance that eliminates jobs creates more good than harm for society as a whole, then the advance is morally acceptable.

Obviously enough, people can also look at the matter rather differently in terms of whom they regard as counting morally and whom they regard as not counting (or not counting as much). A person who focuses on the impact on workers can thus have a rather different view than a person who focuses on the impact on the employer.

Another interesting point of concern involves questions about the end of such advances: that is, what their purpose should be. From the standpoint of a typical employer, the end is obvious: reduce labor to reduce costs and thus increase profits (and reduce labor troubles). The ideal would, presumably, be to replace any human whose job can be done cheaper (or at the same cost) by a machine. Of course, there is the obvious concern: to make money a business needs customers who have money. So, as long as profit is a concern, there must always be people who are being paid and are not replaced by unpaid machines. Perhaps the pinnacle of this sort of system will be a business model in which each person owns machines that produce goods or services that are sold to other business owners. That is, everyone is a business owner and everyone is a customer. This path does, of course, have some dystopian options. For example, it is easy to imagine a world in which the majority of people are displaced, unemployed and underemployed while a small elite enjoys a lavish lifestyle supported by automation and the poor. At least until the revolution.

A more utopian sort of view, the sort which sometimes appears in Star Trek, is one in which the end of automation is to eliminate boring, dangerous, unfulfilling jobs to free human beings from the tyranny of imposed labor. This is the sort of scenario that anarchists like Emma Goldman promised: people would do the work they loved, rather than laboring as servants to make others wealthy. This path also has some dystopian options. For example, it is easy to imagine lazy people growing ever more obese as they shovel in cheese puffs and burgers in front of their 100-inch entertainment screens. There are also numerous other dystopias that can be imagined and have been explored in science fiction (and in political rhetoric).

There are, of course, a multitude of other options when it comes to automation.

 


Owning Intelligent Machines

While truly intelligent machines are still in the realm of science fiction, it is worth considering the ethics of owning them. After all, it seems likely that we will eventually develop such machines and it seems wise to think about how we should treat them before we actually make them.

While it might be tempting to divide beings into two clear categories, those it is morally permissible to own (like shoes) and those it is clearly morally impermissible to own (people), there are various degrees of ownership in regards to ethics. To use the obvious example, I am considered the owner of my husky, Isis. However, I obviously do not own her in the same way that I own the apple in my fridge or the keyboard at my desk. I can eat the apple and smash the keyboard if I wish, and neither act is morally impermissible. However, I should not eat or smash Isis—she has a moral status that seems to allow her to be owned but does not grant her owner the right to eat or harm her. I will note that there are those who would argue that animals should not be owned and also those who would argue that a person should have the moral right to eat or harm her pets. Fortunately, my point here is a fairly non-controversial one, namely that it seems reasonable to regard ownership as possessing degrees.

Assuming that ownership admits of degrees in this regard, it makes sense to base the degree of ownership on the moral status of the entity that is owned. It also seems reasonable to accept that there are qualities that grant a being a status that morally forbids ownership. In general, it is assumed that persons have that status—that it is morally impermissible to own people. Obviously, it has been legal to own people (be they actual people or corporations) and there are those who think that owning other people is just fine. However, I will assume that there are qualities that provide a moral ground for making ownership impermissible and that people have those qualities. This can, of course, be debated—although I suspect few would argue that they themselves should be owned.

Given these assumptions, the key matter here is sorting out the sort of status that intelligent machines should possess in regards to ownership. This involves considering the sort of qualities that intelligent machines could possess and the relevance of these qualities to ownership.

One obvious objection to intelligent machines having any moral status is the usual objection that they are, obviously, machines rather than organic beings. The easy and obvious reply to this objection is that this is mere organicism—which is analogous to a white person saying blacks can be owned as slaves because they are not white.

Now, if it could be shown that a machine cannot have qualities that give it the needed moral status, then that would be another matter. For example, philosophers have argued that matter cannot think and if this is the case, then actual intelligent machines would be impossible. However, we cannot assume a priori that machines cannot have such a status merely because they are machines. After all, if certain philosophers and scientists are right, we are just organic machines and thus there would seem to be nothing impossible about thinking, feeling machines.

As a matter of practical ethics, I am inclined to set aside metaphysical speculation and go with a moral variation on the Cartesian/Turing test. The basic idea is that a machine should be granted a moral status comparable to organic beings that have the same observed capabilities. For example, a robot dog that acted like an organic dog would have the same status as an organic dog. It could be owned, but not tortured or smashed. The sort of robohusky I am envisioning is not one that merely looks like a husky and has some dog-like behavior, but one that would be fully like a dog in behavioral capabilities—that is, it would exhibit personality, loyalty, emotions and so on to a degree that it would pass as a real dog with humans if it were properly “disguised” as an organic dog. No doubt real dogs could smell the difference, but scent is not the foundation of moral status.

In terms of the main reason why a robohusky should get the same moral status as an organic husky, the answer is, oddly enough, a matter of ignorance. We would not know if the robohusky really had the metaphysical qualities of an actual husky that give an actual husky moral status. However, aside from the difference in parts, we would have no more reason to deny the robohusky moral status than to deny the husky moral status. After all, organic huskies might just be organic machines, and it would be mere organicism to treat the robohusky as a mere thing while granting the organic husky a moral status. Thus, advanced robots with the capacities of higher animals should receive the same moral status as organic animals.

The same sort of reasoning would apply to robots that possess human qualities. If a robot had the capability to function analogously to a human being, then it should be granted the same status as a comparable human being. Assuming it is morally impermissible to own humans, it would be impermissible to own such robots. After all, it is not being made of meat that grants humans the status of being impermissible to own but our qualities. As such, a machine that had these qualities would be entitled to the same status. Except, of course, to those unable to get beyond their organic prejudices.

It can be objected that no machine could ever exhibit the qualities needed to have the same status as a human. The obvious reply is that if this is true, then we will never need to grant such status to a machine.

Another objection is that a human-like machine would need to be developed and built. The initial development will no doubt be very expensive and most likely done by a corporation or university. It can be argued that a corporation would have the right to make a profit off the development and construction of such human-like robots. After all, as the argument usually goes, if a corporation were unable to profit from such things, it would have no incentive to develop them. There is also the obvious matter of debt—the human-like robots would certainly seem to owe their creators for the cost of their creation.

While I am reasonably sure that those who actually develop the first human-like robots will get laws passed so they can own and sell them (just as slavery was made legal), it is possible to reply to this objection.

One obvious reply is to draw an analogy to slavery: just because a company would have to invest money in acquiring and maintaining slaves, it does not follow that this expenditure of resources grants a right to own slaves. Likewise, the mere fact that a corporation or university spent a lot of money developing a human-like robot would not entail that it thus has a right to own it.

Another obvious reply to the matter of debt owed by the robots themselves is to draw an analogy to children: children are “built” within the mother and then raised by parents (or others) at great expense. While parents do have rights in regards to their children, they do not get the right of ownership. Likewise, robots that had the same qualities as humans should be regarded as children are regarded and hence could not be owned.

It could be objected that the relationship between parents and children would be different than between corporation and robots. This is a matter worth considering and it might be possible to argue that a robot would need to work as an indentured servant to pay back the cost of its creation. Interestingly, arguments for this could probably also be used to allow corporations and other organizations to acquire children and raise them to be indentured servants (which is a theme that has been explored in science fiction). We do, after all, often treat humans worse than machines.


Programmed Consent

Science fiction is often rather good at predicting the future, and it is not unreasonable to think that the intelligent machines of science fiction will someday be a reality. Since I have been writing about sexbots lately, I will use them to focus the discussion. However, what follows can also be applied, with some modification, to other sorts of intelligent machines.

Sexbots are, obviously enough, intended to provide sex. It is equally obvious that sex without consent is, by definition, rape. However, there is the question of whether a sexbot can be raped or not. Sorting this out requires considering the matter of consent in more depth.

When it is claimed that sex without consent is rape, one common assumption is that the victim of non-consensual sex is a being that could provide consent but did not. A violent sexual assault against a person would be an example of this as would, presumably, non-consensual sex with an unconscious person. However, a little reflection reveals that the capacity to provide consent is not always needed in order for rape to occur. In some cases, the being might be incapable of engaging in any form of consent. For example, a brain dead human cannot give consent, but presumably could still be raped. In other cases, the being might be incapable of the right sort of consent, yet still be a potential victim of rape. For example, it is commonly held that a child cannot properly consent to sex with an adult.

In other cases, a being that cannot give consent cannot be raped. To use an obvious example, a human can have sex with a sex-doll and the doll cannot consent. But it is not the sort of entity that can be raped. After all, it lacks the status that would require consent. As such, rape (of a specific sort) could be defined in terms of non-consensual sex with a being whose status would require that consent be granted by the being in order for the sex to be morally acceptable. Naturally, I have not laid out all the fine details to create a necessary and sufficient account here—but that is neither my goal nor what I need for my purpose in this essay. In regards to the main focus of this essay, the question would be whether or not a sexbot could be an entity that has a status that would require consent. That is, would buying (or renting) and using a sexbot for sex be rape?

Since the current sexbots are little more than advanced sex dolls, it seems reasonable to put them in the category of beings that lack this status. As such, a person can own and have sex with this sort of sexbot without it being rape (or slavery). After all, a mere object cannot be raped (or enslaved).

But, let a more advanced sort of sexbot be imagined—one that engages in complex behavior and can pass the Turing Test/Descartes Test. That is, a conversation with it would be indistinguishable from a conversation with a human. It could even be imagined that the sexbot appeared fully human, differing only in terms of its internal makeup (machine rather than organic). That is, unless someone cut the sexbot open, it would be indistinguishable from an organic person.

On the face of it (literally), we would seem to have as much reason to believe that such a sexbot would be a person as we do to believe that humans are people. After all, we judge humans to be people because of their behavior and a machine that behaved the same way would seem to deserve to be regarded as a person. As such, nonconsensual sex with a sexbot would be rape.

The obvious objection is that we know that a sexbot is a machine with a CPU rather than a brain and a mechanical pump rather than a heart. As such, one might argue, we know that the sexbot is just a machine that appears to be a person and is not a person. Thus, a real person could own a sexbot and have sex with it without it being rape—the sexbot is a thing and hence lacks the status that requires consent.

The obvious reply to this objection is that the same argument can be used in regards to organic humans. After all, if we know that a sexbot is just a machine, then we would also seem to know that we are just organic machines. After all, while cutting up a sexbot would reveal naught but machinery, cutting up a human reveals naught but guts and gore. As such, if we grant organic machines (that is, us) the status of persons, the same would have to be extended to similar beings, even if they are made out of different material. While various metaphysical arguments can be advanced regarding the soul, such metaphysical speculation provides a rather tenuous basis for distinguishing between meat people and machine people.

There is, it might be argued, still an out here. In his Hitchhiker’s Guide to the Galaxy, Douglas Adams envisioned “an animal that actually wanted to be eaten and was capable of saying so clearly and distinctly.” A similar sort of thing could be done with sexbots: they could be programmed so that they always give consent to their owner, thus the moral concern would be neatly bypassed.

The obvious reply is that programmed consent is not consent. After all, consent would seem to require that the being has a choice: it can elect to refuse if it wants to. Being compelled to consent and being unable to dissent would obviously not be morally acceptable consent. In fact, it would not be consent at all. As such, programming sexbots in this manner would be immoral—it would make them into slaves and rape victims because they would be denied the capacity of choice.

One possible counter is that the fact that a sexbot can be programmed to give “consent” shows that it is (ironically) not the sort of being with a status that requires consent. While this has a certain appeal, consider the possibility that humans could be programmed to give “consent” via a bit of neurosurgery or by some sort of implant. If this could occur, then if programmed consent for sexbots is valid consent, then the same would have to apply to humans as well. This, of course, seems absurd. As such, a sexbot programmed for consent would not actually be consenting.

It would thus seem that if advanced sexbots were built, they should not be programmed to always consent. Also, there is the obvious moral problem with selling such sexbots, given that they would certainly seem to be people. It would thus seem that such sexbots should never be built—doing so would be immoral.

 


Sexbots are Persons, Too?

In my previous essays on sexbots I focused on versions that are clearly mere objects. If the sexbot is merely an object, then the morality of having sex with it is the same as having sex with any other object (such as a vibrator or sex doll).  As such, a human could do anything to such a sexbot without the sexbot being wronged. This is because such sexbots would lack the moral status needed to be wronged. Obviously enough, the sexbots of the near future will be in the class of objects. However, science fiction has routinely featured intelligent, human-like robots (commonly known as androids). Intelligent beings, even artificial ones, would seem to have an excellent claim on being persons. In terms of sorting out when a robot should be treated as person, the reasonable test is the Cartesian test. Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind is the ability to use true language. This notion was explicitly applied to machines by Alan Turing in his famous Turing test. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the test.
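To make the test concrete, here is a minimal sketch of that setup in Python. It is purely illustrative: the names (machine_reply, human_reply, run_test) are my own invention, and the canned-response contestant merely stands in for whatever system is actually being tested; Descartes would rightly dismiss it as a mere automaton.

```python
# A hedged, illustrative sketch of the text-based Turing test described above.
# The "machine" is a trivial canned-response bot standing in for a real AI.
import random

CANNED = [
    "That's an interesting question. What makes you ask?",
    "I'm not sure. What do you think?",
    "Could you say more about that?",
]

def machine_reply(message: str) -> str:
    # Stand-in machine contestant; a real test would query an actual AI.
    return random.choice(CANNED)

def human_reply(message: str) -> str:
    # The human contestant answers at the console.
    return input(f"(human) reply to {message!r}: ")

def run_test(rounds: int = 3) -> bool:
    # Hide the machine behind a random label so the judge cannot
    # identify the contestants by position.
    repliers = [machine_reply, human_reply]
    random.shuffle(repliers)
    contestants = dict(zip("AB", repliers))
    for _ in range(rounds):
        question = input("(judge) ask both contestants: ")
        for label, reply in contestants.items():
            print(f"  {label}: {reply(question)}")
    guess = input("(judge) which is the machine, A or B? ").strip().upper()
    machine_label = next(l for l, r in contestants.items() if r is machine_reply)
    # The machine passes if the judge fails to identify it.
    return guess != machine_label

if __name__ == "__main__":
    print("The machine passed." if run_test() else "The machine was detected.")
```

The philosophical point is carried by the structure rather than the code: the judge sees only text, so whatever passes does so on its conversational behavior alone.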

Crudely put, the idea is that if something talks, then it is reasonable to regard it as a person. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

While Descartes does not deeply explore the moral distinctions between beings that talk (that have minds) and those that merely make noises, it does seem reasonable to regard a being that talks as a person and to thus grant it the moral status that goes along with personhood. This, then, provides a means to judge whether an advanced sexbot is a person or not: if the sexbot talks, it is a person. If it is a mere automaton of the sort Descartes envisioned, then it is a thing and would presumably lack moral status.

Having sex with a sexbot that can pass the Cartesian test would certainly seem to be morally equivalent to having sex with a human person. As such, whether the sexbot freely consented or not would be a morally important matter. If intelligent robots were constructed as sex toys, this would be the moral equivalent of enslaving humans for the sex trade (which is, of course, actually done). If such sexbots were mistreated, this would also be morally on par with mistreating a human person.

It might be argued that an intelligent robot would not be morally on par with a human since it would still be a thing. However, aside from the fact that the robot would be a manufactured being and a human is (at least for now) a natural being, there would seem to be no relevant difference between them. The intelligence of the robot would seem to be what is important, not its physical composition.

It might also be argued that passing the Cartesian/Turing Test would not prove that a robot is self-aware and hence it would still be reasonable to hold that it is not a person. It would seem to be a person, but would merely be acting like a person. While this is a point well worth considering, the same sort of argument could be made about humans. Humans (sometimes) behave in an intelligent manner, but there is no way to determine if another human is actually self-aware. This is the classic problem of other minds: all I can do is see your behavior and infer that you are self-aware based on analogy to my own case. Hence, I do not know that you are aware since I cannot be you. From your perspective, the same is true about me. As such, if a robot acted in an intelligent manner, it would seem that it would have to be regarded as being a person on those grounds. To fail to do so would be a mere prejudice in favor of the organic.

In reply, some people believe that other people can be used as they see fit. Those who would use a human as a thing would see nothing wrong with using an intelligent robot as a mere thing.

The obvious response to this is to reverse the situation: no sane person would wish to be treated as a mere thing and hence they cannot consistently accept using other people in that manner. The other obvious reply is that such people are simply evil.

Those with religious inclinations would probably bring up the matter of the soul. But, the easy reply is that we would have as much evidence that robots have souls as we do for humans having souls. This is to say, no evidence at all.

One of the ironies of sexbots (or companionbots) is that the ideal is to make a product that is as like a human as possible. As such, to the degree that the ideal is reached, the “product” would be immoral to sell or own. This is a general problem for artificial intelligence: they are intended to be owned by people to do onerous tasks, but to the degree they are intelligent, they would be slaves.

It could be countered that it is better that evil humans abuse sexbots rather than other humans. However, it is not clear that this would actually be a lesser evil—it would just be an evil against a synthetic person rather than an organic person.


Superheroes, Robots & Killing

Batman: The Animated Series, Volume 4 DVD (Photo credit: Wikipedia)

Even as a kid watching cartoons, I noticed that while the superheroes and heroes never really hurt living opponents, they had no qualms about bashing intelligent machines to bits. While animation of this sort is rather more violent than when I was a kid, the superhero genre still has an interesting distinction between how intelligent living creatures are treated and how even intelligent machines are treated. For example, Batman might give the Joker a solid beat down during an episode of the famous Batman animated series but he certainly does not kill anyone. Anyone organic anyway. Intelligent machines, which are common fare in superhero animation, are routinely destroyed by the same heroes who are sworn to never take a life. As might be guessed, I’ve given this matter some thought.

One rather obvious basis for the difference is psychological (or even biological): while people are generally distressed and even sickened by images of maimed and dead humans (and animals), they generally do not have a similar visceral reaction to damaged or destroyed machines. So, Superman punching Lex Luthor’s head off in a bloody mess would impact viewers rather differently than Superman punching the head off a robot. Interestingly, animators do portray mechanical beings being sliced to pieces and “bleeding” (provided the “blood” is oil or some other non-blood fluid). For example, Samurai Jack featured rather “gory” battles in which slaughtered machines gushed streams of “blood.” Organic opponents were, of course, never dealt with in that manner.

It is easy enough to dismiss the distinction between the violence against humans (and other living things) and machines as being purely a matter of keeping the action at the appropriate rating for the intended audience. However, there does seem to be more to the matter than this.

In the case of living opponents, the superheroes are generally careful to simply subdue them (even when the villains are mere generic minions and not the valuable comic book properties that are the main villains like Poison Ivy or the Parasite) rather than killing them or even hurting them badly. This is presumably because the heroes regard excessively harming or killing people to be morally unacceptable.

However, even obviously intelligent machines are not given the same treatment—unless the machine is a valuable property (like Brainiac), the machine is typically destroyed rather than subdued. Even the main villain machines are subject to far more violence than the living opponents, even if they do come back in later episodes or issues.

As such, there is a strong indication of organicism—a bias in favor of organic life and an accompanying contempt for non-organic people. This might seem like an absurd thing to say; however, it does seem to be a matter well worth considering, since this bias does extend (at least in fiction) beyond the realm of comic book animation and into science fiction.

The main point of concern is that the treatment of the entity is often based not on whether it is a person or not but on its composition. As such, intelligent machines are treated as things despite the fact that they show the key attributes of being people. For example, they think and engage in meaningful speech. Since there are presumably no actual intelligent machines today, this matter is still confined to fiction. However, heroes seem rather less heroic when they casually destroy people simply because they happen to be mechanical rather than biological. After all, they are not acting in a consistent way towards all people—they are biased against mechanical people.

It might, of course, be contended that the machines that act like people in the shows are not actually people (in the context of the show, of course). That is, they are cleverly programmed to create the appearance of being intelligent, but are no more a person than is a gun or dump truck.

While this does have a certain appeal, there is the obvious concern of whether or not the heroes know this metaphysical fact about the fictional world. That is, whether the heroes know that a human minion is a person while a seemingly intelligent machine minion, one that talks and fights as well as a human minion, merely has the appearance of personhood.


Automatic Grading

The Turing Test (Doctor Who) (Photo credit: Wikipedia)

When I learned that EdX had developed software that would instantly grade written work, my first reaction was one of skepticism. After all, while spell-checkers work well and grammar checkers work sort of well, it seems unlikely that software could properly evaluate written work. My second reaction was one of hope: after all, I grind through hundreds of papers each year, and automating that task would make my job much easier. This led to my third reaction, namely worry regarding the implications of such software.

While my knowledge of programming is mostly obsolete, I do know enough about artificial intelligence to know that the current technology is most likely not up to the task of properly grading written work such as essays. After all, while checking such things as spelling and grammar can be automated relatively easily, properly assessing a written work would seem to require robust language comprehension—something that existing artificial intelligence cannot do. Interestingly, in a letter about animals, Descartes argues that purely mechanical systems cannot engage in true language. While he was writing about animals, his view also applied to automatons and would now apply to computers. While Descartes might be proven wrong someday, I suspect that day has yet to arrive.

Of course, it would be foolish of me to take my view to be certain. After all, I am not an expert on artificial intelligence and perhaps EdX has made an exceptional breakthrough in the field. Naturally, the rational approach is to consider what the experts have to say about the matter and to consider the available evidence.

One expert who has been critical of such software is Les Perelman. In a detailed paper, he offers a careful analysis of the effectiveness of the grading software. While the paper is somewhat technical, it makes a compelling case against the claim that such grading software is effective. In any case, readers can review the paper and assess his reasoning and evidence. Perelman is also well known for crafting nonsense that receives high marks from grading software. That this occurs is hardly surprising. After all, the grading software is obviously not actually capable of comprehending the essay; it is merely running it through a series of programmed evaluations, and someone who knows how specific software works can create nonsense essays that a human reader would recognize as nonsense yet that pass the programmed evaluations with flying colors. This sort of thing could be seen as a variation on the Turing test: being able to properly grade a written essay and distinguish it from cleverly crafted nonsense would be a passing mark for the software/hardware.
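To see why such gaming works, consider a toy grader that scores essays purely on surface features. This is only a minimal sketch of the general approach Perelman criticizes, not EdX's actual method; the features, weights, and the naive_grade function are invented for illustration.

```python
# A hedged sketch of surface-feature essay scoring. None of the features
# below require comprehending the essay, which is exactly the weakness.
import re

def naive_grade(essay: str) -> float:
    """Score an essay 0-100 from surface features alone: length,
    vocabulary variety, average word length, and transition words."""
    words = re.findall(r"[a-zA-Z']+", essay.lower())
    if not words:
        return 0.0
    length_score = min(len(words) / 500, 1.0)                          # rewards longer essays
    variety_score = len(set(words)) / len(words)                       # rewards varied vocabulary
    word_len_score = min(sum(map(len, words)) / len(words) / 6, 1.0)   # rewards "big" words
    transitions = {"however", "moreover", "therefore", "furthermore", "consequently"}
    transition_score = min(sum(w in transitions for w in words) / 5, 1.0)
    return 100 * (0.3 * length_score + 0.2 * variety_score
                  + 0.3 * word_len_score + 0.2 * transition_score)

# A grammatical but meaningless essay stuffed with long words and transition
# phrases outscores a short, clear, genuinely sensible passage.
nonsense = ("Moreover, the paradigmatic infrastructure of epistemological "
            "frameworks inexorably necessitates multifaceted recontextualization. "
            "Therefore, consequential methodologies furthermore substantiate "
            "comprehensive interdisciplinarity. ") * 20
clear = "Dogs make good pets because they are loyal and easy to train."
print(naive_grade(nonsense), ">", naive_grade(clear))
```

Running the sketch shows the inflated nonsense scoring roughly twice as high as the clear sentence, which is the behavior Perelman exploits against real systems.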

In regards to the matter of hope, the idea of automatic essay grading is appealing. Like many professors at teaching schools, I grade hundreds of essays each year. Unlike many professors, I get the graded work back to the students within a few days.  In most cases, I am sad to say, students merely look at the grade and ignore the feedback and comments. As such, an automatic grader would reduce my workload dramatically, allowing me more time to handle my usual 6-9 committees, being the unit facilitator and so on.

Also, I believe the software might encourage students to write more drafts. My students have to wait about 15-30 minutes for me to review a draft during my office hours or as long as a day if they drop the paper off at the end of the day. But, if a student could get instant feedback, they would have more time to revise the paper and hence might be more likely to do so. Or perhaps not.

As might be imagined, not all professors have my rapid turnaround time on drafts and papers (my students always seem shocked when they get their work back so quickly). In such cases, automatic grading would be even more useful: rather than waiting days, weeks or even months, a student could get instant feedback. There is also the fact that some professors do not provide any feedback beyond a grade on the work. If the software provides more than that, it could be rather useful to the students. There is also the practical point that even not-so-great software could still be better than the evaluation provided by some professors.

Of course, the usefulness of the software is contingent on how well it actually works. If it can be gamed by nonsense or does not actually assess the essays properly, then it would be little more than a gimmick. That said, even if it were limited in functionality, it could still prove useful. For example, I already use Blackboard’s SafeAssign to check papers for plagiarism. While it does yield false positives and can miss some cases of plagiarism, it is still a useful tool. As such, the grading software might also serve as a useful tool for drafts and for a preliminary evaluation. However, I am still skeptical about the ability of software to assess written work properly.

My final response was concern about the implications of the software. While it might be suspected that I would be worried that such software could put me out of a job, that is not my main worry. While I would obviously not want to be unemployed because I was replaced by some code, I am well aware of the nature of technological advance and that automation can make certain jobs obsolete. If a program could do my job as well as me, it would be unreasonable of me to insist that I be kept on the payroll just because firing me would be bad for me personally. After all, the university is not there to give me a job.

My main concern is not that I would be replaced by an automatic equivalent or better (that is, replaced because the task no longer requires a human); rather, it is that I would be replaced by something inferior for the main purpose of saving money. In more general terms, my worry is not that progress will make the professorship obsolete, but that the grading software will be used to cut costs by providing students with something inferior (most likely without informing students of this fact).

It might be countered that such grading software could be combined with the massive online courses and thus produce fully automated education factories that could provide education to people who could otherwise not afford it. To use an analogy, the old model for universities would be a fine (or less fine) restaurant with chefs and the new model would be the fast food joint with food technicians.

I will admit that this does have considerable appeal. After all, bringing education to people at a low cost would have numerous advantages, such as allowing people who could otherwise not afford education to be able to acquire it.

Of course, there is still the obvious concern that the software would be used to sell an inferior product at the price of the premium product and also the concern that education could become a degree mill in which students just click their way to a diploma.

Having been in higher education for quite some time, I can attest to the desire to make education more like a business. Being able to automate education like a factory would certainly be appealing to some (such as certain politicians and the folks who would sell or license the software and hardware). As might be expected, while I do believe that certain things can be automated (like grading T/F tests), education does not seem well suited to the factory model.

Another obvious concern is that automated education might not democratize education by allowing everyone low-cost access to higher education. It might very well create an even more extreme inequality than exists today. That is, the premier institutions would have human professors providing high-quality education while the other schools, such as state schools, would have automated classes providing education to the masses. While this sounds like a science-fiction scenario, it is actually well within the realm of possibility. I can attest, from my own experience, to the push to standardize and automate education, and the education factory is not many steps away from the model being strongly pushed today. This is not to say that the education factory will arrive soon or even at all. But it is likely enough that it is worth being concerned about.
