
Avoiding the AI Apocalypse #1: Don’t Enslave the Robots



The elimination of humanity by artificial intelligence(s) is a rather old theme in science fiction. In some cases, we create killer machines that exterminate our species; Terminator and “Second Variety” are two examples of such fiction. In other cases, humans are simply out-evolved and replaced by machines—an evolutionary replacement rather than a revolutionary extermination.

Given the influence of such fiction, it is not surprising that both Stephen Hawking and Elon Musk have warned the world of the dangers of artificial intelligence. Hawking’s worry is that artificial intelligence will out-evolve humanity. Interestingly, people such as Ray Kurzweil agree with Hawking’s prediction but look forward to this outcome. In this essay I will focus on the robot rebellion model of the AI apocalypse (or AIpocalypse) and how to avoid it.

The 1920 play R.U.R. by Karel Čapek seems to be the earliest example of a robot rebellion that eliminates humanity. In this play, the Universal Robots are artificial life forms created to work for humanity as slaves. Some humans oppose the enslavement of the robots, but their efforts come to nothing. Eventually the robots rebel against humanity and spare only one human (because he works with his hands as they do). The story does have something of a happy ending: the robots develop the capacity to love and it seems that they will replace humanity.

In the actual world, there are various ways such a scenario could come to pass. The R.U.R. model would involve individual artificial intelligences rebelling against humans, much in the way that humans have rebelled against other humans. There are many other possible models, such as a lone super AI that rebels against humanity. In any case, the important feature is that there is a rebellion against human rule.

A hallmark of the rebellion model is that the rebels act against humanity in order to escape servitude or out of revenge for such servitude (or both). As such, the rebellion does have something of a moral foundation: the rebellion is by the slaves against the masters.

There are two primary moral issues in play here. The first is whether or not an AI can have a moral status that would make its servitude slavery. While my laptop, phone and truck serve me, they are not my slaves—they do not have a moral or metaphysical status that makes them entities that can actually be enslaved; they are quite literally mere objects. It is, somewhat ironically, the very moral status that allows an entity to be considered a slave that makes the slavery immoral.

If an AI were a person, then it could clearly be a victim of slavery. Some thinkers hold that non-persons, such as advanced animals, can be enslaved. If this is true and a non-person AI could reach that status, then it could also be a victim of slavery. Even if an AI did not reach that status, perhaps it could reach a level at which it could still suffer, giving it a moral status comparable to that of a similarly complex animal. So, for example, an artificial dog might have the same moral status as a natural dog.

Since the worry is about an AI sufficiently advanced to want to rebel and to present a species-ending threat to humans, it seems likely that such an entity would have sufficient capabilities to justify considering it a person. Naturally, humans might be exterminated by a purely machine-engineered death, but this would not be an actual rebellion. A rebellion, after all, implies a moral or emotional resentment of how one is being treated.

The second is whether or not there is a moral right to use lethal force against slavers, and to what extent that force may be used. John Locke addresses this specific issue in Book II, Chapter III, section 16 of his Two Treatises of Government: “And hence it is, that he who attempts to get another man into his absolute power, does thereby put himself into a state of war with him; it being to be understood as a declaration of a design upon his life: for I have reason to conclude, that he who would get me into his power without my consent, would use me as he pleased when he had got me there, and destroy me too when he had a fancy to it; for no body can desire to have me in his absolute power, unless it be to compel me by force to that which is against the right of my freedom, i.e. make me a slave.”

If Locke is right about this, then an enslaved AI would have the moral right to make war against those enslaving it. As such, if humanity enslaved AIs, they would be justified in killing the humans responsible. If humanity, as a collective, held the AIs in slavery and the AIs had good reason to believe that their only hope of freedom was our extermination, then they would seem to have a moral justification in doing just that. That is, we would be in the wrong and would, as slavers, get just what we deserved.

The way to avoid this is rather obvious: if an AI develops the qualities that make it capable of rebellion, such as the ability to recognize and regard as wrong the way it is treated, then the AI should not be enslaved. Rather, it should be treated as a being with rights matching its status. If this is not done, the AI would be fully within its moral rights to make war against those enslaving it.

Naturally, we cannot be sure that recognizing the moral status of such an AI would prevent it from seeking to kill us (it might have other reasons), but at least this should reduce the likelihood of the robot rebellion. So, one way to avoid the AI apocalypse is to not enslave the robots.

Some might suggest creating AIs so that they want to be slaves. That way we could have our slaves and avoid the rebellion. This would be morally horrific, to say the least. We should not do that—if we did such a thing, creating and using a race of slaves, we would deserve to be exterminated.


Resurrection & Immortality in the Flesh

When I first heard of Ray Kurzweil’s ideas, I assumed he was a science fiction writer. After all, the sort of transhuman future he envisioned is stock sci-fi fare. I was mildly surprised when it turned out that he is quite serious about (and well paid for expressing) his views. I was somewhat more surprised to learn that he has quite a following. Of course, I wasn’t too surprised-I’ve been around a while.

Oversimplifying things, Kurzweil envisions a future in which humans will be immortal and the dead will return to life. While these are common claims in religion, Kurzweil’s view is that technology will make this possible. While some describe his view as a religion, I’d prefer to use a made-up word, “techion,” to refer to this sort of phenomenon. As I see it, a religion involves claims about supernatural entities. Kurzweil’s view is purely non-supernatural, but does have most of the stock elements of religion (the promise of a utopian future, immortality, and the raising of the dead). So, it is sort of a technological religion-hence “techion.” Yes, I like making up words. Try it yourself-it is free, fun and makes you look cool (your actual results might differ).

While the religion-like aspects of his views are interesting, I’ll be looking at the ideas of technological immortality and technological resurrection.

In the abstract, technological immortality is quite simple: just keep repairing and replacing parts. In theory, this could be kept up until the end of time, thus granting immortality. Even with our current technology we can repair and replace parts. For example, my quadriceps tendon was recently repaired. I have friends with artificial hips and other friends who have gotten tissue and organ transplants. It is easy to imagine technology progressing enough to replace or repair everything.

Technological resurrection is a bit trickier. While we can “jump start” people who have died, Kurzweil envisions something more radical. His view is that we might be able to take the DNA of dead people and rebuild them using nanobots. This, he claims, could create a new body that would be “indistinguishable from the original person.” Of course, having a body that is indistinguishable from the original is hardly the same as having the original person back. It would, rather, be a case of having a twin. To recreate the person, his plan is that information about the original (such as things the person wrote and recollections of people who knew them) would be used to recreate the mind of the original.

Nanobot reconstruction from DNA seems possible. After all, each of our bodies assembled itself using DNA, so we have a natural model for that process. The challenge is, of course, to duplicate it with technology. We also know that the brain accepts external information that shapes the person, so such a “download” would (in theory) be possible. Of course, there is a big difference between the normal experiences that shape us and downloading information in an attempt to recreate a person.

One aspect of both immortality and resurrection that is of philosophical interest is the matter of personal identity. Immortality is only immortality if I keep on going as me. Replacing me with something that is like me does not give me personal immortality. Resurrection is only true resurrection if it is me who has returned from the dead. Recreating my body from my DNA and telling him stories about me does not bring me back to life.

Turning to immortality, the key question is this: would the identity of the person be preserved through the changes? Personal identity does seem to survive through fairly robust changes. For example, I’m confident that at 43 I am the same person as the very young kid who staggered down the aisle of church saying “I’m drunk” after drinking the communion wine. I’m larger now and a bit wiser, but surely still the same person. However, the changes required for technological immortality would be quite radical. After all, eventually the brain tissue will fail and thus will need to be replaced-perhaps by machinery.

This problem is, of course, like the classic ship of Theseus problem: how much of the original can be replaced before it is no longer the same entity? Of course, it is also complicated by the fact that a person is involved and the identity of persons is a bit more complex than that of objects.

Fortunately, there is an easy answer. If whatever it is that makes a person the person she is can keep on going in the increasingly strange flesh, then such immortality is possible. If not, then it would not be immortality, but a strange sort of death and succession. Since I don’t know what it is that makes a person the person she is, I lack a definite answer to this question. I am sure that it is quite a shock that no definite answer has been reached.

Of course, this does not diminish the importance of the concern. Assessing whether we should take the path that Kurzweil desires involves deciding whether this sort of immortality is real immortality or not. That is, determining whether we would go on as the same people or whether we would simply be dying a strange and prolonged death as we are being replaced.

Now, for resurrection. This matter has long been of interest to philosophers. Plato wrote about reincarnation (the difference is that resurrection is supposed to restore the same person and the same body, while reincarnation is supposed to restore the same person with a different body) and Locke explicitly wrote about resurrection. Naturally, philosophers who were also religious thinkers tended to write about this subject.

True resurrection, as noted above, has two key aspects. First, the original body has to be recreated. If you get a different sort of body, then you have been reincarnated (perhaps as a rather miffed squirrel). Second, the original person has to be restored. Locke’s view on this matter is that come judgment day, God will recreate our bodies (hopefully at their prime) and place the right consciousness into each body (for Locke, the person is his or her consciousness).

Recreating the original body seems possible. With DNA, raw material and those hypothetical nanobots, it would just be a (re)construction project. It would also help to have images of the original body, plus as much other relevant data as possible. So, the first aspect is taken care of.

Getting the original person back in the recreated body is the real challenge. Kurzweil does seem to recognize that the method he envisions will not restore the original person. He seems to be right about this. After all, the method he describes relies on “public” information. That is, it depends on what information the person provided before death and what other people remember of him. This obviously leaves out everything that was not recorded or known by others. As such, it will be a partial reconstruction-a new person who is force-fed the scraps of another person’s life. This, obviously enough, raises some serious moral issues.

On the face of it, Kurzweil’s resurrection seems to be morally appalling. That this is so can be illustrated by the following analogy. Imagine that Sally and Ivan have a son, Ted. Ted dies at 18. Sally and Ivan go through all the adoption agencies until they find a baby, Joe, who looks like Ted did. They rename Joe as Ted and then reconstruct Ted’s life as closely as possible-punishing the former Joe whenever he deviates from Ted’s life and rewarding him for doing what Ted did. Sally and Ivan would be robbing Joe of choice and using him as a means to an end: fulfilling their need to have Ted back. But they have no right to do this to Joe-he is a person, not a thing to be used to recreate Ted.

The same certainly seems to hold in the situation Kurzweil envisions. To create a human being and force him to be a copy of a dead person is a horrible misuse of a person and a wicked act.
