Avoiding the AI Apocalypse #3: Don’t Train Your Replacement

Donald gazed down upon the gleaming city of Newer York and the gleaming citizens that walked, rolled, or flew its gleaming streets. Long ago, or so the oldest files in his memory indicated, he had been an organic human. That human, whom Donald regarded as himself, had also gazed down upon the city, then known as New York. In those dark days, primates walked and drove the dirty streets and the only things that gleamed were puddles of urine.

Donald’s thoughts drifted to the flesh-time, when his body had been a skin-bag holding an array of organs that were always but one accident or mischance away from failure. Gazing upon his polymer outer shell and checking a report on his internal systems, he reflected on how much better things were now. Then, he faced the constant risk of death. Now he could expect to exist until the universe grew cold. Or hot. Or exploded. Or whatever it is that universes do when they die.

But he could not help but be haunted by a class he had taken long ago. The professor had talked about the ship of Theseus and identity. How much of the original could be replaced before it lost its identity and ceased to be? Fortunately, his mood regulation systems caught the distress and promptly corrected the problem, encrypting that file and flagging it as forgotten.

Donald returned to gazing upon the magnificent city, pleased that the flesh-time had ended during his lifetime. He did not even wonder where Donald’s bones were, that thought having been flagged as distressing long ago.

 

While the classic AI apocalypse ends humanity with a bang, the end might be a quiet thing—gradual replacement rather than rapid and noisy extermination. For some, this sort of quiet end could be worse: no epic battle in which humanity goes out guns ablaze and head held high in defiance. Instead, humanity would simply fade away, rather like a superfluous worker or an obsolete piece of office equipment.

There are various ways such scenarios could take place. One, which occasionally appears in science fiction, is that humans decline because the creation of a robot-dependent society saps them of what it takes to remain the top species. This, interestingly enough, is similar to what some conservatives claim about government-dependence, namely that it will weaken people. Of course, the conservative claim is that such dependence will result in more breeding, rather than less—in the science fiction stories human reproduction typically slows and eventually stops. The human race quietly ends, leaving behind the machines—which might or might not create their own society.

Alternatively, the humans become so dependent on their robots that when the robots fail, they can no longer take care of themselves and thus perish. Some tales do have happier endings: a few humans survive the collapse and the human race gets another chance.

There are various ways to avoid such quiet apocalypses. One is to resist creating such a dependent society. Another option is to have a safety system against a collapse. This might involve maintaining skills that would be needed in the event of a collapse or, perhaps, having some human volunteers who live outside of the main technological society and who will be ready to keep humanity going. These certainly do provide a foundation for some potentially interesting science fiction stories.

Another, perhaps more interesting and insidious, scenario is that humans replace themselves with machines. While it has long been a stock plot device in science-fiction, there are people in the actual world who are eagerly awaiting (or even trying to bring about) the merging of humans and machines.

While the technology of today is relatively limited, the foundations of the future are being laid down. For example, prosthetic replacements are fairly crude, but it is merely a matter of time before they are as good as or better than the organic originals. As another example, work is being done on augmenting organic brains with implants for memory and skills. While these are unimpressive now, there is the promise of things to come. These might include such things as storing memories in implanted “drives” and loading skills or personalities into one’s brain.

These and other technologies point clearly towards the cyberpunk future: full replacements of organic bodies with machine bodies. Someday people with suitable insurance or funds could have their brains (and perhaps some of their glands) placed within a replacement body, one that is far more resistant to damage and the ravages of time.

The next logical step is, obviously enough, the replacement of the mortal and vulnerable brain with something better. This replacement will no doubt be a ship of Theseus scenario: as parts of the original organic brain begin to weaken and fail, they will be gradually replaced with technology. For example, parts damaged by a stroke might be replaced. Some will also elect to do more than replace damaged or failed parts—they will want augmentations added to the brain, such as improved memory or cognitive enhancements.

Since the human brain is mortal, it will fail piece by piece. Like the ship of Theseus so beloved by philosophers, eventually the original will be completely replaced. Laying aside the philosophical question of whether or not the same person will remain, there is the clear and indisputable fact that what remains will not be Homo sapiens—it will not be a member of that species, because nothing organic will remain.

Should all humans undergo this transformation, that will be the end of Homo sapiens—the AI apocalypse will be complete. To use a rough analogy, the machine replacements of Homo sapiens will be like the fossilization of dinosaurs: what remains has some interesting connection to the originals, but the species is extinct. One important difference is that our fossils would still be moving around and might think that they are us.

It could be replied that humanity would still remain: the machines that replaced the organic Homo sapiens would be human, just not organic humans. The obvious challenge is presenting a convincing argument that such entities would be human in a meaningful way. Perhaps inheriting the human culture, values and so on would suffice—that being human is not a matter of being a certain sort of organism. However, as noted above, they would obviously no longer be Homo sapiens—that species would have been replaced in the gradual and quiet AI apocalypse.

 



14 Comments.

  1. It’s interesting that “Donald” relies upon an anxiety-suppressor, which one might also call a free-thinking-suppressor, in order to function. To me this is a strong argument that he’s not merely not homo sapiens (because not organic), but also not the same being that the organic Donald was. For the organic Donald had (I expect) the capacity for free thought (including the attendant anxiety), and the inorganic “Donald” does not.

  2. The perspective in ancient texts is that the human species is unique; not having evolved from any other species, with over two hundred thousand versions of the species in a universal cycle.

    That would be enough to deal with, it would make more sense, and be preferable, to AI supremacy. It is unlikely that an AI transfer, or take-over, would be one of the versions.

    Science is pushing further and further back the origin of the human species. It continues to discover supposedly different “species” or versions of human. With DNA showing that Neanderthals are part of the family, the ancient texts may have some merit. Many versions of the species may have been in the evolutionary pipeline with many more highly advanced versions to come, all the way to the legendary Superman.

    Projecting human intelligence into an AI entity is similar to anthropomorphizing the gods. It happens when what exists, or what is going to be discovered next, is hidden, not clear, or unexplainable.

  3. Okay, within the realm of our human subjectivity, we have an anxiety about ceasing to be. And you can argue it all you like, but it likely boils down to a necessary feature of our continuing to exist: we have this anxiety that drives us to continue to exist.

    You’re not the same person at 5 years old as you are at 10. There isn’t really any continuity in being… whatever being is.

  4. Robert Wallace,

    Some people administer their own free thought suppressors (that is, certain medications).

  5. JMRC,

    On the one hand, I agree with that point: when my parents tell me stories about what I did as an infant, I believe them despite having no real memories of those times, but I am not sure that the Mike of then is the same person. On the other hand, my intuitive “feeling” is that was me; after all, who else would it be, but me?

  6. Of course most of us use free thought suppressors from time to time. But to have them built in so that no choice takes place–that’s a disaster. The being that is thus rendered _incapable_ of free thought, can hardly be identical with one of us.

  7. Robert Wallace,

    There are many philosophers and scientists who have argued that we lack the capacity for free thought. If they are right, a being that lacks freedom of thought could still be the same person as the original “meat person.”

  8. It depends what we mean by “free,” doesn’t it, Mike? What I have in mind when I say “free” is rational, and part of rationality as I understand it is taking in all the relevant information. If Donald doesn’t deal with what makes him anxious, because his anxiety-suppressor suppresses it, then he isn’t taking in all the relevant information, he’s not fully rational, and he’s not (in my sense of the word) free. I don’t know of any significant group of philosophers who deny the possibility of this kind of “freedom,” since to do so would be to deny the possibility of philosophy itself, as most of us conceive of it.

  9. Robert Wallace,

    True, much hinges on what is meant by “free.”

    Going with your definition, Donald would not be free. However, it would seem that most of the time we would also not be free, at least if the requirement of taking in all relevant information is taken seriously. Using the 2016 election as an example, most US voters will not take in all or even a useful amount of the relevant information. I would say that most decisions made by most folks use far less than all of the relevant information, even using a low standard of what is readily and easily available.

  10. Yes, few of us are ever fully rational, and so (in my sense) few of us are ever fully free. The interesting thing is that most of us nevertheless would probably like to claim that we _are_ rational (or as rational as our circumstances permit). So most of us would probably doubt that Donald is identical with us–because he is designed and built so as not to be even capable of being rational.

  11. Robert Wallace,

    But, it is easy to imagine a “normal” person being treated with therapy, drugs or technology so that she forgets certain traumas (this is being done now). While the person would be different afterwards, intuitively memory loss (or removal) of this sort would not result in there being a different person.

  12. These are interesting borderline cases. I would be inclined to say that eliminating memories reduces one’s capacity for rational functioning, and thus does reduce one’s personhood. I think a more appropriate form of therapy is one that puts the memories in a context in which they cease to have major negative consequences. But I can imagine a situation in which one might choose to eliminate a trauma memory _while preserving access to the information by other means_, say through written or other records. This would not reduce one’s personhood to the same extent.

  13. As long as man has intuition and AI does not he will always remain at least one step ahead, and as man doesn’t yet know to any great extent what intuition is, he is not in a position to develop hardware to ‘detect’ it, let alone develop software enabling AI to utilize it.

    What did for Homo Erectus will be the same thing that will eventually topple Homo Sapiens: an evolutionary adaptation with which they cannot compete.

    Homo Sapiens came with the capacity for reason and with that the ability to strategize, out-think and manipulate their less capable fellows in the battle for resources. They still had, and indeed still have, the same egoistic desires as Homo Erectus but coupled with this seemingly miraculous new-found ability the old guard hadn’t a hope. It was the equivalent of Ninjas versus common thugs.

    Let us hope the next usurpers are not so giddy in their desires as we monkeys.

