Tag Archives: technology

Automation & Administration: An Immodest Proposal

It has been almost a law that technological advances create more jobs than they eliminate. This, however, appears to be changing. It is predicted that advances in and the deployment of automation and artificial intelligence will create nearly 15 million jobs by 2027. On the downside, it is also estimated that technological change will eliminate about 25 million jobs, for a net loss of roughly 10 million. Since the future is not yet now, the reality might be different—but it is generally wise to plan for the likely shape of things to come. As such, it is a good idea to consider how to address the likely loss of jobs.

One short-term approach is moving people into jobs that are just ahead of replacement. This is rather like running ahead of an inextinguishable fire in a burning building—it merely postpones the inevitable. A longer-term approach is to add to the building so that you can keep on running, as long as you can build faster than the fire can advance. This has been the usual approach to staying ahead of the fire of technology. An even better and rather obvious solution is to get out of the building and into one that will not catch on fire. Moving away from the metaphor, this would involve creating jobs that are technology-proof.

If technology cannot fully replicate (or exceed) human capabilities, then there could be some jobs that are technology-proof. To get a bit metaphysical, Descartes argued that merely physical systems would not be able to do all that an immaterial mind can do. For example, Descartes claimed that the ability to use true language required an immaterial mind—although he acknowledged that very impressive machines could be constructed that would have the appearance of thought. If he is right, then there could be a sort of metaphysical job security. Moving away from metaphysics, there could be limits on our technological abilities that preclude our being able to build our true replacements. But, if technology can build entities that can do all that we can do, then no job would be safe—something could be made to take that job from a human. To gamble on either our special nature or the limits of technology is rather risky, so it would make more sense to take a more dependable approach.

One approach is creating job preserves (like game preserves, only for humans)—that is, deciding to protect certain jobs from technological change. This approach is nothing new. According to some accounts, one reason that Hero of Alexandria’s steam engine was not utilized in the ancient world was that it would have displaced the slaves who provided the bulk of the labor. While this option does have the advantage of preserving jobs, there are some clear and obvious problems with creating such an economic preserve. As two examples, there are the practical matters of sustaining such jobs and of competing against other countries that are not engaged in such job protection.

Another approach is to intentionally create jobs that are not really needed and thus can be maintained even in the face of technological advancement. After all, if there is really no reason to have the job at all, there is no reason to replace it with a technological solution. While this might seem to be a stupid idea (and it is), it is not a new idea. There are numerous jobs that are not really needed and yet are still maintained. Some even pay extremely well. One general category of such jobs is administrative jobs. I will illustrate with my own area of experience: academics.

When I began my career in academics, the academy was already thick with administrators. However, many of them did things that were necessary, such as handling finances and organizing departments. As the years went on, I noticed that the academy was becoming infested with administrators. While this could be dismissed as mere anecdotal evidence on my part, it is supported by the data—the number of non-academic administrative and professional employees in academia has doubled over the past quarter century. This is, it must be noted, in the face of technological advance and automation, which should have reduced the number of such jobs.

These jobs take many forms. As one example, in place of the traditional single dean, a college will have multiple deans of various ranks and the corresponding supporting staff. As another example, assessment has transformed from an academic fad into a permanent parasite (or symbiote, in cases where the assessment is worthwhile) that has grown fat upon the academic body. There has also been a blight of various vice presidents of this and that, many of whom are linked to what some call “political correctness.” Despite being, at best, useless, these jobs continue to exist and are even added to. While a sane person might see this as a problem to be addressed, a person with a somewhat different perspective would be inspired to make an immodest proposal: why not apply this model across the whole economy? To be specific, a partial solution to the problem of technology eliminating jobs is to create new administrative positions for those who lose their jobs. For example, if construction jobs were lost to constructicons, then they could be replaced with such jobs as “vice president of constructicon assessment,” “constructicon resource officer,” “constructicon gender identity consultant” and supporting staff.

It might be objected that it would be wrong, foolish and wasteful to create such jobs merely to keep people employed as jobs are consumed by technology. The easy and obvious reply is that if useless jobs are going to flourish anyway, they might as well serve a better purpose.

 

 


Body Hacking III: Better than Human

While most of the current body hacking technology is merely gimmicky and theatrical, it does have potential. It is, for example, easy enough to imagine that the currently very dangerous night-vision eye drops could be made into a safe product, allowing people to hack their eyes for good or nefarious reasons. There is also the model of the cyberpunk future envisioned by such writers as William Gibson and games like Cyberpunk and Shadowrun. In such a future, people might body hack their way to being full cyborgs. In the nearer future, there might be such augmentations as memory backups for the brain, implanted phones, and even subdermal weapons. Such augmenting hacks do raise various moral issues that go beyond the basic ethics of self-modification. Fortunately, these ethical matters can be effectively addressed by the application of existing moral theories and principles.

Since the basic ethics of self-modification were addressed in the previous essay, this essay will focus solely on the ethical issue of augmentation through body hacking. This issue does, of course, stack with the other moral concerns.

In general, there seems to be nothing inherently wrong with the augmentation of the body through technology. The easy way to argue for this is to draw the obvious analogy to external augmentation: starting with sticks and rocks, humans augmented their natural capacities. If this is acceptable, then moving the augmentation under the skin should not open up a new moral world.

The easy and obvious objection is to contend that under the skin is a new moral world—that, for example, a smartphone carried in the pocket is one thing, while a smartphone embedded in the skull is quite another.

This objection does have merit: implanting the technology is morally significant. At the very least, there are moral concerns about potential health risks. However, this concern is about the medical aspects rather than about the augmentation itself, and it is the augmentation that is the focus of the moral discussion at hand. This is not to say that the health issues are not important—they are actually very important, but they fall under another moral issue.

If it is accepted that augmentation is, in general, morally acceptable, there are still legitimate concerns about specific types of augmentation and the context in which they are employed. Fortunately, there is already considerable moral discussion about these categories of augmentation.

One area in which augmentation is of considerable concern is in sports and games. Athletes have long engaged in body hacking—if the use of drugs can be considered body hacking. While those playing games like poker generally do not use enhancing drugs, they have attempted to make use of technology to cheat. While future body hacks might be more dramatic, they would seem to fall under the same principles that govern the use of augmenting substances and equipment in current sports. For example, an implanted device that stores extra blood to be added during the competition would be analogous to existing methods of blood doping. As another example, a poker or chess player might implant a computer that she can use to cheat at the game.

While specific body hacks will need to be addressed by the appropriate governing bodies of sports and games, the basic principle that cheating is morally unacceptable still applies. As such, the ethics of body hacking in sports and games is easy enough to handle in the general—the real challenge will be sorting out which hacks are cheating and which are acceptable. In any case, some interesting scandals can be expected.

The field of academics is also an area of concern. Since students are quite adept at using technology to cheat in school and on standardized tests, it must be expected that there will be efforts to cheat through body hacking. As with cheating in sports and games, the basic ethical framework is well-established: cheating is morally unacceptable in such contexts. As with sports and games, the challenge will be sorting out which hacks are considered cheating and which are not. If body hacking becomes mainstream, it can be expected that education and testing will need to change, as will what counts as cheating. To use an analogy, calculators are often allowed on tests, and thus the future might see implanted computers being allowed for certain tests. Testing of memory might also become pointless—if most people have implanted devices that can store data and link to the internet, memorizing things might cease to be a skill worth testing. This does, however, segue into the usual moral concerns about people losing abilities or becoming weaker due to technology. Since these are general concerns that have applied to everything from the abacus to the automobile, I will not address this issue here.

There is also the broad realm composed of all the other areas of life that do not generally have specific moral rules about cheating through augmentation. These include such areas as business and dating. While there are moral rules about certain forms of cheating, the likely forms of body hacking would not seem to be considered cheating in such areas, though they might be regarded as providing an unfair advantage—especially in cases in which the wealthy classes are able to gain even more advantages over the less well-off classes.

As an example, a company with considerable resources might use body hacking to upgrade its employees so they can be more effective, thus providing a competitive edge over lesser companies. While it seems likely that certain augmentations will be regarded as unfair enough to require restriction, body hacking would merely change the means and not the underlying game. That is, the well-off always have considerable advantages over the less well-off. Body hacking would just be a new tool to be used in the competition. Hence, existing ethical principles would apply here as well. Or not be applied—as is so often the case when vast sums of money are on the line.

So, while body hacking for augmentation will require some new applications of existing moral theories and principles, it does not make a significant change in the moral landscape. Like almost all changes in technology, it will merely provide new ways of doing old things. Like cheating in school or sports. Or life.

 

 


Drone Ethics is Easy

AR Drone part (Photo credit: Wikipedia)

When a new technology emerges it is not uncommon for people to claim that the technology is outpacing ethics and law. Because of the nature of law (at least in countries like the United States) it is very easy for technology to outpace the law. However, it is rather difficult for technology to truly outpace ethics.

One reason for this is that any adequate ethical theory (that is, a theory that meets the basic requirements such as possessing prescriptivity, consistency, coherence and so on) will have the quality of expandability. That is, the theory can be applied to what is new, be that technology, circumstances or something else. An ethical (or moral) theory that lacks the capacity of expandability would, obviously enough, become useless immediately and thus would not be much of a theory.

It is, however, worth considering the possibility that a new technology could “break” an ethical theory by being such that the theory could not expand to cover the technology. However, this would show that the theory was inadequate rather than showing that the technology outpaced ethics.

Another reason that technology would have a hard time outpacing ethics is that an ethical argument by analogy can be applied to a new technology. That is, if the technology is like something that already exists and has been discussed in the context of ethics, the ethical discussion of the pre-existing thing can be applied to the new technology. This is, obviously enough, analogous to using ethical analogies to apply ethics to different specific situations (such as a specific act of cheating in a relationship).

Naturally, if a new technology is absolutely unlike anything else in human experience (even fiction), then the method of analogy would fail absolutely. However, it seems somewhat unlikely that such a technology could emerge. But, I like science fiction (and fantasy) and hence I am willing to entertain the possibility of that which is absolutely new. However, it would still seem that ethics could handle it—but perhaps something absolutely new would break all existing ethical theories, showing that they are all inadequate.

While a single example does not provide much in the way of proof, it can be used to illustrate. As such, I will use the matter of “personal” drones to illustrate how ethics is not outpaced by technology.

While remote-controlled and automated devices have been around a long time, the expansion of technology has created what some might regard as something new for ethics: drones, driverless cars, and so on. However, drone ethics is easy. By this I do not mean that ethics is easy; rather, applying ethics to new technology (such as drones) is not as hard as some might claim. Naturally, actually doing ethics is itself quite hard—but this applies to very old problems (the ethics of war) and very “new” problems (the ethics of killer robots in war).

Getting back to the example, a personal drone is the sort of drone that a typical civilian can own and operate—they tend to be much smaller, lower priced and easier to use relative to government drones. In many ways, these drones are slightly advanced versions of the remote control planes that are regarded as expensive toys. The drones of this sort that seem to most concern people are those that have cameras and can hover—perhaps outside a bedroom window.

Two of the areas of concern regarding such drones are safety and privacy. In terms of safety, the worry is that drones can collide with people (or other vehicles, such as manned aircraft) and injure them. Ethically, this falls under doing harm to people, be it with a knife, gun or drone. While a drone flies about, the ethics that have been used to handle flying model aircraft, remote-control cars and the like can easily be applied here. So, this aspect of drones has hardly outpaced ethics.

Privacy can also be handled. Simplifying things for the sake of a brief discussion, drones essentially allow a person to (potentially) violate privacy in the usual two “visual” modes. One is to intrude into private property to violate a person’s privacy. In the case of the “old” way, a person can put a ladder against someone’s house and climb up to peek under the window shade and into that person’s bedroom or bathroom. In the “new” way, a person can fly a drone up to the window and peek in using a camera. While the person is not physically present in the case of the drone, his “agent” is present and is trespassing. Whether a person is using a ladder or a drone to gain access to the window does not change the ethics of the situation in regards to the peeking, assuming that people have a right to control access to their property.

A second way is to peek into “private space” from “public space.” In the case of the “old way” a person could stand on the public sidewalk and look into other peoples’ windows or yards—or use binoculars to do so. In the “new” way, a person can deploy his agent (the drone) in public space in order to do the same sort of thing.

One potential difference between the two situations is that a drone can fly and thus can get viewing angles that a person on the ground (or even with a ladder) could not get. For example, a drone might be in the airspace far above a person’s backyard, sending back images of the person sunbathing in the nude behind her very tall fence on her very large estate. However, this is not a new situation—paparazzi have used helicopters to get shots of celebrities, and the ethics are the same. As such, ethics has not been outpaced by the drones in this regard. This is not to say that the matter is solved; people are still debating the ethics of this sort of “spying.” It is to say, however, that this is not a case where technology has outpaced ethics.

What is mainly different about the drones is that they are now affordable and easy to use—so whereas only certain people could afford to hire a helicopter to get photos of celebrities, now camera-equipped drones are easily in reach of the hobbyist. So, it is not that the drone provides new capabilities that worries people—it is that it puts these capabilities in the hands of the many.

 


Our Father vs Big Brother

The tape of Mitt Romney speaking to his cohorts in what could be described as a proverbial back room seems to have had a lasting effect – we’ll see if it turns out to make all the difference, but it certainly brought into focus the image of Romney as an oblivious aristocrat.

But even more interesting to me than the specifics of this candidate’s attitudes was the evidence of a change in certain social and technological expectations. Many people responded to Romney’s comments by shaking their heads at the fact that he would say those things out loud, that he would speak so candidly. Sure, he was at a fundraiser with other super-rich political puppeteers, but he must have known the information could get out…

Of course, a couple decades ago, it probably would not have. Even if a member of the staff could afford a hidden camera, it would have taken a lot of planning and setting up to get the material, and once it was on tape it would have taken a lot of work to get it nationally aired. It may not seem like that’s that much commitment, but it’s definitely active and organized: hide tiny, expensive specialty technology beforehand, then transfer the incriminating material to a standard medium, and try to get a national news outlet’s attention without being dismissed as some kind of conspirator (in fact, many journalists back then might have rejected the tape as unethical just because Romney clearly didn’t realize he was being taped).

Today, a person does not even have to really care about the consequences – sometimes people will record things just because they can. In a room with a famous person and some number of non-guests with iPhones, it is not at all surprising that someone recorded Romney speaking and then put a portion of it on YouTube—there did not even need to be intent behind it. The ease of catching a person in the act has increased so monumentally that the very idea of a backroom deal is in trouble.* Anyone can tape the conversation and show it to a potential audience of millions, and they don’t even need to dislike you or want to cause harm. It’s just information sharing—the connotations or potential impact of the information are not always considered (this happens on Facebook all the time: a photo posted in fun in one context is evidence of a promise broken in another, for instance).

The idea that we are losing privacy, and even losing the desire for privacy, has been argued about since technology and the internet especially first began allowing for these new methods of disclosure. An angle I want to focus on is the concurrence this has with a rise in atheism. There are plenty of other reasons that the idea of God is not as popular as it once was, and technology and the internet can contribute to the phenomenon in other ways. But there’s a social, pragmatic level at which God is becoming obsolete that could be a factor.

One of the classic reasons to have a concept of God from society’s point of view is the same as a reason to have Santa: “he knows when you’ve been bad or good, so be good for goodness’ sake.” From an intellectual standpoint this may not be convincing – Plato, for instance, attempts to show why we can’t use God as a referee when discussing the question of ethics in The Republic. The story of the Ring of Gyges, a ring which allows its wearer to become invisible and thus get away with any sort of immoral behavior she chooses with no repercussions, leads to the argument that even if the wearer is invisible, surely the Gods still know and can still judge. The original argument illustrated by the story of the ring is that people only act ethically when they are being watched, and this comeback says, well, you are always being watched by God so the point is moot. God serves as an external conscience.
But in The Republic, this idea is debunked—God is unreliable, and can be appeased by gifts or pleas for forgiveness. If you do something wrong, you can always get back on His good side. In other words, your conscience may know you were unethical this once, but do something extra-nice next week, and you’ll feel it’s been evened out.

In that way, Big Brother is more effective. If a person wants to steal something in a store, but thinks “No, God will know what I’ve done,” they might stop themselves. But they may also imagine that they can bargain with the big guy and promise to never do something like this ever again. On the other hand, if they believe there is a camera coming at them in every direction it will be harder to make that kind of deal. Our increasingly Panoptic forms of life make it possible to see this particular utility of God being overshadowed, since people with videos are a lot more direct and aggressive.

I am not suggesting that this would consciously affect beliefs, but if the fear of moral oversight were to shift realistically toward peers, one of God’s greatest strengths would be made irrelevant. Sure, no video can see into your heart; but if it becomes widely expected that everything that happens in a public or semi-public space could be broadcast, that knowledge could play the part of an external conscience just as well as religion.

It’s true that God was famously described as dead over a century ago by Nietzsche, and he too was concerned with moral issues. However, his focus was on the lack of cohesion or agreement in beliefs, whereas I am addressing the much more mundane but perhaps more convincing issue of the cohesion of facts. That is, Nietzsche thought the concept of God was coextensive with the idea of absolute truth, and as that became untenable, religion would die. It’s arguable to what degree that happened, but the issue here is not what is right, but whether the right thing has to be done. God as an externalized conscience becomes less effective when society is doing the job in a more obvious and graspable way (which doesn’t require that God isn’t real, just that His methods are less convincing).

It could easily be coincidence that secularism is on the rise at the same time as surveillance and general recording become the norm, but I’m suggesting that it is part of a larger cultural shift, and that the notion of God just fits less easily into a world where we can already picture a very ordinary kind of “all-seeing, all-knowing” presence. What was once supernatural is now merely artificial.

*I wouldn’t want to imply that therefore people will start being ethical, however. There are always adaptations and ways around – the idea is just that a fear of being seen is becoming much more real.

Resurrection & Immortality in the Flesh

When I first heard of Ray Kurzweil’s ideas, I assumed he was a science fiction writer. After all, the sort of transhuman future he envisioned is stock sci-fi fare. I was mildly surprised when it turned out that he is quite serious about (and well paid for expressing) his views. I was somewhat more surprised to learn that he has quite a following. Of course, I wasn’t too surprised; I’ve been around a while.

Oversimplifying things, Kurzweil envisions a future in which humans will be immortal and the dead will return to life. While these are common claims in religion, Kurzweil’s view is that technology will make this possible. While some describe his view as a religion, I’d prefer to use a made-up word, “techion,” to refer to this sort of phenomenon. As I see it, a religion involves claims about supernatural entities. Kurzweil’s view is purely non-supernatural, but it does have most of the stock elements of religion (the promise of a utopian future, immortality, and the raising of the dead). So, it is sort of a technological religion; hence “techion.” Yes, I like making up words. Try it yourself; it is free, fun and makes you look cool (your actual results might differ).

While the religion-like aspects of his views are interesting, I’ll be looking at the ideas of technological immortality and technological resurrection.

In the abstract, technological immortality is quite simple: just keep repairing and replacing parts. In theory, this could be kept up until the end of time, thus granting immortality. Even with our current technology we can repair and replace parts. For example, my quadriceps tendon was recently repaired. I have friends with artificial hips and other friends who have gotten tissue and organ transplants. It is easy to imagine technology progressing enough to replace or repair everything.

Technological resurrection is a bit trickier. While we can “jump start” people who have died, Kurzweil envisions something more radical. His view is that we might be able to take the DNA of dead people and rebuild them using nanobots. This, he claims, could create a new body that would be “indistinguishable from the original person.” Of course, having a body that is indistinguishable from the original is hardly the same as having the original person back. It would, rather, be a case of having a twin. To recreate the person, his plan is that information about the original (such as things the person wrote and recollections of people who knew them) would be used to recreate the mind of the original.

Nanobot reconstruction from DNA seems possible. After all, each of our bodies assembled itself using DNA, so we have a natural model for that process. The challenge is, of course, to duplicate it with technology. We also know that the brain accepts external information that shapes the person, so such a “download” would (in theory) be possible. Of course, there is a big difference between the normal experiences that shape us and downloading information in an attempt to recreate a person.

One aspect of both immortality and resurrection that is of philosophical interest is the matter of personal identity. Immortality is only immortality if I keep on going as me. Replacing me with something that is like me does not give me personal immortality. Resurrection is only true resurrection if it is me who has returned from the dead. Recreating my body from my DNA and telling him stories about me does not bring me back to life.

Turning to immortality, the key question is this: would the identity of the person be preserved through the changes? Personal identity does seem to survive through fairly robust changes. For example, I’m confident that at 43 I am the same person as the very young kid who staggered down the aisle of the church saying “I’m drunk” after drinking the communion wine. I’m larger now and a bit wiser, but surely still the same person. However, the changes required for technological immortality would be quite radical. After all, eventually the brain tissue will fail and thus will need to be replaced, perhaps by machinery.

This problem is, of course, like the classic ship of Theseus problem: how much of the original can be replaced before it is no longer the same entity? Of course, it is also complicated by the fact that a person is involved and the identity of persons is a bit more complex than that of objects.

Fortunately, there is an easy answer. If whatever it is that makes a person the person she is can keep on going in the increasingly strange flesh, then such immortality is possible. If not, then it would not be immortality, but a strange sort of death and succession. Since I don’t know what it is that makes a person the person she is, I lack a definite answer to this question. I am sure that it is quite a shock that no definite answer has been reached.

Of course, this does not diminish the importance of the concern. Assessing whether we should take the path that Kurzweil desires involves deciding whether this sort of immortality is real immortality or not. That is, determining whether we would go on as the same people or whether we would simply be dying a strange and prolonged death as we are being replaced.

Now, for resurrection. This matter has long been of interest to philosophers. Plato wrote about reincarnation (the difference is that resurrection is supposed to restore the same person and the same body, while reincarnation is supposed to restore the same person with a different body) and Locke explicitly wrote about resurrection. Naturally, philosophers who were also religious thinkers tended to write about this subject.

True resurrection, as noted above, has two key aspects. First, the original body has to be recreated. If you get a different sort of body, then you have been reincarnated (perhaps as a rather miffed squirrel). Second, the original person has to be restored. Locke’s view on this matter is that come judgment day, God will recreate our bodies (hopefully at their prime) and place the right consciousness into each body (for Locke, the person is his or her consciousness).

Recreating the original body seems possible. With DNA, raw material and those hypothetical nanobots, it would just be a (re)construction project. It would also help to have images of the original body, plus as much other relevant data as possible. So, the first aspect is taken care of.

Getting the original person back in the recreated body is the real challenge. Kurzweil does seem to clearly recognize that the method he envisions will not restore the original person. He seems to be right about this. After all, the method he describes relies on “public” information. That is, it depends on what information the person provided before death and what other people remember of him. This obviously leaves out everything that was not recorded or known by others. As such, it will be a partial reconstruction: a new person who is force-fed the scraps of another person’s life. This, obviously enough, raises some serious moral issues.

On the face of it, Kurzweil’s resurrection seems to be morally appalling. That this is so can be illustrated by the following analogy. Imagine that Sally and Ivan have a son, Ted. Ted dies at 18. Sally and Ivan go through all the adoption agencies until they find a baby, Joe, who looks like Ted did. They rename Joe as Ted and then reconstruct Ted’s life as closely as possible, punishing the former Joe whenever he deviates from Ted’s life and rewarding him for doing what Ted did. Sally and Ivan would be robbing Joe of choice and using him as a means to an end: fulfilling their need to have Ted back. But they have no right to do this to Joe; he is a person, not a thing to be used to recreate Ted.

The same certainly seems to hold in the situation Kurzweil envisions. To create a human being and force him to be a copy of a dead person is a horrible misuse of a person and a wicked act.


Umbrage & The Web

Jonathan Alter of Newsweek recently wrote a column on umbrage and the web. While I agree with some of his claims, the article does require a response. As such, I will reply to his main points and offer both commentary and criticism.

Alter begins with a common theme: the umbrage that is present on the web. As Alter notes, the web provides an anonymous vehicle for lies, crudeness and degradation. Of course, the use of the written (or typed) word as a vehicle of umbrage is nothing new. While I am not a professional historian, as a philosophy professor I research the times and backgrounds of many philosophers. Based on what I have learned over the years, I can assure you that umbrage has been with humanity since we started writing things down. Interestingly, after I read Alter’s article this morning, I saw a show on the History Channel about two rival Chinese gangs who wrote slurs against each other in American newspapers during the 1800s. I later read an article in the June 2008 Smithsonian about Darwin (Richard Conniff, “On the Origin of a Theory”, 86-93). The article noted some of the written sniping between various people regarding the concept of evolution. Before Darwin published his work, Robert Chambers wrote Vestiges of the Natural History of Creation in 1845. One geologist replied to the work by expressing his desire to stamp “with an iron heel upon the head of the filthy abortion, and put an end to its crawlings” (page 90). That is an eloquent bit of umbrage, every bit as venomous as the comments inflicted on the web today. Of course, it does not quite match the concise wit of “boitch u r teh suckz.”

If one turns to politics, examples of venom throughout history are far too numerous to list. For those who wish to search for examples, I suggest beginning with political cartoons from the 1700s and 1800s. You will find that the poison pens of old crafted many venomous cartoons. Other excellent sources are the various anonymous political tracts from the same time period. As such, umbrage and venom in print are nothing new.

Like Alter, I believe that the umbrage and venom are negative and undesirable. Such venom adds nothing to the quality of discussions and simply serves to inflame emotions to no good end. It also encourages intellectual sloppiness because people feel that they have made an adequate reply when they have merely vented their spleens (to use the old phrase).

Alter next turns to a matter of significant concern: while bloggers offer a great deal of commentary, they rarely provide people with news in the true sense. While some blogs do post the news, it is (as Alter points out) generally taken from some traditional media source. Newspapers and other traditional media sources, as he notes, are currently laying off reporters due to financial problems. This means that there will be less original investigation and reporting. Fortunately, some bloggers are stepping in and doing their own investigations. I suspect that this might lead to the more substantial blogging sites gradually stepping into the openings created by the decline of traditional print media. Of course, there is the obvious question of whether a web-based organization can afford to do robust investigation and reporting. In principle, however, there seems to be no reason why they cannot partially replace traditional print media.

A third point made by Alter is that print media is moving towards the web’s style of writing. To be specific, there is a push towards short articles like those in blogs. Presumably this is to match the alleged shorter attention span of the modern audience. I do agree with Alter that there can be a negative side to taking this approach. While a short piece can be fine, there is still a clear need for depth and details and this requires more than a blog entry sized block of text. As you can see from most of my own blogs, I tend to go on at considerable length. Hence, it is hardly shocking that I would support him in this matter.

A fourth point that Alter makes is the very common criticism that people exploit the anonymity of the web to launch attacks and spew venom. This is, of course, a concern. However, this is nothing new. History is full of examples of anonymous writings that are quite critical and venom filled. The web merely makes it easier to make such works public and to avoid being identified. After all, if I have to print and distribute an anonymous tract, there will be a fairly clear trail leading back to me. But, on the web I can easily make use of a free service that ensures my identity will remain unknown by making my posting effectively untraceable.

As Alter points out, the “web culture” tolerates anonymity. However, many writers do identify themselves and people are often quite critical of those who hide behind anonymity when they spew forth venom. While there can be good reasons to hide one’s identity (such as fear of reprisals from oppressive governments), most people lack a legitimate reason to remain hidden. My view is that if someone believes what she is typing, then she should have enough courage to actually claim her own words. There is also the matter of courtesy. Anonymous posting is like talking to people while wearing a mask. That is a bit rude. Unless, of course, you happen to be a superhero.

His fifth point is that people often prefer rumors to facts. As he points out, some people believe the emails about Obama being a Muslim and similar such things. What is new here is not that people often prefer rumors, but the delivery mechanism of the rumors. In the past, people had to rely on newspapers, gossip, and public broadsheets in order to learn of rumors. Today, rumors can be sent via email. As such, we have the same sort of rumors using a different medium.

Since I teach critical thinking, I am well aware that people prefer a rumor that matches their biases over truth that goes against them. I am also well aware that people generally prefer something dirty, juicy, or titillating over dull facts. Hence, the appeal of rumors is hardly surprising. Obviously, people should have better rumor filters so as to avoid believing false things (or even true things on the basis of inadequate evidence). The internet has just changed the medium and not the basic problem: most people are poor critical thinkers. Fixing this requires what philosophers have been arguing for since before Socrates: people need to learn to think in a critical manner.

Alter’s sixth point is about a commonly remarked upon phenomenon: the internet (email and web comments) seems to be especially awash in venom. As noted above, this is nothing new. However, as Alter points out, the web and email lead to disinhibition. While he does not explore the reasons for this, there are three plausible causes. First, email and web comments are effectively instant. With a written letter, you have time to think about it as you put it in the envelope and go to mail it. During this time you might think better of what you said. With an email or web comment, you just push a button and it is done. Second, email and web comments are generally not edited. Professional newspapers and magazines are edited, and hence venomous comments generally do not get into print. Since people know that whatever they type will appear, they are less inclined to be restrained, and hence the web seems like a more venomous place. Naturally, this feeds the beast: when people see the first venomous remark, they are (like someone who sees trash already on the ground) more inclined to follow suit. Third, the web allows for anonymous posting and emailing, so people can (as noted above) spew from behind a mask. This, naturally enough, encourages people to be less nice.

Some web sites deal with this problem by reviewing comments before publishing them. On the plus side, this does help filter out some of the venom. On the minus side, such editing does tend to interfere with the freedom of expression. It is, obviously enough, very tempting for an editor to delete comments because she disagrees with the contents. Of course, this approach does not deal with the main causes of the problem: poor impulse control, poor ethics and poor reasoning skills.

Philosophers have been trying to deal with those problems for centuries. Aristotle provided some of the best advice on how to deal with poor impulse control and poor ethics in his Nicomachean Ethics. Of course, most people do not seem very inclined to follow that advice. Almost all philosophers have tried to encourage people to work on their reasoning skills. However, this has not met with great success. Until more people have better impulse control, better ethics and better reasoning skills, the deluge of venom can be expected to continue.

Alter’s seventh point is the usual lamentation about how the web was supposed to bring us breadth in coverage but did not live up to the dream. As he notes, bloggers tend to mainly follow right along with the cable networks. For example, as the American financial system was taking serious hits,  most bloggers and the cable news focused mainly on the “satirical” Obama cover on the New Yorker.

Obviously, this behavior is hardly shocking. Bloggers do the same thing the traditional media does: they focus on the stories they think people will want to hear about. While they can be criticized for pandering to the masses, the masses should also be criticized for wanting such things. When my students ask me why the media focuses on the sensational over the substantive, I provide the easy and obvious answer: the media gives people what they want. Thus, in order to have more substantial coverage, people would need to switch their desire from what is sensational to what is substantive. Good luck with that.

That said, there is actually significant breadth in the realm of blogs. If you leave the mainstream blogs and search around a bit, you will easily find blogs on a vast array of topics. For example, there are many blogs devoted to philosophical issues (such as this one). As another example, there are blogs devoted to science. These bloggers do not blindly follow the main media. This, obviously, means that they do not get as much attention as the bloggers who stick with the mainstream. As such, much of the perceived lack of breadth is merely a lack of looking.

The Ethics of E-Waste

People in the West enjoy their technological gadgets and new technology appears at a relentless pace. Thus, it is hardly surprising that there is an ongoing replacement of older technology by newer items. Mobile phone users generally switch to a new phone every 18 months. People update their computers less often, typically every three years, but a computer system is considerably larger than a phone. Televisions and other items are updated less often, but are replaced as they break or are considered obsolete. Naturally enough, most people just toss the old hardware into the trash. In accord with the cute naming practices of the internet age, this waste is commonly known as e-waste. While most people do not think about what happens after their old technology is carted away, obviously all that e-waste must be going somewhere.

A significant proportion of the items, at least in the United States, end up in landfills. For example, currently less than 1% of mobile phones are recycled. There are two main problems with the landfill solution. First, it is wasteful of resources and space. Second, many high tech items contain toxic elements and thus pose environmental and health risks. Given the harm generated by dumping high tech items into landfills, this approach is not morally acceptable.

Some high tech items do end up being recycled. While there are some recycling plants in the United States and Europe, a significant amount of e-waste is shipped outside the West to places in Asia and Africa. While Western recycling centers must meet fairly stringent guidelines, those outside the West tend to be poorly regulated at best. Even worse, a considerable amount of the recycling is done by individuals who, from necessity, follow extremely risky practices. For example, people burn the insulation off copper wire in open fires, thus exposing themselves and the environment to dioxins and heavy metals. In another example, people melt lead from circuit boards using the same pots and pans they later cook their meals in. Not surprisingly, the impact on the health of those around the plants and those individuals working directly with the e-waste is rather serious. This recycling can come back to harm the West as well. For example, it is suspected that the lead tainting those toys imported from China was recycled from Western sources.

From a moral standpoint, these recycling practices are unacceptable for two main reasons. The first is the matter of responsibility. The West is enjoying the benefits of high technology while passing a serious cost on to people who do not benefit from such technology. While this has long been the way of the world, it is irresponsible to cause others to pay the price for the benefits one receives. To use an analogy, this would be like one person getting the enjoyment out of smoking cigarettes while someone else suffers all the ill effects of the smoking. The second is the matter of harm. The unregulated and crude recycling practices are clearly injurious to the health of the people involved (and those in the area) as well as to the environment. Allowing such unnecessary harm to take place is, intuitively, wrong.

One common proposed solution is that the West should recycle its e-waste and thus bear the cost of its technology luxury. This addresses both of the moral concerns raised above. First, the West would be taking responsibility for its e-waste. Second, the recycling conditions in the West would be far safer for individuals and the environment.

Of course, this solution does raise another problem. The people outside the West who are involved in this recycling are obviously not doing it for their health or as a hobby. They are recycling the material in order to make a living. Thus, one irony is that recycling in the West would deny them the means by which they have been earning a living. While they would be protected from the harms of dangerous recycling, they would need to find another way to earn a living. Presumably these people chose recycling over something they regarded as even less desirable. Hence, they could well be worse off if the West were acting responsibly by recycling the e-waste.

A more ethical solution would be to establish properly equipped and regulated recycling plants in these countries, with the West bearing a portion of the costs. This practice would have three main virtues. First, the people of the West would be acting in a responsible manner by taking a role in dealing with the waste generated by their way of life. Second, the environment and individuals would be protected from the harms of unregulated and unsafe recycling practices. Third, a better source of income would be available to the local people, thus enabling at least some people to have a better life. The recycling would also save money in the West. For example, a PC built using recycled material would require 43% less energy and thus would be cheaper to make. Obviously, it could also be cheaper to buy, thus allowing Westerners to save money. Thus, e-waste could very well become an opportunity for doing what is right while also doing what is economically advantageous.