Robo Responsibility

It is just a matter of time before the first serious accident involving a driverless car or an autonomous commercial drone. As such, it is well worth considering the legal and moral aspects of responsibility. If companies that are likely to be major players in the autonomous future, such as Google and Amazon, have the wisdom of foresight, they are already dropping stacks of cash on lawyers who are busily creating the laws-to-be regarding legal responsibility for accidents and issues involving such machines. The lobbyists employed by these companies will presumably drop fat stacks of cash on the politicians they own and these fine lawmakers will make them into laws.

If these companies lack foresight or have adopted a wait and see attitude, things will play out a bit differently: there will be a serious incident involving an autonomous machine, a lawsuit will take place, fat stacks of cash will be dropped, and a jury or judge will reach a decision that will set a precedent. There is, of course, a rather large body of law dealing with responsibility in regards to property, products and accidents and these will, no doubt, serve as foundations for the legal wrangling.

While the legal aspects will no doubt be fascinating (and expensive), my main concern is with the ethics of the matter. That is, who is morally responsible when something goes wrong with an autonomous machine like a driverless car or an autonomous delivery drone?

While the matter of legal responsibility is distinct from that of ethical responsibility, the legal theory of causation does have some use here. I am, obviously enough, availing myself of the notion of conditio sine qua non (“a condition without which nothing”) as developed by H.L.A. Hart and A.M. Honore.

Roughly put, this is the “but for” view of causation. X can be seen as the cause of Y if Y would not have happened but for X. This seems like a reasonable place to begin for moral responsibility. After all, if someone would not have died but for my actions (that is, if I had not done X, then the person would still be alive), then there seems to be good reason to believe that I have some moral responsibility for the person’s death. It also seems reasonable to assign a degree of responsibility that is proportional to the causal involvement of the agent or factor in question. So, for example, if my action only played a small role in someone’s death, then my moral accountability would be proportional to that role. This allows, obviously enough, for shared responsibility.

While cases involving non-autonomous machines can be rather complicated, they can usually be addressed in a fairly straightforward manner in terms of assigning responsibility. Consider, for example, an incident involving a person losing a foot to a lawnmower. If the person pushing the lawnmower intentionally attacked someone with her mower, the responsibility rests on her. If the person who lost the foot went and stupidly kicked at the mower, then the responsibility rests on her. If the lawnmower blade detached because of defects in the design, material or manufacturing, then the responsibility lies with the specific people involved in whatever defect caused the problem. If the blade detached because the owner neglected to properly maintain her machine, then the responsibility is on her. Naturally, the responsibility can also be shared (although we might not know the relevant facts). For example, imagine that the mower had a defect such that if it were not well maintained it would easily shed its blade when kicked. In this case, the foot would not have been lost but for the defect, the lack of maintenance and the kick. If we did not know all the facts, we would probably blame the kick—but the concern here is not what we would know in specific cases, but what the ethics would be in such cases if we did, in fact, know the facts.

The novel aspect of cases involving autonomous machines is the fact that they are autonomous. This might be relevant to the ethics of responsibility because the machine might qualify as a responsible agent. Or it might not.

It is rather tempting to treat an autonomous machine like a non-autonomous machine in terms of moral accountability. The main reason for this is that the sort of autonomous machines being considered here (driverless cars and autonomous drones) would certainly seem to lack moral autonomy. That is to say that while a human does not directly control them in their operations, they are operating in accord with programs written by humans (or written by programs written by humans) and lack the freedom that is necessary for moral accountability.

To illustrate this, consider an incident with an autonomous lawnmower and the loss of a foot. If the owner caused it to attack the person, she is just as responsible as if she had pushed a conventional lawnmower over the victim’s foot. If the person who lost the foot stupidly kicked the lawnmower, then it is his fault. If the incident arose from defects in the machinery, materials, design or programming, then responsibility would be applied to the relevant people to the degree they were involved in the defects. If, for example, the lawnmower ran over the person because the person assembling it did not attach the sensors correctly, then the moral blame lies with that person (and perhaps an inspector). The company that made it would also be accountable, in the collective and abstract sense of corporate accountability. If, for example, the programming was defective, then the programmer(s) would be accountable: but for the bad code, the person would still have his foot.

As with issues involving non-autonomous machines there is also the practical matter of what people would actually believe about the incident. For example, it might not be known that the incident was caused by bad code—it might be attributed entirely to chance. What people would know in specific cases is important in the practical sense, but does not impact the general moral principles in terms of responsibility.

Some might also find the autonomous nature of the machines to be seductive in regards to accountability. That is, it might be tempting to consider the machine itself as potentially accountable in a way analogous to holding a person accountable.

Holding the machine accountable would, obviously enough, require eliminating other factors as causes. To be specific, to justly blame the machine would require that the machine’s actions were not the result of defects in manufacturing, materials, programming, maintenance, and so on. Instead, the machine would have had to act on its own, in a way analogous to a person acting. Using the lawnmower example, the autonomous lawnmower would need to decide to go after the person of its own volition. That is, the lawnmower would need to possess a degree of free will.

Obviously enough, if a machine did possess a degree of free will, then it would be morally accountable within the scope of that freedom. As such, a rather important question would be whether or not an autonomous machine can have free will. If a machine can, then it would make moral sense to try machines for crimes and punish them. If they cannot, then the trials would be reserved, as they are now, for people. Machines would, as they are now, be repaired or destroyed. There would also be the epistemic question of how to tell whether the machine had this capacity. Since we do not even know if we have this capacity, this is a rather problematic matter.

Given the state of technology, it seems unlikely that the autonomous machines of the near future will be morally autonomous. But as the technology improves, it seems likely that there will come a day when it will be reasonable to consider whether an autonomous machine can be justly held accountable for its actions. This has, of course, been addressed in science fiction, such as the “I, Robot” episodes (the 1964 original and the 1995 remake) of The Outer Limits, which were based on Eando Binder’s short story of the same name.

 


  1. Autonomous systems, in the conventional sense, are just more complex systems than non-autonomous ones. The degree of autonomy ranges:

    None – for an inert object such as a rock or piece of metal.

    A little – for something like a plant that can autonomously direct itself toward the sun.

    A little – for a clockwork toy.

    A bit more – for an automatic vacuum cleaner / cat amusement ride.

    A lot – Google car, Asimo, …

    A real lot – Mammals of various kinds.

    The most, so far – Humans.

    Autonomy is only a measure of how localised decision-making capacity is, in location, time, amount (power), adaptability, and any other measure one might deem fitting. If we take a mechanistic view of humans, then we are at one end of this scale (based on current experience, not on maximum autonomy, unless we are the most autonomous systems).

    So far, it has been easy to categorise the last, humans, as independent intelligent agents, and we have historically perceived ourselves as so different in kind that our autonomy includes features like the mysterious free will, minds, and souls. With hindsight this now looks at least suspect and, for many, wrong-headed.

    With regard to ‘responsible’ autonomous systems, we still apply that mostly to ourselves. Any of the lesser autonomous systems above are seen as less responsible the ‘simpler’ they are. But we can still use the language of autonomy, and are tempted to do so, when identifying the cause of an accident.

    If a car’s brakes fail unexpectedly, the car is at fault and is removed from the road to prevent further harm. But if it can be shown that a more ‘intelligent’ autonomous system is the cause of the brake failure – the neglect of the owner, or the maintainer – then more onerous responsibility may be attributed.

    We might be inclined to think human-like responsibility would apply to artificial (non-biologically-evolved) intelligent systems – ‘robots’.

    If we are going to go that far, then we first need some way to measure and determine when such responsibility applies to a ‘robot’. But then, having made such an attribution, we ought to be fair and give such a robot all the benefits of being a ‘person’: the right to a defence, innocent until proven guilty, etc.

  2. Dennis Sceviour

    Hart and Honore’s superstitious theory “X can be seen as the cause of Y if Y would not have happened but for X” can be reduced to “Bad luck can be seen as the cause of the accident if the accident would not have happened but for Bad luck.” This does not seem to accomplish very much. Has there ever been a satisfactory answer in philosophy for finding responsibility for good fortune and bad luck?

    Some ideas on liability for passengers in robotic vehicles can be borrowed from maritime law. Steamship lines are held responsible for the safety of passengers. There need be no captain’s responsibility, since maritime liability for passengers covers hazardous weather, piracy and war. Shipbuilders are not usually held liable for bad luck. If the robotic vehicle is chartered, then the liability falls on the licensed charterer.

    If a single person purchases and uses a robotic vehicle, then who is responsible for the accident? As Mike LaBossiere concludes, it appears the individual is responsible even if manual control of the vehicle is not available.

  3. Dennis Sceviour,

    Hart and Honore work out a much more detailed account; I just used the basic principle. They would not take bad luck as a causal factor.

    The maritime laws, as you say, would seem a good source for legal precedents.

  4. Dennis Sceviour

    Mike,
    True, Hart and Honore did publish a book on causation in law and gave detailed accounts. It is my conclusion that causal theories in law often amount to superstition.
