In writing about autonomous weapon systems (AWS) and the law of armed conflict, we have observed several times the similarities between programming AWS and programming other kinds of autonomous technologies, as well as the similarity of the ethical issues arising in each. Machine decision-making is gradually being deployed in emerging technologies as different as self-driving cars and highly automated aircraft, and it will spread to many more areas, such as elder-care machines and robotic surgery. These machine decisions include ones with potentially lethal consequences, as well as decisions to engage in potentially lethal behaviors. As we put the point in a paper last year (co-authored with Daniel Reisner):
“The development of many of the enabling technologies of autonomous weapons systems—artificial intelligence and robotics, for example—is being driven by private industry for many commercial and societally-beneficial purposes (consider self-driving cars, surgical robots, and so on). These technologies are developing and proliferating rapidly, independent of military demand and investment. Such civilian automated systems are already making daily decisions that have potential life-and-death consequences, as with aircraft landing systems. While most people are generally aware that these types of systems are highly automated (or even autonomous for some functions), and have become wholly comfortable with their use, relatively little public discourse has addressed the increasing decision-making role of autonomous systems in potentially life-threatening situations.”
A recent online article in MIT’s Technology Review raises the question of how self-driving cars used on public roads ought to be programmed to deal with situations in which, for example, the car’s choices are to collide with a school bus (probably killing many children) or to run itself into a wall (probably killing its occupants). Such scenarios have been raised before - by Gary Marcus several years ago in The New Yorker, for example - but the issues are gaining greater practical traction as carmakers (Tesla, for example, and not just Google) gradually push their autopilot functions into new territory that begins to cross into genuinely “autonomous” driving.
The Technology Review article is titled, provocatively, “Why Self-Driving Cars Must Be Programmed to Kill.” It offers up the now-classic ethical dilemma - a sort of technologically ramped-up version of the famous “trolley car” hypotheticals in moral philosophy: how should the car “be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?”
The article goes on to observe that the answers given to these ethical questions could have a “big impact on the way self-driving cars are accepted in society. Who would buy a car programmed to sacrifice the owner?” It then describes recent studies in “experimental ethics” aimed at assessing the public’s moral intuitions about scenarios such as the pre-programmed sacrifice of the self-driving car and its occupants.
One striking feature of these ethical dilemmas is that they are new (at least in the driving context): they raise the possibility that the machine could make a decision according to an ethical calculus that a human driver would be quite unlikely to be able to perform in the moment of an accident - or would be unwilling to perform. Tort law in the context of humans driving automobiles, for example, does not impose an affirmative duty of self-sacrifice on a human driver in order to save the children. One cannot be sued in tort for failing to drive one’s car into a wall, likely killing oneself, in order to avoid harming the children, however virtuous such an act might be. Programming built into a self-driving car’s computer, however, may allow such life-and-death decisions to be made in advance.
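The point that such a calculus could be fixed in advance can be made concrete with a deliberately simplified sketch. Everything below - the function, the maneuver names, and the casualty estimates - is a hypothetical illustration of one possible rule (minimize expected loss of life), not anything any real vehicle implements:

```python
# Hypothetical sketch only: a pre-programmed rule that, facing the
# unavoidable-crash options the article describes, selects the maneuver
# with the lowest expected loss of life. Real systems, if they encode
# such choices at all, would do so very differently.

def choose_maneuver(options):
    """options: list of (maneuver_name, expected_fatalities) pairs.

    Returns the name of the maneuver with the fewest expected fatalities.
    """
    return min(options, key=lambda option: option[1])[0]

# The article's dilemma: hit the school bus, or drive into the wall.
# The numbers are invented for illustration.
options = [
    ("swerve_into_wall", 2),   # car's occupants likely killed
    ("hit_school_bus", 20),    # many children likely killed
]
print(choose_maneuver(options))  # -> swerve_into_wall
```

Note that the rule sacrifices the occupants here precisely because the decision was made in advance, at programming time - which is the ethical and legal novelty the surrounding discussion is concerned with.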
Some of the ethical dilemmas that trouble many with respect to AWS are thus not unique to the weapons context. Other autonomous systems will have to address (even if only by ignoring the ethical issue) questions of whether and when an autonomous system can be programmed to take actions likely to kill, including scenarios of killing the few in order to save the many. Our view continues to be that to the extent such automation, autonomy, and robotics technologies come to be widely accepted as “more effective, safe and reliable than human judgment in many non-military realms, their use will almost certainly migrate into military ones. Indeed, future generations that perhaps come to routinely trust the computerized judgments of self-driving vehicles are likely to demand, as a moral matter, that such technologies be used to reduce the harms of war. It is largely a question of whether such systems work or not, and how well.”