Today’s Wall Street Journal carries an op-ed piece by Matt and me on the regulation of autonomous weapon systems, “Killer Robots and the Laws of War: Autonomous Weapons Are Coming and Can Save Lives. Let’s Make Sure They’re Used Ethically and Legally.” Although the topic has not been especially visible in the United States (at least not by comparison to drone warfare), it is much more so in Europe and, in the last couple of weeks, at the United Nations, where the General Assembly’s First Committee (which deals with disarmament, weapons, etc.) heard a series of statements by countries, many of which called on the UN to address autonomous weapons. These countries were largely taking their cue from an NGO campaign launched a year ago by Human Rights Watch against what it calls “Killer Robots,” or, in less loaded terms, fully autonomous weapons, followed by a report from a UN Special Rapporteur, Christof Heyns, calling for a “moratorium” rather than (for now) a flat-out ban.
We began writing in this area before the ban campaign was launched, initially urging the Department of Defense to be more transparent about the standards of legal review being applied to weapon systems as they became increasingly automated and might ultimately raise questions about who or what was actually in control of firing decisions. After the ban campaign was launched, we pivoted to include arguments as to why an international ban was deeply mistaken. Our arguments are for careful, granular review and regulation of weapon technologies incorporating increasingly automated or even autonomous capabilities. Nothing in our position says that these machines should be anything other than carefully regulated. But we also think there is a moral obligation to seek to develop new technologies that might increase the precision of weapons and reduce harms on the battlefield; sweeping, preemptive bans, not just on “use” but on “development,” will not be effective, and they might well impede the emergence over time of new technologies that make war less harmful. So we’re delighted to announce “Killer Robots and the Laws of War.” Here is a short excerpt:
[A] ban is unlikely to work, especially in constraining states or actors most inclined to abuse these weapons. Those actors will not respect such an agreement, and the technological elements of highly automated weapons will proliferate. Moreover, because the automation of weapons will happen gradually, it would be nearly impossible to design or enforce such a ban. Because the same system might be operable with or without effective human control or oversight, the line between legal weapons and illegal autonomous ones will not be clear-cut.
If the goal is to reduce suffering and protect human lives, a ban could prove counterproductive. In addition to the self-protective advantages to military forces that use them, autonomous machines may reduce risks to civilians by improving the precision of targeting decisions and better controlling decisions to fire. We know that humans are limited in their capacity to make sound decisions on the battlefield: Anger, panic, and fatigue all contribute to mistakes or violations of rules. Autonomous weapons systems have the potential to address these human shortcomings. No one can say with certainty how much automated capabilities might gradually reduce the harm of warfare, but it would be wrong not to pursue such gains, and it would be especially pernicious to ban research into such technologies.
That said, autonomous weapons warrant careful regulation. Each step toward automation needs to be reviewed carefully to ensure that the weapon complies with the laws of war in its design and permissible uses. Drawing on long-standing international legal rules requiring that weapons be capable of being used in a discriminating manner that limits collateral damage, the U.S. should set very high standards for the legal and ethical assessment of any research and development programs in this area. Standards should also be set for how these systems are to be used and in what combat environments.