Last week, Human Rights Watch (along with the Harvard Law School International Human Rights Clinic) published a report titled "Losing Humanity: The Case Against Killer Robots." It argues for a preemptive prohibition by multilateral treaty on the development and use of "fully autonomous weapons that could select and engage targets without human intervention" -- a technological feat likely to be possible a few decades from now (and, in fact, already possible in some very limited forms). Certainly the direction of weapons development is toward greater automation, and this raises a legitimate concern about whether highly automated or autonomous weapons systems would comply with the laws of war. While HRW's new report is an important contribution to this debate - containing useful background discussion of emergent precursor systems currently in use or under development - we think it rests on some questionable premises, both factual and moral, and reaches some overly sweeping and unwise conclusions.
As we detail in our new Policy Review essay, Law and Ethics for Robot Soldiers, we of course share the report's goals of protecting civilians in armed conflict and strengthening the law of armed conflict. Instead of a multilateral treaty prohibition on gradually emerging technologies whose capabilities and limitations are still unknown, however, a better path would be for the United States to lead the gradual development of internal state norms applicable to weapons systems that might become increasingly automated to the point of genuine autonomy. Such norms would not necessarily be legally binding rules - and different countries might reach different judgments about weapons capabilities. They could, however, lead to increasingly widely held expectations and best practices regarding legally or ethically appropriate conduct, whether formally binding or not. Worked out incrementally, debated, and applied to the United States' own weapons development processes, they could then be carried outward to engage others around the world.
Part of our disagreement with the report and its proposal stems from differing empirical assessments or predictions - for example, whether artificial intelligence and computer analytic systems could ever reach the point of satisfying the fundamental ethical and legal principles of distinction and proportionality (they are more pessimistic than we are, though we're pretty agnostic on this). At bottom, we don't think it's wise or even possible to decide today what targeting technology might or might not be able to do - say, a generation from now. We are disinclined to support a treaty regime that would prejudge that question and, if successful, might preclude the benefits of precision autonomous weaponry that could significantly reduce human targeting error rates through machine analytics.
Another disagreement stems from different views about the challenges of holding individuals criminally liable for war crimes when lethal systems operate autonomously (we think this is important, but just one of many mechanisms for law-of-war enforcement). Other disagreements are moral or philosophical, at something close to first principles. They include whether lethal autonomy is inherently morally unacceptable (they suggest that it is; we don't); and whether political leaders will be too tempted to use force if their own human soldiers are no longer at risk on the battlefield (we're concerned that this logic amounts, in a moral sense, to holding both ordinary soldiers' and civilians' lives hostage merely as a means to constrain decision-making by political leadership, by deliberately depriving leaders of the most humane method of projecting force so that they won't overuse it; that there are dangers to over-constraining, too; and that this objection is overbroad, since it would apply equally to current stand-off weapons technologies). Finally, part of our disagreement is about the practical difficulties that face international legal prohibitions of military technologies (we think such efforts are likely to fail).
At a more fundamental level than any of these specific differences, though, our view is that autonomy in weapons systems will develop very incrementally. Instead of some determinate, ascertainable break-point between the human-controlled system and the machine-controlled one, it is far more likely that the evolution of weapons technology will be gradual, slowly and indistinctly eroding the role of the human in the firing loop. As for a preemptive prohibition on developing such systems (as distinct from deploying them), even if it were desirable, the technologies at the heart of such weapons are fundamentally the same as those at the heart of a wide variety of civilian and non-weapons military systems, and weapons systems will frequently be so interwoven into the machine system as a whole that disentangling what's prohibited from what's not - and at what point in the path of weapons development - will not be feasible.
The better approach is to embed evolving internal state standards into incrementally advancing automation, as systems are designed at the front end and as they are gradually developed from design concept to deployable system, including the usual legal requirements of distinction and proportionality in the final weapon system. This approach, we believe, is more likely to be successful in adapting advancing technology, including its possible benefits, to the fundamental requirements of law and ethics on the battlefield over the long run, than a prohibitory multilateral treaty aimed primarily at preempting the much-feared, but still unclear, end-state of this slow technological creep.