Autonomous Weapons Systems: Recent Events and a Response

Monday, May 6, 2013 at 8:00 AM

In recent weeks, a coalition of NGOs launched a global campaign to ban “killer robots,” or fully autonomous weapon systems (see reporting here).  Its statement calls “for urgent action to preemptively ban lethal robot weapons that would be able to select and attack targets without any human intervention.”  We critique that campaign and its empirical and moral assumptions in a recent paper: Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can.

As we note in that paper, “Some concerned critics portray that future, often invoking science-fiction imagery, as a plain choice between a world in which those systems are banned outright and a world of legal void and ethical collapse on the battlefield.”  For us, the question is not whether autonomous weapons should be regulated — we agree entirely that they should — but how.  We propose a combination of national-level regulation and development of shared best interpretations and practices through international dialogue.

Enter now into that debate a report released last week by the UN Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns.  Its summary states:

Lethal autonomous robotics (LARs) are weapon systems that, once activated, can select and engage targets without further human intervention. They raise far-reaching concerns about the protection of life during war and peace. This includes the question of the extent to which they can be programmed to comply with the requirements of international humanitarian law and the standards protecting life under international human rights law. Beyond this, their deployment may be unacceptable because no adequate system of legal accountability can be devised, and because robots should not have the power of life and death over human beings. The Special Rapporteur recommends that States establish national moratoria on aspects of LARs, and calls for the establishment of a high level panel on LARs to articulate a policy for the international community on the issue.
This report shares many assumptions with the Ban Killer Robots campaign, but it does a better job than most ban advocates of presenting the other side of each argument and recognizing areas of uncertainty.

We disagree strongly, though, with the proposal that the UN (and in particular the UN High Commissioner for Human Rights) play a lead role in developing a legal and ethical framework to regulate these systems globally.  As we argue in our paper (and as US Naval War College professors Michael Schmitt and Jeffrey Thurnher have shown in doctrinal detail), the existing law of armed conflict already provides a robust set of baseline requirements and standards.  Those requirements and standards are much more likely to have normative force in discussions among the states already leading the way in developing such technologies, counter-intuitive as that might seem to those who think that UN human rights mechanisms confer the greatest legitimacy.

Moreover, the Special Rapporteur's proposal for a moratorium on such systems suffers from many of the same difficulties as a permanent ban.  A moratorium will have difficulty specifying when a weapon system is “autonomous” rather than merely highly automated, not as a matter of formal definition but in practical terms of machine-human interaction.  It is also likely to attract states that have the least investment in such systems (and unlikely to attract those that have the most), and to suffer from defection by the actors about whose conduct we should be most worried.