In recent weeks, a coalition of NGOs launched a global campaign to ban "killer robots," or fully autonomous weapon systems (see reporting here). Its statement calls "for urgent action to preemptively ban lethal robot weapons that would be able to select and attack targets without any human intervention." We critique that campaign and its empirical and moral assumptions in a recent paper: Law and Ethics for Autonomous Weapon Systems: Why a Ban Won't Work and How the Laws of War Can.
As we note in that paper, "Some concerned critics portray that future, often invoking science-fiction imagery, as a plain choice between a world in which those systems are banned outright and a world of legal void and ethical collapse on the battlefield." For us, the question is not whether autonomous weapons should be regulated -- we agree entirely that they should -- but how. We propose a combination of national-level regulation and development of shared best interpretations and practices through international dialogue.
Enter now into that debate a report issued last week by the UN Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns. Its summary states:
Lethal autonomous robotics (LARs) are weapon systems that, once activated, can select and engage targets without further human intervention. They raise far-reaching concerns about the protection of life during war and peace. This includes the question of the extent to which they can be programmed to comply with the requirements of international humanitarian law and the standards protecting life under international human rights law. Beyond this, their deployment may be unacceptable because no adequate system of legal accountability can be devised, and because robots should not have the power of life and death over human beings. The Special Rapporteur recommends that States establish national moratoria on aspects of LARs, and calls for the establishment of a high level panel on LARs to articulate a policy for the international community on the issue.