A Policy Paper on Autonomous Weapon Systems

By Matthew Waxman
Tuesday, April 9, 2013, 3:36 PM

Ken Anderson and I have just published a new policy paper through the Hoover Institution: Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can.

Our paper begins:

Public debate is heating up over the future development of autonomous weapon systems. Some concerned critics portray that future, often invoking science-fiction imagery, as a plain choice between a world in which those systems are banned outright and a world of legal void and ethical collapse on the battlefield. Yet an outright ban on autonomous weapon systems, even if it could be made effective, trades whatever risks autonomous weapon systems might pose in war for the real, if less visible, risk of failing to develop forms of automation that might make the use of force more precise and less harmful for civilians caught near it. Grounded in a more realistic assessment of technology—acknowledging what is known and what is yet unknown—as well as the interests of the many international and domestic actors involved, this paper outlines a practical alternative: the gradual evolution of codes of conduct based on traditional legal and ethical principles governing weapons and warfare.

In short, we recommend the following:

For its part, the United States should assert that existing international law of war requirements and so-called Article 36 weapons reviews (based on Article 36 of the first additional protocol to the Geneva Conventions) must be applied by all parties in the development and deployment of automated weapon systems, with special scrutiny of systems that are autonomous with respect to target selection and engagement. At the same time, it should assert equally that these systems raise novel issues of legal review, and that while the United States has internal processes and standards seeking to give content to such reviews, it also understands these as ongoing attempts to develop shared frameworks of best practices and norms. It should propose and welcome discussion, comment, and the shared experience of other states, particularly those that are also actively developing new weapon systems.

National-level processes like these should be combined with international dialogue aimed at developing common ethical standards and legal interpretations. Such efforts will help develop a gradually coalescing set of shared practice and expectations that can be shaped over time as the technologies emerge and evolve.

We conclude:

The incremental development and deployment of autonomous weapon systems is inevitable, and any attempt at a global ban will be ineffective in stopping their use by the states whose acquisition of such weaponry would be most dangerous. Autonomous weapon systems are not inherently unlawful or unethical. Existing legal norms are sufficiently robust to enable us to address the new challenges raised by robotic systems. The best way to adapt existing norms to deal with these new technologies is a combined international-national dialogue designed to foster common standards and spread best practices. ...

Some view the emergence of automated and autonomous weapon systems as a crisis for the law and ethics of war. To the contrary, provided we start now to incorporate legal and ethical norms adapted to weapons that incorporate emerging technologies of automation, the incremental movement from automation to machine autonomy can be both regulated and made to serve the ends of law on the battlefield.