
Autonomous Weapon Systems: Why a Ban (Still) Won't Work and How the Laws of War Can

By Kenneth Anderson, Matthew Waxman
Friday, October 18, 2013, 12:45 PM

Over at TNR's Security States, Matt and I have a new piece about international calls to ban autonomous weapon systems. It begins like this:

What if armed drones were not just piloted remotely by humans in far-away bunkers, but were instead programmed, under certain circumstances, to select and fire at targets entirely on their own? This may sound like science fiction, and deployment of such systems is, indeed, far off. But research programs, policy decisions, and legal debates are taking place now that could radically affect the future development and use of autonomous weapon systems.

To many human rights NGOs, joined this week by a new international coalition of computer scientists, the solution is to preemptively ban the development and use of autonomous weapon systems (which a recent U.S. Defense Department directive on the topic defines as weapon systems "that, once activated, can select and engage targets without further intervention by a human operator"). While a preemptive ban may seem like the safest path, it is unnecessary and dangerous.

The Security States essay develops points made in our Hoover policy paper, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won't Work and How the Laws of War Can.