Autonomous Weapon Systems

As the use of drones has expanded, so too has interest in autonomous weapon systems. Drones, which are unmanned but remotely piloted, are not themselves autonomous weapons, which are characterized by their ability to cut humans “out of the loop.” A 2012 Department of Defense policy directive defines fully autonomous weapon systems as systems that, “once activated, can select and engage targets without further intervention by a human operator.” By contrast, weapons such as “fire-and-forget” missiles, which require no guidance after firing but will hit only targets pre-selected by a human, are sometimes described as “semi-autonomous.” Both semi-autonomous and autonomous weapon systems have triggered concerns that they will increase the cost to civilian life in wartime and reduce accountability for war crimes.

Latest in Autonomous Weapon Systems

Foreign Policy Essay

Will Killer Robots Be Banned? Lessons from Past Civil Society Campaigns

Editor’s Note: One of the most successful NGO anti-war efforts was the campaign to ban landmines, which led to a treaty banning their use and production in 1997. Activists, not surprisingly, are using this model as they focus on other technologies, including autonomous weapons systems—and yes, here is a Terminator link.

Autonomous Weapon Systems

A Primer on Debates over Law and Ethics of Autonomous Weapon Systems

For Lawfare readers interested in law and regulation of autonomous weapon systems (AWS), we’re pleased to note our new essay, recently posted to SSRN, “Debating Autonomous Weapon Systems, Their Ethics, and Their Regulation Under International Law.” It appears as a chapter in a just-published volume, The Oxford Handbook of Law, Regulation, and Technology, edited by Rog…
