The annual WeRobot program has emerged as the key conference on the legal, policy, moral, and other normative questions related to robotics. It is underway at this moment at the University of Miami, hosted by the law school and organized by law professor Michael Froomkin, who is one of the leaders of the field. The live-stream of the sessions can be found here, and the program page, including the papers, is here. It is a genuinely interdisciplinary conference, with engineers and designers, business and investment people, and folks from law, philosophy, public policy, and other fields that might not seem obviously relevant to automation and robotics. But robotics is social (even in the caged setting of industrial robot arms in a factory), and as soon as it moves into ordinary social life, the question is never strictly the machine but the machine-human interaction.
It might also not seem obvious how a discussion of social, legal, and policy issues in robotics is relevant to national security law and policy. But robotics, automation, and autonomy in weapons and many other military systems are also social. They have to be operated by people, and most of these systems today are best understood as human-machine teaming, a human-machine dyad. Questions of control and management of a robotic system in a social world are at the core of this conference, and military robotics (with its long-standing, highly developed structures of legal weapons review, among other things) has considerable experience to contribute on the social aspects of robotics. Many of the issues of human-machine interaction form the moral, legal, and policy substratum from which military robotics is not separate.

The first session this morning was on a paper by Meg Leta Ambrose (Georgetown) on human-machine management of the “loop.” Her paper examined five different cases in which loop questions were key, and although none involved autonomous weapon systems, the legal and regulatory considerations in these other areas were fascinating and, with a little imagination, relevant to national security applications.
But one of the important conceptual questions is what a robot is in the social sense, and what makes robots in society different from automation, cyber, or other technologies. Ryan Calo (UWashington) takes this up in one of the most useful, thoughtful papers I have read in years on what makes a robot and why it matters. He identifies features of advancing social robots that importantly distinguish them, and are likely to distinguish them from other technologies in law, morals, regulation, and policy. These characteristics include physical movement and mobility in the world around us and in proximity to humans; “emergence,” the features of increasing artificial intelligence, machine learning, or self-learning; and “social meaning,” the ways in which humans respond to robots of one design or another. Calo lays these categories out in his article, Robotics and the New Cyberlaw, and for anyone who deals with robotic, automated, or cyber systems, or who is trying to understand their core features, I think it is required reading. (I’ll discuss it at some point in a Readings post.)