Editor’s Note: Autonomous weapons systems—or, as it is much more fun to call them, killer robots—are controversial weapons of war. Critics worry that they make war more likely because they do not put soldiers at risk and that having no human in the loop makes these systems more likely to kill innocents. Alexander Velez-Green of the Center for a New American Security takes on these concerns, using the case of the Korean DMZ to argue that in some circumstances killer robots raise the bar for conflict.
A new sentry guards the demilitarized zone separating North and South Korea. It is South Korea’s SGR-A1, a robot with the ability to autonomously identify and destroy targets. This autonomous capability would make the SGR-A1 one of the “lethal autonomous weapon systems” targeted today by activists campaigning to “stop killer robots.” One of their principal fears is that such weapons will make conflict cheaper for states and therefore more likely to occur. But the SGR-A1 is a case in point for the opposite: it shows that in some situations, autonomous weapon systems can actually raise the threshold for aggression between states, thereby making war less likely to occur.
The SGR-A1’s autonomy in question
The demilitarized zone (DMZ) between North and South Korea is one of the most watched places on Earth, and the SGR-A1 is its newest set of “eyes.” Development of the robot began in 2006; it was reportedly first tested in the DMZ in 2010, and multiple units are now reportedly operational. The system has three low-light cameras, heat and motion detectors, and pattern-recognition software that enable it to spot targets up to two miles away during the daytime and one mile away at night. When an intruder is spotted, the SGR-A1 can issue verbal warnings and recognize surrender motions, such as when the target drops their weapon and raises their hands. If an intruder does not surrender, the robot can engage them with a Daewoo K3 light machine gun from up to 800 meters away.
Reliable reports indicate that the SGR-A1 can function as a “human on the loop” system. That means that it can autonomously select and engage targets, but a nearby human operator can intervene to turn off the system, if necessary. This stands in contrast to a “human in the loop” system, which would alert the human operator to an intruder but would have to wait for the operator’s command to engage the target.
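For readers who think in code, the difference between these two modes can be sketched as a simple decision flow. This is purely illustrative: the SGR-A1’s actual control logic is not public, and every name and behavior below is a hypothetical stand-in, not a description of any real system.

```python
from enum import Enum

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = "in"   # operator must authorize each engagement
    HUMAN_ON_THE_LOOP = "on"   # system engages by default; operator may intervene

def handle_intruder(mode, operator_authorized, operator_vetoed, surrendered):
    """Hypothetical sentry decision flow for a detected intruder.

    Detection, verbal warning, and surrender recognition are common to
    both modes; only the final engagement decision differs.
    """
    if surrendered:  # e.g. weapon dropped, hands raised
        return "hold fire: surrender recognized"
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        # The system waits: it may fire only on an explicit operator command.
        return "engage" if operator_authorized else "await operator command"
    # Human on the loop: the system engages on its own unless a
    # supervising operator steps in to stop it.
    return "hold fire: operator veto" if operator_vetoed else "engage"
```

The key asymmetry is in the default: the “in the loop” system defaults to waiting, while the “on the loop” system defaults to engaging, with the human able only to interrupt.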
One of the first reports indicating that the SGR-A1 has a “human on the loop” capability came from the leader of the Samsung team that built it. In 2007, Myung Ho Yoo told the Institute of Electrical and Electronics Engineers that when the robot finds an intruder, “the ultimate decision about shooting should be made by a human, not the robot.” This implies that the robot can make the decision—it just should not be the one to do so. A report by California Polytechnic State University for the Office of Naval Research confirms that the SGR-A1 has an autonomous function. So do expert roboticist Ronald Arkin and reports in the International Review of the Red Cross, The Atlantic, BBC, NBC, The Verge, and other major international news outlets.
Despite these reports, Samsung Techwin has publicly denied that the SGR-A1 is capable of autonomous engagement. Instead, company representatives say it is a “human in the loop” system. According to a 2007 Popular Science article, Samsung “insists that…a person must engage a key before hitting SGR-A1’s ‘fire’ button.” In 2010, Huh Kwang-hak, a Samsung Techwin spokesperson, corroborated this report, saying that “[t]he robots, while having the capability of automatic surveillance, cannot automatically fire at detected foreign objects or figures.”
Activists fear that robots will lower the threshold for armed conflict
Given these conflicting reports, we cannot be sure that the SGR-A1 has an autonomous function. But we can confidently say that neither Samsung Techwin nor South Korea could admit to it even if it did. Why is that? Peter Asaro, co-founder of the International Committee for Robot Arms Control (ICRAC), hints at one plausible reason: the South Koreans “got a lot of bad press about having autonomous killer robots on their border.” This “bad press” is largely a result of advocacy groups out to ban lethal autonomous weapon systems (LAWS). There are a number of reasons for their concern, but a primary one is the fear that LAWS will lower the threshold for states to engage in armed conflict.
This fear is central to ICRAC’s efforts to ban autonomous weapon systems “in all circumstances.” Indeed, the top concern listed on ICRAC’s original mission statement is the “potential [of LAWS] to lower the threshold of armed conflict.” This language is mirrored in the 2013 report by the United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, who wrote that LAWS might result in “armed conflict no longer being a measure of last resort.” The Campaign to Stop Killer Robots and Human Rights Watch have both raised this concern as well.
The SGR-A1 deters aggression along the DMZ
Fears that robots could make states more inclined to use force are not unfounded. But taken to the extreme, as ICRAC does with its call to ban LAWS in all circumstances, these fears depart from reality and ignore the real peacemaking potential of defensive LAWS. The SGR-A1 is a case in point. A “human on the loop” SGR-A1 would not lower the threshold for conflict on the Korean Peninsula. Instead, it would raise the costs of aggression for North Korea, thereby making war less likely.
That is because wars often begin when states’ leaders calculate that violence will change the status quo for the better—yielding more territory, resources, or political leverage—and that they stand a good chance of winning. Whether contemplating an all-out invasion or a lower-scale border transgression, their calculations take into account many variables, especially the strength of their opponent’s defense. If their opponent’s defense is strong, it raises the risk of defeat—and losing a war that you initiated is usually worse than the original status quo.
Today, thousands of soldiers man South Korea’s DMZ defenses. Their primary strategic purpose is to deter North Korean aggression through the show of a strong and ready defense. But, as South Korea’s government and Samsung both recognize, these soldiers are naturally fallible. They can suffer from laziness, low morale, or exhaustion. During battle, they can lose situational awareness to the fog of war, and be beset by the thirst for revenge or fear of the enemy.
The SGR-A1 suffers from none of these functional impediments. Even if it is just a “human in the loop” system as Samsung contends, the robot would drastically improve the South Koreans’ monitoring and reaction ability. Since robots do not tire, become distracted, or feel emotion, the SGR-A1 would identify intruders and raise the alarm more quickly than human sentries could. That alarm and corresponding video feeds would travel to remotely stationed operators charged with monitoring multiple SGR-A1s from a base in Seoul or elsewhere. Set back from the combat zone, these operators could more effectively quarterback defensive responses to intrusions than if they were stationed on the frontline.
But while “human in the loop” systems can defend more effectively than humans alone, they are not as capable as “human on the loop” systems at handling time-critical attacks. For starters, humans have difficulty controlling multiple systems simultaneously, even in non-combat situations. In a rapidly evolving combat scenario, then, the operators of a “human in the loop” system would struggle to make timely decisions. Waves of attackers or synchronized intrusions at different points of the battlespace could overwhelm operators, preventing them from tracking multiple threats and allocating defensive resources appropriately. This, in turn, raises the possibility that the defensive line could be overrun. Further, a “human in the loop” system’s reliance on human input before engaging a target creates a window of opportunity for targets to escape. That can be costly. In a “counter-sniper” situation, for instance, the split second required for such input could mean the difference between neutralizing an enemy and allowing them to continue attacking defenders.
The ability to deal with time-critical threats really matters at the DMZ, as South Korean defenses are already stretched thin over the zone’s 250-kilometer length. That means that forces have to react as quickly as possible to any threats in order to hold their line until reinforcements arrive and prevent a North Korean advance on Seoul, located just 35 miles to the south.
The “human in the loop” system serves this objective better than human-only defenses. But the “human on the loop” system’s ability to engage targets without waiting for an operator’s permission makes it far better prepared to defend against waves of attacks that could overrun outposts and against hit-and-run tactics that could bleed South Korean defenses. From a strategic standpoint, the “human on the loop” system thus raises the risk of failure for a North Korean offensive more than any human or “human in the loop” system. This would significantly raise the threshold for armed conflict, thereby deterring aggression and promoting stability on the Korean Peninsula.
Preventing the wrongful targeting of non-combatants by “human on the loop” systems
That is not to say that a “human on the loop” system is without its limitations. For instance, since current robotic sensors cannot distinguish between combatants and non-combatants, how can we know that a “human on the loop” system is engaging legitimate targets rather than innocent bystanders? This is a valid concern, and it points to the need to ensure that “human on the loop” systems can be readily disabled by operators, and most importantly, are deployed in controlled environments.
By definition, “human on the loop” systems are able to engage targets without human input, but always function under an operator’s supervision. This means that should a system like the SGR-A1 engage the wrong targets, a nearby operator can disable it using either a “soft-kill” or “hard-kill” option. A “soft-kill” option relies on communications, either wired or wireless, between a remote position and the robot; should something go awry, the operator sends a kill signal that terminates the robot’s activity. A “hard-kill” option is a hardware-level access point on the machine itself that an operator can use to manually turn it off.
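The two disabling paths, and why both are needed, can be sketched abstractly. Again, this is an illustrative sketch only; the class and method names are hypothetical and do not describe any real system’s interface.

```python
class SentryRobot:
    """Hypothetical model of a supervised autonomous sentry."""

    def __init__(self):
        self.active = True
        self.comms_online = True  # wired or wireless link to the operator

    def soft_kill(self):
        """Remote kill signal sent over the communications link.

        Fails if the link is down, which is why a hardware-level
        fallback is still required.
        """
        if not self.comms_online:
            return False
        self.active = False
        return True

    def hard_kill(self):
        """Physical shutoff at the machine itself; works regardless of
        the state of the communications link."""
        self.active = False
        return True

# If the link is jammed or severed, only the hard-kill option works.
robot = SentryRobot()
robot.comms_online = False
assert robot.soft_kill() is False and robot.active
assert robot.hard_kill() is True and not robot.active
```

The design point the sketch makes is that the soft-kill path depends on a communications channel that an adversary could degrade, so a hard-kill access point on the machine itself is the necessary backstop.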
If a system like the SGR-A1 did mistakenly target non-combatants, then, the operator could disable it using one of these options. Tragically, that might not save the first non-combatants engaged by the machine, as the operator would likely be unable to foresee the wrongful targeting and preemptively terminate the engagement. It would, however, prevent mass wrongful casualties, since the operator could disable the machine after the initial wrongful engagement.
In order to avoid the tragedy of losing even a few innocent lives to bad targeting by a “human on the loop” system, militaries must ensure that “human on the loop” systems are deployed in controlled environments. Here again, the SGR-A1 case is instructive. Today, the DMZ is so heavily fortified that there are no civilians in it and most North Korean refugees must travel through China, Laos, and Thailand to get around it. This makes the DMZ a controlled environment where any individuals that can physically enter the SGR-A1’s targeting range are reasonably presumed to be combatants. For this very reason, Samsung engineers programmed the SGR-A1 to identify any human in the DMZ as an enemy.
To be sure, this programming decision cannot be made at all borders because not all borders are equally controlled environments. For instance, a “human on the loop” weapon would surely do more harm than good at the U.S.-Mexican border, where the vast majority of border violators are not military threats. But along borders where civilians do not travel or can be prevented from traveling—where a controlled environment can reasonably be established—“human on the loop” systems can be used to deter incursions and thereby raise the threshold for armed conflict while posing minimal threat to non-combatants.
Differentiating between defensive and offensive LAWS
Thus, the SGR-A1 case study shows that defensive LAWS have the potential to promote stability in some cases, rather than inevitably leading to conflict. Moreover, they can do so while posing a minimal threat to non-combatants. Yet robot skeptics are right to raise another concern, which is that opportunists could label autonomous technologies as “defensive” while preparing to use them for offensive purposes. Fortunately, there are things that states wishing to use robots defensively could and should do to demonstrate their defensive intent.
First, they could acknowledge in policy that the defensive role of robots is limited to border defenses. That would mean that defensive robots should be stationary and have limited weapons range so as to be unable to strike deep into neighboring countries’ territory. These limitations signal that the robot will not be used for conquest. Second, states should participate in an international dialogue to clearly define the proper uses of autonomous weapons, agree upon methods to verify that states are using these weapons properly, and designate consequences for violators.
Steps like these would let states signal to the international community that their LAWS are intended for defensive use. To be sure, they would not constrain bad actors who are intent on building offensive LAWS clandestinely. But nor could a universal ban on LAWS, because such actors do not care about international agreements that run contrary to their perceived self-interest.
Robots can prevent war
The robotic age is coming, and citizens all over the world are rightly concerned about how autonomous systems will be used for military purposes. But activists who argue that LAWS will lower the threshold for armed conflict wrongly ignore the stabilizing effect that systems like the SGR-A1 can bring to some conflict zones.
Should a universal ban on LAWS rooted in this misguided belief succeed, it would needlessly deny South Korean leaders and others in similar situations an invaluable tool for promoting peace through deterrence. That would be tragic, as the suffering caused by any war that broke out would be felt most by these nations’ citizens. Activists should therefore stop treating LAWS in blanket terms. Instead, they should try to envision an international regulatory regime for LAWS that promotes rather than undercuts the peacemaking potential of autonomous defenses.
Alexander Velez-Green is a Joseph S. Nye, Jr. Research Intern in the 20YY Warfare Initiative at the Center for a New American Security. His research focuses on the impact of autonomous weapon systems on international strategic stability. Mr. Velez-Green graduated from Harvard College where he focused on Middle Eastern politics and the challenges of modern warfare.