Plan X -- The Future of Cyberwarfare

By Paul Rosenzweig
Thursday, May 31, 2012, 3:42 PM

The Washington Post had a fascinating story today about "Plan X" -- the program funded by the Defense Advanced Research Projects Agency (DARPA) to jump-start research into new modes and methods of cyberwarfare.  As with most DARPA programs, the research areas are speculative and many won't pan out.  But the list of possible technological developments is fascinating:

  • A real-time digital battlefield map of activity in cyberspace, identifying both friendly and potential enemy nodes of activity;
  • A robust operating system capable of surviving cyberattacks (sort of like an armored vehicle); or
  • An "auto pilot" system for the delivery of cyber attacks that eliminates the need for human intervention.

And those are just the ones mentioned in the article -- I can only imagine the other possibilities. But even at this early stage it is amusing to speculate on some of the implications of this research for legal issues relating to cyber warfare.  Consider just a few:

First, one of the more significant limitations on the lawful exercise of force in cyberspace has been our inability to systematically effectuate the principles of distinction and proportionality.  Which is to say that the domain is still sufficiently ambiguous in nature that we cannot assure ourselves with great confidence that we are targeting military systems and taking reasonable steps to limit collateral damage to non-combatant populations.  A real-time, accurate map of the digital battlefield would change that dynamic in ways that foster the use of cyber force in a manner consistent with the laws of armed conflict.  Perhaps even more importantly, such a map might greatly reduce the likelihood of misidentification and curtail the possibility of friendly-fire incidents in cyberspace.  This is fundamentally a good thing.

On the other hand, it is likely that the battlefield map will make clear what we already assume to be true -- that any successful cyberattack will, necessarily, transit through routers and servers in neutral countries.  Some say that this sort of attack is like armed overflight and a violation of neutrality principles.  Others argue that it is more like radio wave or telegraph transmissions which are entitled to a right of transit even in wartime.  A battlefield map will bring this legal dispute into stark relief and demand resolution.

Second, the DARPA proposals also highlight critical aspects of command and control that will need to be addressed.  The push to find automated systems to fight cyber wars is one that we should approach with real caution, I think.  The issue is how to maintain effective human control and set the right default options. For example, we structure some systems so that a human must unlock the weapon before it can be used (as with nuclear weapons systems), while we run other systems automatically, with human intervention required only to override them (as with, say, a nuclear power reactor).   Making the right choice can have real-world consequences -- for example, the problems associated with automated responses were demonstrated, in prosaic fashion, a couple of years ago when automated trading rules caused a roughly 1,000-point decline in the Dow Jones Industrial Average in less than 10 minutes of trading on the New York Stock Exchange.
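The distinction between the two default regimes described above -- "locked until a human approves" versus "runs until a human vetoes" -- can be made concrete with a minimal sketch.  This is purely illustrative; the mode names and the `authorize` function are my own invented shorthand, not anything drawn from the DARPA program:

```python
from enum import Enum

class ControlMode(Enum):
    # "Nuclear weapon" style: the default is locked; nothing happens
    # without an affirmative human decision.
    HUMAN_UNLOCKS = "human_unlocks"
    # "Power reactor" style: the system acts on its own; a human must
    # intervene to stop it.
    AUTO_WITH_HUMAN_OVERRIDE = "auto_with_human_override"

def authorize(mode: ControlMode,
              human_approved: bool = False,
              human_vetoed: bool = False) -> bool:
    """Return True if the automated action may proceed under this mode."""
    if mode is ControlMode.HUMAN_UNLOCKS:
        # Default deny: only an explicit human approval unlocks the action.
        return human_approved
    # Default allow: the action proceeds unless a human has overridden it.
    return not human_vetoed
```

The policy question is which mode should be the default for a given cyber capability.  In the first mode, inaction is the failure state (nothing fires if the human is absent); in the second, action is the failure state (the system keeps running if the human is absent) -- which is exactly why an unattended "auto pilot" for cyber attacks deserves caution.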

Getting the balance right is essential.  As the DARPA request recognizes, our organizational structures and processes have not yet matured sufficiently in the cyber domain to understand this distinction, much less to enable the implementation of policies that maximize the sustainment of human control at senior policy levels.  The governing rule should be, wherever possible, to “go slow” and permit human control.  We have already seen how easy it is for automated systems to create a “flash crash”; we want to make sure that they don’t start a “flash war.”  What we can hope for from the DARPA research is an identification of those situations where a rapid response is deemed essential, along with the default policies that must be implemented without human intervention.