A Typology for Evaluating Active Cyber Defenses

By Paul Rosenzweig
Monday, April 15, 2013, 3:00 PM

As readers of this blog know, many in the US have begun to debate the legal and policy questions surrounding private sector “hack back,” also sometimes known as “active defenses.”  Of course to some of us these defensive measures look an awful lot like “offense” but the actual labels don’t really matter.  What does matter (or at least ought to, in my judgment) is a) the actual nature of the actions undertaken – in other words their practical effect; and b) their legal effect or consequence.  The label is irrelevant.

To that end the ABA Standing Committee on Law and National Security had a meeting the other day to consider “Comprehensive Cyber Defenses.”  The discussion was under the Chatham House Rule, so I can’t quote the attendees – but for me the discussion that day was eye-opening.  It highlighted both the current paucity of understanding about the practical effects we are talking about and the dearth of legal analysis.  We had both technologists and lawyers in the room and they were, to a large degree, talking past each other.  They lacked a common understanding of the technical capabilities and of the legal framework.  Worse yet, to put it concisely, the meeting made it clear to me that we are obsessed with the hard cases and that, if we unpack the question a bit, we will find a large swath of areas where agreement is widespread.  We will also, I think, readily identify boundary issues where law and policy have a role to play.

Here’s an example of what I mean.  At the meeting there was general agreement that, as a matter of policy (and probably of ethics and/or morality), actions taken by a private sector actor on its own network were likely to be more privileged and allowable than those it took outside its own network.  There was even general agreement that most (some said all) such internal actions were likely legal under existing law – though this was a matter of more dispute.

But there was no agreement at all about where the boundary between in- and out-of-network was or how it was defined – either as a technical matter or as a matter of law.  Some spoke, for example, of the case in which a malicious actor uses a Virtual Private Network (or VPN) to connect to a private actor’s network.  If the VPN connection is long-standing and persistent does the node where the malicious actor is become, in effect, a part of the network to which it is connected?  Would it matter if the malicious actor complied with technical protocols in making the connection (as for example, if he used a stolen password and “clicked through” an advice of rights screen)?  Would that be different in a meaningful way from the case where he achieved access by circumventing technical restrictions? 

To be honest, at this point I don’t have the answer to those questions.  And in truth I’m not even sure there is a canonically “right” answer – either as a technical matter or as a matter of policy.  But I am sure that there are some easy cases on either side of the line, and I’m sure that the boundary issue of drawing the line is likely one of technical and legal significance.  So it seems to me that the legal project at hand is to begin constructing a typology of active defensive measures so that we identify all of those types of boundary issues and that we advance the discussion by recognizing that the in-network/out-of-network distinction (for example) is one worth considering.

Another axis of consideration that developed over the course of the meeting was the clear idea that there were practical and technical differences among the various types of counter-actions that a private sector actor might undertake.  Here we can consider a wide range of tactics, of course, with many technical variables.

On the other hand, law works by categories.  The answer “it depends” is not an effective response to questions about the legality of conduct.  And so I was reasonably pleased when one of the techies in the room suggested a three-part categorization of technical responses:

  • Acts of attribution that are intended to identify who the malicious intruder is.  These could include authentication rules and/or intelligence collection techniques, much as the Mandiant report and a follow-on by Luxembourg security specialist Paul Rascagnères did with the Chinese APT1 group.
  • Acts of prevention that block the malicious actor from achieving his goal, either by preventing his intrusion in the first instance; preventing the intrusion from executing effectively; or, perhaps, by rendering the intruder incapable of exploitation, say by seeding your own data with traps.
  • Acts of response (or even retribution) that might seek to impose adverse consequences on the malicious intruder that are more than “mere preventive” measures.

Again, these three categories are not necessarily comprehensive.  They may be the wrong categories in the first place.  And they certainly will bleed into each other at the margins.  But if we accept the categories provisionally then, again, we see that law can play a role in developing some boundary definitions between the categories that serves to sort potential counter-measures into identifiable categories.

Indeed, taking the two sets of distinctions I’ve just identified (where the action occurs and what the action is) into account we actually can see that they create six pretty well-defined categories of action.  We might graphically represent it like this:

                    Attribution    Prevention    Response
  In Network
  Out of Network
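Purely by way of illustration, the six-box grid could be encoded as the cross product of the two distinctions.  This is a sketch of my own, not any established framework, and all the names in it are hypothetical:

```python
from enum import Enum
from itertools import product

class Location(Enum):
    IN_NETWORK = "in network"
    OUT_OF_NETWORK = "out of network"

class ActionType(Enum):
    ATTRIBUTION = "attribution"
    PREVENTION = "prevention"
    RESPONSE = "response"

# The six cells of the typology: every pairing of where the
# action occurs with what the action is.
typology = list(product(Location, ActionType))

for location, action in typology:
    print(f"{location.value} / {action.value}")
```

The point of the encoding is simply that each cell is a distinct candidate for its own legal rule.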

And that, I think, is actually the most useful value in creating some sort of typology for active cyber defenses.  We can readily imagine that rules for “attribution in-network” would be more lenient in permitting private sector self-help than “response out-of-network.”  Indeed, that probably is the law right now – but nobody is quite sure exactly what the law is in either case.

Having thought about this some more and discussed it, it seems to me that there are two further refinements that you could make to this rough typology, both of which might be “layered” on top of this 6-box structure, if you will.

The first is the question of adverse effect.  In many, though perhaps not all, of our typology boxes we could imagine technical methods that had no effect whatsoever on the malicious actor’s system – in other words, they were pure reconnaissance.  We could also imagine those that had some pretty significant effects.  And our instinct, of course, is that whether there are such effects is a matter of legal significance.  Here, again, defining what constitutes an “effect” is as much a matter of law and policy as it is a technical matter.

And, finally, we might also consider the doctrine of consent.  In law, consent matters.  And sometimes, consent can be implied or imputed and need not be express.  We might therefore ask, for example, questions like whether coming “on” to a network gives implied consent to having effects imposed on you for malicious actions.
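Layering these two refinements on top of the six-box structure, one hypothetical way to describe any given counter-measure is as a tuple of four attributes.  Again, the field names here are mine, chosen purely for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CounterMeasure:
    location: str     # "in network" or "out of network"
    action: str       # "attribution", "prevention", or "response"
    has_effect: bool  # does it affect the other party's system at all?
    consent: str      # "express", "implied", or "none"

# Pure reconnaissance inside one's own network: an easy case.
recon = CounterMeasure("in network", "attribution", False, "none")

# An out-of-network response with real effects on the intruder's
# system and no consent: the hard case the debate fixates on.
hack_back = CounterMeasure("out of network", "response", True, "none")
```

Nothing turns on the particular encoding; the value is in seeing that the same action can land in very different legal territory depending on how each of the four attributes is filled in.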


So, what’s the value of this typology, other than, of course, my intellectual pride in constructing an edifice of categorization?  Let me make explicit in summary what I think the value is:

  • If we get the typology right it helps to identify important definitional questions that law and policy must answer.  We need to define legally what constitutes a network, and we probably need to identify the difference between attribution techniques and prevention techniques.
  • If we get the definitions right, or close to right, a typology then helps us identify the appropriate legal regimes that would apply in various domains.  We can ask a sensible question like “what should be the legal limits of a private sector actor’s off-network attribution efforts that have no appreciable effect?” and mean something that actually asks “is this beaconing technique legal?”
  • And, finally, this makes clear to me that the debate that is starting in the US over active defenses is rushing very, very quickly to decide the hard questions first, without thinking about the easy questions.   At the conference, for example, it was clear that there is even some ambiguity on what actions a private sector actor can take to do attribution inside its own network – a question I think should be relatively easy to answer.   We can leave the harder questions (should we allow active hack back outside of the network to have responsive effect?) for later in the discussion and still make a lot of legal and policy headway.