Representative Tom Graves (R-GA) recently released a discussion draft of a bill that would create a defense to liability under the Computer Fraud and Abuse Act (CFAA) (18 USC 1030) for “active cyber defense measures,” with an explicit request for feedback from any “interested parties.” I’m taking up that invitation as I think the general goal is very worthy, yet the draft illustrates that it is really hard to frame the precise language needed to obtain greater legal space for active defense while still preserving reasonable—and reasonably clear—boundaries. [Note: “Active defense” is a phrase of contested scope, but the general idea is that when someone has hacked into your system, there are steps the victim might take (or might hire someone to take) that help identify or even disrupt that unauthorized access (including, perhaps, steps that take place outside your system, giving rise to the phrase “hacking back”). If you’d like a primer, I strongly recommend the report issued recently by a task force convened by the Center for Cyber and Homeland Security at George Washington University (under the leadership of GW’s Frank Cilluffo and Christian Beckner and with financial support from the William and Flora Hewlett Foundation and the Smith Richardson Foundation). I should disclose that I was a member of that task force, though having typed those words I now also feel obliged to confess that I was a less-than-active participant in the task force’s affairs. At any rate, let’s get back to the bill...]
Why is such a bill needed?
Some entities have the practical ability (or can hire someone who does) to engage in rapid and effective self-help when they detect an intrusion, including the ability to identify the source of the intrusion, to disrupt the intrusion or attack, and perhaps even to locate and destroy exfiltrated data. Bearing in mind how long it might take to get government authorities to do these things, the idea of allowing victims to help themselves is quite appealing. But if such measures require that you go outside your own network—and more specifically, if you must enter another person’s system without authorization—then you would appear to face criminal liability under the CFAA. To be sure, you might mount an effective self-defense argument by analogy to various tangible crime scenarios, but it’s quite hard to know whether and to what extent such a defense would hold. The point of the bill Representative Graves has in mind is to provide statutory certainty that a cognizable defense exists under the right conditions, and thereby reduce the legal disincentive to engage in certain forms of active defense.
Sounds like a good idea. What’s the catch?
The catch is that it is hard to open the door wide enough to make a genuine difference for victims, without opening the door to a host of unintended problems under two big headings: mistaken attribution and unintended collateral impacts. Put more directly, it is not hard to see how the more aggressive forms of active defense might result in harms to innocent parties. Some amount of risk along those lines may be worth it, depending on the benefits also obtained; it’s just awfully hard to know for sure. And thus I recommend that any legislative intervention should include some form of data-gathering and oversight mechanism regarding its use in the field, as well as a sunset clause in order to force further deliberation informed by actual experience after a year or two.
So let’s talk about the draft “Active Cyber Defense Certainty Act.” Can we call that ACDC, by the way?
I admit I do love this acronym. So many puns present themselves!
What exactly would ACDC change about the CFAA?
In its operative part, ACDC just provides that “[i]t is a defense to a prosecution under [the CFAA] that the conduct constituting the offense was an active cyber defense measure.”
Who can invoke this protection?
Not just anyone. ACDC uses the word “victim” as a term of art to define the set of persons who can invoke the protections of the statute, and the statute defines that term as follows:
the term ‘victim’ means an entity that is a victim of a persistent unauthorized intrusion of the individual entity’s computer. [emphasis added]
Why did you highlight the word “persistent”?
The word “persistent” seems to be intended to prevent invocation of ACDC by someone who has experienced only a fleeting intrusion, presumably on the theory that fleeting equals insignificant. This is sensible insofar as actually using the word “significant” as the test would be much too vague. Yet “persistent” is not exactly a precise term, either. It could refer to dwell-time in relation to a particular intrusion, or to a series of intrusions by the (apparently) same actor, or some combination of both. But how much is enough to count as “persistent”?
It’s hard to say how tightly this element ought to be calibrated. One can imagine relatively brief yet highly significant intrusions, just as there might be long-running intrusions that have as yet resulted in little or no hostile action.
The uncertainty—and the difficulty of resolving it—is enough to raise the question whether it is worth the candle to screen out insignificant intrusions in this manner. I’m not yet sure what I think.
Why did you highlight the word “intrusion”?
The use of this particular word suggests to me that ACDC is not meant to greenlight hackback by those suffering from a DDoS attack, for a DDoS may flood your system but it does not penetrate it and hence does not seem to qualify as an “intrusion.” This matters insofar as the DDoS scenario is one that might well tempt some victims to hack back—and also one in which hacking back might entail a substantial impact on third parties. Probably smart to exclude it for the latter reason, but I thought it worth flagging this point simply to ensure that this is a purposeful exclusion.
Let’s assume we have a workable definition of “victim” and thus we know more or less who qualifies to act under ACDC. What exactly are the “active cyber defense measures” that ACDC permits them to use?
ACDC answers this question in two stages. First it describes what counts as a permissible active cyber defense measure in general, and then it goes on to carve out a few exceptions meant to limit harm that might be caused by such measures.
Both parts raise some difficult issues, but let’s look first at the affirmative definition of qualifying active cyber defense measures. The statute describes these as an action:
(I) undertaken by, or at the direction of, a victim; and
(II) consisting of accessing without authorization the computer of the attacker to the victim’s own network
to gather information in order to establish attribution of criminal activity to share with law enforcement or
to disrupt continued unauthorized activity against the victim’s own network…. [emphasis and indentation added]
It seems to me that part (I) of this definition is fine as is, but that there are a few tricky elements in the portions of part (II) that I highlighted.
1. More than one computer in the attack chain: ACDC blesses hacking back into “the computer of the attacker.” Here, it is important to bear in mind that there may well be a chain of computers involved in the original intrusion, and reaching the attacker itself may require working back through that chain. This language is probably best read as authorizing unauthorized access of all links in the chain, but if that is the intent, it would be wise to spell it out explicitly.
2. Sharing attribution data with law enforcement: one of the two authorized purposes under ACDC is to gather attribution data “to share with law enforcement.” I recommend expanding that to include the phrase “…or other U.S. government entities with responsibility for cybersecurity or intelligence functions.” Or something like that. The point is to make sure the attribution data can be shared with more than just the FBI. Unless, of course, the point of this qualifier is to avoid alarming those who fear that this somehow will become a font of data flowing to NSA. At any rate, the question of who should receive the attribution data is an important one. Perhaps the best answer is that there should be a specific designated federal recipient, which can then determine with whom else to share it. Separately, the statute should make clear whether the victim must actually follow through and share the data it gathers, or whether it suffices merely to have that intent at the time of the measure.
OK, now let’s assume we have a good working default definition. What was that about exceptions to otherwise-qualifying active cyber defense measures?
ACDC states that an otherwise-qualifying active cyber defense measure is excluded from the statute’s coverage if it:
(I) destroys the information stored on [another’s] computer[…];
(II) causes physical injury to another person; or
(III) creates a threat to the public health or safety…. [emphasis added]
This is a critical part of the framework, since one of the big objections to hackback is the risk that the victim’s self-help measure will produce disproportionate or collateral harms. Having exceptions for particularly problematic consequences is certainly a good start. But this section needs a lot of work.
Is this supposed to be a strict-liability regime? I know we are talking about the availability of a criminal law defense and not tort liability, but you get the point. This looks like a rule that turns entirely on consequences, without regard for intent. The draft does not speak of any particular mens rea, but instead simply removes the protections of the statute should any of these three consequences result from an active cyber defense measure. That’s a lot of risk to take (at least vis-à-vis the “destroys the information” exception) given a high-stakes, high-speed, high-complexity, and high-uncertainty environment.
Speaking of the destroying-information exception, is that the right place to draw the line? The “destroys” condition probably would extend not only to someone who deletes information but also to someone who alters information on another’s network. But it would not seem to extend to someone who decides to encrypt-and-ransom the data. Easy to imagine an aggressive victim taking that approach, and I don’t think it would be covered by this exception. Perhaps it should not be—perhaps we are OK with that type of self-help—but this seems a point worth pondering very carefully.
What about the exception for physical injury? Good to exclude that, certainly, but don’t be too quick to dismiss the prospect of highly-problematic injuries of other kinds. Financial harm, for one. Or how about severe embarrassment, via doxing or other public dissemination of sensitive information? Again, the point is to ensure we have thorough deliberation on these points.
And the exception for threats to public health or safety? A very good idea in principle, but I do have a concern about vagueness here, and wonder if it would be better to spell out in far greater detail what might be covered. Also note that “threat” in this context seems a question of degree, and the draft does not help us pin down the precise degree of risk that crosses the line.
So is this a useful start, at least?
Certainly so! Kudos to Representative Graves for putting this discussion draft out there. I hope these comments are helpful. ACDC 2.0 awaits…