“Lawful hacking” is an interesting and potentially very useful future path for law enforcement and the intelligence community. But lawyers and policymakers rushing to address potential problems are getting ahead of the technology. This is true for the Vulnerabilities Equities Process, for the legal difficulties surrounding the Playpen NIT prosecutions, for many national security questions about the “domesticity” of particular packets in Internet surveillance and signals intelligence, and for the increasingly shaky legs of the third-party doctrine.
Applying laws and policy to new technical contexts requires making some broad categorizations. To construct a regime that is workable for non-technologists, generalizations become necessary: this is an exploit, this other thing is a vulnerability or a trojan or a “network investigative technique.” The problem is that these are terribly inaccurate metaphors. They seem useful in individual cases for understanding the basic issues without needing to think about how technical experts would grapple with the implications and subtleties of the code and situations at hand. But the trap is that they allow lawyers and judges and policymakers to say “Great! A vulnerability is this thing, a NIT is that thing; let’s go make some rules!”
The inevitable result is that helpful, imprecise analogies are converted into pretend scientific terms, with fixed meanings. And the proposed general frameworks end up being based on fictions—further confusing the already complex debate.
Take the “Network Investigative Technique.” From a technical perspective, there is no such thing. It’s a term the FBI made up, presumably because “FBI memory-resident malware” had a bad ring to it in court. Non-technically speaking, there is some intuitive sense to the idea that what federal law enforcement does pursuant to a court order is meaningfully different from what an ordinary criminal or an adversary nation-state does when it does the same thing. But technically speaking, there is no difference at all. And offering NITs up as their own distinct technological category leads to definitional problems.
As with “QUANTUM” before it, what are the boundaries of the NIT? Is it the trojan only? Is it the trojan plus the exploit plus the launcher scripts on the Playpen website, plus the methodology of the FBI operators running it? Keep in mind that the “NITs” deployed today are extremely small and simplistic versions of what the FBI will need in the future. We know the trajectory, because the FBI is now climbing the same technology tree that the NSA and legions of penetration testers have climbed before. So the technological distinctions will grow more complex, not less.
Take the question of the cryptography used in the Playpen NIT. It is already rather complex for a court to draw lines around which part of the FBI exploit or trojan—let’s call it by its real name—might have modified a computer in a way relevant to the defense’s theory of the case. And of course, limiting some types of disclosure is necessary for these kinds of operations to be sustainable for the FBI—future platforms will be hardened against whatever FBI techniques end up revealed in court.
As a technical observation, the Playpen operation appears rushed, as evidenced by the government’s brush-off defense for sending network data back in cleartext. As I’ve noted in the past, most organizations begin without any cryptography built into their hacking tools and then evolve toward more advanced tools that do include those protections. In that particular instance, sending the data back in cleartext was likely fine. But moving forward, the FBI will have to adopt Message Authentication Codes to defend against the at-least-theoretically-possible random errors in network traffic (see this paper for a demonstration of the kinds of corruption the FBI might want to be aware of and protect against).
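To make the MAC point concrete, here is a minimal sketch of how a tool could tag data sent over the network so the receiving end can detect in-transit corruption. This is an illustration only, not the FBI's actual tooling; the key handling and payload contents are invented for the example.

```python
import hashlib
import hmac
import os


def seal(key: bytes, payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can detect corruption or tampering."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag


def open_sealed(key: bytes, blob: bytes) -> bytes:
    """Verify the 32-byte tag in constant time; raise if the payload was altered."""
    payload, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: payload corrupted or forged")
    return payload


key = os.urandom(32)  # shared between the sender and the collection server
blob = seal(key, b"host=10.0.0.5 status=observed")
assert open_sealed(key, blob) == b"host=10.0.0.5 status=observed"

# A single flipped bit -- the kind of random network error at issue -- is caught:
corrupted = bytearray(blob)
corrupted[0] ^= 0x01
try:
    open_sealed(key, bytes(corrupted))
except ValueError:
    pass  # corruption detected, payload rejected
```

The constant-time comparison (`hmac.compare_digest`) matters because a naive `==` check can leak timing information to an active attacker; the MAC catches both accidental bit flips and deliberate modification.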
In order to address the “Going Dark” issue, the justice system is going to have to get sophisticated about how it deals with lawful hacking. This may mean a move towards allowing law enforcement to touch the disk with a cryptographic flag—thereby proving a particular machine was observed doing particular behaviors—and to take pictures from the built-in camera to prove who was in front of the machine, and examine nearby machines and wireless devices to prove geo-location. We all know as a community where this activity is headed, but even in the far simpler and limited capability involved in the current case, we can’t agree on the proper legal boundaries!
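As one hypothetical illustration of the “cryptographic flag” idea, the sketch below uses a simple hash commitment: investigators write a commitment value to the observed disk and retain the secret record that opens it, so they can later demonstrate that this particular machine was flagged at a particular time. Every name and field here is invented for illustration, and a real-world scheme would almost certainly use digital signatures and trusted timestamping rather than this bare-bones construction.

```python
import hashlib
import json
import secrets
import time


def plant_flag(machine_id: str) -> tuple[str, dict]:
    """Produce a commitment to write on the target disk, plus the secret
    record investigators retain to open the commitment later in court."""
    record = {
        "machine_id": machine_id,
        "observed_at": int(time.time()),
        "nonce": secrets.token_hex(16),  # prevents anyone guessing the preimage
    }
    preimage = json.dumps(record, sort_keys=True).encode()
    flag = hashlib.sha256(preimage).hexdigest()
    return flag, record


def verify_flag(flag: str, record: dict) -> bool:
    """Re-derive the commitment from the revealed record and compare."""
    preimage = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(preimage).hexdigest() == flag


# Hypothetical machine name, purely for the example:
flag, record = plant_flag("WIN-SUSPECT-01")
assert verify_flag(flag, record)
# The commitment binds to the specific machine; a different ID fails:
assert not verify_flag(flag, {**record, "machine_id": "OTHER-BOX"})
```

The commitment binds the flag to one machine and one moment without revealing anything until investigators choose to open it, which is roughly the evidentiary property the paragraph above describes.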
This issue is hardly limited to specific court cases, where at least the facts anchor the discussion. The phenomenon of adding more noise than signal is now practically characteristic of misguided policy proposals in this area.
The policy community is often wrapped around the smallest issues in this space, especially regarding unpatched vulnerabilities, or “0days.” I’m aware of the heady intoxication of speculating about how 0days work, but the statements currently being offered as facts in policy papers are simply not possible to make without extensively analyzing classified data. For example, the EFF recently published a blog post stating, “The problem is that if a vulnerability has been discovered, it is likely that other actors will also find out about it, meaning the same vulnerability may be exploited by malicious third parties, ranging from nation-state adversaries to simple thieves.” This is a fundamental misstatement of how 0days work in the real world—in reality, the vulnerabilities used by the US government are almost never discovered or used by anyone else—and these falsehoods further confuse the conversation about 0day policy solutions.
Or consider the recent Belfer Center proposal for the Vulnerabilities Equities Process. The simplicity of proposals structured around vaguely defined “vulnerabilities” may be appealing to a non-technical policy community. But this kind of framework ignores the fact that techniques are actually far more difficult to replace than exploits (and are more likely to implicate sensitive intelligence operations).
This is a sign that we need to slow our roll. It is okay to live with a little discomfort as we make fact-specific determinations in individual contexts. That is where basic categories might actually be useful in reaching conclusions. But the technology and the investigative process are not yet ripe for broad frameworks. We should not be forcing executive action from the President on the Vulnerabilities Equities Process or determining a long-term plan for how the FBI should be allowed to hack suspects. We just don’t have the data necessary for larger decisions on these issues, and policy that precedes data is apt to get it disastrously wrong. The only way to obtain the necessary information is to proceed carefully, step by step, committing to specific, fact-based analysis rather than empty process bromides. Bad policy is far worse than no policy at all.