At Motherboard, Joseph Cox reports that defense attorneys representing dozens of defendants nabbed in an FBI child pornography sting have pooled their resources in a “national working group.” A Network Investigative Technique (NIT) conducted on the child pornography website Playpen resulted in the arrest of well over 100 perpetrators, many of whom are now arguing their cases in court. A few, though not all, defendants have prevailed on motions to suppress all evidence resulting from the NIT, and the defense working group now seeks to convince courts nationwide to follow suit. The defense arguments relate to complex legal and technological matters which, if accepted by other courts, could have broad negative implications for the future of “lawful hacking.”
Beyond the consequences to wide-ranging law enforcement and national security interests, Operation Pacifier has led to “137 criminal cases, the identification of at least 35 ‘hands on’ child sexual offenders, and at least 17 individuals who have been determined to be producers of child pornography.” More importantly, it “led to the identification or recovery from abuse of at least 26 child victims.” In other words, the stakes here are extraordinarily high, and it is important that judges get the answer right.
As is common in adversarial proceedings, expert testimony regarding highly technical matters is being deployed in these cases, intentionally or inadvertently, to effectively deceive and confuse the court regarding the function of information technology and how that technology plausibly interacts with the facts. Because of the broad consequences, additional clarity is required.
Lawfare readers may be wondering what exactly a set of child pornography cases has to do with “hard national security choices.” The answer is, it turns out, a whole lot.
In the “Going Dark” debate, lawful hacking is often posited as an alternative to encryption regulation. Rather than introducing new vulnerabilities—in the form of lawful access mechanisms—proponents suggest that law enforcement instead exploit existing vulnerabilities to resolve investigatory challenges created by various forms of disk encryption and internet anonymization. This proposal potentially offers an attractive solution to Going Dark challenges, one which could theoretically satisfy equities on both sides of the debate. But a number of practical and legal obstacles need to be resolved in order for a “lawful hacking” regime to address the practical challenge—the subject of a forthcoming paper by one of us—and this set of cases raises a number of particularly important issues, one of which we discuss in detail here.
We will leave arguments related to the territorial jurisdiction of a magistrate judge under Federal Rule of Criminal Procedure 41 for another day. Here, instead, we turn to a more immediate problem facing courts. The basic question is whether a defendant needs to see the “exploit” of the NIT in order to receive a fair trial. The answer relies on questions of both law and technology. But courts appear to be stymied in untangling the various representations of the defense, prosecution, and their respective experts. Because of the importance of the question, we—as a lawyer and a technologist—have decided to take a stab at figuring it out. (Plus, we always wanted to be expert witnesses but no one’s ever asked.)
Fair warning: we’re going to get in the weeds on both the law and technology. Bear with us, the details matter.
Why was the NIT needed and what did it do?
The website in question, known as Playpen, was a “hidden service,” only reachable through Tor. Hidden services, by default, attempt to hide the locations of both servers and the computers being used to visit the site. The FBI learned through a foreign partner that a website dedicated to the distribution of child sexual abuse materials was located within the United States. While the FBI was able to locate the server, and bring the site under government control, it was still unable to determine the physical location of individuals who were accessing and posting child pornography on the site. Typically, the physical location of a computer can be determined from its IP address. However, Tor passes an individual IP address through a series of intermediary nodes, such that a visitor’s genuine IP address cannot be determined at the ultimate destination website.
The FBI used, in essence, a court authorized hacking method to circumvent the operation of Tor to determine genuine IP addresses. After obtaining a warrant—the subject of a distinct controversy not addressed here—the FBI operated the site for two weeks, during which it deployed the NIT to learn the location of any users who logged in and accessed particular pages hosting contraband child pornography.
How does an NIT work?
The NIT consists of a number of distinct components.
- A “generator” which runs on the hidden service.
- An “exploit” which, when transmitted from the hidden service to the visitor’s computer, enables running the FBI’s code on the visitor’s system.
- The “payload” which the exploit fetches, runs on the visitor’s system, and conducts the actual search, transmitting the information discovered to an FBI server.
- A “logging server,” a system run by the FBI that records the information transmitted by the payload.
The primary role of the generator, which runs on the hidden service itself, is to generate a unique and random ID number (a “nonce” in technical terms), associate the ID with a logged-in user of the site, and then transmit the exploit, the ID, and the payload to the user’s computer. This makes it possible to associate an individual user of the site throughout the site’s logs with a particular NIT execution.
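To make the generator’s role concrete, here is a minimal illustrative sketch, in Python, of issuing a unique random ID (nonce) and recording which logged-in user received it. This is purely our own construction for explanatory purposes; the FBI’s actual generator code is not public.

```python
import secrets

def issue_nit_id(generator_log, username):
    """Issue a fresh random 128-bit ID (a nonce) and record which
    logged-in user received it. Illustrative sketch only."""
    nit_id = secrets.token_hex(16)
    while nit_id in generator_log:      # collisions are astronomically rare,
        nit_id = secrets.token_hex(16)  # but a careful generator checks anyway
    generator_log[nit_id] = username
    return nit_id

log = {}
first = issue_nit_id(log, "user_a")
second = issue_nit_id(log, "user_b")
assert first != second and log[first] == "user_a"
```

Because each ID is unique, the generator’s log alone suffices to tie any later NIT response back to exactly one logged-in user.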
The exploit takes control of the Tor browser used by the visitor and uses that control to load and execute the payload. A helpful analogy is that the exploit opens a window in the owner’s house that the owner believed was locked but which can be removed from the frame. The exploit removes the window and lets in the payload to conduct the search. Knowledge of how the exploit works is the most sensitive part of an NIT—public disclosure not only risks losing the opportunity to use the technique against other offenders but would also permit criminals or authoritarian governments to use it for illicit purposes until a patch is developed and deployed. This is the component the government refuses to disclose in the instant cases.
The payload is the program which conducts the actual search on the visitor’s computer. In theory, a payload could search for anything. In this case, the payload searched for the items authorized in the search warrant: the computer’s MAC address (a unique identifier associated with the computer’s network card), the username of the current user, the computer’s name, and other related information. The payload then transmits that information, as well as the ID, to the logging service over the unencrypted Internet. In the process of transmitting this information the logging service also sees the public IP address of the visitor’s computer (something that a computer doesn’t generally know). This identifies the site visitor’s network connection and creates a record of the computer used to visit the site.
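For illustration only, the categories of system identifiers described above can be gathered with ordinary operating-system queries. This Python sketch reflects our own assumption about the kind of data involved; it is not the actual payload, and the field names are hypothetical.

```python
import os
import socket
import uuid

def collect_warrant_scoped_fields():
    """Gather the categories of identifiers described in the warrant:
    MAC address, hostname, and current username (illustrative only)."""
    mac_int = uuid.getnode()  # the network card's unique hardware identifier
    return {
        "mac": ":".join(f"{(mac_int >> s) & 0xff:02x}" for s in range(40, -8, -8)),
        "hostname": socket.gethostname(),
        "username": os.environ.get("USER") or os.environ.get("USERNAME", "unknown"),
    }

info = collect_warrant_scoped_fields()
assert set(info) == {"mac", "hostname", "username"}
```

Note that the visitor’s public IP address does not appear in this collection step: as the article explains, the logging service observes it as a side effect of receiving the transmission.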
The logging service, running on a separate FBI computer, receives the NIT response. This system may perform other bookkeeping functionality but the key component is the “packet capture” portion which produces “pcap” files. A pcap file records all network traffic transmitted over the network between the NIT payload and the logging service. Since the NIT transmits data without encryption, the pcap file records all the information seized by the NIT and transmitted to the logging service.
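The pcap format itself is simple and well documented. A minimal sketch like the following (our own illustration, not the FBI’s logging code) shows that the raw bytes of every unencrypted transmission are recoverable verbatim from a capture file:

```python
import io
import struct

# Classic pcap global header: magic, version major/minor, tz offset,
# timestamp accuracy, snapshot length, link-layer type (24 bytes total).
GLOBAL_HDR = struct.Struct("<IHHiIII")
PKT_HDR = struct.Struct("<IIII")  # ts_sec, ts_usec, captured len, original len

def write_pcap(stream, packets):
    """Write a minimal little-endian pcap file from (ts_sec, raw_bytes) pairs."""
    stream.write(GLOBAL_HDR.pack(0xA1B2C3D4, 2, 4, 0, 0, 65535, 1))
    for ts_sec, data in packets:
        stream.write(PKT_HDR.pack(ts_sec, 0, len(data), len(data)))
        stream.write(data)

def read_pcap(stream):
    """Read back every packet; plaintext contents appear verbatim."""
    magic = GLOBAL_HDR.unpack(stream.read(GLOBAL_HDR.size))[0]
    assert magic == 0xA1B2C3D4
    packets = []
    while True:
        hdr = stream.read(PKT_HDR.size)
        if len(hdr) < PKT_HDR.size:
            return packets
        ts_sec, _, incl_len, _ = PKT_HDR.unpack(hdr)
        packets.append((ts_sec, stream.read(incl_len)))

buf = io.BytesIO()
write_pcap(buf, [(0, b"MAC=00:11:22:33:44:55;user=jdoe")])
buf.seek(0)
assert read_pcap(buf) == [(0, b"MAC=00:11:22:33:44:55;user=jdoe")]
```

Because the NIT’s responses were sent in the clear, every field the payload seized sits in the capture in exactly this directly readable form.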
Using the unmasked IP addresses, the FBI served subpoenas on internet service providers to learn the names and house addresses of users. The FBI then obtained a search warrant for the premises based on probable cause that the unmasked IP address had been used to access contraband child sexual abuse images and that criminal evidence likely existed at the designated address. The search warrants authorized federal agents to look for evidence of child pornography and to seize the physical computer which executed the NIT. The seizure of the computer is technically relevant because the NIT recorded information that identified the particular computer, and therefore a match demonstrates that an individual visited Playpen using the actual computer in question rather than an unknown third party using the suspect’s network connection without his knowledge.
The Problem of Disclose or Dismiss
As discussed above, there are now at least 137 defendants facing charges based on evidence obtained by the NIT itself and also evidence that the FBI discovered during the subsequent physical search—authorized pursuant to NIT-based probable cause—related to the possession of child pornography and hands-on abuse of child victims. Many of those defendants now seek to capitalize on a significant unanswered question regarding lawful hacking in order to walk on very serious charges. And for some, it’s working.
In a nutshell, the fusion of law and technology is this: the court agrees that, under the law, the exploit is privileged and cannot be disclosed safely, even under a protective order. Nevertheless, if the defendant can demonstrate that the technical information in the exploit code is material to his defense and essential to a fair trial, then the government faces a choice to “disclose or dismiss.”
Here, it is not difficult to see what is going on. Defendants are aware that the government would rather dismiss charges than disclose the exploit, and therefore argue for disclosure in hopes of forcing dismissal. This is a new variant of what, in the context of national security prosecutions, is known as “graymail.” In that context, as a recent law review note puts it, graymail exists when a “potential criminal defendant threatens to expose sensitive classified information if he is prosecuted. This creates a ‘disclose or dismiss’ dilemma, whereby ‘[t]he Government . . . must choose between going forward with the prosecution, thereby compromising the classified material, or safeguarding the material but dropping the prosecution.’” Congress passed the Classified Information Procedures Act (CIPA) to solve that particular problem. But even if this code were covered by CIPA—it may include classified material—the judge must still answer the fundamental question of the defendant’s interests in examining the exploit.
Washington state defendant Jay Michaud successfully pioneered the tactic for the Playpen cases—his lawyer appears to now lead the defense collaboration efforts. Michaud asserted that the code involved in the NIT was material to his defense, and the government agreed to produce a large, though incomplete, portion of the code, as well as the pcap files recording the data transmission. Initially, the government asserted that the NIT consisted only of the “payload” and not information related to other functions, while the defense claimed that the NIT includes all component parts. We agree with the defense that, both legally and technically, the NIT is best understood to consist of a number of component parts, including the exploit code.
However, the government successfully asserted law enforcement privilege over the exploit. Despite the judge’s initial skepticism, following an ex parte hearing, he was convinced that the assertion of law enforcement privilege was proper, and that the government could not safely disclose the exploit to defense experts, even under a protective order.
A number of defense experts question this claim, alleging that surely the government could allow a cleared defense expert to examine the code in a secured facility. Those claims, as well as a significant amount of public commentary, misapprehend the potential equities at stake. The question here is not necessarily just the ability to secure a vulnerability that may or may not facilitate the future ability to unmask Tor users in child pornography investigations (though that would be a very strong interest standing alone). The tools are composed of hundreds or thousands of lines of code, much of which is implicated in highly sensitive law enforcement, military, and intelligence activity. So a compromise to one small part of an exploit could harm a vast array of incredibly important national interests. The question is one of balance, and the ultimate determination is for a judge.
And in the Michaud case, whatever the government showed the judge was utterly convincing on the point. He reversed a previous order requiring disclosure and agreed with the government that no protective order was sufficient to guard the interests at stake.
The Legal Standard for Disclosure
For our purposes, we will simplify the legal analysis, ignore individual circuit precedent, and focus on the basic rights. Under Brady v. Maryland, defendants have a constitutional right, rooted in the Due Process Clause of the 14th Amendment, to any exculpatory evidence possessed by the prosecution. The Supreme Court, however, has held that there is no constitutional right to discovery in a criminal trial and that Brady did not create one. Here, the right at issue is not constitutional—though the defense raises broad constitutional claims—but instead is a procedural right under Federal Rule of Criminal Procedure 16, which entitles a criminal defendant to discovery of information that is “material to the defense.”
Under Roviaro v. U.S., where discoverable information is “relevant and helpful to the defense of an accused, or is essential to a fair determination,” the government’s privilege must give way. Under Roviaro and Jencks v. U.S., if the court determines that discoverable, material information is relevant, helpful, or essential to a fair trial, and the government declines to provide discovery, then the case is dismissed. In Michaud, the court was presented with significant conflicting expert testimony. Balancing the confusing material against the defendant’s basic rights, the judge held that the exploit was material to the defense and excluded all evidence.
We believe the Michaud court erred. And in many pending Playpen cases, courts are being asked to determine whether examining the exploit code has any plausible bearing on the defense’s case. Below is a framework for questions to ask when evaluating NIT components for materiality, along with the application to the available evidence in the Playpen cases.
How to Evaluate a NIT
Below are questions for courts to ask when evaluating an NIT, along with an assessment of whether the exploit code is required for a complete understanding. Unlike other forms of searches and seizures, NITs offer the defense an opportunity to perform a detailed evaluation of the functionality: what the NIT searched for, how it conducted the search, what data was seized, and the chain of custody. Evaluating those different questions requires access to different components, and failures within these particular systems carry different evidentiary consequences.
- 1. Does the ID number uniquely identify a single user of the site?
A key question to ask is whether the generator ensures that every NIT activation uses a different ID. In order to validate this, a defense expert would need access to the generator’s source code or, at a minimum, the log maintained by the generator which shows that every user received a different ID.
If a defect in the NIT were to reuse an ID between one visitor’s invocation and another’s, that would affect evidence mapping a particular NIT invocation to a particular user of the site. Importantly, this kind of defect would in no way undermine the evidence that an individual at the user’s IP address visited Playpen, and it would not undermine evidence that a particular physical computer had visited the site. Therefore, a defective generator would not affect a probable cause determination for issuing a physical search warrant for the target’s computer—someone at that address, using that computer, visited a page within the site which triggered the NIT. However, any evidence that depended on mapping a particular site user to evidence collected during the NIT could theoretically be compromised, which could affect charges that rested on demonstrating that a particular person accessed a specific page within the site—essentially, charges specific to a form of child pornography not also applicable to all other pages where the NIT was deployed.
A defective generator would be visible in the logs, which would show multiple invocations of the NIT with the same ID number. So the absence of a unique-identifier defect is verifiable without seeing the exploit, and the exploit is not necessary to understand a generator defect.
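The audit described above is mechanical. A sketch of such a log check, assuming hypothetical generator-log entries of (ID, username) pairs:

```python
from collections import Counter

def find_reused_ids(log_entries):
    """Return any NIT IDs that appear in more than one log entry,
    which is the visible signature of a defective generator."""
    counts = Counter(nit_id for nit_id, _user in log_entries)
    return sorted(nit_id for nit_id, n in counts.items() if n > 1)

# Hypothetical generator-log entries: (ID, username) pairs.
entries = [("3f9a", "user_a"), ("c214", "user_b"), ("3f9a", "user_c")]
assert find_reused_ids(entries) == ["3f9a"]   # the defect is visible
assert find_reused_ids(entries[:2]) == []     # healthy log: no reuse
```

The point is that a defense expert with the generator log can run exactly this kind of check and conclusively rule the defect in or out without ever seeing the exploit.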
- 2. What did the NIT seize from the defendant’s system and send to the logging service?
This information is recorded in the pcap files on the logging service and is directly verifiable because the NIT transmits the information without encryption. Presenting the defense with the pcap files for all communication between the logging system and the defendant’s IP address enables the defense to validate the scope of the information seized. The fact that the data was not encrypted in transit has no adverse evidentiary consequence. Cryptography provides distinct properties, including “confidentiality” (ensuring that data is not visible to third parties) and “integrity” (ensuring that data is not tampered with). Here, for the purposes of understanding what the NIT transmitted to the logging server, confidentiality would actually reduce the evidentiary value. As discussed below, there is no technologically plausible argument by which a third party could have altered the pcap file.
The exploit code would not provide any additional information as to what was seized and sent beyond what the pcap reveals.
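The distinction between confidentiality and integrity can be shown concretely. In this illustrative Python sketch (our own construction, not part of the NIT), a keyed MAC provides tamper-evidence without hiding the record, while encryption would have concealed the very plaintext that makes the pcap useful to the defense:

```python
import hashlib
import hmac
import secrets

# A hypothetical NIT-style record as it would appear, unencrypted, in a pcap.
record = b"MAC=00:11:22:33:44:55;user=jdoe;host=DESKTOP-1"

# Integrity: a keyed MAC detects tampering without hiding the data.
key = secrets.token_bytes(32)
tag = hmac.new(key, record, hashlib.sha256).digest()
assert hmac.compare_digest(tag, hmac.new(key, record, hashlib.sha256).digest())

tampered = record.replace(b"jdoe", b"mallory")
assert not hmac.compare_digest(tag, hmac.new(key, tampered, hashlib.sha256).digest())

# Confidentiality, by contrast, would conceal the record itself: the very
# plaintext visibility that lets the defense verify what was seized.
```

In other words, tamper-detection and secrecy are separable properties, and only the absence of secrecy is what makes the pcap independently auditable.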
- 3. Who did the NIT target?
To ensure that there is no possibility the NIT was deployed in an overbroad manner, reaching beyond those individuals who logged into the site, defense experts would only need to evaluate the logs from the generator service and the Playpen server to conclusively determine that the NIT targeted only users who logged into the Playpen site, as authorized by the warrant. The exploit code has no plausible bearing on this question.
- 4. Is there a solid chain of custody?
The primary NIT evidence—identification of the defendant’s computer with a particular MAC address, hostname, and a given username—is captured in the pcap file at the logging service. The pcap file—combined with the site’s logs and generator log to ensure that the ID is associated with a unique user on the site—represents the key evidence. As long as these files are properly generated and stored, there is proof of an effective chain of custody because any errors in the larger logging server will result in discrepancies between the results of the larger logging program and the captured pcaps.
The lack of encryption on the information transmitted from the payload to the logging server is a feature which enhances the chain of custody by providing visibility. For an unknown third party to tamper with this communication in a way which would have been prevented by encryption, that third party would need to have advance awareness of the FBI’s activity, possess a valid login for the hidden site hosting the NIT (to obtain the ID which was used), and simultaneously have a detailed profile of the target’s computer, including the MAC address, as well as control of the target’s network as a man-in-the-middle.
This activity would represent highly sophisticated tradecraft and would suggest capabilities on par with nation-states, particularly in obtaining detailed knowledge of protected FBI operational plans. Because this kind of conspiracy against a defendant would defy ordinary logic, there would likely need to be a threshold showing of probability. Ignoring the absurdity of the hypothetical, as a technical matter it is almost certain that such a third-party attacker would have direct control over the target’s computer. Therefore, the most likely evidence as to whether a target may have been framed by someone capable of tampering with the NIT’s communication would be obtained by examining the defendant’s computer for signs of that third party. Even if that examination indicated third-party interference, or other evidence emerged to corroborate an otherwise fantastical explanation of sabotage, the exploit itself is still not relevant, as it would not add any material insight into the actions of the third party.
- 5. Did the NIT correctly gather the seized information?
To evaluate whether the NIT operated correctly, an expert would need to examine only the payload, not the exploit, in order to validate that it correctly identifies the MAC address, username, system name, and other system properties. Here, the NIT’s data is actually self-validating because any errors in this process would result in mismatched data: the information from the NIT would not match the computer seized from the defendant. The fact that law enforcement seized a matching computer from a defendant’s residence would overwhelmingly suggest that the NIT operated correctly, even without access to any of the NIT’s source code whatsoever.
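The self-validation argument reduces to a field-by-field comparison. A minimal sketch, with hypothetical field names of our own choosing:

```python
def nit_matches_seized_machine(nit_record, machine):
    """Cross-check the identifiers the NIT reported against the computer
    actually seized; a payload bug would surface as a mismatch here."""
    return all(nit_record.get(k) == machine.get(k)
               for k in ("mac", "hostname", "username"))

# Hypothetical values for illustration only.
nit_record = {"mac": "00:11:22:33:44:55", "hostname": "DESKTOP-1", "username": "jdoe"}
seized = dict(nit_record)  # the machine recovered in the physical search
assert nit_matches_seized_machine(nit_record, seized)
assert not nit_matches_seized_machine(nit_record, {**seized, "mac": "aa:bb:cc:dd:ee:ff"})
```

A payload that misread any of these fields would produce a record that fails this comparison against the seized hardware, which is why a match is such strong evidence of correct operation.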
- 6. Did the NIT seize additional information from the defendant’s system and send it to a different system on the internet?
One limitation of the pcap files captured by the logging service is that they record what the payload transmits to the logging service, not anything sent to a hypothetical third government system. Validating that the NIT exploit and payload communicated seized information only to the logging system would require access to the source code for both the exploit and the payload, because both components run on the target’s system and any fraudulent addition to the NIT could live in the exploit rather than the payload.
It is reasonable to conduct such an evaluation for the payload and exploit independently. If the exploit’s only functionality is to take control of the target’s computer and retrieve the payload, it clearly is not able to send information to other systems. A government expert could testify to the fact that the exploit transmitted only to the logging system. A defense expert would not discover additional or overlooked functionality, but could verify that the FBI was not lying to the court in the trial proceedings or on the warrant application. If there is a determination that independent verification is needed as to whether the government expert has committed perjury (it is not plausible that the government could be merely mistaken here), then that expert would not need to be the same individual who examined all other portions of the code. If security conditions require it, a mutually agreeable expert could examine only the exploit code under secure conditions and then attest as to whether the FBI had lied.
- 7. Did the search conducted by the NIT exceed the scope of the warrant without seizing information?
The other limitation of the pcap files captured by the logging service is that they do not reveal what the NIT searched for on the target’s computer. Validating that the search itself did not exceed the scope of the warrant, without finding data that was transmitted back to the FBI, would require evaluating both the exploit and the payload. This would not have any bearing on the integrity of the NIT or the evidence obtained from the computer, but would go to the potential invalidity of the government’s warrant. As discussed above, here too an expert could examine only the exploit in secure conditions, and that expert would not need to be part of a more holistic analysis by defense experts. To be clear, the allegation which would require examination of the exploit rests on the presumption that the FBI affirmatively lied on the warrant application, as it is not technologically plausible to make this type of mistake.
- 8. Did the NIT introduce additional weaknesses to the defendant’s computer?
It is highly unlikely that either the exploit or the payload introduced additional weaknesses onto the target computer. Almost all tools which create such a “backdoor” are deliberately engineered to do so; it would be very uncommon to create one accidentally. Nevertheless, evaluating whether the NIT introduced a weakness would require examining both the exploit and the payload (as before, examining the exploit and payload separately would be sufficient).
This information would have no bearing on evidence obtained during the NIT that an individual using a computer at the defendant’s IP address accessed child pornography within Playpen, as that information would have been obtained prior to the introduction of any weakness. Instead, this would be relevant to a defendant who conceded he visited pages within Playpen hosting contraband but claimed that evidence seized from the computer in the course of the physical search had been planted by a malicious third party in the interval between the execution of the NIT and the physical search and seizure. This kind of malicious third party would need to be highly technologically capable and possess significant confidential information regarding an FBI operation, or be extraordinarily lucky, in knowing that the defendant’s computer had been targeted during a secret NIT and the precise nature of the newly introduced vulnerability, which this malicious party then used to plant child pornography. It strains belief, but if this is in fact the defense theory, then examination of the exploit is material to that claim.
The Three Basic Defense Scenarios
The above questions and technological explanations demonstrate how examining the NIT source code relates to a defense’s case under three scenarios: an honest but careless FBI, an unknown third party, and a dishonest FBI.
The defense could assert that an honest but careless FBI intended to follow the warrant but made mistakes in programming the NIT. Under this scenario, there is no plausible reason for needing to see the exploit code. Verifying the correct operation of the NIT requires examining the pcaps, generator, generator logs, server logs, and the computer seized from the defendant. If the NIT operated properly, then the generator should generate unique IDs, the logs should show that each invocation used a unique ID, and the seized data in the pcaps should match a system seized with a physical search warrant. If this is the case, then any errors that might have existed in the payload or exploit resulted in a failure to collect evidence and did not in any way compromise evidence which was successfully collected.
The defense could assert that an unknown third party planted or otherwise compromised evidence. Seeing the exploit—and in fact the payload—would be immaterial to this claim. The method to identify whether an unknown third party acted to frame the defendant requires examination of the defendant’s computer for signs that such an intruder would have left on the system. The NIT itself would not benefit or enable an intruder under any realistic circumstances. And if, implausibly, the NIT actually did open the defendant’s computer up to attack, that attack would have occurred at some period after the NIT. If the defense wished to explore a theory of elaborate framing, then experts should examine the computer and not the exploit.
In the previous two scenarios, the defense does not obtain any meaningful information from examining the exploit code. The only scenario in which a defense could gain new information material to its case is if the FBI deliberately programmed the NIT to exceed the scope of the warrant, either by searching for (ultimately undiscovered) evidence beyond the authorization or by transmitting information to a separate FBI server. This activity would certainly speak to the validity of the government’s warrant; however, it would not have any bearing on the veracity or forensic integrity of the evidence collected. If the court permits the defense to explore the theory of corrupt law enforcement, then it could minimize risk to the sensitive exploit information by permitting examination of the exploit by a neutral expert who could verify the accuracy of the FBI’s representations.