For the Senate Intelligence Committee hearing yesterday, I prepared a written statement for the record, and I had intended merely to summarize that statement in my oral presentation. Two things happened that caused me to tear up my remarks and speak more extemporaneously. One was that Chairman Feinstein’s opening statement, in which she gave an overview of the FISA reform bill she and Vice Chairman Saxby Chambliss are writing, seemed to cover a lot of the ground I had suggested in my written statement. So there seemed little point in urging the committee to do things the chairman had just said she was contemplating. The second was a brief exchange I had before the hearing started with my co-panelist—the always thoughtful Tim Edgar—and John DeLong, who runs NSA’s compliance program.
Sen. Feinstein asked both me and Edgar to write up our oral presentations to the extent they differed from our prepared statements. Since mine involves one major substantive point I did not make in the written statement, I thought I would do so in the form of a Lawfare post adapted from my oral statement yesterday and making the same two broad, high-altitude points about the stakes involved in this discussion of FISA reform.
When I was walking into the hearing yesterday afternoon, I had a conversation, standing in the aisle, with DeLong and Edgar about the coming hearing. Edgar informed us that he had been annoyed to find four typos in his written testimony.
DeLong, without missing a beat, jokingly but quite aptly responded: “You know, if that were us, we would have to notify the FISA Court about each of them and the committee about each of them.”
It was an amusing quip, but also an informative and telling one, which captures something deep and important about two key aspects of the current debate. The first involves the integrity of the oversight structures. The second involves the extent to which our comfort level with data acquisition and exploitation in the era of Big Data needs to depend pervasively on the compliance regime that undergirds it. Compliance is not a sexy subject. But in this context, it is a big part of the whole ball game.
The first major point involves our oversight structures and their integrity and what we are talking about when we say the word “transparency” in this context. Everyone in this debate—from President Obama to Senators Feinstein and Chambliss to the DNI to the ACLU—says that transparency is the goal. But they mean different things by transparency, and it’s worth smoking out that difference.
Transparency is obviously a crucially important value in any democracy, but when you’re talking about intelligence collection activity, it is not a simple value. In fact, when it comes to intelligence programs, we have traditionally seen transparency as an evil. Some things have to be secret. And with respect to those things, transparency is not necessarily a virtue, and sunlight is not necessarily a disinfectant. Or, rather, it’s a disinfectant in somewhat the same way that arsenic is an antibiotic.
In the wake of Watergate, Congress set up a series of reforms to the oversight and accountability system for intelligence, and notably, transparency as such was not really part of those reforms. The system was designed, rather, to create accountability without transparency. At its deepest level, the controversy that has followed the Snowden leaks reflects a loss of faith in the continued vitality of this set of reforms.
I very much support the effort to add transparency to the system, but it’s important to distinguish between adding transparency to this system and replacing this oversight system with one predicated on transparency. That is, when we speak of transparency, we must decide as a threshold matter whether we are talking about upending the post-Watergate system of intelligence oversight or about greater transparency in the context of that system, a system which presumes that the intelligence community will be keeping big secrets and that the oversight mechanisms need to protect its ability to do so.
Let me lay my cards on the table about that and say that I still believe in the basic integrity of the post-Watergate structures of which this committee is a part, and that necessarily constrains the impulse towards transparency. I took a beating yesterday on Twitter for saying this at the hearing. I stand by it.
As I have said before, I believe that nothing in the current disclosures should cause us to lose faith in the essential integrity of the system of delegated intelligence oversight, delegated both within the congressional context and within the judicial context. Rather, the disclosures should give the public great confidence both in the oversight mechanisms at work here and in the underlying activity by the intelligence community and its legality.
I have gone through the declassified documents very carefully, and these disclosures to my mind show no evidence of any intentional spying on Americans or abuse of civil liberties. They show a remarkably low rate of the sort of errors that any complex system of technical collection will inevitably produce. They show robust compliance procedures—as DeLong’s quip in the aisle yesterday accurately reflects. They show earnest and serious efforts to keep the Congress informed, notwithstanding some members’ protestations that they were shocked to learn that NSA—having repeatedly informed Congress that it was engaged in bulk metadata collection—was actually telling the truth. And they show a remarkable dialog with the FISC about the parameters of the agency’s legal authority and a real commitment both to keeping the court informed of activity and to complying with the FISC’s judgment. The FISC, meanwhile, in these documents looks nothing like the rubber stamp that it’s portrayed to be in countless caricatures. It looks, rather, like a serious judicial institution of considerable energy.
To the extent that members of Congress agree with this analysis—and many members of the intelligence committee do—the principal task in the current environment is to defend the existing structures, publicly and energetically, as both Feinstein and Chambliss have done. It is not to race to correct imagined structural deficiencies in the system, appearing to reform what one actually supports and thereby contributing to the delegitimization of those structures. To be sure, there are reforms that would be valuable in the way of increasing transparency, increasing accountability, codifying now-public standards, and even tightening those standards. But to my mind, we must pursue these reforms in the context of a defense of the basic oversight structures themselves. And the defense of these mechanisms necessarily involves a defense of some degree of limitations on transparency. In other words, the challenge of transparency here is a really subtle one: It is to inject transparency within the basic confines of an oversight system that is actually designed to protect secrets.
The second broad point concerns the stakes with respect to the underlying collection activity: The essential model for the protection of civil liberties must be different in the era of Big Data than it was before, and that change puts great weight on compliance.
There is no word for the era that came before Big Data. Much the way folks in the Neolithic Era did not use that name for their epoch and people before Jesus did not count their years as B.C., we couldn’t name the era with reference to that which had not yet happened. But in the Small Data era—or whatever you want to call it—we had a model for thinking about the relationship between data collection and the use of data by government. Broadly speaking, that model involved a narrow aperture for collection of data but very few restrictions on what government could do with data once it had collected it. Collection was hard, but use was easy.
Consider, for example, the text of the Fourth Amendment, which demands that searches and seizures be reasonable and specifies the conditions on which warrants will issue. These are restrictions on the collection aperture. The amendment says nothing, not a word, about what government can do with the fruits of searches that are reasonable.
The era of Big Data, I think, inevitably flips that model on its head, at least to some extent. The reason, quite simply, is that technologies of mass empowerment today create platforms for human activity—some of it very dangerous—that we expect government to police and will hold government accountable for the failure to police successfully. At the same time, however, huge datasets in the hands of third parties create powerful tools for potential surveillance. So we expect government to do more to protect us, and resources for that protection, in fact, exist. The pressure to exploit those resources will be inexorable, and many of the same people who viscerally object to the use of such data for surveillance purposes think nothing of complaining when bad things happen because government fails to connect the dots. They can’t have it both ways, and we have migrated as a society—and will continue to migrate—towards exploitation of the opportunities Big Data offers for security. We could make other choices, but I don’t think we will.
The result is that we have---and we’re going to have---a much wider aperture for collection as an initial matter than we did in the era of Small Data. And on the flip side, we’re going to have neurotically detailed rules of use, exploitation, and handling. I wrote about this trend in Law and the Long War, and I was not the first. The Markle Foundation national security task force was onto this point a long time before that.
Two big forces constrain the use of Big Data material once collected.
The first is law. Unless you believe that the intelligence community is a lawless enterprise that will not follow the rules, this puts a premium on the substantive content of the law Congress writes to govern this area. In other words, the reason it matters what the rules are is that we assume that the law actually will constrain NSA.
The second, to return to DeLong’s quip, is the compliance regime. How scared should you be of our new model—a model in which we have a wide aperture for collection, really restrictive rules of use, and substantive legal rules as a major force restraining improper use? You can, I believe, only answer that question with reference to how confident you are in the compliance procedures that undergird it.
It is really important to distinguish between the technical capacity to do something and whether that thing is actually going to happen. The D.C. police could easily raid my house today. They have the technical capacity to do it. Yet I have near-total confidence that it will not happen. The FBI could wiretap my phone. It certainly has the technical capacity to do so, yet I have near-total confidence that it will not happen. The reason for my confidence is two-fold: the substantive law would not support either action, and there are robust compliance measures that mean that were lawless action to take place, there would be accountability at many levels and I would have a remedy.
In the era of Big Data, the compliance regime is a big part of the whole ballgame. If you believe the compliance regime inadequate, after all, the government already has the data. But one thing we have learned an enormous amount about is the compliance procedures that NSA uses. They are remarkable. They are detailed. They produce data streams that are extremely telling—and, to my mind, deeply reassuring.
And here’s the rub: I believe that my liberty is more secure with NSA collecting this material subject to these rules and this compliance regime than it would be if NSA declined to do so.