Surveillance: Snowden NSA Controversy

Reforming the NSA Surveillance Programs – The Testimony I Would Have Given

By Paul Rosenzweig
Thursday, October 24, 2013, 1:26 PM

The House Permanent Select Committee on Intelligence was to have had a hearing today on proposed reforms to the NSA surveillance programs.  I was invited as a witness on a panel with Steve Bradbury and Steve Vladeck, and prepared testimony.  Unfortunately, Representative Bill Young's untimely death resulted in the House not being in session today, and the hearing has been rescheduled for next Tuesday, when I cannot attend.

Preparing the testimony did, however, give me a chance to work through my thoughts about the right framework for analysis and how it might apply to some of the proposed reforms.  Never one to let work go to waste, I post these thoughts here for such value as others might find in the analysis.  Like most issues, in the end I think the only answer you can really give is "it depends."

For those who don't want to read the entire statement, here is the short bullet summary:

 In my statement, I want to make four basic points:

  • First, the reality of data analytics has fundamentally changed.  We may wish that were not the case, but it is and in my judgment, Congress would be wise to recognize this fact.  Our privacy laws must, in turn, change to meet this reality;

  • Second, transparency is good.  Too much transparency defeats the very purpose of democracy;

  • Third, applying these concepts to the consideration of NSA surveillance leads me to the following conclusions (and here I have selected only a few of the most prominent proposals for discussion):

    • An in-house advocate before the FISA court, called at the court’s discretion, might improve decision-making;
    • Data retention rules and distributed databases will be ineffective and no more privacy protective;
    • Post-collection judicial assessment of reasonable articulable suspicion is worth considering;
    • Codifying existing judicial interpretations of FISA is not necessary but may be beneficial;
    • Requiring disclosure of aggregate (but not company specific) data about collection efforts will improve transparency;
    • We should reject the assertion that the FISA court is somehow either a rubberstamp or a packed court; and
    • Finally, the most effective reforms are likely structural rather than legislative.
  • Fourth, our current system of intelligence oversight generally works.  It is incumbent on this Committee and those in Congress with knowledge of how our intelligence apparatus operates to defend that system as effective and appropriate.

For those who want to skip the framework and go right to the analysis, scroll down to the sections "Assessing Reforms of the NSA" and "Congressional Action."  The entire statement is below the jump.  Enjoy.

DRAFT Statement for HPSCI

As an initial matter, two caveats are in order.  First, as the current holder of an active Top Secret security clearance I am enjoined not to access classified materials that have been illegally disclosed.  Naturally, that has caused a bit of a challenge in preparing a statement, since some of what is the subject of discussion today is public only because of such illegal disclosures.  Fortunately, however, many of the most important underlying materials have been properly declassified by the Director of National Intelligence and may, therefore, be discussed in open session.  Equally fortunately, I can confidently state that none of the programs we will be discussing today were within my purview when I was at the Department of Homeland Security.  Hence everything I write about today is based on the public record, as I understand it – without, by the way, necessarily assuming that everything in that record is an accurate reflection of what is actually happening within NSA and the Intelligence Community.

Second, in offering my statement to you, I necessarily tread where others who are far smarter than I have already walked.[1]  In particular, I have relied upon two truly magnificent legal analyses of the topic, one by Steve Bradbury, who served in the Office of Legal Counsel during the Bush Administration,[2] and the other by David Kris, who served as Assistant Attorney General for the National Security Division during the Obama Administration.[3]

In my statement, I want to make four basic points:

  • First, the reality of data analytics has fundamentally changed.  We may wish that were not the case, but it is and in my judgment, Congress would be wise to recognize this fact.  Our privacy laws must, in turn, change to meet this reality;
  • Second, transparency is good.  Too much transparency defeats the very purpose of democracy;
  • Third, applying these concepts to the consideration of NSA surveillance leads me to the following conclusions (and here I have selected only a few of the most prominent proposals for discussion):
    • An in-house advocate before the FISA court, called at the court’s discretion, might improve decision-making;
    • Data retention rules and distributed databases will be ineffective and no more privacy protective;
    • Post-collection judicial assessment of reasonable articulable suspicion is worth considering;
    • Codifying existing judicial interpretations of FISA is not necessary but may be beneficial;
    • Requiring disclosure of aggregate (but not company specific) data about collection efforts will improve transparency;
    • We should reject the assertion that the FISA court is somehow either a rubberstamp or a packed court; and
    • Finally, the most effective reforms are likely structural rather than legislative.

  • Fourth, our current system of intelligence oversight generally works.  It is incumbent on this Committee and those in Congress with knowledge of how our intelligence apparatus operates to defend that system as effective and appropriate.

Cyberspace is the natural battleground for enhanced analytical tools that are enabled by the technology of data collection. If our goal is to combat terrorists or insurgents (or even other nations) then the cyber domain offers us the capacity not just to steal secret information through espionage, but to take observable public behavior and information and use cyber tools to develop a more nuanced and robust understanding of their tactics and intentions. Likewise, it can be used by our opponents to uncover our own secrets.

In considering this new capability we can’t have it both ways.  We can’t with one breath condemn government access to vast quantities of data about individuals as a return of “Big Brother”[4] and with the next criticize the government for its failure to “connect the dots” (as we did, for example, after the Christmas 2009 bomb plot attempted by Umar Farouk Abdulmutallab).[5]

More to the point – these analytical tools are of such great utility that governments will expand their use, as will the private sector. Old rules about collection and use limitations are no longer technologically relevant. If we value privacy at all, these ineffective protections must be replaced with new constructs. The goal, then, is the identification of a suitable legal and policy regime to regulate and manage the use of mass quantities of personal data.

The Power of Data Analytics[6]

Ten years ago, surveying the technology of the time – which, by and large, was one hundred times less powerful than today’s data processing capacity – Scott McNealy, then-CEO of Sun Microsystems, said, “Privacy is dead. Get over it.”[7] He was, it seems, only slightly wrong. Pure privacy—that is, the privacy of activities in your own home—remains reasonably well-protected.[8] What has been lost, and will increasingly be lost, is anonymity: the ability to act in public (whether physically or in cyberspace) without anyone having the technological capacity to permanently record and retain data about your activity for later analysis. Today, large data collection and aggregation companies, such as Experian and Acxiom, may hire retirees to harvest, by hand, public records from government databases.[9] Paper records are digitized and electronic records are downloaded. These data aggregation companies typically hold birth records, credit and conviction records, real estate transactions and liens, bridal registries, and even kennel club records. One company, Acxiom, estimates that it holds on average approximately 1,500 pieces of data on each adult American.[10]

Since most, though not all, of these records are governmental in origin, the government has equivalent access to the data; what it cannot create itself, it can likely buy or demand from the private sector. The day is now here when anyone with enough data and sufficient computing power can develop a detailed picture of any identifiable individual. That picture might tell your food preferences or your underwear size. It might tell something about your terrorist activity. Or your politics.

This analytical capacity can have a powerful influence on law and policy – and in particular in revealing links between the cyber personas and the real-world activities of individuals. When we speak of the new form of “dataveillance,” we are not speaking of the comparatively simple matching algorithms that cross-check a person’s name when it is submitted for review – when, for example, they apply for a job. Even that exercise is a challenge for any government, as the failure to watch-list Abdulmutallab in advance of the 2009 Christmas bombing attempt demonstrates.[11] The process contains uncertainties of data accuracy and fidelity, analysis and registration, transmission and propagation, and review, correction, and revision. Yet, even with those complexities, the process uses relatively simple technology – the implementation is what poses a challenge.

By contrast, other systems of data analysis are far more technologically sophisticated. They are, in the end, an attempt to sift through large quantities of personal information to identify subjects when their identities are not already known. In the commercial context, these individuals are called “potential customers.” In the cyber conflict context, they might be called “Anonymous” or “Russian patriotic hackers.” In the terrorism context, they are often called “clean skins” because there is no known derogatory information connected to their names or identities. In this latter context, the individuals are dangerous because nothing is known of their predilections. For precisely this reason, this form of data analysis is sometimes called “knowledge discovery,” as the intention is to discover something previously unknown about an individual. There can be little doubt that data analysis of this sort can prove to be of great value.

Modernizing Privacy

Our privacy laws and our conceptions of privacy cannot withstand the technological change that is happening and the cyber conflict that is developing. We must put theories of data availability and anonymity on a sounder footing—a footing that will withstand the rigors of ever-increasing computational capacity. To do so we need to define what values underlie our instinctive privacy-protective reaction to the new technology, assess how realistic threats of abuse and misuse are, and create legal and policy incentives to foster positive applications while restraining adverse ones.

Privacy is really a misnomer. What it reflects is a desire for independence of personal activity, a form of autonomy. We protect that privacy in many ways. Sometimes we do so through secrecy, which effectively obscures both observation of conduct and the identity of those engaging in the conduct. In other instances we protect the autonomy directly. Even though conduct is observed and the actor identified, we provide direct rules to limit action – as, for example, in the criminal context, where we have an exclusionary rule to limit the use of illegally collected evidence.

The concept of privacy that most applies to the new information technology regime is the idea of anonymity or “practical obscurity,” a middle ground where observation is permitted – that is, we expose our actions in public but we are not subject to identification or scrutiny. The information data-space is suffused with information of this middle-ground sort – bank account transactions, phone records, airplane reservations, and Smartcard travel logs, to name but a few. These constitute the core of transaction and electronic signature or verification information available in cyberspace. The anonymity that one has in respect of these transactions is not terribly different from “real-world anonymity.” Consider, as an example, the act of driving a car. It is done in public, but one is generally not subject to routine identification and scrutiny.

Protecting the anonymity we value requires, in the first instance, defining it accurately. One might posit that anonymity is, in effect, the ability to walk through the world unexamined. That is, however, not strictly accurate, for our conduct is examined numerous times every day. Sometimes the examination is by a private individual – for example, one may notice that the person sitting next to them on the train is wearing a wedding ring. Other routine examinations are by governmental authorities – the policeman in the car who watches the street, or the security camera at the bank or airport. As we drive down the road, any number of people might observe us.

So what we really must mean by anonymity is not a pure form of privacy akin to secrecy. Rather, what we mean is that even though one’s conduct is examined, routinely and regularly, both with and without one’s knowledge, nothing adverse should happen without good cause. In other words, the veil of anonymity previously protected by our “practical obscurity,” now so readily pierced by technology, must be protected by rules that limit when the piercing may happen – as a means of protecting privacy and preventing governmental abuse. To put it more precisely, the key to this conception of privacy is that privacy’s principal virtue is a limitation on consequence. If there are no unjustified consequences (i.e., consequences that are the product of abuse, or error, or the application of an unwise policy) then, under this vision, there is no effect on a cognizable liberty/privacy interest. In other words, if nobody is there to hear the tree fall, or identify the actor, it really does not make a sound.

The appeal of this model is that it is, by and large, the model we already have for government/personal interactions in the physical world. The rule is not that the police cannot observe you; it is that they require authorization of some form from some authority in order to be permitted to engage in certain types of interactions, which are identified here as “consequences.” The police normally cannot stop you to question you without “reasonable suspicion,” cannot arrest you without “probable cause,” cannot search your house without “probable cause,” and cannot examine a corporation’s business records about you without a showing of “relevance” to an ongoing investigation. We can and should build structures that map the same rules-based model of authorization linked to consequence as the appropriate model for the world of dataveillance.

Thus, the questions to be asked of any dataveillance program are: What is the consequence of identification? What is the trigger for that consequence? Who decides when the trigger is met? These questions are the ones that really matter, and questions of collection limitation or purpose limitation, for example, are rightly seen as distractions from the main point. The right answers to these questions will vary, of course, depending on the context of the inquiry, but the critical first step is making sure that we are asking the right questions.[12]


Finally, let me close this statement of principles by noting that none of this is to diminish the significance of transparency and oversight generally.  Transparency is a fundamental and vital aspect of democracy.  Those who advance transparency concerns often, rightly, have recourse to the wisdom of James Madison, who observed that democracy without information is “but prologue to a farce or a tragedy.”[13]

Yet Madison understood that transparency was not a supreme value that trumped all other concerns. He also participated in the U.S. Constitutional Convention of 1787, the secrecy of whose proceedings was the key to its success. While governments may hide behind closed doors, U.S. democracy was also born behind them. It is not enough, then, to reflexively call for more transparency in all circumstances. The right amount is debatable, even for those, like Madison, who understand its utility.

What we need is to develop a heuristic for assessing the proper balance between opacity and transparency. To do so we must ask why we seek transparency in the first instance. Not for its own sake – without need, transparency is little more than voyeurism. Rather, its ground is oversight: it enables us to limit and review the exercise of authority.

In the new domain of dataveillance, the form of oversight should vary depending upon the extent to which transparency and opacity are necessary to the new powers authorized.  Allowing some form of surveillance is vital to assure the protection of American interests. Conversely, allowing full public disclosure of our sources and methods is dangerous – identifying publicly how we conduct surveillance risks use of that information by terrorists and, in turn, draws a roadmap of which threats are not known. Thus, complete transparency will defeat the very purpose of disclosure and may even make us less secure.

What is required is a measured, flexible, adaptable transparency suited to the needs of oversight without frustrating the legitimate interests in limiting disclosure. Here, public disclosure through widespread debate in Congress should be rejected in favor of a model of delegated transparency – Congressional and Executive Branch review (for example, random administrative and legislative auditing of how the government is using the information provided) that will guard against any theoretical potential for abuse while vindicating the manifest value of limited disclosure.

In short, Madison was not a hypocrite. Rather, opacity and transparency each have their place, in different measures as circumstances call for. The wisdom of Madison's insight – that both are necessary – remains as true today as it was 226 years ago.

Assessing Reforms of the NSA

With these principles in mind, let me now turn to an assessment of some of the more prominent proposals for reform to the NSA programs that have been talked about in the news and in the halls of Congress.  As you will gather, I favor those that create delegated or calibrated transparency and respond to the new paradigm of data analytics and privacy, while disfavoring those that don’t.

Adversarial Advocate: This proposal would create a standing team of attorneys to respond to requests before the FISC for permission to collect information against an individual or entity, and to present a counter-argument.  Presumably, this team of attorneys would either be drawn from within the government (such as the office of the DNI’s Civil Liberties and Privacy Officer) or from a cadre of non-government attorneys with clearances.

There is much to be said in favor of this proposal.  With regular criminal warrants, the ex parte nature of the application does not create a systematic lack of a check on overreaching, because of the possibility of post-enforcement review, with its adversarial process, during criminal prosecution.  By contrast, in intelligence investigations that post-execution checking function of adversarial contest is often missing – few if any intelligence collection cases wind up before the courts.  As a result there is no systematic way of constraining the authority of the United States government in this context.  Providing for an adversarial advocate would give us the general benefits of adversarial presentation and provide a useful checking function on the broad effect of FISA law on the public.

To be sure, this would be a novel process.  We don’t typically do pre-enforcement review of investigative techniques.  And if poorly implemented, this sort of process risks slowing down critical time sensitive investigations.  Perhaps most importantly, many worry (not without justification) that the adversarial advocate will in the end have an agenda that may distort legal developments.

On balance, this seems to be a positive idea – but only if it is implemented in a limited way for novel or unique questions of law.  It should be limited to situations where the FISA court itself requests adversarial presentation.  That would limit the process to those few circumstances where new or seminal interpretations of law were being made.  The adversarial advocate should not appear routinely and should not appear on his or her own motion.  The court is, in my view, quite capable of defining (and likely to define) when it can benefit from adversarial argument.

Finally, as to the “who” of it, I am agnostic.  If Congress wants to enhance the credibility of the FISA process it might consider making the advocates non-government attorneys – though that would necessarily come with operational complications.  By contrast, assigning a dedicated government ombudsman attorney would make security issues easier to answer, albeit at the cost of public acceptance.

Phone Company Data Retention:  Some have suggested that, instead of NSA collecting and retaining telephone call metadata, Congress should amend the law and impose a data retention requirement on phone companies and ISPs, requiring them to retain metadata for a fixed period of time, say five years.[14]  NSA and the FBI would, in turn, only be able to access this data set after the FISC had passed on the validity of the request and determined that it met some evidentiary threshold, say, of relevance.

While the idea is attractive, it is, in the end, more problematic than beneficial.  To begin with, the FISC pre-access review would be more privacy protective – but it would achieve this protection in the old-fashioned way of limiting access to the underlying data.  More effective approaches that focus on managing end uses, rather than collection, are to be preferred.

More to the point, this sort of system would be extremely cumbersome.  Searching multiple distributed databases is always more difficult than searching a single database.  Worse, this architecture would require the disclosure of classified threat information to private actors on a regular basis – a structure that we ought to try to avoid.  And, of course, though we might begin by limiting use of this database to counter-terrorism activities, I have no doubt that political pressures would soon push us down the slippery slope to other attractive uses (e.g., combatting drug cartels or child pornography).  In the end, I fear that databases held by the private telecommunications companies would become the target of other legal process in the civil system.

Finally, at bottom, I am not so sure that large commercial databases are actually more privacy protective than government ones.  As my friend and former boss Stewart Baker has said in assessing a comparable set of laws in Europe:  “Not only does the ‘data retention’ requirement in European law cover more personal information, it comes with far fewer safeguards. In Europe, unlike the United States, the authorities need only ask for stored data; companies can and do ‘volunteer’ their data without any court order or other legal process.”[15]  I am skeptical that any system we design for use here in America would not be subject to the same sorts of issues.

Non-NSA Determination of Reasonable Articulable Suspicion:  One variant on the foregoing would break off a piece of the data retention proposal – namely, the portion that requires external approval before NSA analysts access the Section 215 metadata database.  Logically, this requirement could be implemented even if the database were housed at NSA rather than, as proposed above, in distributed databases at the telecommunications companies.  In other words, Congress could add a requirement that every time the NSA determines that there is a reasonable articulable suspicion that a phone number is associated with terrorism, that determination be promptly adjudicated before access is granted.  The “who” and “when” of adjudicating the reasonable articulable suspicion are capable of many variations – it might be the FISC, before access is granted; it might be post-access review by the FISC; or it might even be pre-access review by some other portion of the Executive Branch, like DOJ’s National Security Division.

In all its variants, this proposal has several positive aspects.  First, by requiring external non-NSA approval, we enhance the credibility of the determination of reasonable articulable suspicion.  Second, by invoking FISC jurisdiction or DOJ oversight, we limit the likelihood that the database will be subject to mission creep and repurposed to non-counter-terrorism uses.  Third, and most saliently from a theoretical viewpoint, this paradigm of review after collection, and of control of use, is more consistent with what I see as the technological reality of data analytics today.

Regarding the modality of review, my understanding from some I have spoken to is that pre-use review by the FISC would be, in their judgment, too slow and cumbersome.  I’m not sure I find that argument persuasive – after all, many warrant applications are approved on an emergency basis.  But if we were to reject pre-access judicial review, the credibility of the Section 215 program would be most enhanced by a combination of two other structures – pre-access approval outside of NSA but within the Executive Branch (say, at DOJ), followed by post-access review by the FISC.

Revealing Data:  Many have suggested that the NSA be obliged to be more transparent in revealing the nature and frequency of certain types of data collection activities, or alternatively, the frequency of data collection requests to Internet Service Providers (ISPs).  This is one of those situations where the virtues of transparency, which are very real, need to be carefully calibrated to avoid unnecessary harm.

Here, we might ask what the ground of transparency is.  Presumably it is to enhance the confidence that Americans have in the operation of their security agencies.  If that is the case, which I think it is, then the virtues of public oversight are served by the disclosure of aggregate numbers of requests and generic descriptions of their type.  More detail risks compromising sources and methods, but at a reasonable level of detail we can get much of the oversight we want without too grave a cost to our capabilities.

This assessment leads me to conclude that the efforts by individual companies to disclose the requests made of them, individually, should be rejected.  That degree of specificity is certainly in the ISPs’ interest – but it isn’t in our collective interest.  Too much detail risks telling malicious actors which providers the government is focusing on (and, thus, which they should avoid).  If we begin with the premise that NSA is a spy agency, we need it to operate effectively.  We should avoid systematically giving our opponents information that allows them to develop alternate strategies for avoiding surveillance.

Reforming the FISA Court:  A wide range of reforms has been proposed for changing how the FISA court is staffed and operates.  These include suggestions to add more FISC judges to the process (i.e., have decisions made by panels); to mandate more diversity of views among the judges; to change the appointment authority; and so on.

The grounds for these proposed reforms are a series of false and pernicious premises – ones that, I regret to say, are fostered by our friends in the media.  They suggest that the Chief Justice has been preferentially appointing pro-government judges to the FISC,[16] and that the FISC is a rubber stamp for government action.[17]  While that Manichean view of justice is one that many liberal doubters of the court system espouse, it should be resisted with every fiber of our being.

In the first place, it simply isn’t true.  As anyone who has read the recent FISC opinions recognizes, the judges of that court have been vigilant (some critics even say too vigilant) in overseeing the NSA’s activities, having called large-scale programs into question on at least three publicly known occasions and having declared at least one aspect of one program unconstitutional.  More to the point, we now know (and this, I submit, is actually one of the few good results stemming from the Snowden disclosures) that the FISC requires substantive modification to roughly one-quarter of all FISA warrant applications.[18]  I don’t know what the comparable figures are for traditional criminal investigations (and I don’t think they are collected), but my own experience as a prosecutor suggests that the rate of substantive amendment is far lower in that context.

And, of course, the premise of the entire argument is the ipse dixit that judges reflect their political views.  As a society we must reject that premise, lest law become nothing more than politics by other means.  It says everything you need to know about the validity of that premise that the original Section 215 order authorizing metadata collection was (according to public reports – the order itself has not been declassified) issued in 2006 by Judge Colleen Kollar-Kotelly.  Judge Kollar-Kotelly was appointed to the FISC by Chief Justice Rehnquist (i.e., before Chief Justice Roberts’ tenure) and appointed to the bench by President Clinton.  If the most controversial decision of which we are aware is a counter-factual to the general charge, we should doubt the charge itself.

Worse yet, the cure would be worse than the disease.  Imagine, if you will, subjecting FISC appointments to Senate confirmation.  Nothing would be more likely to politicize the process.  Likewise, attempting to democratize the process by spreading it across the circuits would be impractical (since most FISC matters occur here in Washington) and would simply devolve the criticism one step lower.  The problem isn’t with the FISC so much as its critics.

Stating the Obvious:  There is sometimes, of course, value in stating the obvious.  One set of proposals would be to codify in law some of the requirements that have been developed as part of the common law of FISA warrant approvals.  Congress could, for example, put in statute the existing strict limits on access to Section 215 phone metadata records by explicitly prohibiting collection of the content of phone calls.  Congress might also codify the requirement that analysts must have a reasonable articulable suspicion that a phone number is associated with terrorism in order to query the database.  Inasmuch as these obligations are already judicially-imposed, the only downside to codifying them in statute is that they become entrenched and cannot be changed readily as circumstances change.  While that risk is not insignificant, the gains from enhanced credibility and the assertion of Congressional oversight probably make this aspect of reform worth considering.

Structural Changes:  Finally, given my views, you will not be surprised that I think that most of the more effective possible changes lie not in significant legislative tinkering, but rather in interstitial structural and operational reforms that improve the audit and oversight process without fundamentally altering the capabilities of NSA or the IC organizations.  Here are a few, listed just in bullet point form, that might be worth thinking about:

  • Make the NSA Inspector General a presidential appointment with Senate confirmation;
  • Require, by statute, the appointment of an NSA Civil Liberties & Privacy Officer;
  • Change the jurisdiction of the Privacy and Civil Liberties Oversight Board to include all intelligence activities, not just those with a counter-terrorism focus;
  • Create panels of cleared external reviewers for consultation by the DNI regarding new programs;
  • Institutionalize privacy and civil liberties concerns by making them a factor in performance reviews; and
  • Have the DNI annually report in a public forum on privacy and civil liberties matters.

Congressional Action

I will conclude with one final point, more about Congress and this Committee than the NSA.  Madison's fundamental insight about transparency is that it is not an absolute value, but rather a relative one.  Since the mid-1970s, with the reforms prompted by the Church and Pike Committee investigations, we in America have been engaged in an experiment – an experiment to see whether Madison's insight can be converted to reality.  The question we have been asking is whether it is possible for a country like America to have covert operations under law – or, to coin a phrase, whether we can have intelligence collection within the bounds of democracy.

To my mind the system of delegated transparency, where Congress stands in for the general public, has worked reasonably well – allowing us to use intelligence capabilities while minimizing the risks of abuse of law.  Today, however, thanks to the Snowden disclosures, that system is under assault.  Most who challenge the system do so from the best of motives.  But I have little doubt that there are some whose calls for transparency mask the intention of diminishing American capabilities.

And that, I think, means that in this post-Snowden era, this Committee (and its Senate counterpart)[19] bear a great responsibility.  To you falls the task of defending the integrity of our current system of intelligence oversight.  While I have written in this statement of possible reforms to the NSA's programs, both legislative and structural, the critical insight for me is that, despite the hue and cry, the system is not badly broken.  It can be improved, but in the main it has produced a reasonably effective system of oversight that, if the public record is an accurate reflection, resulted in precious little abuse of the sort we ought to fear.

You should be proud of that record and of your role in creating it.  Can this Committee, perhaps, do a better job of oversight?  I have no doubt.  But in the end, notwithstanding the calls for reform and the many plausible reforms you might consider, this Committee should defend the essential structure of our current system.  And that, in the end, means rejecting most calls for wholesale reform and complete transparency, and, instead, defending the role of graduated or delegated oversight.

[1] As Sir Isaac Newton said, if I see farther it is because I am “standing on the shoulders of giants.”  Letter to Robert Hooke (15 February 1676).

[2] Steven G. Bradbury, "Understanding the NSA Programs: Bulk Acquisition of Telephone Metadata Under Section 215 and Foreign-Targeted Collection Under Section 702," 1 Lawfare Res. Paper Series No. 3 (Sept. 2013),

[3] David S. Kris, "On the Bulk Collection of Tangible Things," 1 Lawfare Res. Paper Series No. 4 (Sept. 2013),

[4] An article by William Safire instigated a significant political controversy over an early data surveillance program, Total Information Awareness. See William Safire, "You Are a Suspect," The New York Times, Nov. 14, 2002, at A35. It led directly to the creation of a blue-ribbon panel, the Technology and Privacy Advisory Committee, and, eventually, to the cancellation of the Total Information Awareness program. The final report of the Technology and Privacy Advisory Committee is available at (last visited Feb. 23, 2010).

[5] See, e.g., Scott Shane & Eric Lipton, “Passengers’ Actions Thwart a Plan to Down a Jet,” The New York Times, Dec. 27, 2009, at A1.

[6] In these next two sections, I self-plagiarize liberally from Chs. 9 and 10 of my book. Paul Rosenzweig, Cyber Warfare:  How Conflict in Cyberspace is Challenging America and Changing the World (Praeger Press 2013).

[7] Though the original statement may be apocryphal, many have quoted it since, including McNealy himself. See, e.g., Matt Hamblen, "McNealy Calls for Smart Cards," Computer World, Oct 12, 2001,

[8] See, e.g., Kyllo v. United States, 533 U.S. 27 (2001) (the use of thermal imaging outside the home without a warrant is an illegal search when it is used, even indirectly, to reveal activity taking place within the home).

[9] I learned this from discussions with ChoicePoint’s CEO Derek Smith and other industry practitioners. See also Ralph M. Stair & George W. Reynolds, Fundamentals of Information Systems 362 (2003) (discussing Experian’s collection of public records from government databases).

[10] Stephanie Clifford, “Online Ads Follow Web Users, and Get Much More Personal,” The New York Times, July 30, 2009, at A1.

[11] Peter Baker & Carl Hulse, "Obama Hears of Signs That Should Have Grounded Plot," The New York Times, Dec. 30, 2009, at A1.

[12] I have begun trying to apply these principles to specific cases in Whither Privacy?, Society & Surveillance Vol. 10 (3/4): 340 (2012),

[13] I first wrote about the thoughts in this section in Paul Rosenzweig, Calibrated Openness, Harv. Int’l Rev. (Summer 2004).

[14] As an aside, proposals to significantly shorten the time frame for data retention are, in my judgment, unwise.  If we have learned only one thing in the past decade, it is that terrorist plots take a long time to mature and retrospective analysis may often look back as long as 8 or 10 years.  More to the point, old-style retention limitations misunderstand the new privacy paradigm that I believe should control – we should focus on controlling use and misuse, not on artificially limiting our own capabilities.

[15] Statement of Stewart A. Baker before the Committee on the Judiciary, United States Senate, July 31, 2013,

[16] Charlie Savage, “Roberts’s Picks Reshaping Secret Surveillance Court,” New York Times (July 25, 2013),

[17] NPR, "FISA Court Appears To Be Rubber Stamp For Government Requests,"

[18] Letter, Chief Judge Walton to Senator Leahy, (July 29, 2013),

[19] I am not alone in making this point.  My colleague Ben Wittes said something very similar to the Senate Select Committee on Intelligence last month.  Statement of Benjamin Wittes before the Select Committee on Intelligence, United States Senate (Sept. 26, 2013),